People who have been fascinated since their youth with the idea that a certain thing is going to break the world, and who stand to make a lot of money building it, will usually:
- Build it as fast as they can.
- Try to make it so their thing breaks the world before somebody else's thing does, so they have more leverage later.
- When their thing doesn't break the world, keep more or less subtly kicking it under the table until it does.
Everybody pays lip service to AI safety, but the concerning thing is that a lot of very influential people hold a deep belief in the inevitability of a Singularity event, a belief based on a profound misunderstanding of Vernor Vinge's point and of the non-linearity of interesting problems, and they regularly build end-of-the-world luxury bunkers and expensive tabula rasa designer microstates.
If it happens, my bet is that the first AI-related disaster will not come from some software "taking control" of a system and doing something destructive: it'll come from somebody who put that AI in a system (or was paid handsomely to do it), got tired of waiting for AGI to happen so they could get early-stage equity in our new software overlords, and started removing guardrails and pushing buttons.