The one AI-related existential risk I'm worried about

2023-11-23

People who have been fascinated since their youth with the idea that a certain thing is going to break the world, and who stand to make a lot of money building it, will usually find a way to build it anyway.

Everybody pays lip service to AI safety, but the concerning thing is that a lot of very influential people hold a deep belief in the inevitability of a Singularity event, a belief rooted in a profound misunderstanding of Vernor Vinge's point and of the non-linearity of interesting problems, and those same people regularly build end-of-the-world luxury bunkers and expensive tabula rasa designer microstates.

If it happens, my bet is that the first AI-related disaster won't occur because some software "takes control" of a system and does something destructive: it'll happen because somebody put that AI in a system (or was paid handsomely for doing it), got tired of waiting for AGI to arrive so they could hold early-stage equity in our new software overlords, and started removing guardrails and pushing buttons.