AI has obvious applications in insurance, but as AI becomes ubiquitous, it also becomes something you insure, whether you know it or not, and that poses some unique challenges for insurers' practices.
For example, as you shift from a traditional car fleet to one of self-driving cars:
- You are no longer insuring a million slightly different drivers, but a single (software) one, exposing yourself to much more concentrated risk.
- You are no longer insuring against risks that change slowly over time. A human driver's risk profile doesn't change drastically overnight; a not-always-extensively-tested over-the-air software update can change a fleet's risk profile exactly that fast.
- Good luck trying to audit complex closed-source black-box software. Auditing software at scale is mostly an unsolved problem even with full access to the source code.
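The first point above is ultimately statistical: a pool of independent drivers diversifies away most of the variance in aggregate losses, while a single shared software driver can fail for everyone at once. A toy simulation (with hypothetical fleet size, accident probability, and a deliberately extreme "one defect hits every car" correlation model) makes the gap concrete:

```python
import random
import statistics

random.seed(0)
N = 1_000        # cars in the fleet (hypothetical)
p = 0.01         # accident probability per car per period (hypothetical)
TRIALS = 2_000   # simulated periods

def independent_losses():
    # A thousand slightly different human drivers: each car
    # crashes (or not) independently of the others.
    return sum(random.random() < p for _ in range(N))

def correlated_losses():
    # One software driver, modeled at the extreme: a single
    # fleet-wide defect either hits every car or no car.
    return N if random.random() < p else 0

ind = [independent_losses() for _ in range(TRIALS)]
cor = [correlated_losses() for _ in range(TRIALS)]

# Both scenarios have the same expected loss (N * p = 10 claims),
# but the correlated fleet's losses are wildly more volatile.
print(f"independent: mean={statistics.mean(ind):.1f} stdev={statistics.stdev(ind):.1f}")
print(f"correlated:  mean={statistics.mean(cor):.1f} stdev={statistics.stdev(cor):.1f}")
```

Same expected claims, but the standard deviation of aggregate losses in the correlated case is an order of magnitude higher, which is exactly the kind of tail an insurer has to price and reserve for.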
This isn't an existential risk for the industry - any highly regulated business that profits from information and computation differentials is in a very good place to take advantage of the growth of AI technologies. But the business might not look quite the same.
The good news is that these are becoming civilization-wide issues, impacting security, social information flows, ethics, and much else, so there are enormous pressures (and huge opportunities) to figure them out.