The predictable disaster of "predicting" crime

2024-07-29

The Argentinean Ministerio de Seguridad just announced the creation of the Unidad de Inteligencia Artificial Aplicada a la Seguridad (UIAAS, the Applied Security AI Unit), with a chaotically mixed bag of functions, technologies, and goals, from "using robots to disarm bombs" to real-time camera footage analysis to the bizarre "identify and compare images in physical or virtual media."

The announcement mentions the USA, China, the UK, Israel, France, Singapore, and India as "pioneer countries in the use of AI in government and Security Forces." Leaving aside (not because it isn't the most important aspect) how many of those countries aren't democracies, and how many of those uses of AI break reasonable baselines for civil and human rights, a large majority of the claimed "uses for AI" that aren't just "this uses software, and if you call software AI it sounds cooler" simply don't work.

Independent research literature is filled with analyses showing that the stated goal of using automated learning algorithms to analyze historical crime data, predict future crimes, and thereby help prevent them is bunk. The concept, twenty-two years after Minority Report, is almost self-evident, but the practice is wrecked by the failure of in-house attempts and laughably self-fulfilling commercial offerings. Processing large volumes of data from diverse sources to extract useful information and create suspect profiles or identify links between different cases is a mixture of the obvious ("check whether this corpse matches a missing-person description elsewhere") and the dangerous over-promise of "create suspect profiles."
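The self-fulfilling part is easy to see in a toy simulation (all numbers here are hypothetical, chosen only to illustrate the mechanism): two districts with identical true crime rates, a historical record with a tiny initial bias, and a "predictive" system that sends patrols wherever past records show more crime. Since crime only gets recorded where police are looking, the initial bias locks in and the data appears to confirm the prediction.

```python
# Toy feedback-loop sketch: equal true crime rates, biased records.
TRUE_RATE = [0.5, 0.5]   # identical true crime rates in both districts
recorded = [51, 49]      # historical records start with a tiny bias
PATROLS = 100

for day in range(30):
    # "Predictive policing": patrol the district with more recorded crime.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Crime is only recorded where the patrols actually are.
    recorded[target] += round(PATROLS * TRUE_RATE[target])

print(recorded)  # → [1551, 49]: district 0 now looks 30x more criminal
```

The model never gets evidence against its own bias, because it controls where the evidence is collected; the historical data "validates" the prediction regardless of the underlying reality.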

There are one or two specific things mentioned that are both realistic and reasonable, but the package as a whole is neither. Empirically, there's nowhere in the world where this sort of Magic Security AI Wand isn't a mixture of overpriced "algorithms" that don't work and mass data capture that doesn't help prevent or solve the sort of crimes civil society cares about, but is definitely useful for the sort of surveillance governments do.

To somebody who has been working with AI since the AI winter of "it's never going to be useful for anything," the contemporary hype cycle of "it can do absolutely everything, don't think too hard about it" is, in a way, progress, but also a bubble that will burst, hopefully leaving behind useful advances and a basic foundation for its still underestimated long-term impacts. But the technologically unsupported use of AI in something as sensitive as security opens up the possibility of enormous harm (the least of which is the waste of limited resources), without even the questionable upside of it doing what it pretends to do.

Talking about analyzing historical data to predict crime is a red flag that an understanding of technological capabilities is being driven by hype and marketing; mentioning China and India as pioneers in the use of AI in security is a much different sort of red flag.