The politics of crime-fighting software

2018-07-19

Call it machine learning, Artificial Intelligence, or simply computational intelligence: countries are rushing to apply new technologies to combat crime, but how they do so (and even what counts as a crime) varies among them, and says much about their societies, priorities, and futures.

On one extreme of the arc of possibilities there's China. In a single-party state, the overriding focus of the Communist Party is to prevent and, if need be, manage popular dissatisfaction: to keep the "Mandate of Heaven", in traditional terms. China boasts a tradition of internal surveillance and bureaucratic record-keeping that's arguably much longer and more intense than that of Western cultures; despite the relatively small size of the Imperial bureaucracy compared with the territory and population it governed, the ambition (and partial success) of its data-intensive political tradition was rarely matched by the states of its time.

This tradition made the country a natural fit for a style of surveillance that mixes ubiquitous visible monitoring with an almost equally ubiquitous algorithmic approach to modulating behavior, one that influences everything from jaywalking to political criticism. It leverages a combination of tens (soon to be hundreds) of millions of cameras linked to facial recognition algorithms and central databases, social media monitoring, and detailed feedback mechanisms of reward and punishment that go beyond a crime/non-crime binary: witness the quantitatively graduated "social credit" schemes, as well as the immediate shaming mechanism of putting your name and face on large screens as soon as you commit a minor infraction. The Chinese government is deploying scalable computational intelligence in a way that mirrors its political traditions and contemporary goals; "crime", in this framework, is any potentially disruptive behavior, and therefore everything is not just up for surveillance, but also a legitimate locus of control.
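To make the contrast with a binary criminal/not-criminal model concrete, here is a purely hypothetical sketch of a graduated scoring loop; the event names, weights, and thresholds are invented for illustration and describe no real deployed system:

```python
# Hypothetical sketch of a graduated "social credit" feedback loop.
# Event names, weights, and thresholds are invented for illustration;
# they do not describe any real system.

SCORE_DELTAS = {
    "jaywalking": -5,            # minor infraction: small penalty
    "volunteer_work": +10,       # approved behavior: small reward
    "political_criticism": -50,  # disfavored speech: large penalty
}

def update_score(score: int, event: str) -> int:
    """Nudge a running score by the (invented) weight of an observed event."""
    return max(0, min(1000, score + SCORE_DELTAS.get(event, 0)))

def privileges(score: int) -> list[str]:
    """Map score bands to graduated rewards and punishments, not a binary verdict."""
    if score >= 800:
        return ["fast-track services", "loan discounts"]
    if score >= 500:
        return ["normal services"]
    return ["travel restrictions", "public shaming list"]

score = 700
for event in ("jaywalking", "volunteer_work", "political_criticism"):
    score = update_score(score, event)
print(score, privileges(score))  # 655 ['normal services']
```

The point of the sketch is structural: every observed act, however minor, moves a continuous score, and consequences attach to score bands rather than to a single guilty/innocent determination.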

A very different political tradition is that of the United States. In some senses it's one of extreme distrust of government capabilities; a paradigmatic case is the way the federal government's own gun-ownership records are forbidden, by law, from being stored in an easily searchable electronic database, a unique and symptomatic case of willful stupidity or, if you will, tactical ignorance. There are similar phenomena in the strong, and occasionally successful, support for reduced monitoring of global climate patterns, of vulnerabilities in voting machines, or of some forms of crime; clearly, the intensity and pattern of deployment of computational intelligence for law enforcement in the United States is at least partly modulated by a desire to keep some state capabilities strictly limited: legislation by software deficiency.

The exception is whatever can be construed as an issue of "national security", an extremely pliable notion at best. American political traditions give the US government carte blanche on matters of counter-terrorism, the military, and the like (including things like immigration and internal surveillance when construed as national security matters); this has led to a somewhat schizophrenic situation in which large and sophisticated internal data acquisition capabilities are used to pursue low-frequency terrorism events, while the more bureaucratic processes of regular law enforcement work on much more primitive principles.

It's not, it must be noted, a matter of rejecting automation as such. Speed cameras, the first "robot cops" in human history, were quickly and enthusiastically adopted by US police forces, sometimes becoming an important component of their budgets. On the other hand, databases recording civil asset forfeitures are notoriously primitive and fragile, even in police forces with budgets and equipment matching those of some national militaries.

Tactical heterogeneity, then, not only in China and the United States but across both the developed and the developing worlds, is the defining pattern of deployment of computational intelligence in law enforcement; as a new technology with somewhat nebulous possibilities, its usage reflects cultural expectations and para-legal strategies as much as it does technical concerns. Governments in the developing world are generally more constrained by budgets, usually deploying computational intelligence either against politically unprotected taxpaying sectors or in matters of internal political security, rather than in more mundane law enforcement activities; but this also reflects long-standing (and politically self-sustaining) patterns of investment and sub-investment as much as it does stringent budgetary constraints.

This is nowhere clearer than in the most speculative and science-fictional aspects of computational law enforcement. In the US context this is often framed as the Minority Report-style prediction of future crimes: the idea that enough data capture and analytical power will make it possible to predict, and therefore interdict, high-profile crimes without, paradoxically, having to deal with the contextual conditions shaping their frequency and scale. This is a concept with roots in intelligence analysis, made infinitely more salient by the 9/11 attacks, and it contrasts with the dual imagery of ubiquitous behavior control (reinforcement learning, rather than predictive algorithms) in Chinese approaches to computational law enforcement.
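The two paradigms can be caricatured in a few lines of code. The sketch below is a toy with entirely invented numbers, not a description of any deployed system: the first function stands in for a predict-and-interdict model, the second for the running feedback signal that reinforcement learning formalizes:

```python
# Toy contrast between the two paradigms; all numbers are invented.

# Paradigm 1: prediction. Estimate future risk from past features once,
# then decide whether to interdict (the Minority Report logic).
def predicted_risk(prior_incidents: int, watchlist_hits: int) -> float:
    # An invented linear score standing in for a trained model.
    return min(1.0, 0.1 * prior_incidents + 0.3 * watchlist_hits)

interdict = predicted_risk(prior_incidents=2, watchlist_hits=1) > 0.4  # True

# Paradigm 2: behavior control. No one-off verdict; every observed action
# feeds back into a running signal that nudges future behavior, the basic
# shape of a reinforcement-learning reward loop.
def feedback(signal: float, reward: float, rate: float = 0.1) -> float:
    return signal + rate * (reward - signal)  # exponential moving update

signal = 0.0
for reward in (-1.0, 1.0, -1.0):  # punish, reward, punish
    signal = feedback(signal, reward)
```

One paradigm culminates in a discrete decision about a person; the other never concludes at all, continuously reshaping incentives.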

A third paradigm, tied not to specific governments but to global civil society, is that of data journalism. In this model, leaked databases or open data repositories are mined by assemblages of journalists, domain experts, and data scientists, with the goal of finding, clarifying, and exposing malfeasance. It's not, needless to say, a replacement for law enforcement (it's a variant of journalism rather than of policing), but it illustrates how computational intelligence has a potentially ubiquitous role in law enforcement activities, even when hampered by the non-governmental nature of the actors deploying it.
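As an illustration of the kind of pattern such assemblages hunt for, here is a minimal sketch; the file name, column names, and the 10,000 threshold are assumptions chosen for the example (the threshold echoes common cash-reporting rules), not details from any actual leak:

```python
import csv
from collections import Counter

# Minimal data-journalism sketch: flag payers with repeated transactions
# just under a reporting threshold (a classic "structuring" pattern).
# File name, column names, and threshold are invented for illustration.

THRESHOLD = 10_000
MARGIN = 500

def flag_structuring(path: str, min_hits: int = 3) -> Counter:
    near_threshold = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'payer' and 'amount' columns
            amount = float(row["amount"])
            if THRESHOLD - MARGIN <= amount < THRESHOLD:
                near_threshold[row["payer"]] += 1
    # Payers with repeated just-under-threshold payments merit a closer look.
    return Counter({p: n for p, n in near_threshold.items() if n >= min_hits})

# Hypothetical usage: flag_structuring("leaked_payments.csv")
```

The output is not an accusation but a lead: the journalistic work of verification and context begins where the script ends.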

William Gibson once noted, in a pithy observation likely to echo down the decades as ever more appropriate, that the future is already here, just not evenly distributed. This applies very well to the use of computational intelligence for law enforcement in particular, and to state capabilities in general: few countries are too poor to use it at all, and none, no matter how rich, uses it everywhere. The intensity and pattern of its deployment mirror and amplify local patterns of investment (and strategic sub-investment) in cognitive state capabilities. How could they not?

We're more likely to see the direction of influence reverse over the very long term; technologies of information processing do change cultures, including concepts of what states are, can do, and should do, but not right away. We have yet to learn what computers will do to the law, its enforcement, and its breaking.

(Based on an interview with Radio DelSol.)