AIs are scapegoats and it’s us who end up bleeding

No AI built today has anything remotely approaching consciousness or personhood; the unstable mess of a Windows operating system is structurally more life-like than any of them. So why the increasing focus on robot ethics?

Imagine you’re a big tech company or an advanced military. Your reputation as an ethical agent is rather awful, and with good reason. When you aren’t actively doing shady things, you are playing fast and loose with the human consequences of whatever else you’re doing. After all, “move fast and break things” is both the business model and the tactical doctrine. You aren’t planning to stop — only losers pay attention to negative externalities — but it would be nice to shift the discussion as much as possible away from anything that might get you in real trouble.

Enter the AIs. Besides their undeniable (and still mostly underexploited) usefulness, they do things that people tend to feel only people — in a metaphysically loose sense — can do: play chess, aim guns, drive cars. We have decades of intense cultural discussions about good and evil robots, and thousands of years of speculation about good and evil non-human entities in general. Take a program that can beat any human at Go, call it an Artificial Intelligence, and the discussion will quickly center on how to make sure the next version doesn’t decide to take over the world and kill us all, which is as likely as the photo app in a phone falling in love with somebody and deliberately making their boyfriend look uglier.

It’s not that software sufficiently complex and independent to become a moral agent can’t or won’t be built; I’m not sure either way, and it would certainly be such an unprecedented threshold event that it merits research into its engineering, ethical, and philosophical aspects. But what we are building now are, at best, disjoint bundles of superhumanly complex reflexes, not minds in anything but the most abstract mathematical sense.

The reason, deliberate or not, why so many companies, militaries, and international organizations are focusing so much on the problem of ethical robots is that robots make fascinating and psychologically credible patsies. It’s a PR miracle: Cars hitting pedestrians aren’t cases of under-tested engineering rushed to market to compete in a hundred-billion-dollar industry. Automated weapons killing civilians aren’t a deliberate sidestepping of basic rules of engagement. Systematically biased law enforcement software isn’t built by companies and agencies ignorant of, indifferent to, or just complicit in structural inequalities and abuse.

No. They are ethical lapses from robots gone evil, racist algorithms, computers with ill-intent. Failures in the difficult and philosophically complex moral engineering for which their builders (or “creators”, which sounds more metaphysically evocative) deserve some, but not all, of the blame. It’s money laundering for blood, done through code rather than shell companies.

To be fair, most philosophers and engineers working on AI ethics don’t think in terms of “guilty” robots, and are simply working on the legitimate and difficult problem of making sure AIs are used ethically. But the political and cultural framing of the issue is certainly biased in a direction closer to sci-fi animism than to product liability laws. The Trolley Problem has become the ethical equivalent of Schrödinger’s Cat, the field’s thought experiment everybody is most familiar with, while most deaths related to self-driving cars are simply the result of humans making very mundane and questionable choices about investment in product testing and safety features. How would an autonomous weapon ever kill a civilian, if not because the military that deployed it was either careless or uncaring in its design and testing?

Might as well blame a poorly built bridge for falling and killing people. It’s of course easier for us to assign intentions to something that moves on its own, plays a game, or talks than to a bridge, but that’s a psychological illusion, not a defensible ethical assessment.

As Ted Chiang famously noted, Silicon Valley — and its global metastasis in politics and finance — projects its own philosophical foundations and instincts onto the technology it builds. They expect software to do what they would, which is a terrifying idea if you know as well as they do what that is. But we’re also falling for our own form of projection, tempted by generations of sci-fi into wondering about rogue robot soldiers instead of unaccountable military deployments of poorly tested machines. “Autonomous,” for machines, is an engineering term, not a philosophical one. The confusion is a convenient one, though, if you want to be able to disavow their actions.

To think of hardware as somehow haunted by its own intent is an aesthetically fascinating exercise, as well as a good way to remind yourself that any object with a computer inside enacts, if often badly, hidden agendas belonging to multiple, at best partially aligned, actors: the companies that built or operate it, the law enforcement agencies piggybacking on them, the different hackers who have gained access to either the object or the company, and so on. But a metaphor isn’t a valid guide for ethical (or, for that matter, legal or political) evaluations. People program machines, even if we call those programs AIs and the programming is driven by data sets as much as by design. And other people order them to do it, pay for it, and profit from it.

People, ultimately, decide. This isn’t a normative statement — “people should always decide” — but rather an empirical description of how things currently are: people do decide, and sometimes that decision is to not invest enough time and money in making sure their products and weapons do no harm. Under any reasonable legal or ethical system, the blame lies not with the machine but with them.