Time for a post-Musk public understanding of "the algorithm"

2023-02-14

Talking about "the algorithm" has always been a convenient error. Sci-fi precedents (and a certain psychological tendency to animism in the presence of sufficiently complex behavior) have naturalized a way of thinking where not only the actions of software systems and the hardware they control, but also the ethical responsibility for the consequences of those actions, are assigned to "the algorithm" or "the AI" as if they were independent agents. The extreme version of this is to assign personhood or at least self-awareness to something like ChatGPT, but even less radical positions lead to looking at, for example, casualties from self-driving software bugs as "lapses by the AI" rather than as the consequences of criminally undertested product engineering greedily released prematurely into under-regulated markets.

Every applied technologist, public policy analyst, and activist (and every corporation, deep inside) knows that an AI is a product, a service, something they build and run: in some senses more complex than a train, a microwave, or a pacemaker, but no less under their control and responsibility. Not wanting to invest enough money and time to properly safeguard your machine learning infrastructure doesn't make it an ethical actor. Journalists should know this as well, and most perhaps do, but headlines about what the AI did drive more traffic than headlines about corporate malfeasance.

Fortunately, Elon Musk's semi-accidental purchase and subsequent torching of Twitter is making the corporate- and individual-driven nature of all AI behavior impossible to ignore: Elon Musk didn't like that too few people were seeing his tweets, so he simply fired people until somebody changed the software so more people would. That was it. That's always it. AIs, algorithms, and platforms always do what the people in charge of the companies that build and run them want them to do, or what those people don't care enough about to spend the money and time making sure they don't.

Talking about racist facial recognition systems, sexist chatbots, or unethical self-driving cars can be useful in an anthropological or statistical sense, but it's a very dangerous framing in terms of politics, regulation, and public discourse. AIs are just software. Software is built and run by companies and people.

Next time you read somebody blaming an algorithm for anything, remember that there's always an Elon Musk who ordered somebody to make it do that (or at least didn't order them to make sure it wouldn't); the focus on AI ethics as the ethics of AIs as actors is a misdirection extremely convenient to those who own and benefit from them. It's up to everybody else to push back.
