When devops involves monitoring for excess suicides

There is strong observational evidence that prolonged social network usage is correlated with depression and suicide — enough for companies like Facebook to deploy tools that attempt to predict and preempt possible cases of self-harm. But taken in isolation, these measures are akin to soda companies sponsoring bicycle races. For social networks, massive online games, and other business models predicated on algorithmic engagement maximization, the things that make them potentially dangerous to psychological health — the fostering and maintenance of compulsive behaviors, the systematic exposure to material engineered to be emotionally upsetting — are the very things that make them work as businesses.

Developers, and particularly those involved in advertising algorithms, content engineering, game design, and the like, play a role here ethically similar to that of, say, scientists designing new chemical sweeteners for a food company. It’s not enough for a new compound to have an addictive taste and be cheap to produce — it has to be safe, and it’s part of the scientist’s and the company’s responsibility to make sure it is. If algorithms can affect human behavior — and we know they do — and if they can do so in deleterious ways — and we also know this to be true — then developers have a responsibility to account for this possibility, not just as a theoretical concern, but as a metric to monitor as closely as possible.

Software development and monitoring practices are the sharp end of corporate values for technology companies. You can tell what a company really values by noting what will force an automated rollback of new code. For many companies this is some version of “blowing up”; for others it’s a performance regression; and for the most sophisticated, a negative change in a business metric. But any new deployment of, e.g., Facebook’s feed algorithms or content filtering tools has the potential to cause a huge amount of psychological and political distress, or worse. So their deployment tools have to automatically monitor and react not just to the impact of new code on metrics like resource usage, user interface latencies, or revenue per impression, but also to its impact on the psychological well-being of the users exposed to the newest version of the code.
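To make the idea concrete, here is a minimal sketch of what such a guardrail might look like: a canary analysis step that treats a well-being proxy exactly like latency or revenue, with the same authority to block a release. Every metric name, threshold, and number below is hypothetical, chosen only to illustrate the control flow, not to suggest what any real company measures.

```python
# Minimal sketch of a canary guardrail that treats a user well-being proxy as a
# first-order rollback criterion, alongside operational and business metrics.
# All metric names, thresholds, and numbers below are hypothetical.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Guardrail:
    metric: str                     # hypothetical metric key
    max_relative_regression: float  # allowed relative change in the "bad" direction
    higher_is_worse: bool = True    # latency or harm rates: up is bad; revenue: down is bad


GUARDRAILS: List[Guardrail] = [
    Guardrail("p95_latency_ms", 0.05),
    Guardrail("revenue_per_impression", 0.02, higher_is_worse=False),
    Guardrail("self_harm_report_rate", 0.00),          # zero tolerance for regression
    Guardrail("negative_affect_survey_score", 0.01),
]


def rollback_violations(canary: Dict[str, float], control: Dict[str, float]) -> List[str]:
    """Return a description of every guardrail the canary cohort violates."""
    violations = []
    for g in GUARDRAILS:
        base = control[g.metric]
        if base == 0:
            continue  # a real system would handle zero baselines explicitly
        delta = (canary[g.metric] - base) / base
        regression = delta if g.higher_is_worse else -delta
        if regression > g.max_relative_regression:
            violations.append(f"{g.metric}: {regression:+.1%} vs control")
    return violations


if __name__ == "__main__":
    # Fabricated cohort aggregates, purely to show the control flow.
    control = {"p95_latency_ms": 180.0, "revenue_per_impression": 0.012,
               "self_harm_report_rate": 0.0004, "negative_affect_survey_score": 2.1}
    canary = {"p95_latency_ms": 184.0, "revenue_per_impression": 0.013,
              "self_harm_report_rate": 0.0005, "negative_affect_survey_score": 2.1}

    problems = rollback_violations(canary, control)
    if problems:
        print("ROLL BACK:", "; ".join(problems))   # the well-being regression blocks the release
    else:
        print("Promote canary to full deployment.")
```

The point is not the specific proxies (measuring psychological harm well is its own hard problem) but that the check runs in the same pipeline, and can veto a release on the same footing, as any latency or revenue regression.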

I don’t know whether companies like Facebook treat those metrics as first-order inputs to software engineering decisions; perhaps they do, or are beginning to. The ethical argument for doing so is quite clear, and, if nothing else, it should be a natural first step in any goodwill PR campaign.