AI, negative externalities, and using economics to stop the Office Apocalypse

Here’s a simple way to think about (some of) the costs and benefits of generative AI:

  • You have a new technology (an AI) that lets you create, much more cheaply, something valuable to you (content).
  • That technology sometimes has bad impacts (wrong facts, systemic racism, political disinformation, spam), but mostly on people who aren’t you.

Economists have a very handy term for this kind of pattern: negative externalities. They have also observed that anything that benefits the doer while passing costs to somebody else tends to be done a lot, so it’s a very common phenomenon. Besides being unimaginative and unsporting (not to say unethical), this is economically inefficient, and that inefficiency can sometimes spur action in the form of taxation or other cost-shifting mechanisms that make companies internalize the costs, and therefore empathize by being forced to wear at least one of the other person’s shoes.
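To make the arithmetic concrete, here’s a toy sketch of the externality and of the classic Pigouvian fix (every number here is invented purely for illustration):

```python
# Toy illustration of a negative externality (all numbers invented).
# The producer compares only their own benefit and cost; the harm
# imposed on everybody else never enters their decision.

producer_benefit = 10   # value the producer captures from one more piece of content
producer_cost = 2       # what it costs them to make it (very low, thanks to AI)
cost_to_others = 15     # time and attention burned by everyone downstream

private_net = producer_benefit - producer_cost                   # +8: they'll do it
social_net = producer_benefit - producer_cost - cost_to_others   # -7: society loses

print(f"private net value: {private_net}")  #  8 -> produced enthusiastically
print(f"social net value:  {social_net}")   # -7 -> shouldn't be produced at all

# A Pigouvian tax charges the producer for the harm they cause,
# so the private calculation matches the social one:
tax = cost_to_others
print(f"private net with tax: {private_net - tax}")  # -7 -> not produced
```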

So we can expect from basic economics that whoever

  • Benefits from producing and sharing content,
  • but doesn’t particularly suffer if it’s bland, repetitive, factually suspect, regurgitated pap, or actually harmful,
  • is going to produce and share a lot more of it.

If you own a content factory, this is great news (well, not really: this is a race to the bottom in more than one sense, but that’s a separate post). If you consume content in any way, shape, or form, you’re living downriver from a whole new set of large-scale toxic-dumping factories, and the EPA consists of two part-time students and somebody angling for a seat on a factory board. Congratulations.

But although most of the discussion usually revolves around what this means for social networks and other large-scale arenas (see the paragraph above for a technical description of the scenario), I don’t think it’s been sufficiently appreciated what easy access to this technology in office apps from Microsoft, Google, and such will mean for the already challenging informational environment inside companies.

As of this week, whoever

  • Thinks their career benefits from long emails that didn’t need to exist and even longer presentations that measurably lower everybody’s life force,
  • but isn’t concerned with, punished for, or even aware of (in other words, doesn’t bear the cost of) said emails and presentations being rephrased banalities, logical non sequiturs, or unsourced information more convenient than credible,
  • is going to do a lot more of it.

Now, if in your environment nobody writes anything that’s not of intrinsic value in its novelty, depth of analysis, or felicity of form, then these tools are of clear value: that’s the “automated first draft” theory of generative AI usefulness. But if you happen to work in one of those unfortunate, freakishly rare but sadly, if implausibly, real workplaces where the bull quotient of anything that crosses your inbox, Slack, or any other content-bearing contrivance can be almost arbitrarily high without the person committing it suffering any repercussions…

There’s going to be more. And that’s going to have a cost: when easy rewrites of commonplaces are faster to produce than proper analysis, you’ll have more of the former, you’ll spend more time reading them, and more of your decisions and your company’s decisions will be based on them. You’ll have to spend more time wading through “content” to get increasingly less information, and if your immediate intuition is to think “that’s ok, I’ll use AIs to summarize everything,” then, well, that’s both horrifyingly Kafkaesque in its elegance and very unlikely to be at all useful. “Nothing comes from nothing,” said Parmenides; Garbage In, Garbage Out is the less metaphysical version from computer science.

The key point is that this isn’t a technological problem. It’s not a bug in AI. It’s rather the unavoidable outcome of certain organizational environments: if the value of content (for the producer) is more elastic to quantity than to quality, and you have a new technology that makes low-quality content cheaper to produce, then you’re going to have much more low-quality content, and there isn’t much you can do about that.
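A minimal sketch of that dynamic, using an invented payoff function in which the producer’s reward grows linearly with quantity but only logarithmically with quality (the functional forms and numbers are assumptions, not data):

```python
# Toy model (functional forms and numbers invented): a producer with a
# fixed effort budget turns it into content. Their payoff is far more
# elastic to quantity than to quality, so when a new technology cuts
# the effort cost per unit, volume explodes while quality stays put.

import math

def units_produced(effort_budget: float, cost_per_unit: float) -> float:
    return effort_budget / cost_per_unit

def producer_payoff(quantity: float, quality: float) -> float:
    # Reward rises linearly with quantity but only logarithmically
    # with quality: quantity pays, quality barely does.
    return quantity * (1.0 + math.log(1.0 + quality))

EFFORT = 40.0
QUALITY = 1.0  # held fixed: the producer has no incentive to raise it

for label, cost_per_unit in [("pre-AI", 8.0), ("with generative AI", 0.5)]:
    quantity = units_produced(EFFORT, cost_per_unit)
    print(f"{label}: {quantity:.0f} units, payoff {producer_payoff(quantity, QUALITY):.1f}")

# pre-AI:             5 units, payoff 8.5
# with generative AI: 80 units, payoff 135.5
# Same quality, sixteen times the volume, and all of the extra
# reading cost lands on the audience.
```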

So how do you prevent generative AI from significantly worsening the quality of your organization’s decision-making? (Note that the same argument, and the same strategy, are also relevant whenever you get a drop in the time cost of making slides, graphs, presentations, etc.) Borrowing from economists’ approaches to other negative externality problems like pollution, one possible suggestion would be some sort of cap-and-trade (maybe without the “trading” part). Basically, the idea is that the cost for others of reading your emails, slides, etc., is mostly proportional to how many you make; capping them at some level would at least put an upper limit on this cost, one that will certainly be below what’s now going to be technically feasible for somebody with PowerPoint, a generative AI, and a captive audience for their two-hour Zoom presentation.
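To make the mechanism concrete, here’s a hypothetical sketch of what a monthly content cap could look like as code; the class, limits, and method names are all invented for illustration:

```python
# Hypothetical sketch of a "cap, no trade" scheme for corporate content:
# each person gets a monthly budget of words and slides, and anything
# over budget is simply rejected. All names and limits are invented.

from dataclasses import dataclass

@dataclass
class ContentBudget:
    max_words: int = 5_000   # per person, per month (arbitrary cap)
    max_slides: int = 30     # per person, per month (arbitrary cap)
    words_used: int = 0
    slides_used: int = 0

    def try_publish(self, words: int, slides: int = 0) -> bool:
        """Allow the content only if it fits in the remaining budget."""
        if (self.words_used + words > self.max_words
                or self.slides_used + slides > self.max_slides):
            return False  # over cap: cut it down, or make this one count
        self.words_used += words
        self.slides_used += slides
        return True

budget = ContentBudget()
print(budget.try_publish(words=1_200, slides=10))  # True: fits the budget
print(budget.try_publish(words=4_500))             # False: would exceed 5,000 words
```

The design choice the cap encodes is exactly the one in the paragraph above: since the audience’s reading cost scales with volume, not with merit, a hard ceiling on volume bounds that cost even when nobody can reliably judge merit.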

The optimal strategy, of course, would be not to cap corporate content but to simply judge it by how much it contributes to the collective intellectual activity that, in many senses, is the organization: seventy slides can be better than five if they are the right seventy slides. But we wouldn’t be in this predicament if companies found this easy to do, or even easy to tell when they’re not doing it right. In a complex situation like this, pricing based on external costs (at the limit, *capping* how many words and slides people can inflict on their colleagues per month) might not lead to the optimal outcome, but it’s easy to implement and, against the coming supply shock of bad content, necessary.