Organizational hollowing and the lethal 80% AI

There’s a way to frame the ongoing disasters at Boeing, Twitter, and newsrooms in general as coming from a quirk in the accounting of brains.

  • Deep expertise (in safety engineering, social network moderation, journalism, etc.) is difficult to measure: it lives in cultures and relationships as much as in HR systems and mission-vision-and-values statements.
  • It’s glaringly and obviously expensive.
  • When you replace experts and the way they do things with cheaper people and tools — directly or hidden behind software — things often seem to work quite well!

And then they don’t.

Expertise isn’t about what happens 80% or even 99% of the time. We live in societies with extraordinary intellectual resources (not the least of them the ability to just look things up on the Internet), so reasonably experienced people and more or less capable software can handle most of the challenges and opportunities organizations face on a regular basis.

Sooner or later, though, there always comes an unexpected opportunity or an unprecedented challenge: a huge attack against your social network, or a tricky but important story that requires an experienced journalist to research. If an organization has in-house expertise, it can handle them to its advantage. If it doesn’t, opportunities go unexploited and challenges go unmet (the former is a competitive drag; the latter might or might not be survivable). And expertise isn’t something you can buy or build quickly! Both Facebook and Apple have the money and the desire to build a VR/AR headset; only Apple has decades of deep expertise in that sort of hardware.

Still, the risks and temptations are obvious. Unprecedented challenges and unexpected opportunities are unavoidable but infrequent. If the people running an organization don’t have a clear view of what constitutes deep expertise in their field (which, by definition, can’t be measured during normal operations), or if their incentives are strongly aligned toward the short term, then the locally rational move is to do away with expertise.

  • Fire experts and/or remove apparently useless process crud.
  • Instant lower costs.
  • Nothing bad seems to happen right away?
  • Profit!

We are currently going through a layoffs season in tech; it’s unlikely that every organization is hollowing out its deep-expertise bench, but that’s almost always part of what happens. Experts are expensive and not obviously more productive than their almost-peers in a normal context. So even if you’d rather keep experts around, absent a clear understanding at the managerial level of what expertise actually looks like (and more people are adept at gaming ritualized evaluation systems than at probably any other skill in the business world), companies are likely to end up with a significantly poorer in-house expertise pool.

What makes this round interestingly more dangerous is new and easy access to “80% AIs.” We have AIs that make experts better, AIs that make non-experts better too, and AIs that can do on their own maybe 80% of what needs to be done. What we don’t have are AIs that can either replace experts or make non-experts capable of replacing them. But the narrative is that we do, so companies across multiple industries are either replacing expertise networks with AIs (or with AI-assisted, cheaper employees) or building competitors from scratch along the same lines.

  • Notice that the tech environment is bad.
  • Get rid of expensive people and systems that don’t seem to be especially productive.
  • Hire AI people/subscribe to AI systems/etc to replace them.
  • Things seem to more or less work?
  • Profit!
  • (Plus it feels cool.)

Sooner or later, though:

  • There’s a new market or tech opportunity that the AI doesn’t know what to do with and neither do the systems and people you left in place, so your competitors take advantage of it and you don’t.
  • There’s a weird huge problem out of left field, etc, etc.
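The “sooner or later” above is just compounding probability. A toy sketch (all numbers hypothetical, and treating critical tasks as independent, which real ones aren’t quite) of how an AI that covers 80% of critical situations fares as those situations accumulate:

```python
# Toy arithmetic, hypothetical numbers: if an AI can handle 80% of
# critical situations on its own, the chance it gets through n such
# situations without hitting one it can't handle is 0.8 ** n.
def survival_probability(coverage: float, n_tasks: int) -> float:
    """Probability that every one of n independent critical tasks
    falls inside the AI's coverage."""
    return coverage ** n_tasks

for n in (1, 5, 10, 20):
    p = survival_probability(0.8, n)
    print(f"{n:2d} critical tasks: {p:.1%} chance none lands outside the 80%")
```

With 20 independent critical tasks (a year or two of operation for many teams), the odds of never needing what the AI can’t do drop to roughly 1%.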

You can think of deep expertise as a combination of a bank’s reserve capital and an investment fund’s war chest. On their own, in the short term, they aren’t efficient: they’re not where you’d want to have resources parked. They only become useful, and sometimes an existential necessity, when the world goes sideways in unexpected ways… which is about every couple of years these days. The more cut-throat, competitive, and unpredictable you think your market is, the more you need that “inefficient” reserve of people and systems with more expertise/creativity/knowledge/resources than you’re using daily, because the nature of such a market implies that you will need them for a rare opportunity or an existential threat, and that when you do, it’ll be too late to build them up.
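The reserve analogy can be made concrete with a back-of-the-envelope expected-value comparison (all numbers below are hypothetical, purely for illustration):

```python
# Back-of-the-envelope sketch, hypothetical numbers: keeping deep
# expertise "parked" costs something every year, but it absorbs the
# rare shock; skipping it means eating the full loss when one lands.
def expected_annual_cost(reserve_cost: float,
                         shock_every_n_years: float,
                         shock_loss_without_reserve: float,
                         keep_reserve: bool) -> float:
    """Expected yearly cost with/without the reserve, modeling shocks
    as arriving once every shock_every_n_years on average."""
    shock_rate = 1.0 / shock_every_n_years
    if keep_reserve:
        return reserve_cost  # pay for experts; the shock gets handled
    return shock_rate * shock_loss_without_reserve  # gamble on no shock

# Say the expert bench costs 2 units/year, shocks land every ~3 years,
# and an unhandled shock costs 10 units.
with_reserve = expected_annual_cost(2.0, 3.0, 10.0, keep_reserve=True)
without = expected_annual_cost(2.0, 3.0, 10.0, keep_reserve=False)
print(f"with reserve: {with_reserve:.2f}/yr, without: {without:.2f}/yr")
```

Under these made-up numbers the “inefficient” reserve is the cheaper bet in expectation, and the gap widens as shocks get more frequent or more expensive, which is exactly the cut-throat-market case.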

“AI” isn’t a single thing. It can be used to amplify experts as much as to automate some routine activities; no organization can hope to become and remain competitive without a solid understanding of how to leverage these capabilities and the will to do it. But AI is not a replacement for in-house expertise: if anything, it makes that expertise more important than ever. Confusing the two is a very fun and easy way to have a good couple of quarters before crashing and burning, or simply failing to make the cut.