Paying for your own commodification

2024-06-06

Using AI in a corporate context is not without risks. It's common to discuss what happens when AIs work badly; it's perhaps even more important to consider what happens when they work as advertised.

The key economic characteristic of Large Language Models (LLMs) is that they are expensive to build but cheap and easy to use. If you have a business use case for them, then most of your competitors do too. In the short term they offer clear advantages in cost and scalability, but at the unavoidable strategic cost of removing your competitive advantage in whatever sort of cognitive work you outsourced to them. Having better prompts and libraries is a paper-thin alternative to the moat of superior expertise.

The obvious impact on competitiveness comes from outsourcing specific acts of thinking — essentially, any question you ask an LLM is also one that your competitors can ask — but this also erodes a company's meta-cognitive capabilities. It's already hard enough to build internal, proprietary, nuanced, valuable knowledge; doing it by piggybacking on somebody else's LLM is nearly impossible, even with reference documents and long contexts. Passing private data to an LLM together with a question is to base your advantage on the data, not on the ability to understand and use it. That data can be a competitive advantage is a contemporary commonplace, yet having more data and less expertise in using it is not a winning position against the alternative.

It's a matter of process but also of incentives. The more you use chatbots for customer support, the less you spend on learning about customer support interactions (don't think "metrics" - which everybody has and processes with the same tools - think of the domain-specific knowledge that separates an experienced high-end lawyer from a run-of-the-mill one). The more you use LLMs to write code, analyze your data, or draft C-level reports and analysis, the harder it becomes to build deep knowledge in those areas, and the harder it is to justify doing so in the short term.

This article by Dan Davies describes very well a more general version of this phenomenon in the context of regulatory bodies, itself a domain highly exposed to competitive pressures. Quoting him:

Talking in general terms, there’s what cyberneticians and information theorists call a problem of “transduction”. When something is placed outside organisational boundaries, that has an immediate and profound effect on the organisation’s ability to have knowledge about it. The information is no longer just there, it has to be collected via a conscious effort and decisions have to be taken about how much resources to spend on this, what to observe and how to format it. Effectively, even in the best case, when you privatise something you’ve put a massive information-reducing filter between the public sector bodies responsible for it, and the actual activity. And unfortunately the worst case is much more common – that would be the case where nobody recognised that there would be a problem of this kind, so everything is left to ignorance, the information processing system of last resort.

The focus of my article is on private companies, but it's also to be expected that the systematic large-scale use of AI in the public sector to lower costs will, if not carefully managed, accelerate this process of cognitive hollowing out (not always by accident).

Now, it's true that neither states nor companies need to be experts on everything. Cognitive resources are as finite as anything else, and if you don't need to know more than the contextual baseline about something, it doesn't make sense to learn more, much less to spend time, money, and people pushing the frontiers of knowledge in your context and field.

Yet every company operating in a reasonably open market needs to be an expert on something. There are other forms of competitive advantage, to be sure, but one of the few constants in business history is that none of them is invulnerable to somebody coming along who knows significantly more about what you're doing than you do.

AIs are necessary. Even LLMs are useful, as long as you understand the difference. But the critical meta-expertise to use them — not just effectively but without strategic self-sabotage — is understanding your own cognitive architecture: which forms of expertise and knowledge are competitive advantages and which aren't, and applying appropriate, often opposite, strategies to each. Building up and expanding competitive expertise in a world with AIs requires vastly different technologies and organizational structures than deploying generalized tools for cost-saving; it's not a matter of resources as much as of, well, expertise. (I wrote a bit more about this here.)

One reading of this article is as a skeptical warning: most companies are rushing to deploy AIs both where they are useful and where they aren't, and also in places where they are useful now but at the cost of existential risks to the company's competitive position.

It's also the description of a huge opportunity. AIs — as a general technology of cognitive enhancement — are not just technologies of expertise outsourcing but also of expertise creation and of the exponentially more valuable skill of knowing how to create expertise. The more companies give up on critical expertise, the more valuable cutting-edge proprietary expertise becomes to those that retain, enhance, and deploy it. Companies that outsource expertise — and therefore ensure they will be average at best — will find themselves at risk of the most dangerous thing that can happen to somebody in the twenty-first century: to be out-thought in their own field.