Incentives matter. Technologies don't get themselves adopted: particular people in their separate roles make concrete decisions at different times, guided by their own individual incentives. It's not enough to consider the technology itself. To understand why institutions adopt LLMs, and what the short- and long-term consequences might be, we need to think in terms of how they fit with their existing strategies and incentives.
As LLMs are evaluated not just as assistants but also as potential replacements for humans, we need a common way to talk about both (leaving aside the ethical and social consequences of taking this fungibility too literally). Oversimplifying a bit less than most analyses, we can think of an institution (most commonly in this context a business) as a set of assets providing two different things:
- Fluency: the ability to communicate and convince internally and externally using natural language, videos, etc.
- Expertise: the ability to understand the world in a reliable and effective way.
Different roles have different requirements: actors run on fluency, but you'd better hope whoever designed your car's brakes had expertise. An engineer who's not fluent will give a bad presentation for a good product; a charismatic speaker who's a bad engineer might get you to buy a dangerous one.
LLMs are uncannily fluent, but they lack expertise. This is a controversial statement in some forums. In fact the mainstream opinion among investors, managers, and the general public, although slowly changing, is that LLMs are at worst experts who need some light oversight, and at best on their way to far surpassing the human limits of expertise.
A full discussion of why I and others consider this wrong is beyond the scope of this article. I'll just note that far more people believe that LLMs can do somebody else's job than their own: therapists wince at their therapeutic advice, journalists know their articles are trash, developers might use them to draft code they wouldn't trust to compile. VCs say they can do every single human activity, intellectual and artistic, except venture capital...
The real capabilities of LLMs determine (Keanu Reeves' voice) consequences, but at this still relatively early stage of the game there isn't a consensus on them, so to understand varying adoption decisions we need to think about varying beliefs.
Like the people in them, companies take different market roles (strategies or bets) requiring different levels of fluency and expertise. It's rarely exclusively one or the other: a nuclear engineering firm still needs to send emails. But it's a bet on expertise, one that gets more valuable if the firm is even slightly better at nuclear engineering, not really if it sends slightly better emails. If a company's bet is on creating a lot of content, but the content of its content is irrelevant, then it's a fluency play.
Now let there be LLMs: cheap fluency at unimaginable scale (at least while VC money keeps pouring in). Who does what and what happens then?
Short term, the most important facts are:
- Most investors, media, managers, etc., believe LLMs give you more and cheaper fluency and expertise.
- Announcing you're using them and maybe playing around a bit with one is essentially free.
So the rational move for almost everybody, regardless of whether LLMs work or even of whether you think they work, is to announce you're using them and then play around with them. And that's what we're seeing! Their remarkable speed of adoption speaks to the low friction of a technology built around natural language, but it also means that adoption runs ahead of proof.
Some technology cycles are based on the reasoning "these companies made a ton of money doing this, so I'll do it too, or at least say I do it." But nobody has yet made money adopting LLMs; everybody's adopting them because everybody's saying LLMs will make money for them. And that's rational, short term, because adopting them, or rather saying that you do, does get you money, or at least attention.
But the marginal value of doing what everybody's doing goes to zero quite quickly, and for all the speed and pseudo-religious overtones of LLMs, adoption-as-signalling is a familiar path. Incentives matter. As the value of adoption-as-signalling fades, company-specific factors will become more relevant.
Different types of companies will end up at different places:
- Fluency-driven companies will find LLMs a good complement to humans, sometimes even a substitute for them. Top-down and bottom-up, investors, managers, and workers will use them as much as they can, because, for this, LLMs work.
- Most expertise-driven companies won't. In fact, most expertise-driven companies will not even try. Yes, they are all announcing LLM adoption during this phase of the cycle (it'd be irrational not to), but every organization built around expertise has by necessity developed good sensors for it, and experts in every domain tend to balk at LLMs as soon as they test them on their own specialty.
These are extremes: some domain experts find coding tools useful, and LLM adoption might not be zero anywhere. But that's patchy adoption as secondary tools, not a mass replacement of human expertise with software.
Anecdotally, all the people I've known who have become truly more productive using Copilot-like tools (in a real sense, not just blindly generating code that's dangerous to their employers and their careers) are already deeply knowledgeable about the area they are writing the code for. These tools assist expensive experts; they don't replace them.
Back to who adopts and who doesn't: here's where it gets interesting. Company-level adoption patterns are driven by what companies *are*, not by what they *say* or *believe* they are. Incentives matter, but the incentives that matter aren't companies' explicit ones (what's on their websites and their 10-Ks); they're the concrete incentives of the people inside them making those choices.
Take Boeing as a prototypical example. Over time it never changed its industry, market, or even its name, but as its leadership shifted towards people with different professional and financial incentives (different ways in which they expected to earn riches and the accolades of their peers, different sets of peers), those people's decisions changed, and so did the company's choices.
There's a strong analogy between the hollowing out wrought by the worst forms of private equity and the infectious potential of LLMs: companies whose leadership and culture, knowingly or not, put more weight on fluency than on expertise, or simply cannot reliably tell them apart (and this can and does happen in every industry, at every size), will find LLMs successful and adopt them as readily in an expertise-coded industry as in a fluency-coded one.
Modeling what will happen next with LLM adoption is difficult because there are multiple sources of heterogeneity:
- Different industries, segments, and markets learn the difference between fluency and expertise at different rates. Submitting an LLM-generated chemical engineering paper to a paper mill has different implications, and reveals them at different speeds, than using an LLM to control an industrial plant.
- LLMs create new, marginally viable, ultra-high-fluency/ultra-low-expertise niches in existing industries, like the AI-only online publication.
- But there's also heterogeneity among peer companies in direct competition in the same markets: there are often slight differences in internal culture (a higher or lower commitment to expertise over fluency, or vice versa) that are as much a result of the accidents of history and recruitment as of deliberate strategy. LLMs magnify those differences by amplifying the impact of a predilection for fluency: any company in which slop isn't preferred but is marginally tolerated becomes one in which slop is prevalent.
LLMs might not carry expertise, but wherever expertise is not a sine qua non they are economically unbeatable. So much so that they induce a pressure to eliminate expertise as a requirement. These two facts delineate their niche and describe their danger.
Two codas for the longer term:
The current levels of investment in LLMs only make sense if they provide, or can provide, expertise. The demand for scalable fluency is great, but right now OpenAI's market is about USD 5 billion at below-cost prices. Functional improvements are bought at huge cost, sometimes edging towards increased fluency but no longer advancing towards expertise. Your assumptions might vary; given the ones I'm working under, at some point the lack of knowledgeable LLMs will outlast the wallets and optimism of the technology world. This won't kill LLMs: we now know how to build and run them, and there are use cases for them. But they will settle down onto the rest of the stack of mundane technologies somebody once bet would change the world. (A minor prediction: children growing up today will make fun of the fact that so many people now believe LLMs are even somewhat conscious. Familiarity breeds contempt, and sometimes it's deserved.)
Yet LLMs aren't all of AI, or even most of it. Under one label or another, pushing the frontier of computational tools to the previously impossible is one of the strongest and most reliable vectors of social and economic growth, from the first clay tablet to whatever comes next week. Most importantly, the fluency-only nature of LLMs is not universal to AIs: any half-competent chess engine will be linguistically inept, but it carries chess-playing expertise deeper and more certain than anything an LLM has about anything at all, except, tautologically, the statistical patterns of language itself.
Expertise-first AIs exist today, with better ones every day and open fields barely explored; the dominance of LLMs is a matter of culture and funding, not of engineering possibilities. With differences in publicity and scale that might change over time, companies are adopting not just fluency-focused technologies but also expertise-carrying ones. LLMs are influential and their net value questionable, but they aren't the whole story, even today. Too much focus on this single thread leaves us with a bad movie's oversimplified future (one where LLMs are everywhere or nowhere, but where LLMs are AI) and underestimates the degree of choice we have, not just on whether to use LLMs but on whether to use AI, what sorts, for what, and for whom.
(Originally posted at the IEET's Substack.)