A question in a talk I recently had with a group of investors prompted me to think again about the impact of AI on macroeconomics. Here, however, I want to focus on a specific set of prices: the price of brains. I'm talking about cognition in general — the partial fungibility of cognitive mechanisms is one of the long-term bases of civilization — but AIs, like computers before them, are mostly thought of as capital that substitutes for human brains, competing with wage labor and lowering wages.
I'd like to think about this in a more generic way, with AIs simply lowering the cost of (a certain but growing subset of) thinking — oversimplifying "cognition" as a single intermediate good. What follows is a set of quick notes rather than a definitive position, starting at the micro level:
For individual organizations, and particularly in economically nervous times, the reflex is to use cheaper intermediate goods to lower overall costs. In this context, that means automating tasks, or augmenting lower-wage workers so they can do tasks that usually require people with higher wages.
On the other hand, one peculiarity of cognition as an input is that you can embody a practically boundless "amount" of it in any given product or service. There's only so much plastic you can use to build a phone, but no matter how much engineering capability you pour into designing it, you're always going to wish you could have put in more. This is commonly accepted of "technology companies," but in fact there are few if any products or services that cannot be made better — by almost any metric you might care to use — by adding more thinking to them. So in parallel with the race-to-the-bottom strategy of using the lower cost of cognition to reduce production costs, there's a race-to-the-top strategy of using it to add more cognition to the process, getting a higher-quality product or service for the same cost. This often has an interesting side effect: put enough cognition into a product, and you often get a qualitatively new product category.
Companies can pursue either strategy, but they must pursue one of them: fail to implement AI, or do it as a token gesture, and you'll be eaten from below by more automated competitors and pushed down from above by those with much smarter products.
In any case, this is microeconomics. What about macroeconomics?
Intuitively, the first thing we should expect is a deflationary impact. Much as with energy, if you lower the cost of a key input, this should lead to lower prices in general. But there's also a large overlap between the cost of cognition and wage income, the more so in developed economies. Absent a commensurate increase in total demand, an increased supply of cognition leads to lower wages (AIs substitute for humans, there's no increase in demand for non-cognitively-intensive human work, and the displaced labor depresses wages elsewhere). This also has other structural impacts: e.g., the lower the returns to early education, the harder it is for lower-income families (or even lower-income states) to afford it, leading to lower educational levels overall, with both economic and sociopolitical effects.
The macroeconomic response is well known, if not always heeded: whenever you have a positive productivity shock, use fiscal policy to increase demand until you get full employment. In this context, the most direct question is: how do we increase total demand for cognition to maintain full brain employment, given a large and continuous positive supply shock?
One thing you can't do is simply forbid the use of AI (in this sense; regulatory concerns in terms of privacy, bias, etc., are independent of this analysis). It would have the same issues as forbidding, say, industrial robots: you keep comparatively high employment at comparatively low productivity, and ultimately productivity per capita is the ceiling, if not the floor, of consumption levels.
The most generic solution would be to use fiscal policy to support total demand across everything impacted by AI: if, say, it makes teaching cheaper, you use fiscal policy to raise your population's overall consumption of education services. If a single nurse can now treat more people thanks to AI assistants and autonomous devices, you increase access to healthcare so you keep up the demand for nurses (also: you pay them better). If you have fiscal space and any sort of potential productivity gap, in fact, you should be doing this anyway.
But there's another policy angle specific to the AI cognition supply shock: using regulatory means (and the direct and indirect purchasing influence of the State) to increase the cognitive intensity of products and services. Both of the usual approaches might work:
- Governments can increase the quality of the services they provide by continuously "embedding" more cognition in them. This doesn't mean firing people, but rather aiming at a quickly improving quality of service, soaking up cognitive capabilities, so to speak, to do so.
- Governments can also increase the cognitive intensity of private goods by a combination of regulation and direct support, even including subsidized access to AI infrastructure for SMEs.
It's worth making an observation here. Talking about "cognition" as a commodity is somewhat nonsensical (I can rant at length and with enthusiasm against the term "content"), and it can be especially confusing when talking about quantities. Once you have built an AI model to do something, spawning another instance of it is often almost trivially cheap. So when we talk about using "more cognition," it doesn't mean, or doesn't only mean, "in more places," but rather "more complex," "more specialized," or "more effective."
The porosity of national frontiers to software know-how makes developing economies especially vulnerable to this productivity shock: they can't do much about its timing, and they are more exposed to its negative effects (depressed wages due to the substitution of non-human for human cognitive labor, especially at the key intermediate educational levels) than able to benefit from its positive ones (increased wages due to higher productivity, in a context of high output sustained by high-cognition improved goods and fiscal and regulatory support).
We can summarize the difference between the micro- and macroeconomic impacts of AI this way: companies have the choice of maintaining their current cognition inputs and using AIs to lower costs (I believe this is a trap — you're just buying time until the next category-redefining high-cognition competitor wipes you out — but I understand the temptation) or of seeking competitive advantages through (unprecedentedly) high-cognition goods and services. Governments don't have the choice of being conservative, but in the absence of a well-known success case that can be used to justify (or even conceptualize) an ambitious agenda, I'm skeptical about their response. (I'm also underwhelmed by most companies' AI strategies, but ultimately what's creative destruction when a company gets it wrong becomes plain destruction when a government does.)