Unstarted Revolutions: How to think about what AI hasn't changed (yet)

2022-06-11

Behind the hype and the hope there are organized ways to think about why AI has or hasn't changed different industries, not just to diagnose past efforts but to try to understand what may come next. This is one such model.

Consider the relationship, in arbitrary units, between cost and cognitive power (both in the most generic sense, so this includes human cognition, institutional information management, mathematical models, and of course computer software):

For any level of resources you choose to invest (money, people, time, risk, political capital) you can buy or build a certain "amount" of cognitive power. This determines the profitability of different business models: you can't run something, at least past the cash-burning stage, that needs more thought than it can pay for. This is often an underrated problem, because we tend not to see the cognitive costs of things like large-scale project coordination, or even just data cleanup. Reaching far back for examples, every form of government, from city-states like Ur to sophisticated empires like Ming China to the contemporary nation-state, would have been impossible without its own ways of capturing and processing information.

There's another interesting factor that plays into this: an overall investment constraint that's exogenous for each context, depending mostly on macroeconomic factors (e.g. interest rates, the status of and possibilities open to sovereign wealth funds) and the cultural zeitgeist. Given the cognitive supply curve, the investment constraint determines the upper limit of cognitive demands you can sustain:
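To keep the rest of the argument easy to follow, here's one minimal way of writing the stylized model down; the symbols are just shorthand I'm introducing for this post, not anything more rigorous:

```latex
% A stylized cognitive supply curve: an investment I buys cognitive power P
P(I) = s \cdot I, \qquad 0 \le I \le \bar{I}
% The exogenous investment constraint \bar{I} caps the cognitive demands
% a context can sustain at
P_{\max} = s \cdot \bar{I}
```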

We can conceptualize Artificial Intelligence, or rather whatever we choose to call Artificial Intelligence in any given year, as an increase in the slope of this cognitive supply curve. There are two things that this doesn't mean:

Now, how does this change the frontier of possibility? At this level of abstraction there's a first obvious impact — everything is cheaper. In other words, building and running a system — in the sense of a combination of humans, computers, and processes — that can handle a given cognitive problem requires fewer resources (although the relative mix of resources can change). One could use this opportunity to lower costs and pocket the difference, but the availability of relatively cheap capital, and changes in cultural assumptions about what "the future" would look like, meant that lower unit (problem-instance) costs were leveraged into larger-scale operations.
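In the shorthand above, and only as arithmetic on the toy model: if AI raises the slope from s to some s' > s, the investment needed to handle a cognitive problem of fixed size falls in proportion.

```latex
% Cost side of a slope increase s -> s': the same problem P_0 needs less investment
I_{\text{needed}} = \frac{P_0}{s} \;\longrightarrow\; \frac{P_0}{s'} < \frac{P_0}{s}
```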

Counting intra- and inter-organization communication as forms of cognitive work, which they are, lets us frame in this way part of the drive behind the growth and success of companies like Google or Amazon: the unit cost of an Internet search or a remote purchase fell dramatically (if you invested enough), but instead of this increasing the operating margins of Yahoo! or Sears, it was leveraged into new and huge companies.

We can also use this model to conceptualize many contemporary startup models. Most apps, fintech platforms, and so on are predicated on the idea that current or expected drops in the cost of a process like transferring money, monitoring a fitness program, or running immersive 3D graphics will let them scale up to the point where their size allows them to lock in platform rents. It's not always as clear-cut a strategy as it's made out to be (it's harder for Uber to lock in people than for AWS to lock in companies), but it's a great place to be if you can get there.

But it's not the only new place where you can go! "Everything is cheaper" doesn't just mean that you can get the same cognitive work for less investment. It also means you can get more cognitive power for the same investment! This merits an exclamation point (there's always one inside my head when I say it) because, if you get close to your context's investment constraint frontier, you can now do things past your previous upper complexity bound.
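In the same shorthand, and again only as a sketch: the slope increase also lifts the ceiling itself, so every problem sitting between the old and new ceilings goes from infeasible to feasible for a context operating near its investment constraint.

```latex
% Frontier side of the same slope increase: the ceiling rises
P_{\max} = s' \cdot \bar{I} > s \cdot \bar{I}
% Problems with cognitive demands between s \bar{I} and s' \bar{I} were
% impossible at any investment this context could make; now they aren't.
```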

This isn't as popular a strategy as the other one, or even clearly described as a separate one, so it's worth re-emphasizing. It's not automation, "democratization" (unpacking the way this term is used in the tech space is... more than we can do here, but no less important), or platform-building; you don't measure it, generally speaking, by the size of your user base. Instead, it implies tackling problems that were literally impossible before given your context's investment constraint.

One field where this is the norm is gaming: except for the occasional, unpredictable indie hit, the usual strategy isn't pushing out games that are cheaper to develop and operate, and therefore more profitable at a given price. Instead, studios compete by staying at or near the investment frontier, and from there pushing the technical limits of what's possible at any given time. It's a financially risky strategy, of course, but whenever it works it works really well — and has led to a continuous technical improvement curve and no small societal impact.

Despite general optimism, areas like drug discovery or even education are mostly approached using the lower-cost/larger-scale strategy. Most EdTech projects attempt to leverage technology to automate things like grading, student surveillance, or dropout risk classification, but those are already things we know how to do with our current combination of people, technologies, and processes. The elephant in the room is that our understanding of learning is still extremely primitive: for a given set of resources (students' health and family context, time available to study, the teacher's attention budget, experience, and resources) we simply aren't teaching much more per student-year than we used to, and we don't have a sufficient understanding of the problem to undertake a sustained improvement program — there's no Moore's Law in schools. To rephrase what's a sensitive issue: we know how to make most existing educational systems better because most existing educational systems are understaffed, underfunded, and have to help students facing one form or another of hardship at home. Those are all things we know how to fix, if not how to put together the political will to do so. But knowing what makes something worse isn't the same as knowing how to push its limits forward.

Education is thus a process that lies somewhat above our current complexity constraint, but it's not unreachable. The sustained application of resources and advanced cognitive tools — if we took education not as something we know how to do that we want to make "more digital" or some such, but as an unsolved problem of the first order — will, I think, move it within the frontier of the complexity constraint. The impact of this would be hard to overstate: the introduction of universal schooling, even of the most "traditional" kind, in any society where it wasn't present before has always had transformative effects that put to shame those of things like the Internet or even computers themselves. An order-of-magnitude change in how well we can learn, although currently outside the frontier of what we understand, would be epochal.

But that's the civilization-level impact of a higher complexity constraint. What about those of us with not quite as many resources? Even in the capital-rich world of tech entrepreneurship, funding rounds and budgets are tighter than they have been in a while...

The stylized model is still of help here. A change in context, whether from rising interest rates or as we shift our frame of reference from Alphabet to a seed-money startup or an SME, of course shifts both the frontier of investment and the complexity frontier:
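In the toy notation, this is just a smaller cap: a tighter context scales both frontiers down together, while the slope change from AI applies all the same.

```latex
% A tighter investment context is a smaller cap \bar{I}_{\text{local}} < \bar{I}, so
P_{\max}^{\text{local}} = s \cdot \bar{I}_{\text{local}} < s \cdot \bar{I}
```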

However, an investment context is also a competitive context. Markets (and societies) often grade against the local curve: you can reap substantial benefits by out-competing those at your own level of size, resources, and so on, and use those benefits to expand your own context's frontier of investment.

In most cases the strategy, even among technologically inclined actors, is still to use the changed supply curve for cognitive power to scale up, or even just to increase margins by reducing employee headcount or the need for physical stores. These strategies are analogous to Lewis Carroll's Red Queen's race: as most competitors will make them — they are by now ingrained in our strategic culture — these are improvements you make to remain in the game, not to win it.

The higher risk/reward move is to keep close to your context's investment frontier and use AI to push through that context's complexity constraint:
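Rendered as a toy calculation (every number below is invented for illustration, and the linear supply curve is just the stylized one from above, not an empirical claim), the contrast between the two strategies looks something like this:

```python
# Toy rendering of the stylized model: investment buys cognitive power at a
# slope that AI steepens. All figures are arbitrary and purely illustrative.

def cognitive_power(investment: float, slope: float) -> float:
    """Cognitive power purchasable with a given investment."""
    return slope * investment

OLD_SLOPE, NEW_SLOPE = 1.0, 2.5   # AI as an increase in the supply curve's slope
INVESTMENT_CAP = 100.0            # exogenous investment constraint for this context
PROBLEM_SIZE = 80.0               # cognitive demand of the business as it runs today

# Strategy 1: same problem, lower cost (automate, widen margins, scale up).
old_cost = PROBLEM_SIZE / OLD_SLOPE
new_cost = PROBLEM_SIZE / NEW_SLOPE
print(f"cost of running the same problem: {old_cost:.0f} -> {new_cost:.0f}")

# Strategy 2: stay near the investment frontier and take on harder problems.
old_ceiling = cognitive_power(INVESTMENT_CAP, OLD_SLOPE)
new_ceiling = cognitive_power(INVESTMENT_CAP, NEW_SLOPE)
print(f"hardest tractable problem: {old_ceiling:.0f} -> {new_ceiling:.0f}")

# Anything with a cognitive demand between old_ceiling and new_ceiling was
# impossible in this context before the slope change, at any margin.
```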

This is a trivial application of the simplest of models, but it illustrates an option that's attempted much less frequently than the scale and visibility of its successes would warrant. Actors in contexts with tight investment constraints aren't unaware of the change in the cognitive supply curve, but there's an implicit assumption that, with low resources, the only way to take advantage of this change is to use it to lower costs: as I said, not an unreasonable move, but a conservative one.

Regardless of the context — of country, market, and size — every actor has the chance to leverage new cognitive technology to take on problems that until now weren't feasible in their context, instead of just automating existing activities. The nature of these problems, and the way AI and other technologies can be applied to them, will be highly specific to each actor, but the choice between the lower-cost and the higher-power strategies is one that's universally available but seldom exploited.

Post-credits scene
