Fixing biases in decision-making won't get you far enough

2024-07-17

A lot of theoretical analysis and practical advice on decision-making focuses on a long list of cognitive biases. It's not wrong: we do suffer from those biases, and they do have significant impacts on the outcomes of our activities. But focusing on tricks and techniques to mitigate them can be a dead end in a universe that's far more complex than can be handled through even de-biased human cognition.

A clear example of this is poker. I'm not talking about our cognitive biases regarding randomness, bluffing, etc. Instead, the intrinsic structure of poker is enormously complex in rather surprising ways. The ranking of two-card starting hands at the early stage of a Texas Hold'em game, the most basic "objective," "unbiased" tool for rational play, is far from a simple concept.

From the paper The Topology of Poker:

Knowing the winning probabilities of a hand against another one is fundamental to any poker strategy, and are at the heart of von Neumann’s analysis of poker. What we argue, however, is that winning probabilities give at best partial information on the current game state, and sometimes are meaningless.

[...]

Our main result is that the homotopy type of Texas Hold’m is quite rich; in other words, there are intricate card configurations in which every player could be winning against another one; thus even the concept of “bluff” needs to be revisited since it is impossible to define, at some moments, who is bluffing against whom. [bold emphasis mine]

Human intuition fails here in ways that go beyond any simple cognitive bias. For a simpler example, look at intransitive dice: sets of at least three dice where for every die there's another one that has a better-than-even chance of winning against it. (John Law, who first introduced the concept of paper money in France for a brief and not happily ended moment, later moved to Venice and lived off gambling by leveraging this unintuitive idea.) In fact, depending on some technical details, you can prove that most sets of random dice are intransitive. These aren't weird mathematical exceptions; the universe is closer to a huge game of rock, paper, scissors than a utility-expectation maximizer would prefer.
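To make the cycle concrete, here's a minimal sketch in Python (my illustration, not from the original post), using one classic intransitive set of dice. Enumerating every face pairing shows that A beats B, B beats C, and C beats A, each with probability 5/9:

```python
from itertools import product
from fractions import Fraction

# A classic intransitive set: each die beats the next one in the cycle.
A = (2, 2, 4, 4, 9, 9)
B = (1, 1, 6, 6, 8, 8)
C = (3, 3, 5, 5, 7, 7)

def win_probability(x, y):
    """Probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

print(win_probability(A, B))  # 5/9: A is favored over B
print(win_probability(B, C))  # 5/9: B is favored over C
print(win_probability(C, A))  # 5/9: C is favored over A, closing the cycle
```

There is no "best" die here: whichever one you pick, your opponent can pick another that is favored against it.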

Back to poker: you'll have noticed that they mention the topological term homotopy type. Their analysis uses the technical machinery of topology in a way that's relatively straightforward to follow for a professional mathematician, or even for somebody just familiar with the mathematics, but it is a topological analysis. And the result, however I might have described its consequences, is that poker is complex enough that you can only see it as it is with a conceptual toolset at least as complex as that.

And this is just a relatively simple game you can learn to play very quickly (although badly enough to lose a lot of money if you aren't aware that you aren't yet good at it). Even the most basic biological subsystems, or our most complex engineering achievements, are immensely more complex, in ways that natural language, intuitively understandable decision-making tools, and the cognitive capabilities we use to process and understand them are intrinsically unable to handle. Just as you can't do quantum mechanics in anything but the relevant abstract mathematics (whatever intuition physicists have about it is closer to intuition about the mathematics itself), it's likely that we can't do theoretical and applied biology without sufficiently complex theoretical languages that can't really be mapped to intuitively manageable "this does this because of this" explanations, and the same probably applies to complex problems in management, strategy, engineering, etc.

Experts in every domain are deeply aware of this: a reasonable definition of expertise in, e.g., circuit design is that you don't work with a variation of natural language and sensory intuitions, but rather with specialized abstract languages and processes, because what we can comfortably and intuitively talk and think about is a very narrow part of the universe as a whole. In a way, jargon isn't the mark of an expert but rather of an expert trying to simplify what they do for a non-expert (or of a non-expert trying to pass as one): experts don't have their own words, they have their own entire languages.

Much of this gets lost, though, as expert analysis flows to decision-makers and the public (who also count as decision-makers in their own spheres and in public politics). The unspoken assumption when making a presentation or writing about a topic is that it can be explained without any technical conceptual machinery in a way that might lose detail but carries enough information to make a valid decision. Sometimes this is the case; sometimes it's not. And different decision-makers do have different technical training: the leader of a quantitative hedge fund is likely to have, and is assumed to have, a technical understanding of the relevant mathematics, computer science, and finance.

Yet the overall landscape is one in which most companies (and, I believe, most decision-making roles in society) receive inputs that are constrained to natural language and very simple quantitative analysis, far less rich than the systems they are trying to describe and the questions they are trying to answer. Our decisions are shaped not only by the biases with which we process this information, but also, and more profoundly, by the languages we use to talk, write, and think about the world.

This has always been the case, and it always will be. Both theoretical and applied science advance in part through the creation of new languages, and wider organizational applications lag behind but always tend, slowly and in limited ways, to adopt some of them. Emphasis on slowly and in limited ways. The default assumption when looking at any decision-making system, whether to improve it or to beat it, should be not just that it has biases but also that the conceptual frameworks and languages it uses to think about the world are probably more intuitive and less effective than they could be.