Beyond Kahneman - How to think very slow

(Based on an Instituto Baikal talk. Spanish version here.)

The central simplification of Daniel Kahneman's Thinking, Fast and Slow (itself a simplified popularization of a complex and still evolving body of research) is that we have two fairly separate systems we think with:

  • System 1, responsible for our off-the-cuff, instinctive, fast responses, whether natural or trained, like navigating a set of presentation slides.
  • System 2, in charge of our more conscious, deliberate, effortful actions, like writing an important email.

This works well — we are fast when we need to be, and flexible when we can afford to be — although it can lead to trouble when we use the wrong system and reply to an important email with the first thing that comes to mind.

A bigger issue is that the world comes in more than two speeds.

Problems: fast, slow, slower, and even slower

Many of the most important problems are both too slow and too complex even for System 2, from running a project to handling a public health crisis. Even something like a two-hour meeting stretches our brain's capabilities for sustained abstract thought; professional chess players have to train specifically for the physical demands of a tournament. Fortunately, we have a large set of cognitive tools to help us with this, from cuneiform tablets to scientific software processing petabytes of data.

But existence doesn't imply widespread use, and the default in our organizations and our lives is still Systems 1 and 2. Even a modern government or business is run essentially by meetings, emails, and (with any luck, digital) documents. There might be pockets of sophisticated data analysis and comprehensive organizational strategies, but, if you look closely at what's happening, it's mostly people talking with each other, reading or writing documents, and essentially making guesses and decisions using System 2 (in theory almost always) and System 1 (in practice quite often).

  • When creating a document or a spreadsheet, software is in control of nearly-instantaneous activities like spell-checking or doing a mathematical calculation, but the overall process of design, analysis, composition, and editing is still driven by our training, habits, and how well we slept last night.
  • There are decades of research about most industries and processes, and a company can have vast amounts of expensively collected data, yet decisions at every level will often be made by whatever could fit in the brains of six people sitting around a table or in a conference call.
  • Even worse, an institutional strategy can be carefully crafted by rigorous and sophisticated analysis, and yet be implemented as a set of emails, memos, guides, presentations, and other documents, ultimately processed, filtered, and executed through people's System 2, competing with all the knowledge, instructions, problems, habits, and instincts they already had.

Sometimes we are dumb because our brains are smart

Thinking deeply is slow and metabolically expensive, so our brains generally try to do the simplest and most enjoyable thing that will let us get away with whatever we are doing. That's often a mixture of System 1 and System 2 that you could call "System 1.5": that easy place where you are thinking about something (or, more often, talking about it with somebody) but not with the strict rules and focus that would be involved if you were writing a detailed technical paper about it for a skeptical audience. It's not just that the former is easier, it's also (for most people most of the time) more enjoyable.

Even when we make a conscious institutional decision to shift to a more "data-driven" (to use a contemporary phrase) organization, usually it doesn't go beyond good intentions, some scattered tools and databases, and more graphs in documents. Not because people are dumb, but because we are smart: in every meeting, email, report, or small workday decision, it takes a lot less time and energy to go the "System 1.5" way than the more complex one that's theoretically required, and the rest of the organization will be happy to accept it. Without the right infrastructure it's faster and cheaper to do it the semi-intuitive way, and who can argue with faster and cheaper?

With the right infrastructure it's faster and cheaper to do it the non-intuitive way, and infrastructure is much cheaper to set up than most people assume, but organizations (and even more so organizational cultures) are well-adapted to the information technology of their time, and have trouble adjusting to new ones. You can write and share dozens of reports on your intranet using a very expensive laptop before wondering why you're filtering data everybody who needs it could have real-time access to, through exactly the same format Philip II used to communicate with his Viceroys in New Spain in the sixteenth century.

This isn't to say we can't think well. In fact, we do it quite often. There are aspects of every contemporary organization, and even our individual activities, that are orders of magnitude more powerful, cognitively speaking, than anything in our past.

The question becomes: how do we do it when we are doing it well?

How we think when we think better than we can think

I won't postulate a One True Way To Think Beyond Systems 1 and 2, but there are a couple of patterns I think are common whenever individuals and organizations do think beyond them.

One is taking flexibility away. Not in the sense of setting up strict rules and guidelines about how you want your organization (or yourself) to think, but in the more basic sense of setting things up so your every possible action at the System 1 and 2 time scales is effective. This becomes very clear with two examples, both among the most brilliant tools for expanding the human brain: mathematics and the bureaucratic form. They are both conceptual machines that put tight limits on what you can do at each step, but in compensation you can prove theorems or collect information from people in an extremely organized and usable way (nobody likes forms, but you can't run a remotely viable state without them, at least not before computers, and even now forms are one of the main interfaces between people and organizations).

Taking flexibility away for its own sake is of course nonsense, although very common nonsense. It has to be done in a way that links the time scales of Systems 1 and 2 with effective tools at larger time scales, like the long-term project, the research plan, or even a single complex decision. We have powerful and well-known tools to handle problems at those levels of complexity. They fail not so much because of their limitations, but because our actual very-short-term actions aren't compatible with them: the most sophisticated data analysis algorithm has limited value if the final decision is made while talking over slides.

When we succeed at making our System 1 and 2 actions work consistently with the larger systems (think of gears of different sizes all working together) we have in essence saturated time: as humans, we can only think and operate in our own limited range of time scales, but the emergent result of our actions is a bigger system able to deal with arbitrarily complex problems.

None of this is new. I already mentioned cuneiform tablets, and the concept of linking Systems 1 and 2 with more complex processes is very similar to the way psychologists conceptualize expertise, but it does offer a way to set up some relatively simple enhancements to how an individual or an organization thinks in a certain domain, by taking advantage not of the power of computers but of the flexibility of software.

The only two things you can be sure will get done are what's in the (real, unconscious) culture of your organization and in the defaults of the software it uses, and realistically speaking you can only change the latter, so focus on that.

This means:

  • If doing or thinking about something is really important, you should have software specifically for it.
  • The software should implement the best way you know to do something, and offer you as little flexibility as possible.
  • Learn constantly about the issue (research papers, experts' best practices, your own experiments), but nothing counts until you've changed your software.

It can feel like a very restrictive pattern, and in some ways it is. It's unlikely that any specialized software will have all the features of the tools you are already using. But power and features aren't the same thing. If you are only going to use a few programs, you want them to be as flexible as possible, but if you are going to use specific programs for specific tasks, the ability to do things other than exactly what needs to be done at that moment, in the best possible way, isn't a plus but a limitation. The software you use to figure out how something could be done shouldn't be the software you use to get it done. A good laboratory is a bad factory, and vice versa.

A trivial personal example: a common piece of productivity advice is to always have a single explicit next thing to do. Most productivity tools make it fairly obvious which task is next, but in the last rewrite of my personal tool, I made the program show only that task. The point isn't that this is hard or spectacularly useful, or that this can't be done with other tools, but rather that often what might be a difficult mental habit to acquire can be "learned" quickly with a simple code change.

The next step after learning (or wanting to test) that you should only look at your next task is not to try to develop the habit of doing it, but simply to modify your software so that's what happens.
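To make the idea concrete, here is a minimal sketch of that kind of tool (hypothetical code, not the author's actual program): the interface itself only ever exposes the next task, so the habit is enforced by the software rather than by willpower.

```python
class NextTaskList:
    """A to-do list that deliberately shows only the next task."""

    def __init__(self):
        self._tasks = []  # ordered queue of pending tasks

    def add(self, task):
        self._tasks.append(task)

    def next(self):
        # There is intentionally no method to list everything:
        # the only question the tool answers is "what's next?"
        return self._tasks[0] if self._tasks else None

    def done(self):
        # Mark the current task finished and move on.
        if self._tasks:
            self._tasks.pop(0)


todo = NextTaskList()
todo.add("write report")
todo.add("answer email")
print(todo.next())  # → write report
todo.done()
print(todo.next())  # → answer email
```

The design choice is the point: removing the "show all tasks" feature is what makes the tool enforce the behavior.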

A more important example

Meetings, by and large, are broken. One way to look at why is to note that every meeting is three different meetings happening all at once (actually more, but I'm ignoring psychological and social issues here):

  • How does this aspect of the world work?
  • What do we know, by observation and inference, about this specific case?
  • Given what is known in general, what we know and don't know about this case, and our preferences and priorities, what do we do?

Those are all important, difficult questions, and none of the answers is ever fixed. Testing new things often changes, and in fact should change, the way we think the world works. But answering these questions requires different processes at different time and complexity scales:

  • How does this aspect of the world work?: We call this "science" or at least "academic research", and it takes decades to build and a rather long time to even begin to grasp.
  • What do we know, by observation and inference, about this specific case? This is "big data" and "data science", and is continuous if your systems are correctly set up, but can take weeks or be impossible if they aren't.
  • Given what is known in general, what we know and don't know about this case, and our preferences and priorities, what do we do? This is "decision-making." It can be solid and organized if you did the first two steps before, but becomes a crapshoot without them.

Most meetings continuously jump between these three very different activities, adding to a vague shared pool of information different bits of data, inferences, preferences, and theories about the world. Some of it comes in the form of texts or graphs, but even so it's being processed in the System 2 of the people in the meeting, and in the back and forth of their talk. Perhaps with the help of some post-its on a whiteboard...

Applying our framework, it would be more effective to do these different things in different ways and at different times, tying everything together through software. Specifically:

  • Define your goals and priorities (if you don't know them, you have a different sort of problem), and some strategy to approach them, or at least a way to evaluate the choices based on historical data, expert knowledge, and simulations, explicitly enough that you should be able to write it as a program. Write that program.
  • The program should have real-time access to all the data it needs. If some of it isn't in your databases (e.g., some expert's yes/no judgment about something), the system should ask the expert for it whenever appropriate.
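The recipe above can be sketched in a few lines of code. This is a toy illustration, not a real decision system; the priorities, weights, and option data are all invented, and a real version would read from live databases and route missing fields to an expert queue:

```python
# Explicit priorities, written down as code instead of argued in meetings.
# Weights are invented for illustration; risk counts against an option.
PRIORITIES = {"revenue": 0.5, "risk": -0.3, "strategic_fit": 0.2}


def ask_expert(option, field):
    # Placeholder: a real system would queue a question for a human
    # expert and record the answer in the database for next time.
    raise NotImplementedError(f"need expert judgment on {field} for {option}")


def score(option, data):
    total = 0.0
    for field, weight in PRIORITIES.items():
        if field not in data:
            ask_expert(option, field)  # missing data triggers a request, not a guess
        total += weight * data[field]
    return total


def decide(options):
    # Returns the best option plus all scores, so decision-makers can
    # audit the explanation instead of re-deriving it around a table.
    scores = {name: score(name, data) for name, data in options.items()}
    best = max(scores, key=scores.get)
    return best, scores


options = {
    "project_a": {"revenue": 10, "risk": 4, "strategic_fit": 7},
    "project_b": {"revenue": 8, "risk": 1, "strategic_fit": 9},
}
best, scores = decide(options)
print(best)  # → project_b
```

Crude as it is, the sketch has the key property the text asks for: the strategy is explicit enough to run, and every decision comes with the scores that explain it.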

With any luck, now the software makes the decision all the time, and your next meeting is for the decision-makers to look at the software's explanation and either re-evaluate what they thought they knew and wanted and iterate the process, or give the decision the legal and social "management seal of approval."

This seems restrictive, inflexible, and very uncomfortable. And it is uncomfortable. But by structuring your process to explicitly remove Systems 1 and 2 as bottlenecks for information and cognition, you can get data-driven decisions in a way that's much easier to understand, audit, and continuously improve than any traditional process.

Wait a second...

But by structuring your process to explicitly remove Systems 1 and 2 as bottlenecks for information and cognition, you can get data-driven decisions in a way that's much easier to understand, audit, and continuously improve than any traditional process.

Then why keep humans in the process?

The right answer is that, when you can get away with it, you don't. At some level somebody monitors the process or, more likely, monitors the monitors that monitor the monitors of the process, but by then you are in the realms of policy and politics, which is a different question. But the obvious endgame of sidestepping the limitations of Systems 1 and 2 is simply to sidestep the humans themselves. You can read the "recipe" in this article as a sketch of how to do it gradually. Beginning, if you can, with yourself.

Beyond Beyond Kahneman

So far we've talked about the implementation or execution of know-how, not the creation of new knowledge. Much of scientific and technological research is of course the application of existing processes, so that can be accelerated and improved as well, but there are two lines of development that might have structural implications in how we think in the near future.

To understand the first one, remember that in many of the most interesting applications of AI, from translation to chemistry, the systems first find an abstract "latent space," a sort of language purely derived from the data, removing the redundancies and arbitrary details of the ways we normally use to talk about anything from a common event in our day to a molecular structure. Much like the role of mathematics in physics, the system operates first in that latent space to generate, classify, or predict, and only later translates things back to a human language, a song, or a painting.

This is more than a technical trick: an important part of scientific research, and in fact of creative thought itself, is figuring out useful new languages to describe what we know, because new languages can make simple things that used to be impossible (like moving from Roman to Arabic numerals). It's long been one of the pillars of physics, and is a common step in data analysis, but we are also learning how to come up automatically with interesting symbolic representations of the data — not just making graphs, but making equations, and even theories — and a world in which this is as natural and automated a part of how we work and think as using a spreadsheet to make calculations is a world in which understanding things has become an order of magnitude faster.

The other significant advance, although still in its earlier stages, is the application of the same methods used to build AIs that play Go or Chess to the generation and proof of interesting mathematical theorems. Besides a few famous examples, the use of automated proof tools in mathematics is still relatively infrequent, but the structural analogies between areas in which AIs are surpassing human performance and those necessary for mathematics suggest it's a matter of time before the development of tools for mathematical research (and not just mathematical computation) of unheard-of power.

Automating the creation of theories and making theoretical mathematicians much more productive doesn't sound immensely interesting, until you realize that those are some of the aspects of scientific research that have yet to be sped up by the use of computers. "Unblocking" them should over time speed up, if not everything, certainly the cutting edge of how we think in science and engineering (and perhaps even science-inspired areas like strategy and management), with all the implications, economic, social, and even philosophical, that this would have.

Ending with more questions

What's the best path from here to there in an existing organization? Can we train people to do this as a habit? What happens if we educate children like this? How does this change what education means? Management? The nature of work?

These are not rhetorical questions. Even if most of our thinking remains in Systems 1 and 2, the vanguard is already moving in new directions, and how that happens, and who chooses or can follow it, will be one of the key factors in the economic, political, and social structure of the coming years.
