This screen is an attack surface

2017-07-05

A very short note on why human gut feeling isn't just subpar, but positively dangerous.

One of the most active areas of research in machine learning is adversarial machine learning, broadly defined as the study of how to fool and subvert other people's machine learning algorithms for your own goals, and how to prevent the same from happening to yours. A key way to do this is through controlling sampling; the point of machine learning, after all, is to have behavior guided by data, and sometimes the careful poisoning of what an algorithm sees — not the whole of its data, just a set of well-chosen inputs — can make its behavior deviate from what its creators intended.
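The poisoning idea can be sketched with a toy model (every score and data point below is invented for illustration): a nearest-centroid spam filter over a single "suspicion" score, where an attacker who controls a slice of the training stream injects a few mislabeled points to drag one centroid and flip a borderline prediction.

```python
def centroid(xs):
    """Mean of a list of one-dimensional feature values."""
    return sum(xs) / len(xs)

def classify(x, ham_c, spam_c):
    """Assign x to whichever class centroid is nearer."""
    return "spam" if abs(x - spam_c) < abs(x - ham_c) else "ham"

# Clean training data: hypothetical suspicion scores for each class.
ham = [1.0, 1.5, 2.0, 2.5]
spam = [8.0, 8.5, 9.0, 9.5]

borderline = 5.5
clean_pred = classify(borderline, centroid(ham), centroid(spam))

# Poisoning: the attacker gets a handful of fairly high-scoring
# messages marked "ham", pulling the ham centroid upward so that
# their borderline spam now falls on the ham side.
poisoned_ham = ham + [7.0] * 5
poisoned_pred = classify(borderline, centroid(poisoned_ham), centroid(spam))

print(clean_pred, poisoned_pred)  # the same input changes class
```

The attacker never touches the algorithm or the bulk of the data; a few well-chosen labeled inputs are enough.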

A very public example of this is the nascent tradition of people collectively turning a public Microsoft demonstration chatbot into a bigot spouting conspiracy theories by training it with the right conversations: last year with "Tay" and this week with "Zo." Humans are obviously subject to all sorts of analogous attacks through lies, misdirection, indoctrination, and so on, and a big part of our socialization consists of learning to counteract (and, let's be honest, to enact) the adversarial use of language. But there's a subtler vector of attack that, because it's not really conscious, is extremely difficult to defend against.

Human minds rely very heavily on what's called the availability heuristic: when trying to figure out what will happen, we give more weight to possibilities we can easily recall and picture. This is a reasonable automatic process when the environment is stable and observed first-hand, as it's fast and likely to give good predictions. We easily imagine the very frequent and the very dangerous, so our decision-making follows probabilities, with a bias towards avoiding that place where a lion almost ate us five years ago.

However, we don't observe most of our environment first-hand. Most of us, thankfully, have more exposure to violence through fiction than through real experience, and always in highly memorable forms (more, and better-crafted, stories about violent crime than about car accidents), which makes our intuition misjudge relative probabilities and dangers. The same happens in every other area of our lives: tens of thousands of words about startup billionaires for every phrase about founders who never got a single project to work, Hollywood-style security threats versus much more likely and cumulatively harmful issues, the quick gut decision versus the detached analysis of multiple scenarios.
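The gap between what happens and what we sample through media can be made concrete with a toy simulation (every number below is invented): weight each incident type by how likely it is to be covered, then compare the perceived shares against the true ones.

```python
# Toy model of availability bias; all counts and odds are invented.
# True annual incident counts vs. the odds each incident makes the news.
true_counts = {"car accident": 40000, "violent crime": 15000, "shark attack": 5}
coverage_odds = {"car accident": 0.001, "violent crime": 0.05, "shark attack": 1.0}

# What a news consumer actually samples: incidents weighted by coverage.
stories = {k: n * coverage_odds[k] for k, n in true_counts.items()}
total_stories = sum(stories.values())
perceived_share = {k: v / total_stories for k, v in stories.items()}

total_true = sum(true_counts.values())
true_share = {k: n / total_true for k, n in true_counts.items()}

# Shark attacks feel dozens of times more common than they are, and
# violent crime overtakes car accidents in the perceived ranking
# even though the true counts go the other way.
print(perceived_share["shark attack"] / true_share["shark attack"])
```

The instinct trained on the `stories` sample is doing exactly what it was built for; it's the sample that was never representative.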

And there's no way to fix this. Retraining instincts is a difficult and problematic task, even for very specific ones, much less for the myriad different decisions we make in our personal and professional lives. Every form of media aims at memorability and interest over following reality's statistical distribution (people read and watch the new and spectacular, not the thing that keeps happening), so most of the information you've acquired during your life comes from a statistically biased sample. You might have a highly accurate gut feeling in a very specific area where you've deliberately accumulated a statistically sound data set and interacted with it intensively, in other words, where you've developed expertise. But for most decisions we make in our highly heterogeneous professional and personal activities, our gut feelings have already been irreversibly compromised into patterns that are at best suboptimal and at worst extensively damaging.

It's a rather disheartening realization, and one that goes against the often-raised defense of intuition as one area where humans outperform machines. We very much don't, not because our algorithms are worse (although that's sometimes also true) but because training a machine learning algorithm allows you to carefully select the input data and compensate for any bias in it. To get an equivalently well-trained human you'd have to begin when they are very young, put them on a diet of statistically unbiased and well-structured domain information, and train them intensively. That's how we get mathematicians, ballet dancers, and other human experts, but it's very slow and expensive, and outright impossible for poorly defined areas — think management and strategy — or for ones where the underlying dynamics change often and drastically — again, think management and strategy.
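One standard way to do that compensation, sketched here with invented numbers, is importance weighting: if you know, or can estimate, how over-represented each kind of example is in your sample, weighting each one by the ratio of its true frequency to its sample frequency recovers an unbiased average. A human gut has no equivalent knob.

```python
# Importance weighting to undo a biased sample; all numbers invented.
# The sample over-represents vivid "crash" events (30% of examples)
# relative to their true 1% frequency in the world.
sample = [("crash", 100.0)] * 30 + [("routine", 1.0)] * 70
true_freq = {"crash": 0.01, "routine": 0.99}
sample_freq = {"crash": 0.30, "routine": 0.70}

# Naive average cost, as an instinct trained on this sample sees it.
naive_mean = sum(cost for _, cost in sample) / len(sample)

# Weight each example by (true frequency / sample frequency).
weights = {k: true_freq[k] / sample_freq[k] for k in true_freq}
weighted_sum = sum(weights[label] * cost for label, cost in sample)
weight_total = sum(weights[label] for label, _ in sample)
reweighted_mean = weighted_sum / weight_total

# The true expected cost is 0.01 * 100 + 0.99 * 1 = 1.99, far from
# the naive estimate of 30.7.
print(naive_mean, reweighted_mean)
```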

So in the race to improve our decision-making, which over time is one of the main factors influencing our ultimate success, there's really no way around replacing human gut feeling with algorithms. The stronger you feel about a choice, the more likely it is to be driven by how easily you can picture it, and that has more to do with the interesting and spectacular things you read, watched, and remember than with the boring or unexpected things that actually happen.

Psychologically speaking, those are the most difficult and scariest decisions to delegate. Which is why there's still, and might still be for some time, a window of opportunity to gain competitive advantage by doing it.

But hurry. Sooner or later everybody will have heard about it.