
Any sufficiently advanced totalitarianism is indistinguishable from Facebook

Gamification doesn't need to be enjoyable to be effective.

You're more likely to cheat on your taxes than to walk barefoot into a bank, even if it's summer and your feet hurt. That's because we don't just care about how bad the consequences of something could be, but also how certain they are to happen, and, illogically but consistently, how soon they will happen.

That's what makes Facebook so addictive. Staying another minute isn't going to make you happy, but it guarantees a small and immediate dose of socially-triggered emotion, and that's an incredibly powerful driver of behavior. The business of Facebook is to know enough about you, and have enough material, to make sure it can keep that subliminal promise while showing you targeted ads.

Governments' tools are noticeably blunter. Most of the laws that are generally respected reflect some sort of pre-existing social agreement. Conversely, where that social agreement doesn't exist (e.g., the legitimacy of buying dollars in Argentina, or the acceptability of misogyny pretty much everywhere), laws can only be enforced sporadically and with delay, and hence are seldom effective.

What the ongoing deployment of digital infrastructure by totalitarian governments — and by the totalitarian arms of not-entirely-totalitarian governments — is making possible is the recreation of Facebook, but one co-founded by Foucault. The granularity, flexibility, and speed of perception and action, once a State is digitized enough, are unfathomable by the standards of any State in history. You can charge a fine, report a behavior to a boss, inconvenience a family member, impact a credit score, or notify a child's school the very moment a frowned-upon action is performed, with (sufficiently) total certainty and visibility. It doesn't have to be a large punishment or a lavish reward, or even the same for everybody: just as Facebook knows what you like, a government good enough at processing the data it has can know what you care about, and calibrate exactly how to use it so that even small transgressions and small "socially beneficial activities" get a small but fast and certain response. Small but fast and certain is a cheap and effective way of shaping behavior, as long as it's tied to something you do care about, and not to generic "points" or "achievements." It can be your children's educational opportunities, your job, your public image, anything — governments, once they develop the right processes and software infrastructure, can always find buttons to push.

This kind of detail-oriented totalitarianism used to be possible only in the most insanely paranoid societies (the Stasi being a paradigmatic example), but it scaled very poorly, and with ultimately suicidal economic and social costs.

Doing it with contemporary technology, on the other hand, scales very well, as long as a government is willing to cede control of the "last mile" of carrots and sticks to software. You would be very surprised if you entered Facebook one day and saw something as impersonal and generic, or at best as fake-personalized, as most interactions with the State are now. A government leveraging contemporary technology has some significant computing power constantly looking at you and thinking about you — what you're doing, what you care about, what you're likely to do next — and instead of different parts of the government keeping their own files and dealing with you on their own time, everything from the cop on your street to your grandparents' pharmacist is integrated into that bit of the State that is exclusively and constantly dedicated to nudging you into being the best citizen you can possibly be.

It won't just be a cost-effective way of social control. Everything we know of psychology, and our recent experience with social networks and other mobile games, suggests it'll be an effective way of shaping our decisions before we even make them.

Big Data, Endless Wars, and Why Gamification (Often) Fails

Militaries and software companies are currently stuck in something of a rut: billions of dollars are spent on the latest technology, including sophisticated and supposedly game-changing data gathering and analysis, and yet for most of them victory seems at best a matter of luck, and at worst perpetually elusive.

As different as those "industries" are, this common failure has a common root, perhaps unsurprisingly, given the long and complex history of cultural, financial, and technological relationships between them.

Both military action and gamified software (of whatever kind: games, nudge-rich crowdsourcing software, behaviorally intrusive e-commerce shops, etc.) are focused on the same thing: changing somebody else's behavior. It's easy to forget, amid the current explosion — pun not intended — of data-driven technologies, that wars are rarely fought until the enemy stops being able to fight back, but rather until they choose not to, and that all the data and smarts behind a game are pointless unless more players do more of what you want them to do. It doesn't matter how big your military stick is, or how sophisticated your gamified carrot algorithm: that's what they exist for.

History, psychology, and personal experience show that carrots and sticks, alone or in combination, do work. So why do some wars take forever, and some games or apps whimper and die without getting any traction?

The root cause is that, while carrots and sticks work, different people and groups have different concepts of what counts as one. This is partly a matter of cultural and personal differences, and partly a matter of specific situations: as every teacher knows, a gold star only works for children who care about gold stars, and the threat of being sent to detention only deters those for whom it's not an accepted fact of life, if not a badge of honor. Hence the failure of most online reputational systems, the endemic nature of trolls, the hit-and-miss nature of new games not based on an already successful franchise, or, for that matter, the enormous difficulty even major militaries have stopping insurgencies and other similar actors.

But the root problem behind that root problem isn't a feature of the culture and psychology of adversaries and customers (and it's interesting to note that, artillery aside, the technologies applied to both aren't always different), but of the culture and psychology of civilian and military engineers. The fault, so to speak, is not in our five-star rating systems, but in ourselves.

How so? As obvious as it is that achieving the goals of gamified software and military interventions requires a deep knowledge of the psychology, culture, and political dynamics of targets and/or customer bases, software engineers, product designers, technology CEOs, soldiers, and military strategists don't receive more than token encouragement to develop a strong foundation in those areas, much less are they required to do so. Game designers and intelligence analysts, to mention a couple of exceptions, do, but their advice often gets no more than a half-hearted hearing, and, unless they go solo, they lack any sort of authority. Thus we end up, by and large, with large and meticulously planned campaigns — of either sort — that fail spectacularly or slowly fizzle out without achieving their goals, not because of failures of execution (those are also endemic, but a different issue) but because the link between execution and the end goal was formulated, often implicitly, by people without much training in or inclination for the relevant disciplines.

There's a mythology behind this: the idea that, given enough accumulation of data and analytical power, human behavior can be predicted and simulated, and hence shaped. This might yet be true — the opposite mythology of some ineffable quality of unpredictability in human behavior is, if anything, even less well-supported by facts — but right now we are far from that point, particularly when it comes to very different societies, complex political situations, or customers already under heavy "attack" by competitors. It's not that people can't be understood, and forms of shaping their behavior designed; it's that this takes knowledge that for now lies in the work and brains of people who specialize in studying individual and collective behavior: political analysts, psychologists, anthropologists, and so on.

They are given roles, write briefs, have fun job titles, and sometimes are even paid attention to. The need for their type of expertise is paid lip service to; I'm not describing explicit doctrine, either in the military or in the civilian world, but rather more insidious implicit attitudes (the same attitudes that drive, in an even more ethically, socially, and pragmatically destructive way, sexism and racism in most societies and organizations).

Women and minorities aside (although there's a fair and not accidental degree of overlap), people with a strong professional background in the humanities are pretty much the people you're least likely to see — honorable and successful exceptions aside — in a C-level position or having authority over military strategy. It's not just that they don't appear there: they are mostly shunned, and, implicitly or explicitly, well, let's go with "underappreciated." Both Silicon Valley and the Pentagon, as well as their overseas equivalents, are seen, and see themselves, as places explicitly away from that sort of "soft" and "vague" thing. Sufficiently advanced carrots and sticks, goes the implicit tale, can replace political understanding and a grasp of psychological nuance.

Sometimes, sure. Not always. Even the most advanced organizations get stuck in quagmires (Google+, anyone?) when they forget that, absent an overwhelming technological advantage, and sometimes even then (Afghanistan, anyone?), successful strategy begins with a correct grasp of politics and psychology, not the other way around, and that we aren't yet at a point where this can be provided solely by data gathering and analysis.

Can that help? Yes. Is an organization that leverages political analysis, anthropology, and psychology together with data analysis and artificial intelligence likely to out-think and out-match most competitors regardless of relative size? Again, yes.

Societies and organizations that reject advanced information technology because it's new have, by and large, been left behind, often irreparably so. Societies and organizations that reject the humanities because they are traditional (never mind how much they have advanced) risk suffering the same fate.

This screen is an attack surface

A very short note on why human gut feeling isn't just subpar, but positively dangerous.

One of the most active areas of research in machine learning is adversarial machine learning, broadly defined as the study of how to fool and subvert other people's machine learning algorithms for your own goals, and how to prevent it from happening to yours. A key way to do this is through controlling sampling; the point of machine learning, after all, is to have behavior be guided by data, and sometimes the careful poisoning of what an algorithm sees — not the whole of its data, just a set of well-chosen inputs — can make its behavior deviate from what its creators intended.
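
To make the mechanism concrete, here is a minimal, self-contained sketch (a toy dataset and a plain logistic regression, not any specific published attack) of label-flipping poisoning: the attacker touches only a small, deliberately chosen slice of the training data and still shifts what the model learns.

```python
# Toy illustration of training-data poisoning: flipping the labels of a
# small, well-chosen subset of the training set, rather than the whole of it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "attacker" flips the labels of the 5% of training points the clean
# model classifies most confidently, which with log loss tends to pull the
# learned boundary disproportionately hard.
margins = np.abs(clean.decision_function(X_train))
n_flip = len(y_train) // 20
flip_idx = np.argsort(margins)[-n_flip:]
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy trained on clean labels:   ", round(clean.score(X_test, y_test), 3))
print("accuracy trained on poisoned labels:", round(poisoned.score(X_test, y_test), 3))
```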

A very public example of this is the nascent tradition of people collectively turning a public Microsoft demonstration chatbot into a bigot spouting conspiracy theories by training it with the right conversations: last year with "Tay" and this week with "Zo." Humans are obviously subject to all sorts of analogous attacks through lies, misdirection, indoctrination, etc., and a big part of our socialization consists of learning to counteract (and, let's be honest, to enact) the adversarial use of language. But there's a subtler vector of attack that, because it's not really conscious, is extremely difficult to defend against.

Human minds rely very heavily on what's called the availability heuristic: when trying to figure out what will happen, we tend to give more weight to possibilities we can easily recall and picture. This is a reasonable automatic process in stable environments observed first-hand, as it's fast and likely to give good predictions. We easily imagine the very frequent and the very dangerous, so our decision-making follows probabilities, with a bias towards avoiding that place where a lion almost ate us five years ago.

However, we don't observe most of our environment first-hand. Most of us, thankfully, have more exposure to violence through fiction than through real experience, always in highly memorable forms (more and better-crafted stories about violent crime than about car accidents), making our intuition misjudge relative probabilities and dangers. The same happens in every other area of our lives: tens of thousands of words about startup billionaires for every phrase about founders who never got a single project to work, Hollywood-style security threats versus much more likely and cumulatively harmful issues, the quick gut decision versus the detached analysis of multiple scenarios.
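
A toy simulation makes the sampling argument concrete; the event categories, frequencies, and "memorability" weights below are all invented purely for illustration.

```python
# Events happen with one set of frequencies, but what we remember is sampled
# roughly in proportion to frequency * memorability, so the probabilities we
# "feel" are skewed toward vivid, heavily covered events.
import numpy as np

rng = np.random.default_rng(0)

events = ["car accident", "violent crime", "shark attack"]
true_freq = np.array([0.90, 0.099, 0.001])   # how often each actually happens
memorability = np.array([1.0, 10.0, 300.0])  # how much coverage/recall each gets

reality = rng.choice(events, size=100_000, p=true_freq / true_freq.sum())
recall_p = true_freq * memorability
recalled = rng.choice(events, size=100_000, p=recall_p / recall_p.sum())

for name in events:
    print(f"{name:14s} actual {np.mean(reality == name):6.2%}   "
          f"remembered {np.mean(recalled == name):6.2%}")
```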

And there's no way to fix this. Retraining instincts is a difficult and problematic task, even for very specific ones, much less for the myriad different decisions we make in our personal and professional lives. Every form of media aims at memorability and interest over following reality's statistical distribution — people read and watch the new and spectacular, not the thing that keeps happening — so most of the information you've acquired during your life comes from a statistically biased sample. You might have a highly accurate gut feeling for a very specific area where you've deliberately accumulated a statistically strong data set and interacted with it in an intensive way, in other words, where you've developed expertise, but for most decisions we make in our highly heterogeneous professional and personal activities, our gut feelings have already been irreversibly compromised into at best suboptimal and at worst extensively damaging patterns.

It's a rather disheartening realization, and one that goes against the often raised defense of intuition as one area where humans outperform machines. We very much don't, not because our algorithms are worse (although that's sometimes also true) but because training a machine learning algorithm allows you to carefully select the input data and compensate for any bias in it. To get an equivalently well-trained human you'd have to begin when they are very young, put them on a diet of statistically unbiased and well-structured domain information, and train them intensively. That's how we get mathematicians, ballet dancers, and other human experts, but it's very slow and expensive, and outright impossible for poorly defined areas — think management and strategy — or ones where the underlying dynamics change often and drastically — again, think management and strategy.
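
As a minimal sketch of that asymmetry (the function and the assumption that the real-world class prevalence is known are mine, purely for illustration): when we train the machine, we can measure the bias of its sample and correct for it explicitly, which is exactly the step our own intuition never gets.

```python
# Fit a classifier on a biased sample, reweighting each class so the
# effective training distribution matches a known real-world prevalence.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bias_corrected_fit(X, y, true_prevalence):
    """X: features; y: integer class labels; true_prevalence[k]: real-world
    frequency of class k. Returns a model trained as if the sample had the
    real-world class mix."""
    true_prevalence = np.asarray(true_prevalence)
    observed = np.bincount(y) / len(y)          # class frequencies in the sample
    weights = true_prevalence[y] / observed[y]  # up/down-weight each example
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=weights)
    return model
```

The same importance-weighting idea generalizes far beyond class labels, but even the one-line version of the correction is something you can state and apply to a machine and not to a lifetime of skewed reading and watching.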

So in the race to improve our decision-making, which over time is one of the main factors influencing our ultimate success, there's really no way around replacing human gut feeling with algorithms. The stronger you feel about a choice, the more likely it is to be driven by how easy it is to picture, and that's going to have more to do with the interesting and spectacular things you read, watched, and remember than with the boring or unexpected things that do happen.

Psychologically speaking, those are the most difficult and scariest decisions to delegate. Which is why there's still, and might still be for some time, a window of opportunity to gain competitive advantage by doing it.

But hurry. Sooner or later everybody will have heard about it.

The Mental Health of Smart Cities

Not the mental health of the people living in smart cities, but that of the cities themselves. Why not? We are building smart cities to be able to sense, think, and act; their perceptions, thoughts, and actions won't be remotely human, or even biological, but that doesn't make them any less real.

Cities can monitor themselves with an unprecedented level of coverage and detail, from cameras to government records to the wireless information flow permeating the air. But these perceptions will be very weakly integrated, as information flows slowly, if at all, between organizational units and social groups. Will the air quality sensors in a hospital be able to convince most traffic to be rerouted further away until rush hour passes? Will the city be able to cross-reference crime and health records with the distribution of different businesses, and offer tax credits to, say, grocery stores opening in a place that needs them? When a camera sees you having trouble, will the city know who you are, what's happening to you, and who it should call?

This isn't a technological limitation. It comes from the way our institutions and businesses are set up, which is in turn reflected in our processes and infrastructure. The only exception in most parts of the world is security, particularly against terrorists and other rare but high-profile crimes. Organizations like the NSA or the Department of Homeland Security (and its myriad partly overlapping versions both within and outside the United States) cut through institutional barriers, most legal regulations, and even the distinction between the public and the private in a way that nothing else does.

The city has multiple fields of partial awareness, but they are only integrated when it comes to perceiving threats. Extrapolating an overused psychological term, isn't this a heuristic definition of paranoia? The part of the city's mind that deals with traffic and the part that deals with health will speak with each other slowly and seldom, the part that manages taxes with the one that sees the world through the electrical grid. But when scared, and the city is scared very often, close to being scared every day, all of its senses and muscles will snap together in fear. Every scrap of information correlated in central databases, every camera and sensor searching for suspects, all services following a single coordinated plan.

For comparison, shopping malls are built to distract and cocoon us, to put us in the perfect mood to buy. So smart shopping malls see us as customers: they track where we are, where we're going, what we looked at, what we bought. They try to redirect us to places where we'll spend more money, ideally away from the doors. It's a feeling you can notice even in the most primitive "dumb" mall: the very shape of the space is built as a machine to do this. Computers and sensors only heighten this awareness; not your awareness of the space, but the space's awareness of you.

We're building our smart cities in a different direction. We're making them see us as elements needing to get from point A to point B as quickly as possible, with little or no care for what's going on at either end... except when they see us as potential threats, and they never see or think as clearly or as fast as they do then. Much of the mind of the city takes the form of mobile services from large global companies that seldom interact locally with each other, much less with the civic fabric itself. Everything only snaps together when an alert is raised and, for the first time, we see what the city can do when it wakes up and its sensors and algorithms, its departments and infrastructure, are at least attempting to work in coordination toward a single end.

The city as a whole has no separate concept of what a person is, no way of tracing you through its perceptions and memories of your movements, actions, and context except when you're a threat. As a whole, it knows of "persons of interest" and "active situations." It doesn't know about health, quality of life, a sudden change in a neighborhood. It doesn't know itself as anything other than a target.

It doesn't need to be like that. The psychology of a smart city, how it integrates its multiple perceptions, what it can think about, how it chooses what to do and why, all of that is up to us. A smart city is just an incredibly complex machine we live in and to which we give life. We could build it to have a sense of itself and of its inhabitants, to perceive needs and be constantly trying to help. A city whose mind, vaguely and perhaps unconsciously intuited behind its ubiquitous and thus invisible cameras, we find comforting. A sane mind.

Right now we're building cities that see the world mostly in terms of cars and terrorism threats. A mind that sees everything and puts together very little except when something scares it, where personal emergencies are almost entirely your own affair, but which becomes single-minded when there's a hunt.

That's not a sane mind, and we're planning to live in a physical environment controlled by it.

When the world is the ad

Data-driven algorithms are effective not because of what they know, but as a function of what they don't. From a mathematical point of view, Internet advertising isn't about putting ads on pages or crafting seemingly neutral content. There's just the input — some change to the world you pay somebody or something to make — and the output — a change in somebody's likelihood of purchasing a given product or voting for somebody. The concept of multitouch attribution, the attempt to understand how multiple contacts with different ads influenced some action, is a step in the right direction, but it's still driven by a cosmology that sees ads as little gems of influence embedded in a larger universe that you can't change.
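
For concreteness, here is roughly what a multitouch attribution rule computes; the 40/20/40 "position-based" split is just one common heuristic, and the channel names are made up.

```python
# Given the ordered ad contacts that preceded a conversion, split the credit
# among them: 40% to the first touch, 40% to the last, 20% spread over the
# middle. Other attribution rules only change how the credit is divided.
from collections import defaultdict

def position_based_attribution(touchpoints, credit=1.0):
    shares = defaultdict(float)
    if len(touchpoints) == 1:
        shares[touchpoints[0]] += credit
    elif len(touchpoints) == 2:
        shares[touchpoints[0]] += 0.5 * credit
        shares[touchpoints[1]] += 0.5 * credit
    elif touchpoints:
        shares[touchpoints[0]] += 0.4 * credit
        shares[touchpoints[-1]] += 0.4 * credit
        for channel in touchpoints[1:-1]:
            shares[channel] += 0.2 * credit / (len(touchpoints) - 2)
    return dict(shares)

print(position_based_attribution(["search ad", "banner", "email", "retargeting ad"]))
# {'search ad': 0.4, 'retargeting ad': 0.4, 'banner': 0.1, 'email': 0.1}
```

Even this refinement still assumes the "touches" are discrete ads dropped into a surrounding universe the advertiser cannot alter.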

That's no longer true. The Internet isn't primarily a medium in the sense of something that is between. It's a medium in that we live inside it. It's the atmosphere through which the sound waves of information, feelings, and money flow. It's the spacetime through which the gravity waves from some piece of code, shifting from data center to data center in some post-geographical search for efficiency, reach your car to suggest a route. And, in the opposite direction, it's how physical measurements of your location, activities — even physiological state — are captured, shared, and reused in ways that are increasingly difficult to know about, much less be aware of during our daily life. Transparency of action often equals, and is used to achieve, opacity to oversight.

Everything we experience impacts our behavior, and each day more of what we experience is controlled, optimized, configured, personalized — pick your word — by companies desperately looking for a business model or methodically searching for their next billion dollars or ten.

Consider as a harbinger of the future that most traditional of companies, Facebook, a space so embedded in our culture that people older than credit cards (1950, Diners) use it without wonder. Amid the constant experimentation with the willingly shared content of our lives that is the company, it ran an experiment attempting to deliberately influence the mood of its users by changing the order of what they read. The ethics of that experiment are important to discuss now and irrelevant to what will happen next, because the business implications are too obvious not to be exploited: some products and services are acquired preferentially by people in a certain mood, and it might be easier to change the mood of an already promising or tested customer than to find a new one.

If nostalgia makes you buy music, why wait until you feel nostalgic to show you an ad, when I can make sure you encounter mentions of places and activities from your childhood? A weapons company (or a law-and-order political candidate) will pay to place their ad next to a crime story, but if they pay more they can also make sure the articles you read before that, just their titles as you scroll down, are also scary ones, regardless of topic. Scary, that is, specifically for you. And knowledge can work just as well, and just as subtly: by tracking everything you read and adapting the text here and there, seemingly separate sources of information will give you "A" and "B," close enough for you to remember them when a third one offers to sell you "C." It's not a new trick, but with ubiquitous transparent personalization and a pervasive infrastructure allowing companies to bid for the right to change pretty much all you read and see, it will be even more effective.

It won't be (just) ads, and it won't be (just) content marketing. The main business model of the consumer-facing internet is to change what its users consume, and when it comes down to what can and will be leveraged to do it, the answer is of course all of it.

Along the way, advertising will once again drag into widespread commercial application, as well as public awareness, areas of mathematics and technology currently confined to more specialized fields. Advertisers mostly see us — because their data systems have been built to see us — as black boxes with tagged attributes (age, searches, location). Collect enough black boxes and enough attributes, and blind machine learning can find a lot of patterns. What they have barely begun to do is to open up those black boxes to model the underlying process, the illogical logic by which we process our social and physical environment so we can figure out what to do, where to go, what to buy. Complete understanding is something best left to lovers and mystics, but every qualitative change in our scalable, algorithmic understanding of human behavior under complex patterns of stimuli will be worth billions in the next iteration of this arms race.

Business practices will change as well, if only as a deepening of current tendencies. Where advertisers now bid for space on a page or a video slot, they will be bidding for the reader-specific emotional resonance of an article somebody just clicked on, the presence of a given item in a background picture, or the location and value of an item in an Augmented Reality game ("how much to put a difficult-to-catch Pokemon right next to my Starbucks for this person, who I know has been out on this cold day long enough for me to believe they'd like a hot beverage?"). Everything that's controlled by software can be bid upon by other software for a third party's commercial purposes. Not much isn't, and very little won't be.
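
Mechanically this would be a generalization of today's real-time bidding; a stripped-down sketch of the core mechanism (a second-price auction, with a made-up "slot" name and bidders) looks like this:

```python
# A minimal second-price auction, the mechanism behind most programmatic ad
# exchanges, applied to a hypothetical perceptual "slot": a banner, an object
# placed in an AR game, the emotional tone of the next headline shown.
def run_auction(slot, bids):
    """bids: {bidder: amount}. Highest bidder wins, pays the second-highest bid."""
    if not bids:
        return None
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else top_bid
    return {"slot": slot, "winner": winner, "pays": price}

print(run_auction(
    "rare_pokemon_near_this_cafe",
    {"coffee_chain": 0.45, "energy_drink_brand": 0.30, "rival_cafe": 0.10},
))
# {'slot': 'rare_pokemon_near_this_cafe', 'winner': 'coffee_chain', 'pays': 0.3}
```

The only genuinely new part is what's up for sale; the auction itself is already running billions of times a day.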

The cumulative logic of technological development, one in which printed flyers co-exist with personalized online ads, promises the survival of what we might by then call overt algorithmic advertising. It won't be a world with no ads, but one in which a lot of what you perceive is tweaked and optimized so that its collective effect, whether perceived or not, works as one.

We can hypothesize a subliminally but significantly more coherent phenomenological experience of the world — our cities, friendships, jobs, art — a more encompassing and dynamic version of the "opinion bubbles" social networks often build (in their defense, only magnifying algorithmically the bubbles we had already built with our own choices of friends and activities). On the other hand, happy people aren't always the best customers, so transforming the world into a subliminal marketing platform might end up not being very pleasant, even before considering the impact on our societies of leveraging this kind of ubiquitous, personalized, largely subliminal button-pushing for political purposes.

In any case, it's a race in and for the background, and one that has already started.