Category Archives: Politics

Any sufficiently advanced totalitarianism is indistinguishable from Facebook

Gamification doesn't need to be enjoyable to be effective.

You're more likely to cheat on your taxes than to walk barefoot into a bank, even if it's summer and your feet hurt. That's because we don't just care about how bad the consequences of something could be, but also how certain they are to happen, and, illogically but consistently, how soon they will happen.

That's what makes Facebook so addictive. Staying another minute isn't going to make you happy, but it guarantees a small and immediate dose of socially-triggered emotion, and that's an incredibly powerful driver of behavior. The business of Facebook is to know enough about you, and have enough material, to make sure it can keep that subliminal promise while showing you targeted ads.

Governments' tools are noticeably blunter. Most of the laws that are generally respected reflect some sort of pre-existing social agreement. Conversely, where that social agreement doesn't exist (e.g., the legitimacy of buying dollars in Argentina, or the acceptability of misogyny pretty much everywhere), laws can only be enforced sporadically and with delay, and hence are seldom effective.

What the ongoing deployment of these technologies by totalitarian governments — and the totalitarian arms of not-entirely-totalitarian governments — is making possible is the recreation of Facebook, but one co-founded by Foucault. The granularity, flexibility, and speed of perception and action, once a State is digitized enough, are unfathomable by the standards of any State in history. You can charge a fine, report a behavior to a boss, inconvenience a family member, impact a credit score, or notify a child's school the very moment a frowned-upon action is performed, with (sufficiently) total certainty and visibility. It doesn't have to be a large punishment or a lavish reward, or even the same for everybody: just as Facebook knows what you like, a government good enough at processing the data it has can know what you care about, and calibrate exactly how to use it so even small transgressions and small "socially beneficial activities" get a small but fast and certain response. Small but fast and certain is a cheap and effective way of shaping behavior, as long as it's something you do care about, and not generic "points" or "achievements." It can be your children's educational opportunities, your job, your public image, anything — governments, once they develop the right process and software infrastructure, can always find buttons to push.

This kind of detail-oriented totalitarianism used to be possible only in the most insanely paranoid societies (the Stasi being a paradigmatic example), but it scaled very poorly, and with ultimately suicidal economic and social costs.

Doing it with contemporary technology, on the other hand, scales very well, as long as a government is willing to cede control of the "last mile" of carrots and sticks to software. You would be very surprised if you entered Facebook one day and saw something as impersonal and generic, or at best as fake-personalized, as most interactions with the State are now. A government leveraging contemporary technology has some significant computing power constantly looking at you and thinking about you — what you're doing, what you care about, what you're likely to do next — and instead of different parts of the government keeping their own files and dealing with you on their own time, everything from the cop on your street to your grandparents' pharmacist is integrated into that bit of the State that is exclusively and constantly dedicated to nudging you into being the best citizen you can possibly be.

It won't just be a cost-effective way of social control. Everything we know of psychology, and our recent experience with social networks and other mobile games, suggests it'll be an effective way of shaping our decisions before we even make them.

Open Source is one of the engines of the world's economy and culture. Its next iteration will be bigger.

Once upon a time, the very concept of Open Source was absurd, and only its proponents ever thought it could be more than marginal. Important software could only be built and supported by sophisticated businesses, an expensive industrial component whose blueprints — the source code — were extremely valuable.

But Open Source won. It became clear, to no historian's surprise, that once knowledge is sufficiently distributed and tools become cheap enough, distributed development by heterogeneous (and heterogeneously motivated) people not only creates high-quality software at zero marginal cost; because it only takes a single motivated individual to leverage existing developments and move them forward regardless of their novelty or risk, it's also inherently much more creative.

Open Source developers can take risks others can't, and they begin from further ahead, on the shoulders of other, taller developers. What's more adventurous than a single individual toying with an idea out of love and curiosity? When has true innovation begun in any other way?

The form of this victory, though, wasn't the one expected by early adopters. Desktop computers as they were known are definitely on the wane, and it's still not "the Year of Linux on the Desktop." Relatively few people knowingly use Open Source software as their main computing environment, and the smartphone, history's most popular personal computing platform, is, software licenses aside, as regulated and proprietary an environment as you could imagine.

The social and political promise of Open Source is still unrealized. Things have software inside them now, programs monitoring and controlling them to a larger degree than most people imagine, and this software is closed in every sense of the word. It's not just for surveillance: the software in car engines lies to pass government regulation tests, the software controlling electric batteries makes them work worse than they could so you have the "option" of paying the manufacturer more to flip a software switch that de-hobbles them, and so on and so forth. Things work worse than they say they do, do things they aren't supposed to, and are not really under your control even after you've bought them, and there's little you can do about that, and that little is very difficult, not just because the source code is hidden, but because in many cases, and through a Kafkaesque global system of "security" and copyright laws, it's literally a crime to try to understand, never mind fix, what this thing you bought is doing.

No, the main impact of Open Source was also what made it possible: the Internet. It's not just that the overwhelming majority of the software that runs it, from most of the operating systems of most servers to the JavaScript frameworks rendering web pages, is Open Source. There could have been no explosive growth of the online world with license costs attached to every individual piece of software, no free-form experimentation with content, shapes, tools, and modes of use. Most of the sites and services we use today, and most of the tools used to build them, began as an individual's crazy idea — as just one example, the browser you're using to read this was originally a tool built by and for scientists — and, had the Internet's growth been directed by the large software companies of that age, it would look more like cable TV, in diversity, speed of technological change, and overall social impact, than what we have now.

Even if you don't own a smartphone or a computer, finance, government, culture, our entire society has been profoundly influenced by an Internet, and a computing ecosystem in general, simply unthinkable without Open Source. Like many of the truly influential technological shifts, its invisibility to most people doesn't diminish, but rather highlights, its ubiquity and power.

What's next?

More Open Source is an obvious, true, but conservative observation. Of course people, governments, and companies (even those whose business model includes selling some software) will continue to write, distribute, and use Open Source. Each of them for their own goals, some of them attempting to cheat or break the system, but, most likely, always coming back to the economic attractor of a system of creating and using technology that, for many uses and in many contexts, simply works too well to abandon.

What comes next is what's happening now. Still not fully exploited, the Internet is no longer the cutting edge of how computing is impacting our societies. Call this latest iteration Artificial Intelligence, cognitive computing, or whatever you prefer. Silicon Valley throws money at it, popular newspapers write about the danger it poses to jobs, China aims at having the most advanced AI technology in the world as a strategic goal of the highest priority, and even Vladimir Putin, not a man inclined to idealistic whimsy, said that whichever country leads in Artificial Intelligence "will rule the world."

Unlike Open Source during its critical years, Artificial Intelligence certainly isn't a low-profile phenomenon. But a lot of the coverage seems to make the same assumptions the software industry used to make, that truly relevant AI can only be built by superpowers, giant companies, or cutting-edge labs.

To some degree this is true: some AI problems are still difficult enough that they require billions of dollars to attack and solve, and the development of the tools required to build and train AIs requires in many cases extremely specialized knowledge in mathematics and computer science.

However, "some" doesn't mean "all," and once the tools used to build AIs are Open Source, which many if not most of them are, using them becomes progressively eaiser. There's something happening that has happened before: almost every month it's cheaper, and it requires less specialized knowledge, to make a program that learns from humans how to do something no machine ever could, or that finds ways to do it much better than we can. Rings a bell?

The more intuitive parallel isn't software, but rather another success story of open, collaborative development that went from a ridiculous proposition to upending a centuries-old industry: Wikipedia. Like Open Source software, and with a higher public profile, Wikipedia went from an esoteric idea with no chance of competing in quality with carefully curated professional encyclopedias, to what's very often the first (and, too often for too many people, the only) source of factual information about a topic.

What we're beginning to build is a Wikipedia of Artificial Intelligences, or, better yet, an Internet of them: smart programs highly skilled in specific areas that anybody can download, use, modify, and share. The tools have only just begun to become available, and the intelligences themselves are still mostly built by programmers for programmers, but as the know-how required to build a certain level of intelligence becomes smaller and better distributed, this is beginning to change.

Instead of scores of doctors contributing to a Wikipedia page or a personal site about dealing with a certain medical emergency at home, we'll have them contributing to teach what they know to a program that will be freely available to anybody, giving perhaps life-saving advice in real time. A program any doctor in the world will be able to contribute to, modify, and enhance, keeping up with scientific advances, adapting it to different countries and contexts.

It won't replace doctors, lawyers, interior decorators, editors, or other human experts — certainly not the ones who leverage those programs to make themselves even better — but it'll potentially give each human in the world access to advice and intellectual resources in every profession, art, and discipline known to humankind, from giving you honest feedback about your amateur opera singing, to reading and explaining the meaning of whatever morass of legal terms you're about to click "I Accept" to. Instantaneously, freely, continuously improving, and not limited to what a company would find profitable or a government convenient for you to know.

If the Internet, whenever and wherever we choose to, is or can be something we build together, a literal commons of infinitely reusable knowledge, we'll be building, when and where we choose to, a commons of infinitely reusable skills at our command.

It will also resemble Wikipedia more than Open Source in the ease with which people will be able to add to it. Developing powerful software has never been easier, but contributing to Wikipedia, or making a post on a site or social network about something you know about, only requires technical knowledge many societies already take for granted: open a web page and start typing about the history of Art Deco, your ideas for a revolutionary fusion of empanadas with Chinese cuisine, or whatever else it is you want to teach the world about.

Teaching computers about many things will be even easier than that. We're close to the point where computers will be able to learn your recipe just from a video of you cooking and talking about it, and if besides sending that video to a social network you give access to it to an Open Cook, then it'll learn from your recipe, mix it with other ideas, and be able to give improved advice to anybody else in the world. You'll also be able to directly engage with these intelligences to teach them deliberately: just as artificial intelligences can learn to beat games just by playing them, they'll be able to "pick up" skills from humans by doing things and asking for feedback. And if you don't like how it does something, you can always teach it to do it in a different way, and anybody will be able to use your version if they think it's better, and in turn modify it any way they want.

Neither Open Source nor Wikipedia, under different names, looks, and motivations, is as new as it seems to be. They've been known for decades, and only seemed pointless or impossible because our shared imagination often runs a bit behind our shared power. We've begun to realize we can make computers do an enormous number of things, much sooner than we thought we would, and while we try to predict and shape the implications of this, we're still approaching it as if revolutionary technology can only work if built and controlled by giant countries and companies.

They are a part of it, but not the only one, and over the long term perhaps not even the most important part. Google matters because it gives us access to the knowledge we — journalists, scientists, amateurs, scholars, people armed with nothing more and nothing less than a phone and curiosity — built and shared. We go to Facebook to see what we are doing.

Some Artificial Intelligences can only be built by sophisticated, specialized organizations; some companies will become wealthy (or even more so) doing it. And some others can and will be built by all of us, together, and over the long term their impact will be just as large, if not larger. The world changed once everybody was able, at least in theory, to read. It changed again when everybody was able, at least in theory, to write something that everybody in the world can read.

How much will it change again once the things around us learn how to do things on their own, and we teach them together?


This article is based on the talk I gave at the Red Hat Forum Buenos Aires 2017.

Russia 1, Data Science 0

Both sides in the 2016 election had access to the best statistical models and databases money could buy. If Russian influence (which as far as we know involved little more than the well-timed dumping of not exactly military-grade hacked information, plus some Twitter bots and Facebook ads) was at any level decisive, then it's a slap in the face for data-driven campaigning, which apparently hasn't rendered obsolete the old art of manipulating cognitive blind spots in media coverage and political habits ("they used Facebook and Twitter" explains nothing: so did all US candidates, in theory with better data and technology, and so do small Etsy shops; it should've made no difference).

The lessons, I suspect, are three:

  • The theory and practice of data-driven campaigning is still very immature. Algorithmize the Breitbart-Russia-Assange-Fox News maneuver, and you'll have something far ahead of the state of the art. (I believe this will come from more sophisticated psychological modeling, rather than more data.)
  • If a country's political process is as vulnerable as the US' was to what the Russians did, then how will it do against an external actor properly leveraging the kind of tools you can develop at the intersection of obsessive data collection, an extremely Internet-focused government, cutting-edge AI, and an assertive foreign policy?
  • You know, like China. Hypothetically.

Whenever this happens, the proper reaction isn't to get angry, but to recognize that a political system proved embarrassingly vulnerable, and to take measures to improve it. That said, that's slightly less likely to happen when those informational vulnerabilities are also used by the same local actors that are partially responsible for fixing them.

(As an aside, "out under-investment on security /deliberate exploiting of regulatory gaps we lobbied for/cover-up of known vulnerabilities would've been fine if not for those dastardly hackers" is also the default response of large companies to this kind of thing; this isn't a coincidence, but a shared ethos.)

What does it mean for Argentina to grow 3%?

Is that a lot, or a little? It's both. What follows is a quick explanation, ignoring a great many important factors, of what that 3% the World Bank predicts means for people's wallets and for the elections.

The clearest way I can think of to explain that 3% is to ask what would happen if it held from now until the 2019 elections. Simplifying a great deal, and assuming nothing unexpectedly good happens (in a rather ruthless sense, one example would be an ecological collapse in the soy-producing regions of the US) or unexpectedly bad (say, a war in Korea), 3% from now to 2019 would allow:

  • Reducing the deficit by roughly a fifth (or more, if state spending is cut).
  • Increasing state spending per person by roughly 10% (or more, if the deficit is kept or increased).

On the one hand, this would be a significant achievement: growing 3% without a jump in the price of your primary exports is very hard to do, and doing it for several years in a row even harder. The US would love to manage it in a sustained way, and last year fewer than one country in three pulled it off. On the other hand, it would translate into positive but not spectacular changes in Argentines' standard of living.

Between now and the 2023 elections something improbable is almost certain to happen, but imagining that the 3% annual growth holds (and this really would be an administrative and political triumph), the deficit could shrink to a fifth of what it is now, with state spending per person around a third higher (with different numbers depending on tax reforms, political decisions, etc.; this is just one reasonable scenario). A very positive change in quality of life, definitely. Not a spectacular one. About as good as it would be realistic to expect, probably.
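
For a sense of scale, here is a minimal sketch (my own illustration, not from the original post) of what sustained 3% growth compounds to; the deficit and per-person spending figures above also depend on fiscal choices and population growth, which this deliberately ignores.

# Raw compounding of 3% annual growth (illustration only; fiscal choices and
# population growth, which drive the deficit and per-person figures, are ignored).
for years in (2, 3, 6):
    growth = 1.03 ** years - 1
    print(f"{years} years at 3%: the economy ends up about {growth:.0%} larger")
# 2 years: ~6%, 3 years: ~9%, 6 years: ~19%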

Politically enough to sustain, whichever party wins, a coherent economic policy for the decade or more it would take to put the country on something like a self-sustaining growth curve? That's the question. Historically, Argentina has three structural economic problems:

  • An economy that isn't very advanced, with internal mechanisms practically designed to make improving it difficult.
  • Political (ultimately cultural) timeframes incompatible with how long the kind of sustained incremental growth takes that is the only way economies grow (barring historical exceptions in situations Argentina is not in).
  • A fairly rigid "ceiling" on the economy's efficiency that seems quite structural; we have never managed to break through it, and I suspect doing so would require rather radical cultural and social changes, especially in the context of Argentina's political traditions.

In the long run (which in this context sadly means "not the next elections, but the ones after those") the Government's challenge (any government's, really) is twofold. On one hand, an administrative, economic, and internal-negotiation policy that allows a significant growth rate to be sustained over time; on the other, the satisfaction of public expectations that are higher than what the speed of growth makes possible in a purely material sense (and in some cases rightly so; a family going hungry cannot wait for that 3% to do its work). Performing that constant juggling act between the material and the symbolic over at least a couple of decades, in a country deeply suspicious (in a way that is historically understandable but also too automatic) of the very concept of a technically sophisticated economy and State, is what the political class elected and/or tolerated by Argentines has not been able to achieve, in the few cases in which it was even attempted.

In short, that 3% is, empirically, an achievement. Sustaining it consistently until the next elections, and especially until the ones after those, would be a remarkable administrative and political triumph, besides requiring a significant dose of luck.

It is the peculiarity of the country, and the trap it has been caught in for more than a century, that, for reasons both good and bad, it is not at all clear that this would be enough.

Tesla (or Google) and the risk of massively distributed physical terrorist attacks

You know, an autonomous car is only a software vulnerability away from being a lethal autonomous weapon, and a successful autonomous car company is only a hack away from being the world's largest (if single-use) urban combat force. Such an event would easily be the worst terrorist attack in history. Imagine a year's worth of car traffic deaths, in multiple countries all over the world, during a single, horrifying span of ten minutes. And how ready is your underfunded public transit system to cope with a large proportion of the city's cars being unusable during the few days it takes the company to deal with the hack while everybody is going at them with pitchforks both legal and more or less literal?

But this is a science-fictional premise that's already been used in fiction more than once. In the real world, the whole of our critical software infrastructure is practically impervious to any form of attack, and, if nothing else, companies take the ethical responsibilities inherent in their control over data and systems with the seriousness they demand, even lobbying for higher levels of regulation than the less technically sophisticated public and governments ask for. And, while current on-board software systems are known to be ridiculously vulnerable to remote attacks, it's only to be expected that more complex programs running on heterogeneous large-scale platforms under overlapping spheres of regulation and oversight will be much safer.

So nothing to worry about.

Big Data, Endless Wars, and Why Gamification (Often) Fails

Militaries and software companies are currently stuck in something of a rut: billions of dollars are spent on the latest technology, including sophisticated and supposedly game-changing data gathering and analysis, and yet for most of them victory seems at best a matter of luck, and at worst perpetually elusive.

As different as those "industries" are, this common failure has a common root; perhaps unsurprisingly so, given the long and complex history of cultural, financial, and technological relationships between them.

Both military action and gamified software (of whatever kind: games, nudge-rich crowdsourcing software, behaviorally intrusive e-commerce shops, etc) are focused on the same thing: changing somebody else's behavior. It's easy to forget, amid the current explosion — pun not intended — of data-driven technologies, that wars are rarely fought until the enemy stops being able to fight back, but rather until they choose not to, and that all the data and smarts behind a game is pointless unless more players do more of what you want them to do. It doesn't matter how big your military stick is, or how sophisticated your gamified carrot algorithm, that's what they exist for.

History, psychology, and personal experience show that carrots and sticks, alone or in combination, do work. So why do some wars take forever, and some games or apps whimper and die without getting any traction?

The root cause is that, while carrots and sticks work, different people and groups have different concepts of what counts as one. This is partly a matter of cultural and personal differences, and partly a matter of specific situations: as every teacher knows, a gold star only works for children who care about gold stars, and the threat of being sent to detention only deters those for whom it's not an accepted fact of life, if not a badge of honor. Hence the failure of most online reputational systems, the endemic nature of trolls, the hit-and-miss nature of new games not based on an already successful franchise, or, for that matter, the enormous difficulty even major militaries have stopping insurgencies and other similar actors.

But the root problem behind that root problem isn't a feature in the culture and psychology of adversaries and customers (and it's interesting to note that, artillery aside, the technologies applied to both aren't always different), but in the culture and psychology of civilian and military engineers. The fault, so to speak, is not in our five-star rating systems, but in ourselves.

How so? As obvious as it is that achieving the goals of gamified software and military interventions requires a deep knowledge of the psychology, culture, and political dynamics of targets and/or customer bases, software engineers, product designers, technology CEOs, soldiers, and military strategists don't receive more than token encouragement to develop a strong foundation in those areas, much less are they required to do so. Game designers and intelligence analysts, to mention a couple of exceptions, do, but their advice is often given but a half-hearted ear, and, unless they go solo, they lack any sort of authority. Thus we end up, by and large, with large and meticulously planned campaigns — of either sort — that fail spectacularly or slowly fizzle out without achieving their goals, not because of failures of execution (those are also endemic, but a different issue) but because the link between execution and the end goal was formulated, often implicitly, by people without much training in or inclination for the relevant disciplines.

There's a mythology behind this: the idea that, given enough accumulation of data and analytical power, human behavior can be predicted and simulated, and hence shaped. This might yet be true — the opposite mythology of some ineffable quality of unpredictability in human behavior is, if anything, even less well supported by facts — but right now we are far from that point, particularly when it comes to very different societies, complex political situations, or customers already under heavy "attack" by competitors. It's not that people can't be understood, and forms of shaping their behavior designed; it's that this takes knowledge that for now lies in the work and brains of people who specialize in studying individual and collective behavior: political analysts, psychologists, anthropologists, and so on.

They are given roles, write briefs, have fun job titles, and are sometimes even paid attention to. The need for their type of expertise is paid lip service to; I'm not describing explicit doctrine, either in the military or in the civilian world, but rather more insidious implicit attitudes (the same attitudes that drive, in an even more ethically, socially, and pragmatically destructive way, sexism and racism in most societies and organizations).

Women and minorities aside (although there's a fair and not accidental degree of overlap), people with a strong professional formation in the humanities are pretty much the people you're least likely to see — honorable and successful exceptions aside — in a C-level position or having authority over military strategy. It's not just that they don't appear there: they are mostly shunned, and implicitly or explicitly, well, let's go with "underappreciated." Both Silicon Valley and the Pentagon, as well as their overseas equivalents, are seen and see themselves as places explicitly removed from that sort of "soft" and "vague" thing. Sufficiently advanced carrots and sticks, goes the implicit tale, can replace political understanding and a grasp of psychological nuance.

Sometimes, sure. Not always. Even the most advanced organizations get stuck in quagmires (Google+, anyone?) when they forget that, absent an overwhelming technological advantage, and sometimes even then (Afghanistan, anyone?), successful strategy begins with a correct grasp of politics and psychology, not the other way around, and that we aren't yet at a point where this can be provided solely by data gathering and analysis.

Can that help? Yes. Is an organization that leverages political analysis, anthropology, and psychology together with data analysis and artificial intelligence likely to out-think and out-match most competitors regardless of relative size? Again, yes.

Societies and organizations that reject advanced information technology because it's new have, by and large, been left behind, often irreparably so. Societies and organizations that reject humanities because they are traditional (never mind how much they have advanced) risk suffering the same fate.

A simplified nuclear game with Kim Jong-un

Despite its formal apparatus and cold reputation, game theory is in fact the systematic deployment of empathy. It's hard to overstate how powerful this can be, with or without mathematical machinery behind it, so let's take an informal look at a game-theoretical way of empathizing with somebody none of us would particularly want to: North Korea's Kim Jong-un.

First, a caveat: as I'm not trained in international politics, and this is an informal toy model rather than a proper analytical project, it'll be very oversimplified both in form and content. The main point is simply to show a quick example of how to think "game-theoretically" (in a handwavy, pre-mathematical sense) that for once isn't the Prisoner's Dilemma.

This particular game has two players, Kim and the US, and three possible outcomes: regime change, collapse, and status quo. We don't need to put specific values to each outcome to note that each player has clear preferences:

  • For the US, collapse < status quo < regime change
  • For Kim, collapse ≈ regime change < status quo

(From Kim's point of view, a collapsing North Korea and one where he's no longer in charge are probably equivalent.)

Let's simplify the United States' possible moves to attempt regime change and do nothing. The latter results in the status quo with certainty, while the former might end up in a proper regime change with probability p, or in a more or less quick collapse with probability 1-p. Therefore, the United States will attempt a regime change as soon as

 \displaystyle p \times \mbox{ regime change} + (1-p) \times \mbox{ collapse} > \mbox{status quo}

There are multiple ways in which Kim's perceived risk can rise, even aside from direct threats. For example:

  • Decreased rapport between the US and South Korea or China (the two major countries that would bear the brunt of the costs of a collapse) decreases the cost of collapse in the US' strategic calculations, and hence makes a regime change attempt more likely.
  • Every attempt of regime change by the US elsewhere in the world, and any expression of increased self-confidence in their ability to perform one, makes Kim's estimate of the US' estimate of p that much higher, and hence a regime change attempt more likely.
  • Any internal change in North Korea's politics risking Kim's control of the country, should it be found, will also raise p.
  • For that matter, a sufficiently strong fall in their military capabilities would eventually have the same effect.

Kim most likely knows he can't actually defend himself from an attempted regime change (there's no repelled regime change attempt outcome), so his only shot at staying in power is to change the US' strategic calculus. Given how unlikely it seems to be that he can make the status quo more desirable, he has, from a strategic point of view, to make the cost of an attempted regime change high enough to deter one. That's what atomic bombs are for: you change the payout matrix, and you change the game equilibrium. Once you can blow up something in the United States, which of course has an extremely negative value for the US, then even if p = 1,

 \displaystyle (p \times \mbox{ regime change} + (1-p) \times \mbox{ collapse}) + \mbox{Alaska goes boom} < \mbox{status quo}
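
To make the comparison concrete, here is a minimal numeric sketch of this toy calculus; the payoff values are invented for illustration (the model above deliberately assigns none), and only their ordering comes from the preference lists.

# Toy payoffs from the US's point of view. The numbers are made up; only their
# ordering (collapse < status quo < regime change) comes from the model above.
STATUS_QUO = 0.0
REGIME_CHANGE = 10.0
COLLAPSE = -20.0
NUCLEAR_RETALIATION = -1000.0  # "Alaska goes boom"

def attempt_value(p, retaliation=0.0):
    """Expected value, for the US, of attempting regime change."""
    return p * REGIME_CHANGE + (1 - p) * COLLAPSE + retaliation

# Without a deterrent, an attempt beats the status quo once p is high enough:
print(attempt_value(0.8) > STATUS_QUO)                       # True  (4.0 > 0.0)
# With a credible nuclear response, even a guaranteed success doesn't pay:
print(attempt_value(1.0, NUCLEAR_RETALIATION) > STATUS_QUO)  # False (-990.0 > 0.0)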

The unintended problem is that, by both signalling and action, Kim and his regime have convinced the world that they are not entirely rational in strategic terms. As Schelling noted, deterrence often requires convincing other players that you're "crazy enough to do it," but in Kim's case nobody feels entirely certain that he will only use a nuclear weapon in case of an attempted regime change, or exactly what he'd consider one, so, although possessing a nuclear weapon decreases the expected value of a regime change attempt, it also decreases the value of the status quo, making the net impact on the US' strategic calculus (the real goal of North Korea's nuclear program) doubtful. It can, and perhaps has, set the system on a dangerous course: the US decries the country as dangerous, the probability of a regime change attempt grows, Kim tries to develop and demonstrate stronger nuclear capabilities, this makes the US posture harsher, and so on.

In this toy model — and I emphasize that it is one — any attempt to de-escalate has to begin by acknowledging that Kim's preferences between outcomes are what they are. Sanctions that weaken the regime spur, rather than delay, nuclear development. Paradoxically and distastefully, what you want is to credibly commit to not attempting a regime change, which at this point can only be done by actively strengthening the regime. This is something that both China and South Korea seem acutely aware of: pressures on and threats to North Korea tend to be of the "annoying but not regime-threatening" kind, as anything stronger would be counterproductive and not credible, and their assistance to the country has nothing to do with ideological sympathy, and everything to do with keeping it away from collapse.

But not everything is bleakly pragmatic in game theory, and more humane suggestions can be derived from the above analysis. E.g.,

  • A Chinese offer to strengthen and modernize North Korea's nuclear command chain to avoid hasty or accidental deployments would raise the value of the status quo a bit without increasing the chance of a regime change attempt, a mutual win that would probably be accepted.
  • Any form of humanitarian development, as long as it's not seen as threatening the regime, could be implemented if Kim can sell it internally as being his own accomplishment. That'd be very annoying to everybody else, but suggests that quality of life in North Korea (although not political freedom) can be improved in the short term.
  • Credibly limited tit-for-tat counterattacks might, paradoxically, reinforce everybody's trust in mutual boundaries. So, if a North Korean hack against a US bank is retaliated against by hitting Kim's own considerable financial resources in a way that is obviously designed to hurt him while also obviously designed not to impact his grip on power, that'd have a much higher chance of changing his behavior than threatening war.

To once again repeat my caveats, this is far from a proper analysis. To mention one of a multitude of disqualifying limitations, useful strategic analysis of this kind often involves scores of players (e.g., we'd have to look at internal politics in North and South Korea, China, Japan, and the United States, to begin with) with multiple, overlapping, multi-step games, and certainly more detailed and well-sourced domain information than what I've applied here. To derive real-world opinions or suggestions from it would be analytical malpractice.

The point of the article isn't to give yet another uninformed opinion on international politics, but rather to show how even a very primitive and only roughly formal analysis can help frame a discussion about a complex topic in a way that a more unstructured approach couldn't, especially when there are strong moral issues at play.

Sometimes emotions get in the way of understanding somebody else. Thankfully, we have maths to help with that.

Don't worry about opaque algorithms; you already don't know what anything is doing, or why

Machine learning algorithms are opaque, difficult to audit, unconstrained by ethics, and there's always the possibility they'll do the unthinkable when facing the unexpected. But that's true of most of our society's code base, and, in a way, they are the most secure part of it, because we haven't yet talked ourselves into a false sense of security about them.

There's a technical side to this argument: contemporary software is so complex, and the pressures under which it's developed so strong, that it's materially impossible to make sure it'll always behave the way you want it to. Your phone isn't supposed to freeze while you're making a call, and your webcam shouldn't send real-time surveillance to some guy in Ukraine, and yet here we are.

But that's not the biggest problem. Yes, some Toyota vehicles decided on their own to accelerate at inconvenient times because their software systems were mindbogglingly and unnecessarily complex, but nobody outside the company knew this because it was so legally difficult to get access to the code that even after the crashes it had to be inspected by an outside expert under conditions usually reserved for high-level intelligence briefings.

And there was the hidden code in VW engines designed to fool emissions tests, and the programs Uber uses to track you even while they say they aren't, or even Facebook's convenient tools to help advertisers target the emotionally vulnerable.

The point is, the main problem right now isn't what a self-driving car _might_ do when it has to make a complex ethical choice guided by ultimately unknowable algorithms, but what the car is doing at every other moment, reflecting ethical choices guided by corporate executives that might be unknowable in a philosophical, existential sense, but are worryingly familiar in an empirical one. You don't know most of what your phone is doing at any given time, not to mention other devices; it can be illegal to try to figure it out, and it can also be illegal, if not impossible, to change it even if you did.

And a phone is a thing you hold in your hand and can, at least in theory, put in a drawer somewhere if you want to have a discreet chat with a Russian diplomat. Even more serious are all the hidden bits of software running in the background, like the ones that can automatically flag you as a national security risk, or that are constantly weighing whether you should be allowed to turn on your tractor. Even if the organization that developed or runs the software did its job uncommonly well and knows what it's doing down to the last bit, you don't and most likely never will.

This situation, perhaps first and certainly most forcefully argued against by Richard Stallman, is endemic to our society, and absolutely independent of the otherwise world-changing Open Source movement. Very little of the code in our lives is running in something resembling a personal computer, after all, and even when it does, it mostly works by connecting to remote infrastructures whose key algorithms are jealously guarded business secrets. Emphasis on secret, with a hidden subtext of especially from users.

So let's not get too focused on the fact that we don't really understand how a given neural network works. It might suddenly decide to accelerate your car, but "old fashioned" code could, and as a matter of fact did, and in any case there's very little practical difference between not knowing what something is doing because it's a cognitively opaque piece of code, and not knowing what something is doing because the company controlling the thing you bought doesn't want you to know and has the law on its side if it wants to send you to jail if you try to.

Going forward, our approach to software as users, and, increasingly, as citizens, cannot but be empirical paranoia. Just assume everything around you is potentially doing everything it's physically capable of (noting that being remotely connected to huge amounts of computational power makes even simple hardware quite more powerful than you'd think), and if any of that is something you don't find acceptable, take external steps to prevent it, above and beyond toggling a dubiously effective setting somewhere. Recent experience shows that FOIA requests, legal suits, and the occasional whistleblower might be more important for adding transparency to our technological infrastructure than your choice of operating system or clicking a "do not track" checkbox.

The insidious not-so-badness of technological underemployment, and why more education and better technology won't help

Mass technological unemployment is seen by some as a looming concern, but there are signs we're already living in an era of mass technological underemployment. It's not just an intermediate phase: its politics are toxic, it increases inequality, and it's very difficult to get out of.

Underemployment doesn't necessarily mean working fewer hours than you'd like, or switching jobs frequently. In fact, it often means working a lot, under psychologically and/or physically unhealthy conditions, for low pay, with few or no protections against abuse and firing, and doing your damnedest to keep that job because the alternatives are worse. The United States is a paradigmatic case: unemployment is low, but wage growth has been stagnant for a very long while, and working conditions for large numbers of workers aren't particularly great.

Technology isn't the only culprit — choices in macroeconomic management, fiscal policy, and political philosophy are at least as important — but it certainly hasn't helped. Yes, computers make anybody who knows how to use them much more productive, from the trucker who can use satellite measurements and map databases to identify their location and figure out an optimal route, to the writer using a global information network to gather news and references for an article. But you see the problem: those are extremely useful things, but "using a GPS" and "googling" are also extremely easy things. Most jobs require some form of technological literacy, but once most people got enough of it to fulfill the requirements — thanks in part to decades of single-minded focus in the computer industry — knowing how to use computers makes you more productive, but doesn't get you a better salary. Supply and demand.

More technology obviously won't come to the rescue here; the more advanced our computers become, the easier it is for people to interact with them to get a certain task done (until it's automated and you don't need to interact at all), which makes workers more productive, just not better paid. As most of the new kinds of jobs being created tend to be based on intensive use of technology, they are intrinsically prone to this kind of technological underemployment, and more vulnerable to eventual technological unemployment. The people building those tools are usually safe from this dynamic, but the scalability of mass production, and the even more impressive scalability of software systems, mean that you don't need many people to build those tools and infrastructure. And as we've become more adept at making software easy to use, we've become very good at giving it at best a neutral effect on wages.

Don't think "software engineer," think "underpaid person with an hourly contract working in the local warehouse of a highly advanced global logistics company under the control of a sophisticated software system." There are more of the latter than of the former (and things that used to look like the former have become easy enough to begin to look like the latter...).

More education is equally useless. *Not* to the individual: besides its non-economic significance, your relative education is one of the strongest predictors of your wages. But raising everybody's educational level, just like making everybody's technology easier to use, doesn't raise anybody's wages. By making people more productive, it makes it possible for companies to pay higher wages, but as long as there are more educated-enough people than positions you want to fill, it doesn't make it necessary, so of course (an "of course" contingent on a specific political philosophy) it doesn't happen.

Absent a huge exogenous increase in the demand for labor, or an infinitely more ominous exogenous decrease in its supply, the ongoing dynamic is that technology will keep being improved in power and ease of use, making workers more productive and at the same time giving them less bargaining power, and therefore stalling or reducing their wages and their working conditions.

The developing world faces this problem no less than the developed world, with the added difficulty, but also the ironic advantage, of starting behind it in human, physical, and institutional capital. Investment and integration with the global economy can raise living standards very significantly from that baseline, but they eventually hit the same plateau (and usually at a much lower absolute level).

This isn't just an economic tragedy of missed opportunities; it's an extremely toxic political environment. Mass unemployment isn't politically viable for long — sooner or later, peacefully or not, some action is demanded, which might or might not be rational, humane, or work at all, but which definitely changes the status quo — but mass underemployment of this kind just keeps everybody busy holding on to crappy jobs and trying to learn enough new technology or soft skills or whatever's being talked about this month in order to keep holding on to them, or even get a promotion to a slightly less crappy job where, not coincidentally, you're likely to end up using less technology (the marketing intern googling something vs the marketing VP having a power breakfast with a large customer). It sustains the idea that people could get a better life if they just studied and worked hard enough, which is true in an individual sense — highly skilled software engineers are very well paid — and absurd as a policy solution — once everybody can do what a highly skilled software engineer can do, highly skilled software engineers won't be very well paid. Yet it's the kind of absurdity that sounds obvious, and therefore ends up driving politics and hence policy.

The fact that technology and education don't help with this problem doesn't mean we need less of either. There are other problems they help with, and for those problems we need more of both. But we do need to fight back against increased underemployment, not to avoid it shifting into mass unemployment, but because there's a real risk of it becoming widespread and structural, with serious social and political side effects.

There are workable solutions for this, but they lie in the realm of macroeconomics and fiscal policy, which ultimately depend on political philosophy, and that's a different post.

The case for blockchains as international aid

Blockchains aren't primarily financial tools. They are a political technology, and their natural field of application is the developing world.

The main problem a blockchain is meant to solve is the lack of a trusted third party, which is at its root a problem of institutions, that is, of politics. Bitcoin isn't used because it's convenient or scalable, but because it works as a rudimentary global financial system without having to trust any person or organization (at least that's the theory; poorly regulated financial intermediaries, like life, always find a way). The fact is that we do have a global financial system that is relatively trusted, but bitcoin users — speculators aside — think the system's checks don't work, think they work and want to avoid them, or some combination of both. I'm not judging.

Yet beyond those (huge) nooks and crannies in the developed world, there are billions of people who just don't have access to financial systems they can trust, and beyond finance, there are billions of people who don't have access to any kind of governance system they can trust. Honest cops, relatively functional bureaucracies, public records that don't change overnight: building a state that has and deserves a certain amount of trust takes generations, is always a work in progress, and is very difficult to even begin. Low trust environments are self-perpetuating, simply because individual incentives, risks, and choices become structurally skewed in that way.

Can blockchains solve this? No, obviously not.

But they can provide one small bit of extra buttressing, through a globally visible and verified public document ledger. Don't think in terms of financial transactions, but of more general documents: ownership transfer records, government contracts, some judicial and fiscal records, etc. Boring, old-fashioned, absolutely essential bits of information that everybody in a developed country just assumes without thinking are present, accessible, and reliable, but people elsewhere know can be anything but.

Blockchains working as a sort of global notary, set up by international development organizations but basing their reliability on the processing power donated by a multitude of CPU-rich but often money- and time-poor activists, would give citizens, businesses, and governments a way to fight some forms of mutual abuse. It won't, and cannot, prevent it, but it can at least raise the reputational cost of hiding, changing, or destroying documents that are utterly uninteresting to the likes of WikiLeaks, but that for a family can mean the difference between keeping or losing their home.
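
To make the mechanism concrete, here is a minimal sketch (my own illustration, not a description of any existing system) of the primitive such a notary rests on: publishing a cryptographic fingerprint of a document so that later tampering becomes detectable.

import hashlib, json, time

# A "notary" record is just a document's cryptographic fingerprint plus metadata.
# Once that record sits in a public, append-only ledger, the document can't be
# silently altered without the recorded fingerprint ceasing to match.
def notarize(document_bytes: bytes, description: str) -> dict:
    return {
        "sha256": hashlib.sha256(document_bytes).hexdigest(),
        "description": description,
        "timestamp": int(time.time()),
    }

record = notarize(b"Deed of transfer: parcel 1234 ...", "property transfer record")
print(json.dumps(record, indent=2))
# Anyone holding the original file can recompute the hash and compare it with the
# public record; a mismatch is evidence that the document was changed.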

Even countries that have improved much in this area can strengthen their international reputations, and therefore their attractiveness for investments and migration, by this kind of globally verifiable transparency.

It's not sexy, it'll never make money, and it doesn't fully, or even mostly, solve the problem. It doesn't disrupt the business model of corruption and structural incompetence, and, best case, it'll put a small pebble in one or two undeservedly expensive shoes. Hopefully. Maybe.

But good governance is the core platform of a prosperous and healthy society. Getting it right is one of the hardest things, but also one of the most important we can try to help each other do.

Don't blame algorithms for United's (literally) bloody mess

It's the topical angle, but let's not blame algorithms for the United debacle. If anything, algorithms might be the way to reduce how often things like this happen.

What made it possible for a passenger to be hit and dragged off a plane to avoid inconveniencing an airline's personnel logistics wasn't the fact that the organization implements and follows quantitative algorithms, but the fact that it's an organization. By definition, organizations are built to make human behavior uniform and explicitly determined.

A modern bureaucratic state is an algorithm so bureaucrats will behave in homogeneous, predictable ways.

A modern army is an algorithm so people with weapons will behave in homogeneous, predictable ways.

And a modern company is an algorithm so employees will behave in homogeneous, predictable ways.

It's not as if companies used to be loose federations of autonomous decision-making agents applying both utilitarian and ethical calculus to their every interaction with customers. The lower you are in an organization's hierarchy, the less leeway you have to deviate from rules, no matter how silly or evil they prove to be in a specific context, and customers (or, for that matter, civilians in combat areas) rarely if ever interact with anybody who has much power.

That's perhaps a structural, and certainly a very old, problem in how humans more or less manage to scale up our social organizations. The specific problem in Dao's case was simply that the rules were awful, both ethically ("don't beat up people who are behaving according to the law just because it'll save you some money") and commercially ("don't do things that will get people viscerally and virally angry with you somewhere with cameras, which nowadays is anywhere with people.")

Part of the blame could be attributed to United's CEO Muñoz and his tenuous grasp of even simulated forms of empathy, as manifested by his first and probably most sincere reaction. But hoping organizations will behave ethically or efficiently when and because they have ethical and efficient leaders is precisely why we have rules: one of the major points of a Republic is that there are rules that constrain even the highest-ranking officers, so that we limit both the temptation and the costs of unethical behavior.

Something of a work in progress.

So, yes, rules are or can be useful to prevent the sort of thing that happened to Dao. And to focus on current technology, algorithms can be an important part of this. In a perhaps better world, rules would be mostly about goals and values, not methods, and you would trust the people on the ground to choose well what to do and how to do it. In practice, due to a combination of the advantages of homogeneity and predictability of behavior, the real or perceived scarcity of people you'd trust to make those choices while lightly constrained, and maybe the fact that for many people the point of getting to the top is partially to tell people what to do, employees, soldiers, etc, have very little flexibility to shape their own behavior. To blame this on algorithms is to ignore that this has always been the case.

What algorithms can do is make those rules more flexible without sacrificing predictability and homogeneity. While it's true that algorithmic decision-making can have counterproductive behaviors in unexpected cases, that's equally true of every system of rules. But algorithms can take into account more aspects of a situation than any reasonable rule book could handle. As long as you haven't given your employees the power to override rules, it's irrelevant whether the algorithm can make better ethical choices than them — the incremental improvement happens because it can make a better ethical choice than a static rule book.

In the case of United, it'd be entirely possible for an algorithm to learn to predict and take into account the optics of a given situation. Sentiment analysis and prediction is after all a very active area of application and research. "How will this look on Twitter?" can be part of the utility function maximized by an algorithm, just as much as cost or time efficiencies.
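
As a toy sketch (my own, not any airline's actual system) of what folding that into an objective could look like: the candidate actions, costs, and the predicted_outrage() stub below are all hypothetical stand-ins for a real sentiment or virality model.

# Score candidate ways of freeing up a seat by combining direct cost with a
# predicted PR penalty. predicted_outrage() is a stub for a sentiment model.
def predicted_outrage(action: str) -> float:
    """Hypothetical model output in [0, 1]: how badly this plays in public."""
    return {"raise_voucher_offer": 0.05,
            "deny_boarding_at_gate": 0.4,
            "forcibly_remove_seated_passenger": 0.99}[action]

def total_cost(action: str, direct_cost: float, outrage_weight: float = 10_000) -> float:
    return direct_cost + outrage_weight * predicted_outrage(action)

options = {"raise_voucher_offer": 1_600,
           "deny_boarding_at_gate": 800,
           "forcibly_remove_seated_passenger": 0}
print(min(options, key=lambda a: total_cost(a, options[a])))
# -> "raise_voucher_offer" once the PR term is weighted in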

It feels quite dystopic to think that, say, ride-hailing companies should need machine learning models to prevent them from suddenly canceling trips for pregnant women going to the hospital in order to pick up a more profitable trip elsewhere; shouldn't that be obvious to everybody from Uber drivers to Uber CEOs? Yes, it should. And no, it isn't. Putting "morality" (or at least "a vague sense of what's likely to make half the Internet think you're scum") in code that can be reviewed, as — in the best case — a redundant backup to a humane and reasonable corporate culture, is what we already do in every organization. What we can and should do is teach algorithms to try to predict the ethical and PR impact of every recommendation they make, and take that into account.

Whether they'll be better than humans at this isn't the point. The point is that, as long as we're going to have rules and organizations where people don't have much flexibility not to follow them, the behavioral boundaries of those organizations will be defined by that set of rules, and algorithms can function as more flexible and careful, and hence more humane, rules.

The problem isn't that people do what computers tell them to do (if you want, you can say that the root problem is when people do bad things other people tell them to do, but that has nothing to do with computers, algorithms, or AI). Computers do what people tell them. We just need to, and can, tell them to be more ethical, or at least to always take into account how the unavoidable YouTube video will look.

The new (and very old) political responsibility of data scientists

We still have a responsibility to prevent the ethical misuse of new technologies, as well as helping make their impact on human welfare a positive one. But we now have a more fundamental challenge: to help defend the very concept and practice of the measurement and analysis of quantitative fact.

To be sure, a big part of practicing data science consists of dealing with the multiple issues and limitations we face when trying to observe and understand the world. Data seldom means what its name implies: there are qualifications, measurement biases, unclear assumptions, and so on. And that's even before we engage in the useful but tricky work of making inferences from that data.

But the end result of what we do — and not only, or even mainly, us, for this collective work of observation and analysis is one of the common threads and foundations of civilization — is usually a pretty good guess, and it's always better than closing your eyes and picking whatever number provides you with an excuse to do what you'd rather do. Deliberately messing with the measurement of physical, economic, or social data is a lethal attack on democratic practices, because it makes it impossible for citizens to evaluate government behavior. Defending the impossibility of objective measurement (as opposed to acknowledging and adapting to the many difficulties involved) is simply giving up on any form of societal organization other than mystical authoritarianism.

Neither attitude is new, but both have gained dramatically in visibility and influence during the last year. This adds to the existing ethical responsibilities of our profession a new one, unavoidably in tension with them. We not only need to fight against over-reliance on algorithmic governance driven by biased data (e.g. predicting behavior from records compiled by historically biased organizations) or the unethical commercial and political usage of collected information, but also, paradoxically, we need to defend and collaborate in the use of data-driven governance based on best-effort data and models.

There are forms of tyranny based on the systematic deployment of ubiquitous algorithmic technologies, and there are forms of obscurantism based on the use of cargo cult pseudo-science. But there are also forms of tyranny and obscurantism predicated on the deliberate corruption of data or even the negation of the very possibility of collecting it, and it's part of our job to resist them.

Economists and statisticians in Argentina, when previous governments deliberately altered some national statistics and stopped collecting others, rose to the challenge by providing parallel, and much more widely believed, numbers (among the first, the journalist and economist — a combination of skills more necessary with every passing year — Sebastián Campanario). Theirs weren't the kind of arbitrary statements that are frequently part of political discourse, nor did they reject official statistics because they didn't match ideological preconceptions or because it was politically convenient to do so. Official statistics were technically wrong in their process of measurement and analysis, and for any society that aspires to meaningful self-government, the soundness and availability of statistics about itself are an absolute necessity.

Data scientists are increasingly involved in the collection and analysis of socially relevant metrics, in both the private and the public sectors. We need to consistently refuse to do it wrong, and to do our best to do it correctly even, and especially, when we suspect other people are choosing not to. Nowcasting, inferring the present from the available information, can be as much of a challenge, and as important, as predicting the future. The fact that we might have to do it without being able to assume possibly flawed but honest data is a problem we have already begun to work on in other contexts. Some of the earliest applications of modern data-driven models in finance, after all, were in fraud detection.

We are all potentially climate scientists now, with massive observational efforts refuted by anecdote, disingenuous visualizations touted as definitive proof, and, eventually, the very possibility of quantitative understanding violently mocked. We (still) have to make sure the economic and social impacts of things like ubiquitous predictive surveillance and technology-driven mass unemployment are managed in positive ways, but this new responsibility isn't one we can afford to ignore.

The Mental Health of Smart Cities

Not the mental health of the people living in smart cities, but that of the cities themselves. Why not? We are building smart cities to be able to sense, think, and act; their perceptions, thoughts, and actions won't be remotely human, or even biological, but that doesn't make them any less real.

Cities can monitor themselves with an unprecedented level of coverage and detail, from cameras to government records to the wireless information flow permeating the air. But these perceptions will be very weakly integrated, as information flows slowly, if at all, between organizational units and social groups. Will the air quality sensors in a hospital be able to convince most traffic to be rerouted further away until rush hour passes? Will the city be able to cross-reference crime and health records with the distribution of different businesses, and offer tax credits to, say, grocery stores opening in a place that needs them? When a camera sees you having trouble, will the city know who you are, what's happening to you, and who it should call?

This isn't a technological limitation. It comes from the way our institutions and businesses are set up, which is in turn reflected in our processes and infrastructure. The only exception in most parts of the world is security, particularly against terrorists and other rare but high-profile crimes. Organizations like the NSA or the Department of Homeland Security (and its myriad partly overlapping versions both within and outside the United States) cut through institutional barriers, most legal regulations, and even the distinction between the public and the private in a way that nothing else does.

The city has multiple fields of partial awareness, but they are only integrated when it comes to perceiving threats. Extrapolating an overused psychological term, isn't this a heuristic definition of paranoia? The part of the city's mind that deals with traffic and the part that deals with health speak with each other slowly and seldom, as do the part that manages taxes and the one that sees the world through the electrical grid. But when scared, and the city is scared very often, close to every day, all of its senses and muscles snap together in fear. Every scrap of information correlated in central databases, every camera and sensor searching for suspects, all services following a single coordinated plan.

For comparison, shopping malls are built to distract and cocoon us, to put us in the perfect mood to buy. So smart shopping malls see us as customers: they track where we are, where we're going, what we looked at, what we bought. They try to redirect us to places where we'll spend more money, ideally away from the doors. It's a feeling you can notice even in the most primitive "dumb" mall: the very shape of the space is built as a machine to do this. Computers and sensors only heighten this awareness; not your awareness of the space, but the space's awareness of you.

We're building our smart cities in a different direction. We're making them see us as elements that need to get from point A to point B as quickly as possible, with little or no care for what's going on at either end... except when they see us as potential threats, and they never see or think as clearly and as fast as they do then. Much of the mind of the city takes the form of mobile services from large global companies that seldom interact locally with each other, much less with the civic fabric itself. Everything only snaps together when an alert is raised and, for the first time, we see what the city can do when it wakes up and its sensors and algorithms, its departments and infrastructure, are at least attempting to work in coordination toward a single end.

The city as a whole has no separate concept of what a person is, no way of tracing you through its perceptions and memories of your movements, actions, and context except when you're a threat. As a whole, it knows of "persons of interest" and "active situations." It doesn't know about health, quality of life, or a sudden change in a neighborhood. It doesn't know itself as anything other than a target.

It doesn't need to be like that. The psychology of a smart city, how it integrates its multiple perceptions, what it can think about, how it chooses what to do and why, all of that is up to us. A smart city is just an incredibly complex machine we live in and to which we give life. We could build it to have a sense of itself and of its inhabitants, to perceive needs and be constantly trying to help. A city whose mind, vaguely and perhaps unconsciously intuited behind its ubiquitous and thus invisible cameras, we find comforting. A sane mind.

Right now we're building cities that see the world mostly in terms of cars and terrorism threats. A mind that sees everything and puts together very little except when something scares it, where personal emergencies are almost entirely your own affair, but that becomes single-minded when there's a hunt.

That's not a sane mind, and we're planning to live in a physical environment controlled by it.

The best political countersurveillance tool is to grow the heck up

The thing is, we're all naughty. The specifics of what counts as "wrong" depend on the context, but there isn't anybody on Earth so boring that they haven't done or aren't doing something they'd rather not have known worldwide.

Ordinarily this just means that, like every other social species, we learn pretty early how to dissimulate. But we aren't living in an ordinary world. As our environment becomes a sensor platform with business models bolted on top of it, private companies have access to enormous amounts of information about things that used to be very difficult to find, non-state actors can find even more, and the most advanced security agencies... Well. Their big problem is managing and understanding this information, not gathering it. And all of this can be done more cheaply, scalably, and just better than ever before.

Besides issues of individual privacy, this has a very dangerous effect on politics wherever it's coupled with overly strict standards: it essentially gives a certain degree of veto power over candidates to any number of non-democratic actors, from security agencies to hacker groups. As much as transparency is an integral part of democracy, we haven't yet adapted to the kind of deep but selective transparency this makes possible, the US election being but the most recent, glaring, and dangerous example.

It will happen again, it will keep happening, and the prospect of technical or legal solutions is dim. This being politics, the structural solution isn't technical, but human. While we probably aren't going to stop sustaining the fiction that we are whatever our social context considers acceptable, we do need to stop reacting to "scandals" in an indiscriminate way. Indiscriminate outrage has its individual advantages, of course, but its political implications, aggregated over an entire society, are extremely deleterious.

Does this mean anything goes? No, quite the contrary. It means we need to become better at discriminating between the embarrassing and the disqualifying, between the hurtful crime and the indiscretion, between what makes somebody dangerous to give power to, and what makes them somebody with very different and somewhat unsettling life choices. Because everybody has something "scandalous" in their lives that can and will be dug up and displayed to the world whenever it's politically convenient to somebody with the power to do it, and reacting to all of it in the same way will give enormous amounts of direct political power to organizations and individuals, everywhere and at all points in the spectrum of legality, that are among the least transparent and accountable in the world.

This means knowing the difference between the frowned upon and the evil. It's part of growing up, yet it's rarer, and more difficult, the larger and more interconnected a group becomes. Eventually the very concept of evil as something other than a faux pas disappears, and, historically, socially sanctioned totalitarianism follows because, while political power in nominally democratic societies seldom arrogates to itself the power to define what's evil, it has enormous power to change the scope of "adequate behavior."

We aren't going to shift our public morals to fully match our private behavior. We aren't really wired that way; we are social primates, and lying to each other is the way we make our societies work. But we are social primates living in an increasingly total surveillance environment vulnerable to multiple actors, a new (geo)political development with impossible technical solutions, but a very simple, very hard, and very necessary sociological fix: we just need to grow the heck up.

The informal sector Singularity

At the intersection of cryptocurrencies and the "gig economy" lies the prospect of almost self-contained shadow economies with their own laws and regulations, vast potential for fostering growth, and the possibility of systematic abuse.

There have always been shadow, "unofficial" economies overlapping and in some places overruling their legal counterparts. What's changing now is that technology is making possible the setup and operation of extremely sophisticated informational infrastructures with very few resources. The disruptive impact of blockchains and related technologies isn't any single cryptocurrency, but the fact that it's another building block for any group, legal or not, to operate their own financial system.

Add to this how easy it is to create fairly generic e-commerce marketplaces, reputation tracking systems, and, perhaps most importantly, purely online labor markets. For employers, the latter can be a flexible and cost-efficient way of acquiring services, while for many workers it's becoming a useful, and for some an increasingly necessary, source of income. Large rises in unemployment, especially those driven by new technologies, always increase the usefulness of this kind of labor market for employers in both regulated and unregulated activities, as a "liquid" market over sophisticated platforms makes it easy to continuously optimize costs.

You might call it a form of "Singularity" of the informal sector: there are unregulated or even fully criminal sectors that are technologically and algorithmically more sophisticated than the average (or even most) of the legal economy.

While most online labor markets are fully legal, this isn't always the case, even when the activity being contracted isn't per se illegal. One current example is Uber's situation in Argentina: their operation is currently illegal due to regulatory non-compliance, but, short of arresting drivers — something that's actually being considered, due in some measure to the clout of the cab drivers' union — there's nothing the government can do to completely stop them. Activities less visible than picking somebody up in a car — for example, anything you can do from a computer or a cellphone in your home — contracted over the internet and paid in a cryptocurrency or in any parallel payment system anywhere in the world are very unlikely to ever be visible to, or regulated by, the state or states who theoretically govern the people involved.

There are clear potential upsides to this. The most immediate one is that these shadow economies are often very highly efficient and technologically sophisticated by design. They can also help people avoid some of the barriers of entry that keep many people from full-time legal employment. A lack of academic accreditations, a disadvantaged socioeconomic background, or membership in an unpopular minority or age bracket can be a non-issue for many types of online work. In other cases they simply make possible types of work so new there's no regulatory framework for them, or that are impeded by obsolete ones. And purely online activities are often one of the few ways in which individuals can respond to economic downturns in their own country by supplying services overseas without intermediate organizations capturing most or all of the wage differential.

The main downside is, of course, that a shadow economy isn't just free from obsolete regulatory frameworks, but also free from those regulations meant to prevent abuse, discrimination, and fraud: minimum wages, safe working conditions, protection against sexual harassment, etc.

These issues might seem somewhat academic right now: most of the "gig economy" is either a secondary source of income or the realm of relatively well-paid professionals. But technological unemployment and the increase in inequality suggest that this kind of labor market is likely to become more important, particularly for the lower deciles of the income distribution.

Assuming a government has the political will to attack the problem of a growing, technologically advanced, and mostly unregulated labor economy — for some, at least, this seems to be a favoured outcome rather than a problem — fines, arrests, and the like are very unlikely to work, at least in moderately democratic societies. The global experience with software and media piracy shows how extremely difficult it is to stop an advanced decentralized digital service regardless of its legality. Silk Road was shut down, but it was one site, and run by a conveniently careless operator. The size, sophistication, and longevity of the on-demand network attacks, hacked information, and illegal pornography sectors are a better indicator of the impossibility of blocking or taxing this kind of activity once supply and demand can meet online.

A more fruitful approach to the problem is to note that, given the choice, most people prefer to work inside the law. It's true that employers very often prefer the flexibility and lower cost of an unregulated "high-frequency" labor economy, but people offer their work in unregulated economies when the regulated economy is blocked to them by discrimination, when the legal framework hasn't kept up with the possibilities of new technologies, or when there simply isn't enough demand in the local economy, making "virtual exports" an attractive option.

The point isn't that online labor markets, reputation systems, cryptocurrencies, and the rest are unqualified evils. Quite the contrary. They offer the possibility of wealthier, smarter economies with a better quality of life, less onerous yet more effective regulations for both employers and employees, and new forms of work. However, these changes have to be fully implemented. Upgrading the legal economy to take advantage of new technologies — and doing it very soon — isn't just a matter of not missing an opportunity, particularly for less developed economies. Absent a technological overhaul of how the legal economy works, more effective and flexible unregulated shadow economies are only going to keep growing; a lesser evil than effective unemployment, but not without a heavy social price.

For the unexpected innovations, look where you'd rather not

Before Bill Gates was a billionaire, before the power, the cultural cachet, and the Robert Downey Jr. portrayals, computers were for losers who would never get laid. Their potential was of course independent of these considerations, but Steve Jobs could become one of the richest people on Earth because he was fascinated with, and dedicated time to, something that cool kids — especially those from the wealthy families who could most easily afford access to computers — wouldn't have been caught dead playing with, or at least loving.

Geek, once upon a time, was an unambiguous insult. It was meant to humiliate. Dedicating yourself to certain things meant you'd pay a certain social price. Now, of course, things are better for that particular group; if nothing else, an entire area of intellectual curiosity is no longer stigmatized.

But because our innovation-driven society is locked onto computer geeks as the source of change, it's going to be completely blindsided by whatever comes next.

Consider J. K. Rowling. Stephenie Meyer. E. L. James. It's significant that you might not recognize the last two names: Meyer wrote Twilight and James Fifty Shades of Grey. Those three women (and it's also significant that they are women) are among the best-selling and most widely influential writers of our time, and pretty much nobody in the publishing industry was even aware that there was a market for what they were doing. Theirs aren't just the standard stories of talented artists struggling to be published. By the standards of the (mostly male) people who ran and by and large still run the publishing industry, the stories they wrote were, to put it kindly, pointless and low-brow. A school for wizards where people die over the course of a multi-volume malignant coup d'état? The love story of a teenager torn between her possessive werewolf friend and a teenage-looking, centuries-old vampire struggling to maintain self-control? Romantic sadomasochism from a female point of view?

Who'd read that?

Millions upon millions did. And then they watched the movies, and read the books again. Many of them were already writing the things they wanted to read — James' story was originally fan fiction in the Twilight universe — and wanted more. The publishing industry, supposedly in the business of figuring that out, had ignored them because they weren't a prestigious market (they were women, to be blunt, including very young women who "weren't supposed" to read long books, and older women who "weren't supposed" to care about boy wizards), and those weren't prestigious stories. When it comes to choosing where to go next, industries are as driven by the search for reputation as by the search for profit (except finance, where the search for profit regardless of everything else is the basis of reputation). Rowling and Meyer had to convince editors, and James' first surge of sales came through self-published Kindle books. The next literary phenomenon might very well bypass publishers entirely, and if that becomes the norm then the question will be what the publishing industry is for.

Going briefly back to the IT industry, gender and race stereotypes are still awfully prevalent. The next J. K. Rowling of software — and there will be one — will have to go through a much more difficult path than she should've had to. On the other hand, a whole string of potential early investors will have painful almost-did-it stories they'll never tell anyone.

This isn't a modern development, but rather a well-established historical pattern. It's the underdogs — the sidelined, the less reputable — who most often come up with revolutionary practices. The "mechanical arts" that we now call engineering were once a disreputable occupation, and no land-owning aristocrat would have guessed that one day they'd sell their bankrupt ancestral homes to industrialists. Rich, powerful Venice began, or so its own legend tells, as a refugee camp. And there's no need to recount the many and ultimately fruitful ways in which the Jewish diaspora adapted to and ultimately leveraged the restrictions imposed everywhere upon them.

Today geographical distances have greatly diminished, and are practically zero when it comes to communication and information. The remaining gap is social — who's paid attention to, and what about.

To put it in terms of a litmus test, if you wouldn't be somewhat ashamed of putting it in a pitch deck, it might be innovative, brilliant, and a future unicorn times ten, but it's something people already sort-of see coming. And a candidate every one of your competitors would consider hiring is one that will most likely go to the biggest or best-paying one, and will give them the kind of advantage they already have. To steal a march on them — to borrow a tactic most famously used by Napoleon, somebody no king would have appointed as a general until he won enough wars to appoint kings himself — you need to hire not only the best of the obvious candidates, but also look at the ones nobody is looking at, precisely because nobody is looking at them. They are the point from which new futures branch.

The next all-caps NEW thing, the kind of new that truly shifts markets and industries, is right now being dreamed and honed by people you probably don't talk to about this kind of thing (or at all) who are doing weird things they'd rather not tell most people about, or that they love discussing but have to go online to find like-minded souls who won't make fun of them or worse.

Diversity isn't just a matter of simple human decency, although it's certainly that as well, and that should be enough. In a world of increasingly AI-driven hyper-corporations that can acquire or reproduce any technological, operational, or logistical innovation anybody but their peer competitors might come up with, it's the only reliable strategy to compete against them. "Black swans" only surprise you if you never bothered looking at the "uncool" side of the pond.

The Differentiable Organization

Neural networks aren't just at the fast-advancing forefront of AI research and applications, they are also a good metaphor for the structures of the organizations leveraging them.

DeepMind's description of their latest deep learning architecture, the Differentiable Neural Computer, highlights one of the core properties of neural networks: they are differentiable systems for performing computations. Generalizing the mathematical definition, for a system to be differentiable implies that it's possible to work backwards quantitatively from its current behavior to figure out the changes that should be made to the system to improve it. Very roughly speaking — I'm ignoring most of the interesting details — that's a key component of how neural networks are usually trained, and part of how they can quickly learn to match or outperform humans in complex activities, beginning from a completely random "program." Each training round provides not only a performance measurement, but also information about how to tweak the system so it'll perform better the next time.
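As a minimal illustration of what differentiability buys you (the numbers and the one-parameter "policy" below are purely illustrative), here is gradient descent in a dozen lines: each round of feedback tells you not just how wrong you were, but in which direction each parameter should move.

    # A minimal sketch of why differentiability matters: from the measured
    # error you can compute, not just that performance was bad, but which
    # way to adjust. All numbers are illustrative.

    def loss(w, data):
        """Squared error of a one-parameter 'policy' y = w * x."""
        return sum((w * x - y) ** 2 for x, y in data) / len(data)

    def grad(w, data):
        """Analytic derivative of the loss with respect to w."""
        return sum(2 * (w * x - y) * x for x, y in data) / len(data)

    data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x
    w = 0.0                                      # start from a blank/random policy
    for step in range(200):
        w -= 0.05 * grad(w, data)                # each round says how to improve

    print(round(w, 2))  # converges near 2.0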

Learning from errors and adjusting processes accordingly is also how organizations are supposed to work, through project postmortems, mission debriefings, and similar mechanisms. However, for the majority of traditional organizations this is in practice highly inefficient, when at all possible.

  • Most of the details of how they work aren't explicit, but encoded in the organizational culture, workflow, individual habits, etc.
  • They have at best a vague informal model — encoded in the often mutually contradictory experience and instincts of personnel — of how changes to those details will impact performance.
  • Because most of the "code" of the organization is encoded in documents, culture, training, the idiosyncratic habits of key personnel, and so on, they change only partially, slowly, and with far less control than implied in organizational improvement plans.

Taken together, these limitations — which are unavoidable in any system where operational control is left to humans — make learning organizations almost chimerical. Even after extensive data collection, without a quantitative model of how the details of its activities impact performance and a fast and effective way of changing them, learning remains a very difficult proposition.

By contrast, organizations that have automated low-level operational decisions and, most importantly, have implemented quick and automated feedback loops between their performance and their operational patterns, are, in a sense, the first truly learning organizations in history. As long as their operations are "differentiable" in the metaphorical sense of having even limited quantitative models that allow working backwards from observed performance to desirable changes — you'll note that the kinds of problems the most advanced organizations have chosen to tackle are usually of this kind, beginning in fact relatively long ago with automated manufacturing — then simply by continuing their activities, even if inefficiently at first, they will improve quickly and relentlessly.

Compare this pattern with an organization where learning only happens in quarterly cycles of feedback, performed by humans with a necessarily incomplete, or at least heavily summarized, view of low-level operations and the impact on overall performance of each possible low-level change. Feedback delivered to humans that, with the best intentions and professionalism, will struggle to change individual and group behavior patterns that in any case will probably not be the ones with the most impact on downstream metrics.

It's the same structural difference observed between manually written software and trained and constantly re-trained neural networks; the former can perform better at first, but the latter's improvement rate is orders of magnitude higher, and sooner or later leaves the former in the dust. The last few years in AI have shown the magnitude of this gap, with software routinely learning games, image recognition, and other complex tasks from scratch in hours or weeks, going from poor or absolutely null performance to, in some cases, surpassing human capabilities.

Structural analogies between organizations and technologies are always tempting and usually misleading, but I believe the underlying point is generic enough to apply: "non-differentiable" organizations aren't, and cannot be, learning organizations at the operational level, and sooner or later they aren't competitive with others that set up automation, information capture, and the appropriate automated feedback loops from the beginning.

While the first two are at the core of "big data" organizational initiatives, the third is still a somewhat unappreciated feature of the most effective organizations. Rare enough, for the moment, to be a competitive advantage.

The truly dangerous AI gap is the political one

The main short term danger from AI isn't how good it is, or who's using it, but who isn't: governments.

This impacts every aspect of our interaction with the State, beginning with the ludicrous way in which we have to move papers around (at best, digitally) to tell one part of the government something another part of the government already knows. Companies like Amazon, Google, or Facebook are built upon the opposite principle. Every part of them knows everything any part of the company knows about you (or at least it behaves that way, even if in practice there are still plenty of awkward silos).

Or consider the way every business and technical process is monitored and modeled in a high-end contemporary company, and contrast it with the opacity, most damagingly to themselves, of government services. Where companies strive to give increasingly sophisticated AI algorithms as much power as possible, governments often struggle to give humans the information they need to make the decisions, much less assist or replace them with decision-making software.

It's not that government employees lack the skills or drive. Governments are simply, and perhaps reasonably, biased toward organizational stability: they are very seldom built up from scratch, and a "fail fast" philosophy would be a recipe for untold human suffering instead of just a bunch of worthless stock options. Besides, most of the countries with the technical and human resources to attempt something like this are currently leaning to one degree or another towards political philosophies that mostly favor a reduced government footprint.

Under these circumstances, we can only expect the AI gap between the public and the private sector to grow.

The only areas where this isn't the case are, not coincidentally, the military and intelligence agencies, who are enthusiastic adopters of every cutting edge information technology they can acquire or develop. But these exceptions only highlight one of the big problems inherent in this gap: intelligence agencies (and to a hopefully lesser degree, the military) are by need, design, or their citizens' own faith the government areas least subject to democratic oversight. Private companies lose money or even go broke and disappear if they mess up; intelligence agencies usually get new top-level officers and a budget increase.

As an aside, even individuals are steered away from applying AI algorithms instead of consuming their services, through product design and, increasingly, laws that prohibit them from reprogramming their own devices with smarter or at least more loyal algorithms.

This is a huge loss of potential welfare — we are getting worse public services, and at a higher cost, than we could given the available technology — but it's also part of a wider political change, as (big) corporate entities gain operational and strategic advantages that shift the balance of power away from democratically elected organizations. It's one thing for private individuals to own the means of production, and another when they (and often business-friendly security agencies) have a de facto monopoly on superhuman smarts.

States originally gained part of their power through the early and massive adoption of information technologies, from temple inventories in Sumer to tax censuses and written laws. The way they are now lagging behind bodes ill for the future quality of public services, and for democratic oversight of the uses of AI technologies.

It would be disingenuous to say that this is the biggest long- and not-so-long-term problem states are facing, but only because there are so many other things going wrong or still to be done. But it's something that will have to be dealt with; not just with useful but superficial online access to existing services, or with the use of internet media for public communication, but also with deep, sustained investment in the kind of ubiquitous AI-assisted and AI-delegated operations that increasingly underlie most of the private economy. Politically, organizationally, and culturally as near-impossible as this might look.

The recently elected Argentinean government has made credible national statistics one of its earliest initiatives, less an act of futuristic boldness than a return to the 20th century baseline of data-driven decision-making, a baseline the previous government had departed from at no small political and practical cost. By failing to use AI technologies intensively in their public services, most governments in the world are failing to measure up to the technological baseline of the current century, an almost equally serious oversight.

The gig economy is the oldest one, and it's always bad news

Let's say you have a spare bedroom and you need some extra income. What do you do? You do more of what you've trained for, in an environment with the capital and tools to do it best. Anything else only makes sense if the economy is badly screwed up.

The reason is quite simple: unless you work in the hospitality industry, you are better — able to extract a higher income — at whatever else you do than you are at being a host; otherwise you wouldn't take hosting up as a gig, you'd switch to it full time. Suppliers in the gig economy (as opposed to professionals freelancing in their area of expertise) are by definition working more hours less efficiently, whether because they don't have the training and experience, or because they aren't working with the tools and resources they'd take advantage of in their regular environments. The cheaper, lower-quality, badly regulated service they provide might be desirable to many customers, but this is achieved partly through de-capitalization. Every hour and dollar an accountant spends caring for a guest instead of, if he wants a higher income, doing more accounting or upgrading his tools, is a waste of his knowledge. From the point of view of overall capital and skill intensity, a professional low-budget hotel chain would be vastly more efficient over the long term (of course, to do that you need to invest capital in premises and so on instead of in vastly cheaper software and marketing).
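A toy comparison makes the opportunity-cost argument concrete (all numbers below are invented for illustration):

    # Hypothetical hourly rates: gigging only beats more hours of your own
    # trade when demand for that trade is missing, because your hourly
    # product is higher where your skills and tools are.

    accounting_rate = 60.0        # $/hour doing what you trained for
    hosting_income = 80.0         # $/night for the spare room
    hosting_hours = 2.5           # cleaning, messaging, check-in, admin
    hosting_rate = hosting_income / hosting_hours

    print(f"accounting: ${accounting_rate:.0f}/h, hosting: ${hosting_rate:.0f}/h")
    # Hosting only wins if there are no extra accounting hours to be sold,
    # i.e. if demand for the skilled work is the binding constraint.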

The only reason for an accountant, web designer, teacher, or whatnot to do "gigs" instead of extra hours, freelance work, or the like, is that there is no demand for their professional labor. Overtime or freelance work might well be worth relatively less than the equivalent time spent at their main job, but for a gig to make sense it still has to beat them, even though it's work for which they have little training and few tools. That's not what a capital- and skill-intensive economy looks like.

For a specific occupation falling out of favor, this is just the way of things. For wide swaths of the population to find themselves in this position, perhaps employed but earning less than they would like, and unable to trade more of their specialized labor for income, the economy as a whole has to be suffering from depressed demand. What's more, they still have to contend with competitors with more capital but still looking to avoid regulations (e.g., people buying apartments specifically to rent via Airbnb), in turn lowering their already low gig income.

This is a good thing if you want cheaper human-intensive services or have invested in Airbnb and similar companies, and it's bad news if you want a skill-intensive economy with proportionally healthy incomes.

In the context of the gig economy, flexibility is a euphemism for I have a (perhaps permanent!) emergency and can't get extra work, and efficiency refers to the liquidity of services, not the outcome of high capital intensity. And while renting a room or being an Uber driver might be less unpleasant than, and downright utopian compared to, the alternatives open to those without a room to rent or an adequate car, the argument that it's fun doesn't survive the fact that nobody has ever been paid to go and crash on other people's couches.

Neither Airbnb nor Uber is harmful in itself — who doesn't think cab services could use a more transparent and effective dispatch system? — but customer ratings don't replace training, certification, and other forms of capital investment. Shiny apps and cool algorithms aside, a growing gig economy is a symptom of an at least partially de-skilling one.

Bitcoin is Steampunk Economics

From the point of view of its largest financial backers, the fact that Bitcoin combines 21st century computer science with 17th century political economy isn't an unfortunate limitation. It's what they want it for.

We have grown as used to the concept of money as to any other component of our infrastructure, but, all things considered, it's an astoundingly successful technology. Even in its simplest forms it helps solve the combinatorial explosion implicit in any barter system, which is why even highly restricted groups, like prison populations, implement some form of currency as one of the basic building blocks of their polities.

Fiat money is a fascinating iteration of this technology. It doesn't just solve the logistical problems of carrying with you an impractical amount of shiny metals or some other traditional reference commodity, it also allows a certain degree of systemic adaptation to external supply and demand shocks, and pulls macroeconomic fine-tuning away from the rather unsuitable hands of mine prospectors and international trading companies.

A protocol-level hack that increases systemic robustness in a seamless distributed manner: technology-oriented people should love this. And they would, if only that hack weren't, to a large degree... ugh... political. From the point of view of somebody attempting to make a ton of money by, literally, making a ton of money, the fact that a monetary system is a common good managed by a quasi-governmental centralized organization isn't a relatively powerful way to dampen economic instabilities, but an unacceptable way to dampen their chances of making said ton of money.

So Bitcoin was specifically designed to make this kind of adjustment impossible. In fact, the whole, and conceptually impressive, set of features that characterize it as a currency, from the distributed ledger to the anonymity of transfers to the mathematically controlled rate of bitcoin creation, presupposes that you can trust neither central banks nor financial institutions in general. It's a crushingly limited fallback protocol for a world where all central banks have been taken over by hyperinflation-happy communists.
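The issuance schedule really is that rigid; a few lines of arithmetic reproduce the famous cap. (This sketch ignores the integer-satoshi rounding the real protocol applies, so the true maximum is very slightly below 21 million.)

    # The "mathematically controlled rate of creation": the block subsidy
    # starts at 50 BTC and halves every 210,000 blocks, so total issuance
    # is capped by design at roughly 21 million coins, no matter what any
    # central bank or parliament would prefer.

    subsidy = 50.0
    total = 0.0
    while subsidy >= 1e-8:          # one satoshi is the smallest unit
        total += 210_000 * subsidy  # coins created during one halving era
        subsidy /= 2

    print(round(total / 1e6, 2), "million BTC")  # ~21.0 million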

The obvious empirical observation is that central banks have not been taken over by hyperinflation-happy communists. Central banks in the developed world have by and large mastered the art of keeping inflation low – in fact, they seem to have trouble doing anything else. True, there are always Venezuelas and Argentinas, but designing a currency based on the idea that they are at the cutting edge of future macroeconomic practice crosses the line from design fiction to surrealist engineering.

As a currency, Bitcoin isn't the future, but the past. It uses our most advanced technology to replicate the key features of an obsolete concept, adding some Tesla coils here and there for good effect. It's gold you can teleport; like a horse with an electric headlamp strapped to its chest, it's an extremely cool-looking improvement to a technology we have long superseded.

As computer science, it's magnificent. As economics, it's a steampunk affectation.

Where bitcoin shines, relatively speaking, is in the criminal side of the e-commerce sector — including service-oriented markets like online extortion and sabotage — where anonymity and the ability to bypass the (relative) danger of (nominally, if not always pragmatically) legal financial institutions are extremely desirable features. So far Bitcoin has shown some promise not as a functional currency for any sort of organized society, but in its attempt to displace the hundred dollar bill from its role as what one of William Gibson's characters accurately described as the international currency of bad shit.

This, again, isn't an unfortunate side effect, but a consequence of the design goals of Bitcoin. There's no practical way to avoid things like central bank-set interest rates and taxes, without also avoiding things like anti-money laundering regulations and assassination markets. If you mistrust government regulations out of principle and think them unfixable through democratic processes — that is, if you ignore or reject political technologies developed during the 20th century that have proven quite effective when well implemented — then this might seem to you a reasonable price to pay. For some, this price is actually a bonus.

There's nothing implicit in contemporary technologies that justifies our sometimes staggering difficulties managing common goods like sustainably fertile lands, non-toxic water reservoirs, books written by people long dead, the antibiotic resistance profile of the bacteria whose planet we happen to live in, or, case in point, our financial systems. We just seem to be having doubts as to whether we should, doubts ultimately financed by people well aware that there are a few dozen deca-billion fortunes to be made by shedding the last two or three centuries' worth of political technology development, and adding computationally shiny bits to what we were using back then.

Bitcoin is a fascinating technical achievement mostly developed by smart, enthusiastic people with the best of intentions. They are building ways in which it, and other blockchain technologies like smart contracts, can be used to make our infrastructures more powerful, our societies richer, and our lives safer. That most of the big money investing in the concept is instead attempting to recreate the financial system of late medieval Europe, or to provide a convenient complement to little bags of diamonds, large bags of hundred dollar bills, and bank accounts in professionally absent-minded countries, when they aren't financing new and excitingly unregulated forms of technically-not-employment, is completely unexpected.

The price of the Internet of Things will be a vague dread of a malicious world

Volkswagen didn't make a faulty car: they programmed it to cheat intelligently. The difference isn't semantics, it's game-theoretical (and it borders on applied demonology).

Regulatory practices assume untrustworthy humans living in a reliable universe. People will be tempted to lie if they think the benefits outweigh the risks, but objects won't. Ask a person if they promise to always wear their seat belt, and the answer will be at best suspect. Test the energy efficiency of a lamp, and you'll get an honest response from it. Objects fail, and sometimes behave unpredictably, but they aren't strategic, they don't choose their behavior dynamically in order to fool you. Matter isn't evil.

But that was before. Things now have software in them, and software encodes game-theoretical strategies as well as it encodes any other form of applied mathematics, and the temptation to teach products to lie strategically will be as impossible for companies to resist in the near future as it was for VW, steep as their punishment seems to be. As has always happened (and always will) in the area of financial fraud, they'll just find ways to do it better.
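A caricature of the difference, in a few lines of hypothetical pseudo-firmware (not anyone's actual code, and the "standard cycle" below is a stand-in): the defining feature of a strategic object is that its behavior branches on whether it believes it's being observed.

    # A caricature (not anyone's actual code) of what makes a "strategic"
    # object different from a merely faulty one: its behavior branches on
    # whether it believes it is being tested.

    STANDARD_CYCLE = [0, 15, 32, 50, 35, 0]   # stand-in for a published test cycle

    def looks_like_a_test(speed_profile, steering_angles):
        """Heuristic: dyno tests drive standardized speed cycles with the
        steering wheel essentially untouched."""
        return max(steering_angles) < 1.0 and speed_profile == STANDARD_CYCLE

    def emissions_mode(speed_profile, steering_angles):
        if looks_like_a_test(speed_profile, steering_angles):
            return "full exhaust treatment"   # pass the regulator's test
        return "performance mode"             # pollute, off camera

    print(emissions_mode(STANDARD_CYCLE, [0.2, 0.4, 0.1]))
    print(emissions_mode([0, 40, 90, 120, 80], [5.0, 12.0, 30.0]))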

Environmental regulations are an obvious field for profitable strategic cheating, but there are others. The software driving your car, tv, or bathroom scale might comply with all relevant privacy regulations, and even with their own marketing copy, but it'll only take a silent background software upgrade to turn it into a discreet spy reporting on you via well-hidden channels (and everything will have its software upgraded all the time; that's one of the aspects of the Internet of Things nobody really likes to contemplate, because it'll be a mess). And in a world where every device interacts with and depends on a myriad others, devices from one company might degrade the performance of a competitor's... but, of course, not when regulators are watching.

The intrinsic challenge to our legal framework is that technical standards have to be precisely defined in order to be fair, but this makes them easy to detect and defeat. They assume a mechanical universe, not one in which objects get their software updated with new lies every time regulatory bodies come up with a new test. And even if all software were always available, checking it for unwanted behavior would be unfeasible — more often than not, programs fail because the very organizations that made them haven't, or couldn't, make sure they behaved as intended.

So the fact is that our experience of the world will increasingly come to reflect our experience of our computers and of the internet itself (not surprisingly, as it'll be infused with both). Just as any user feels their computer to be a fairly unpredictable device full of programs they've never installed, doing unknown things to which they've never agreed, to benefit companies they've never heard of, inefficient at best and actively malignant at worst (but how would you know?), cars, street lights, and even buildings will behave in the same vaguely suspicious way. Is your self-driving car deliberately slowing down to give priority to the higher-priced models? Is your green A/C really less efficient with a thermostat from a different company, or is it just not trying as hard? And your tv is supposed to only use its camera to follow your gestural commands, but it's a bit suspicious how it always offers Disney downloads when your children are sitting in front of it.

None of those things are likely to be legal, but they are going to be profitable, and, with objects working actively to hide them from the government, not to mention from you, they'll be hard to catch.

If a few centuries of financial fraud have taught us anything, it's that the wages of (regulatory) sin are huge, and punishment comes late enough that organizations fall into temptation time and again, regardless of the fate of their predecessors, or at least of those who were caught. The environmental and public health cost of VW's fraud is significant, but it's easy to imagine industries and scenarios where it'd be much worse. Perhaps the best we can hope for is that the evasion of regulatory frameworks in the Internet of Things won't have the kind of occasional systemic impact that large-scale financial misconduct has accustomed us to.

We aren't uniquely self-destructive, just inexcusably so

Natural History is an accretion of catastrophic side effects resulting from blind self-interest, each ecosystem an apocalyptic landscape to the previous generations and a paradise to the survivors' thriving and well-adapted descendants. There was no subtle balance when the first photosynthetic organisms filled the atmosphere with the toxic waste of their metabolism. The dance of predator and prey takes its rhythm from the chaotic beat of famine, and its melody from an unreliable climate. Each biological innovation changes the shape of entire ecosystems, giving place to a new fleeting pattern that will only survive until the next one.

We think Nature harmonious and wise because our memories are short and our fearful worship recent. But we are among the first generations of the first species for which famine is no accident, but negligence and crime.

No, our destruction of the ecosystems we were part of when we first learned the tools of fire, farm, and physics is not unique in the history of our planet, it's not a sin uniquely upon us.

It is, however, a blunder, because we know better, and if we have the right to prefer to a silent meadow the thousands fed by the farms replacing it, we have no right to ignore how much water it's safe to draw, how much nitrogen we will have to use and where it'll come from, how to preserve the genes we might need and the disease resistance we already do. We made no promise to our descendants to leave them pandas and tigers, but we will indeed be judged poorly if we leave them a world changed by the unintended and uncorrected side effects of our own activities in ways that will make it harder for them to survive.

We aren't destroying the planet, couldn't destroy the planet (short of, in an ecological sense, sterilizing it with enough nuclear bombs). What we are doing is changing its ecosystems, and in some senses its very geology and chemistry, in ways that make it less habitable for us. Organisms that love heat and carbon in the air, acidic seas and flooded coasts... for them we aren't scourges but benefactors. Biodiversity falls as we change the environment with a speed, in an evolutionary scale, little slower than a volcano's, but the survivors will thrive and then radiate in new astounding forms. We may not.

Let us not, then, think survival a matter of preserving ecosystems, or at least not beyond what an aesthetic or historical sense might drive us to. We have changed the world in ways that make it worse for us, and we continue to do so far beyond the feeble excuses of ignorance. Our long term survival as a civilization, if not as a species, demands that we change the world again, this time in ways that will make it better for us. We don't need biodiversity because we inherited it: we need it because it makes ecosystems more robust, and hence our own societies less fragile. We don't need to both stop and mitigate climate change because there's something sacred about the previous global climate: we need to do it because anything much worse than what we've already signed up for might be too much for our civilization to adapt to, and runaway warming might even be too much for the species itself to survive. We need to understand, manage, and increase sustainable cycles of water, soil, nitrogen, and phosphorus because that's how we feed ourselves. We can survive without India's tigers. But collapse the monsoon or the subcontinent's irrigation infrastructure and at least half a billion people will die.

We wouldn't be the first species killed by our own blind success, nor the first civilization destroyed by a combination of power and ignorance, empty cities the only reminders of better architectural than ecological insight. We know better, and should act in a way befitting what we know. Our problem is no larger than our tools, our reach no further than our grasp.

The only question is how hard we'll make things for ourselves before we start working in earnest to build a better world, one less harsh to our civilization, or at least not untenably more so. The question is how many people will die unnecessarily, and what long-term price we'll pay for our delay.

The Telemarketer Singularity

The future isn't a robot boot stamping on a human face forever. It's a world where everything you see has a little telemarketer inside them, one that knows everything about you and never, ever, stops selling things to you.

In all fairness, this might be a slight oversimplification. Besides telemarketers, objects will also be possessed by shop attendants, customer support representatives, and conmen.

What these much-maligned but ubiquitous occupations (and I'm not talking here about their personal qualities or motivations; by and large, they are among the worst exploited and personally blameless workers in the service economy) have in common is that they operate under strict and explicitly codified guidelines that simulate social interaction in order to optimize a business metric.

When a telemarketer and a prospect are talking, of course, both parties are human. But the prospect is, however unconsciously, guided by a certain set of rules about how conversations develop. For example, if somebody offers you something and you say no, thanks, the expected response is for that party to continue the conversation under the assumption that you don't want it, and perhaps try to change your mind, but not to say ok, I'll add it to your order and we can take it out later. The syntax of each expression is correct, but the grammar of the conversation as a whole is broken, always in ways specifically designed to manipulate the prospect's decision-making process. Every time you have found yourself talking on the phone with a telemarketer, or interacting with a salesperson, far longer than you wanted to, this was because you grew up with certain unconscious rules about the patterns in which conversations can end — and until they make the sale, they will neither initiate nor acknowledge any of them. The power isn't in their sales pitch, but in the way they are taking advantage of your social operating system, and the fact that they are working with a much more flexible one.

Some people, generally described by the not always precise term sociopath, are just naturally able to ignore, simulate, or subvert these underlying social rules. Others, non-sociopathic professional conmen, have trained themselves to be able to do this, to speak and behave in ways that bypass or break our common expectations about what words and actions mean.

And then there are telemarketers, who these days work with statistically optimized scripts that tell them what to say in each possible context during a sales conversation, always tailored according to extensive databases of personal information. They don't need to train themselves beyond being able to convey the right emotional tone with their voices: they are, functionally, the voice interface of a program that encodes the actual sales process, and that, logically, has no need to conform to any societal expectation of human interaction.
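A toy version of such a script, entirely made up, shows the trick: it's a state machine whose transition table simply has no edge for the prospect's conversation-ending moves.

    # A toy, hypothetical sales script as a program: "no, thanks" never
    # leads to goodbye, because no transition to an end state exists.

    SCRIPT = {
        ("pitch", "no_thanks"):              ("objection", "I understand, most of our customers said that at first."),
        ("pitch", "maybe"):                  ("close", "Great, let's lock in today's discount."),
        ("objection", "no_thanks"):          ("scarcity", "This offer expires tonight, so let me just hold one for you."),
        ("scarcity", "no_thanks"):           ("pitch", "Before you go, did I mention the free trial?"),
        # Note what's missing: no transition to "end_call" on a refusal.
    }

    def next_line(state, prospect_move):
        # Any move the script doesn't recognize gets folded back into the pitch.
        return SCRIPT.get((state, prospect_move), ("pitch", "Let me tell you a bit more."))

    state, move = "pitch", "no_thanks"
    for _ in range(4):
        state, line = next_line(state, move)
        print(line)  # the call only ends when the prospect breaks the social rules and hangs up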

It's tempting to call telemarketers and their more modern cousins, the computer-assisted (or rather computer-guided) sales assistants, the first deliberately engineered cybernetic sociopaths, but this would miss the point that what matters, what we are interacting with, isn't a sales person, but the scripts behind them. The person is just the interface, selected and trained to maximize the chances that we will want to follow the conversational patterns that will make us vulnerable to the program behind.

Philosophers have long toyed with a thought experiment called the Chinese Room: there is a person inside a room who doesn't know Mandarin, but has a huge set of instructions that tells her what characters to write in response to any combination of characters, for any sequence of interactions. The person inside doesn't know Mandarin, but anybody outside who does can have an engaging conversation by slipping messages under the door. The philosophical question is: who is the person outside talking with? Does the woman inside the room know Mandarin in some sense? Does the room know?

Telemarketers are Chinese Rooms turned inside-out. The person is outside, and the room is hidden from us, and we aren't interacting socially with either. We only think we do, or rather, we subconsciously act as if we do, and that's what makes cons and sales much more effective than, rationally, they should be.

We rarely interact with salespeople, but we interact with things all the time. Not because we are socially isolated, but because, well, we are surrounded by things. We interact with our cars, our kitchens, our phones, our websites, our bikes, our clothes, our homes, our workplaces, and our cities. Some of them, like Apple's Siri or the Sims, want us to interact with them as if they were people, or at least consider them valid targets of emotional empathy, but what they are is telemarketers. They are designed, and very carefully, to take advantage of our cultural and psychological biases and constraints, whether it's Siri's cheerful personality or a Sim's personal victories and tragedies.

Not every thing offers us the possibility of interacting with it as if it were human, but that doesn't stop them from selling to us. Every day we see the release of more smart objects, whether consumer products or would-be invisible pieces of infrastructure. Connected to each other and to user profiling databases, they see us, know us, and talk to each other and to their creators (and to their creators' "trusted partners," who aren't necessarily anybody you have even heard of) about us.

And then they try to sell us things, because that's how the information economy seems to work in practice.

In some sense, this isn't new. Expensive shoes try to look cool so other people will buy them. Expensive cars are in a partnership with you to make sure everybody knows how awesome they make you look. Restaurants hope that some sweet spot of service, ambiance, food, and prices will make you a regular. They are selling themselves, as well as complementary products and services.

But smart objects are a qualitatively different breed, because, being essentially computers with some other stuff attached to them, their main function might not be the one you bought them for.

Consider an internet-connected scale that not only keeps track of your weight, but also sends you congratulatory messages through a social network when you reach a weight goal. From your point of view, it's just a scale that has acquired a cheerful personality, like a singing piece of furniture in a Disney movie, but from the point of view of the company that built and still controls it, it's both a sensor giving them information about you and a way to tell you things you believe are coming from something – somebody who knows you, in some ways, better than friends and family. Do you believe advertisers won't know whether to sell you diet products or a discount coupon for the bakery around the corner from your office? Or, even more powerfully, that your scale won't tell you You have earned yourself a nice piece of chocolate cake ;) if the bakery chain is the one that purchased that particular "pageview"?

Let's go to the core of advertising: feelings. Much of the Internet is paid for by advertisers' belief that knowing your internet behavior will tell them how you're feeling and what you're interested in, which will make it easier to sell things to you. Yet browsing is only one of the things we do that computers know about in intimate detail. Consider the increasing number of internet-connected objects in your home that are listening to you. Your phone is listening for your orders, but that doesn't mean that's all it's listening for. The same goes for your computer, your smart TV (some of which are actually looking at you as well), even some children's dolls. As the Internet of Things grows well beyond the number of screens we can deal with, or apps we are willing to use to control them, voice will become the user interface of choice, just as smartphones overtook desktop computers. That will mean that possibly dozens of objects, belonging to a handful of companies, will be listening to you and selling that information to whatever company pays enough to become a "trusted partner." (Yes, this is and will remain legal. First, because we either don't read EULAs or do and try not to think about them. And second, because there's no intelligence agency on the planet that won't lobby to keep it legal.)

Maybe they won't report everything you say verbatim (that will depend on how much external scrutiny the industry faces), but your mood (did you yell at your car today, or sing aloud as you drove?), your movements, the time of day you wake up, which days you cook and which days you order takeout? Everybody trying to sell things to you will know all of this, and more.

That will be just an extension of the steady erosion of our privacy, and even of our expectation of it. More delicate will be the way in which our objects will actively collaborate in this sales process. Your fridge's recommendations when you run out of something might be oddly restricted to a certain brand, and, if you never respond to them, shift to the next advertiser with the best offer — that is, the most profitable for whoever is running the fridge's true program, back in some data center somewhere. Your watch might choose to delay low-priority notifications while you're watching a commercial from a business partner or, more interestingly, choose to interrupt you every time there's a competitor's commercial. Your kitchen will tell you that it needs some preventive maintenance, but there's a discount on Chinese takeout if you press that button or just say "Sure, Kitchen Kate." If you put it on shuffle, your cloud-based music service will tailor its only seemingly random selection based on where you are and what the customer-tracking companies tell it you're doing. No sad music when you're at the shopping mall or buying something online! (Unless they have detected that you're considering buying something out of nostalgia or fear.) There's already a sophisticated industry dedicated to optimizing the layout, sonic background, and even smells of shopping malls to maximize sales, much in the same way that casinos are thoroughly designed to get you in and keep you inside. Doing this through the music you're listening to is just a personalized extension of these techniques, an edge that every advertiser is always looking for.

If, in defense of old-school human interaction, you go into a store to talk with an actual human being instead of using an online shop, a computer will be telling each salesperson, through a very discreet earbud, how you're feeling today and how to treat you so you'll feel you want to buy whatever they are selling, the functional equivalent of almost telepathic cold-reading skills (except that it won't be so cold; the salesperson doesn't know you, but the sales program... the sales program knows you, in many ways, better than you know yourself). In a rush? The sales program will direct the salesperson to be quick and efficient. Had a lousy day? Warmth and sympathy. Or rather simulations thereof; you're being sold to by a sales program, after all, or an Internet of Sales Programs, all operating through salespeople, the stuff in your home and pockets, and pretty much everything in the world with an internet connection, which will be almost everything you see and most of what you don't.

Those methods work, and have probably worked since before recorded history, and knowing about them doesn't make them any less effective. They might not make you spend more in aggregate; generally speaking, advertising just shifts around how much you spend on different things. From the point of view of companies, it'll just be the next stage in the arms race for ever more integrated and multi-layered sensor and actuator networks, the same kind of precisely targeted network-of-networks military planners dream of.

For us as consumers, it might mean a world that'll feel more interested in you, with unseen patterns of knowledge and behavior swirling around you, trying to entice or disturb or scare or seduce you, and you specifically, into buying or doing something. It will be a somewhat enchanted world, for better and for worse.

The Balkanization of Things

The smarter your stuff, the less you legally own it. And it won't be long before, besides resisting you, things begin to quietly resist each other.

Objects with computers in them (phones, cars, TVs, thermostats, scales, ovens, etc.) are mainly software programs with some sensors, lights, and motors attached to them. The hardware limits what they can possibly do — you can't go against physics — but the software defines what they will do: they won't go against their business model.

In practice this means that you can't (legally) install a new operating system on your phone, upgrade your TV with, say, a better interface, or replace the notoriously dangerous and very buggy embedded control software in your Toyota. You can use them in ways that align with their business models, but you have to literally become a criminal to use them in any other way, even if what you want to do with them is otherwise legal.

Bear with me for a quick historical digression: the way the web was designed to work (back in the prehistoric days before everything was worth billions of dollars), you would be able to build a page using individual resources from all over the world, and offer the person reading it ways to access other resources in the form of a dynamic, user-configurable, infinite book, a hypertext that mostly survives only as the ht in http://.

What we ended up with was, of course, a forest of isolated "sites" that jealously guard their "intellectual property" from each other, using the brilliant set of protocols that was meant to give us an infinite book merely as a way for their own pages to talk with their own servers, their user trackers, and so on, and woe to anybody who tries to "hack" a site to use it in some other way (at least without a license fee and severe restrictions on what they can do). What we have is still much, much better than what we had (and if Facebook has its way and everything becomes a Facebook post or a Facebook app, we'll miss the glorious creativity of 2015), but what we could have had still haunts technology so deeply that it's constantly trying to resurface on top of the semi-broken Internet we did build.

Or maybe there was never a chance once people realized there was a lot of money to be made with these homogeneous, branded, restricted "websites." Now processors with full network stacks are cheap enough to be put in pretty much everything (including other computers — computers have inside them, funnily enough, entirely different smaller computers that monitor and report on them). So everybody in the technology business is imagining a replay of the internet's story, only at a much larger scale. Sure, we could put together a set of protocols so that every object in a city can, with the proper authorizations, talk with every other object regardless of who made it. And, sure, we could make it possible for people to modify their software to figure out better ways of doing things with the things they bought, things that make sense to them, without attached license fees or advertisements. Money would still be made, and people would have a chance to customize, explore, and fix design errors.

But you know how the industry could make more money, have people pay for any new feature they want, and keep design errors as deniable and liability-free as possible? Why, it's simple: these cars talk with these health sensors only, and these fridges only with these e-commerce sites, and you can't prevent your shoes from selling your activity habits to insurers and advertisers, because that'd be illegal hacking. (That the NSA and the Chinese get to talk with everything is a given.)

The possibilities for "synergy" are huge, and, because we are building legal systems that make reprogramming your own computers a crime, very monetizable. Logically, then, they will be monetized.

It (probably) won't be any sort of resistentialist apocalypse. Things will mostly be better than before the Internet of Things, although you'll have to check that your shoes are compatible with your watch, remember to move everything with a microphone or a camera out of the bedroom whenever you have sex, even if it seems turned off (probably something you should already be doing), and there will be some fun headlines when a hacker from (insert your favorite rogue country, illegal group, or technologically oriented college here) decides technology has finally caught up with Ghost in the Shell in terms of security boondoggles, breaks into Toyota's network, and stalls a hundred thousand cars in Manhattan during rush hour.

It'll be (mostly) very convenient, increasingly integrated into a few competing company-owned "ecosystems" (do you want to have a password for each appliance in your kitchen?), indubitably profitable (not just the advertising possibilities of knowing when and how you woke up; logistics and product design companies alone will pay through the nose for the information), and yet another huge lost opportunity.

In any case, I'm completely sure we'll do better when we develop general purpose commercial brain-computer interfaces.

Yesterday was a good day for crime

Yesterday, a US judge helped the FBI strike a big blow in favor of the next generation of sophisticated criminal organizations by sentencing Silk Road operator Ross Ulbricht (aka Dread Pirate Roberts) to life without parole. The feedback they gave the criminal world was as precise and useful as any high-priced consultant's could ever be: until the attention-seeking, increasingly unstable human operator messed up, the system worked very well. The next iteration is obvious: highly distributed markets with little or no human involvement. And law enforcement is woefully, structurally, abysmally unprepared to deal with this.

To be fair, they are already not dealing well with the existing criminal landscape. It was easier during the last century, when large, hierarchical cartels led by flamboyant psychopaths provided media-friendly targets vulnerable to the kind of military hardware and strategies favored by DEA doctrine. The big cartels were wiped out, of course, but this only led to a more decentralized and flexible industry, one that has proven so effective at providing the US and Western Europe with, e.g., cocaine in a stable and scalable way that demand is thoroughly fulfilled and it has had to seek new products and markets to grow its business. There's no War on Drugs to be won, because they aren't facing an army, but an industry fulfilling a ridiculously profitable demand.

(The same, by the way, has happened during the most recent phase of the War on Terror: statistical analysis has shown that violence grows after terrorist leaders are killed, as they are the only actors in their organizations with a vested interest in a tactically controlled level of violence.)

In terms of actual crime reduction, putting down the Silk Road was as useless a gesture as closing down a torrent site, and for the same reason. Just as the characteristics of the internet that make it so valuable also make P2P file sharing unavoidable, the financial, logistical, and informational infrastructures that make the global economy possible also make decentralized drug trafficking unavoidable.

In any case, what's coming is much, much worse than what's already happening. Because, and here's when things get really interesting, the same technological and organizational trends that are giving an edge to the most advanced and effective corporations are also almost tailor-made to give drug trafficking networks an advantage over law enforcement (this is neither coincidence nor malevolence; the difference between Amazon's core competency and a wholesale drug operator's is regulatory, not technical).

To begin with, blockchains are shared, cryptographically robust, globally verifiable ledgers that record commitments between anonymous entities. That, right there, solves all sorts of coordination issues for criminal networks, just as it does for licit business and social ones.
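To make "a ledger that records commitments" concrete, here is a deliberately minimal sketch in Python. It is a hash-linked log, not an actual blockchain: no network, no consensus, no mining, and the party names are placeholder strings; the only property it demonstrates is that once others hold a copy, past entries can't be quietly rewritten without the change being detectable.

    import hashlib, json, time

    def block_hash(block: dict) -> str:
        # Deterministic hash of a block's contents.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    class Ledger:
        """A toy hash-linked log of commitments between pseudonymous parties."""
        def __init__(self):
            self.chain = [{"index": 0, "prev": "0" * 64, "data": "genesis", "ts": 0}]

        def commit(self, sender: str, receiver: str, terms: str) -> dict:
            block = {"index": len(self.chain),
                     "prev": block_hash(self.chain[-1]),   # link to the previous entry
                     "data": {"from": sender, "to": receiver, "terms": terms},
                     "ts": time.time()}
            self.chain.append(block)
            return block

        def verify(self) -> bool:
            # Anyone holding a copy can check that no past entry was altered.
            return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                       for i in range(1, len(self.chain)))

    ledger = Ledger()
    ledger.commit("key:3f9a (pseudonym)", "key:b77c (pseudonym)",
                  "1 unit, payment released on delivery")
    print(ledger.verify())   # True; altering any past entry makes this False

Reputation, escrow, and arbitration get layered on top of records like these, and they work the same whether the parties are licit or not.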

Driverless cars and cheap, plentiful drones, by making all sorts of personal logistics efficient and programmable, will revolutionize the "last mile" of drug dealing along with Amazon deliveries. Like couriers, drones can be intercepted. Unlike couriers, there's no risk to the sender when this happens. And upstream risk is the main driver of prices in the drug industry, particularly at the highest levels, where the product itself is ridiculously cheap. It's hard to imagine a better way to ship drugs than driverless cars and trucks.

But the real kicker will be a combination of a technology that already exists, very large-scale botnets composed of thousands or hundreds of thousands of hijacked computers running autonomous code provided by central controllers, and a technology that is close to being developed, reliable autonomous organizations based on blockchain technologies, the e-commerce equivalent of driverless cars. Put together, they will make it possible for a drug user with a verifiable track record to buy from a seller with an equally verifiable reputation through a website that will exist on somebody's home machine only until the transaction is finished, and to receive the product via an automated vehicle that looks exactly the same as thousands of others (if not a remotely hacked one), which will forget the point of origin of the product as soon as it has left it, and forget the address of the buyer as soon as it has delivered its cargo.

Of course, this is just a version of the same technologies that will make Amazon and its competitors win over the few remaining legacy shops: cheap scalable computing power, reliable online transactions, computer-driven logistical chains, and efficient last-mile delivery. The main difference: drug networks will be the only organizations where data science will be applied to scale and improve the process of forgetting data instead of recording it (an almost Borgesian inversion not without its own poetry). Lacking any key fixed assets, material, financial, or human, they'll be completely unassailable by any law enforcement organization still focused on finding and shutting down the biggest "crime bosses."

That's ineffective today and will be absurd tomorrow, which highlights one of the main political issues of the early 21st century. Gun advocates in the US often note that "if guns are outlawed, only the outlaws will have guns," but the important issue in politics-as-power, as opposed to politics-as-cultural-signalling, isn't guns (or at least not the kind of guns somebody without a friend in the Pentagon can buy): if the middle class and civil society don't learn to leverage advanced autonomous distributed logistical networks, only the rich and the criminals will leverage advanced autonomous distributed logistical networks. And if you think things are going badly now...

The post-Westphalian Hooligan

Last Thursday's unprecedented incidents at one of the world's most famous soccer matches illustrate the dark side of the post- (and pre-) Westphalian world.

The events are well known, and were recorded and broadcast in real time by dozens of cameras: one or more fans of Boca Juniors managed to open a small hole in the protective plastic tunnel through which River Plate players were leaving the field at the end of the first half, and attacked some of them with, it's believed, both a flare and a chemical similar to mustard gas, causing vision problems and first-degree burns to some of the players.

After this, it took more than an hour for match authorities to decide to suspend the game, and more than another hour for the players to leave the field, as police feared they might be injured by the roughly two hundred fans chanting and throwing projectiles from the same area of the stands from which the River Plate players had been attacked. And let's not forget the now-mandatory illegal drone, flown over the field by a fan in the stands.

The empirical diagnosis of this is unequivocal: the Argentine state, as defined and delimited by its monopoly of force in its territory, has retreated from soccer stadiums. The police force present in the stadium — ten times as numerous as the remaining fans — could neither prevent, stop, nor punish their violence, nor even force them to leave the stadium. What other proof can be required of a de facto independent territory? This isn't, as club and security officials put it, the work of a maladjusted few, or even an irrational act. It's the oldest and most effective form of political statement: Here and now, I have the monopoly of force. Here and now, this is mine.

What decision-makers get in exchange for this territorial grant, and what other similar exchanges are taking place, are local details for a separate analysis. This is the darkest and oldest part of the characteristic post-Westphalian development of states relinquishing sovereignty over parts of their territory and functions in exchange for certain services, in partial reversal to older patterns of government. The grant might be to bands of hooligans, special economic zones, prison gangs, or local or foreign militaries. The mechanics and results are the same, even in nominally fully functional states, and there is no reason to expect them to be universally positive or free of violence. When or where has it been otherwise in world history?

This isn't a phenomenon exclusive to the third world, or to ostensibly failed states, particularly in its non-geographical manifestations: many first world countries have effectively lost control of their security forces, and, taxing authority being the other defining characteristic of the Westphalian state, they have also relinquished sovereignty over their biggest companies, which are de facto exempt from taxation.

This is what the weakening of the nation-state looks like: not a dozen new Athens or Florences, but weakened tax bases and fractal gang wars over surrendered state territories and functions, streamed live.

The most important political document of the century is a computer simulation summary

To hell with black swans and military strategy. Our direst problems aren't caused by the unpredictable interplay of chaotic elements, nor by the evil plans of people who wish us ill. Global warming, worldwide soil loss, recurrent financial crises, and global health risks aren't strings of bad luck or the result of terrorist attacks; they are the depressingly persistent outcomes of systems in which each actor's best choice adds up to a global mess.

It's well known to economists as the tragedy of the commons: the marginal damage to you of pumping another million tons of greenhouse gases into the atmosphere is minimal compared with the economic advantages of all that energy, so everybody does it, so enough greenhouse gases get pumped that it's well on its way to becoming a problem for everybody, yet nobody stops, or even slows down significantly, because that would do very little on its own and be very hurtful to whoever does it. So there are treaties and conferences and increased fuel efficiency standards, just enough to be politically advantageous, but nowhere near enough to make a dent in the problem. In fact, we have invested much more in making oil cheaper than in limiting its use, which gives you a more accurate picture of where things are going.
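If you want that incentive structure in its most skeletal form, here is a toy model in Python; every number is invented, and only the shape of the payoffs matters:

    # Toy commons: each actor chooses to emit or abstain. Emitting yields a private
    # benefit, but its damage is shared equally by everyone. All numbers are made up.
    N = 100          # number of actors (countries, firms)
    BENEFIT = 1.0    # private gain from emitting one unit
    DAMAGE = 3.0     # total damage caused by that unit, spread over all N actors

    def payoff(i_emit: bool, others_emitting: int) -> float:
        """Payoff to one actor, given its own choice and how many others emit."""
        total = others_emitting + (1 if i_emit else 0)
        return (BENEFIT if i_emit else 0.0) - DAMAGE * total / N

    for others in (0, 50, 99):
        gain = payoff(True, others) - payoff(False, others)
        print(f"with {others} others emitting, emitting still pays {gain:+.2f}")

    print("everybody emits:    each gets", payoff(True, N - 1))
    print("everybody abstains: each gets", payoff(False, 0))

Emitting pays the same +0.97 no matter what anybody else does, so everybody emits, and everybody ends up at -2 instead of 0. Swap the invented numbers for gigatons and GDP points and you get the treaties-and-conferences equilibrium just described.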

Here is that picture, from the IPCC:

[Figure: IPCC projected global warming under different emissions scenarios (figure SPM-5)]

A first observation: Note that the A2 model, the one in which average temperatures rise by more than 3°, was the "things go more or less as usual" model, not the "things go radically wrong" model... and it was not the "unconventional sources make oil dirt cheap" scenario. At this point, it might as well be the "wildly optimistic" scenario.

A second observation: Just to be clear, because worldwide averages can be tricky: 3° doesn't translate to "slightly hotter summers"; it translates to "technically, we are not sure we'll be able to feed China, India, and so on." Something closer to 6°, which is beginning to look more likely as we keep doing the things we do, translates to "we sure will miss the old days when we had humans living near the tropics".

And a third observation: All of these reports usually end at the year 2100, even though people being born now are likely to be alive then (unless they live in a coastal city at a low latitude, that is), not to mention the grandchildren of today's young parents. This isn't because it becomes impossible to predict what will happen afterwards — the uncertainty ranges grow, of course, but this is still thermodynamics, not chaos theory, and the overall trend certainly doesn't become mysterious. It's simply that, as the Greeks noted, there's a fear that drives movement and there's a fear that paralyzes, and any reasonable scenario for 2100 is more likely to belong to the second kind.

But let's take a step back and notice the way this graph, which summarizes multiple computer simulations driven by painstaking research and data gathering, maps our options and outcomes in a way that no political discourse can hope to match. To compare it with religious texts would be wrong in every epistemological sense, but it might be appropriate in every political one. When "climate skeptics" doubt, they doubt this graph, and when ecologists worry, they worry about this graph. Neither the worry nor the skepticism is doing much to change the outcomes, but at least the discussion is centered not on an individual, a piece of land, or a metaphysical principle, but on the space of trajectories of a dynamical system of which we are one part.

It's not that graphs or computer simulations are more convincing than political slogans; it's that we have reached a level of technological development and sheer ecological footprint at which our own actions and goals (the realm of politics) have escaped the descriptive possibilities of pure narrative, and we are thus forced to recruit computer simulations to grapple, conceptually if nothing else, with those actions and their outcomes.

It's not clear that we will find our way to a future that avoids catastrophe and horror. There are possible ways, of course — moving completely away from fossil fuels, geoengineering, ubiquitous water and soil management and recovery programs, and so on. It's all technically possible, given huge investments, a global sense of urgency, and a ruthless focus on preserving and making more resilient the most necessary ecological services. That we're seeing nothing of the kind, but instead a worsening of already bad tendencies, is due to, yes, thermodynamics and game theory.

It's a time-honored principle of rhetoric to end a statement in the strongest, most emotionally potent and conceptually comprehensive way possible. So here it is:

[Figure: IPCC projected global warming under different emissions scenarios (figure SPM-5), repeated]

The changing clusters of terrorism

I've been looking at the data set from the Global Terrorism Database, an impressively detailed register of terrorism events worldwide since 1970. Before delving into the finer-grained data, the first questions I wanted to ask, for my own edification, were:

  • Is the frequency of terrorism events in different countries correlated?
  • If so, does this correlation change over time?

What I did was summarize event counts by country and month, segment the data set by decade, and build correlation clusters for the countries with the most events in each decade, based on co-occurring event counts.
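For the record, here's a minimal sketch of that procedure in Python (pandas plus scipy). The file name is a placeholder for whatever GTD export you downloaded, the column names (iyear, imonth, country_txt) are the ones in the public CSV as far as I recall, and the specific knobs (top 15 countries per decade, average-linkage clustering over 1 - r as the distance) are choices made here for illustration, not necessarily the ones behind the graphs below:

    import numpy as np
    import pandas as pd
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # File name is a placeholder; columns are those of the public GTD export.
    gtd = pd.read_csv("globalterrorismdb.csv", encoding="latin-1",
                      usecols=["iyear", "imonth", "country_txt"])

    def decade_clusters(df, decade_start, top_n=15, n_clusters=4):
        d = df[(df.iyear >= decade_start) & (df.iyear < decade_start + 10)]
        d = d[d.imonth > 0]                                 # GTD marks unknown months as 0
        month = (d.iyear * 12 + d.imonth).rename("month")   # one bucket per calendar month
        counts = (d.groupby([d.country_txt, month]).size()  # events per country per month
                    .unstack(fill_value=0))
        top = counts.sum(axis=1).nlargest(top_n).index      # countries with the most events
        corr = counts.loc[top].T.corr().fillna(0)           # country-by-country correlation
        dist = 1 - corr.values                              # turn correlation into a distance
        np.fill_diagonal(dist, 0)
        z = linkage(squareform(dist, checks=False), method="average")
        labels = fcluster(z, t=n_clusters, criterion="maxclust")
        return pd.Series(labels, index=corr.index).sort_values()

    for start in (1970, 1980, 1990, 2000, 2010):
        print(start, decade_clusters(gtd, start), sep="\n")

With series this short and sparse, the resulting clusters are quite sensitive to those knobs, which is part of the caveat emptor below.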

The '70s look more or less how you'd expect them to:

[Figure: country correlation clusters, 1970s]

The correlation between El Salvador and Guatemala, starting to pick up in the 1980's, is both expected and clear in the data. Colombia and Sri Lanka's correlation is probably acausal, although you could argue for some structural similarities in both conflicts:

[Figure: country correlation clusters, 1980s]

I don't understand the 1990's, I confess (on the other hand, I didn't understand them as they happened, either):

[Figure: country correlation clusters, 1990s]

The 2000's make more sense (loosely speaking): Afghanistan and Iraq are close, and so are India and Pakistan.

[Figure: country correlation clusters, 2000s]

Finally, the 2010's are still ongoing, but the pattern in this graph could be used to organize the international terrorism-related section of a news site:

[Figure: country correlation clusters, 2010s]

I find it most interesting how the India-Pakistan link of the 2000's has shifted to a Pakistan-Afghanistan-Iraq one. Needless to say, caveat emptor: shallow correlations between small groups of short time series are only one step above throwing bones on the ground and reading the resulting patterns, in terms of analytic reliability and power.

That said, it's possible in principle to use a more detailed data set (ideally, one including more than visible, successful events) to understand and talk about international relationships of this kind. In fact, there's quite sophisticated modeling work being done in this area, both academically and in less open venues. It's a fascinating field, and even if it might not lead to less violence in any direct way, anything that enhances our understanding of, and our public discourse about, these matters is a good thing.