All posts by Marcelo

When devops involves monitoring for excess suicides

There is strong observational evidence of prolonged social network usage being correlated with depression and suicide — enough for companies like Facebook to deploy tools to attempt to predict and preempt possible cases of self-harm. But taken in isolation, these measures are akin to soda companies sponsoring bicycle races. For social networks, massive online games, and other business models predicated on algorithmic engagement maximization, the things that make them potentially dangerous to psychological health — the fostering and maintenance of compulsive behaviors, the systemic exposure to material engineered to be emotionally upsetting — are the very things that make them work as businesses.

Developers, and particularly those involved in advertising algorithms, content engineering, game design, and the like, have in this a role ethically similar to that of, say, scientists designing new chemical sweeteners for a food company. It's not enough for a new compound to have an addictive taste and be cheap to produce — it has to be safe, and it's part of the scientist's and the company's responsibility to make sure it is. If algorithms can affect human behavior — and we know they do — and if they can do so in deleterious ways — and we also know this to be true — then developers have a responsibility to account for this possibility not just as a theoretical concern, but as a metric to monitor as closely as possible.

Software development and monitoring practices are the sharp end of corporate values for technology companies. You can tell what a company really values by noting what will force an automated rollback of new code. For many companies this is some version of "blowing up," for others it's a performance regression, and for the most sophisticated, a negative change in a business metric. But any new deployment of, e.g., Facebook's feed algorithms or content filtering tools has the potential of causing a huge amount of psychological and political distress, or worse. So their deployment tools have to automatically monitor and react to not just the impact of new code on metrics like resource usage, user interface latencies, or revenue per impression, but also the psychological well-being of those users exposed to the newest version of the code.
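As a rough illustration (not a description of any real pipeline, with metric names, thresholds, and numbers invented for the example), a deployment guardrail that treats a well-being metric exactly like latency or revenue might look something like this:

```python
# A minimal sketch of a guardrail check, assuming hypothetical metric names and
# tolerances; a real system would derive these from properly designed experiments.
GUARDRAILS = {
    # metric name: (direction, maximum tolerated relative regression)
    "p99_latency_ms":         ("lower_is_better", 0.05),
    "revenue_per_impression": ("higher_is_better", 0.02),
    "user_wellbeing_index":   ("higher_is_better", 0.01),  # a first-class metric
}

def should_roll_back(control: dict, treatment: dict) -> bool:
    """Return True if any guardrail metric regressed past its tolerance."""
    for metric, (direction, tolerance) in GUARDRAILS.items():
        change = treatment[metric] / control[metric] - 1.0
        regression = change if direction == "lower_is_better" else -change
        if regression > tolerance:
            return True
    return False

# Example: a release that improves latency and revenue but hurts well-being
# still gets rolled back.
control   = {"p99_latency_ms": 120, "revenue_per_impression": 0.010, "user_wellbeing_index": 0.70}
candidate = {"p99_latency_ms": 110, "revenue_per_impression": 0.011, "user_wellbeing_index": 0.66}
assert should_roll_back(control, candidate)
```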

I don't know whether companies like Facebook treat those metrics as first-order data input to software engineering decisions; perhaps they do, or are beginning to. The ethical argument for doing so is quite clear, and, if nothing else, it should be a natural first step in any goodwill PR campaign.

Short story: Nanobots and the Teenage Brain

It took a while to diagnose Charlie's problems; what thirteen-year-old boy isn't moody? But once his parents suspected there was something else going on inside his head, doctors injected a swarm of machines so small they were practically very large drugs, and the machines showed them that, to Charlie's annoyance, his parents had been right.

Brains are like ecosystems, Charlie's doctor explained to him and his parents. Every part of Charlie's brain works, but the way they synchronize and work together isn't the way we would prefer. The system is in a balance of sorts; it's just that it's a balance that results in things like mood swings and insomnia.

The doctor hadn't mentioned the nightmares, but Charlie suspected he knew how bad they were, even if he hadn't told him, or anybody else, how much the night scared him. Probably the machines inside his head had told the doctor the truth. Charlie didn't have insomnia, he just tried not to sleep.

What do we do then? had asked Charlie's mother. Give him medication? His uncle used to take antidepressants.

The doctor had nodded. That's what we would have tried a few years ago, but it takes quite a bit of trial and error, and even once you find something that works, there are usually side effects. Almost always the side effects are minor compared with the original symptoms, and you can tweak the dosage and sometimes eventually cease the medication, but today we have better tools. We already have nanobots lodged in key areas of Charlie's brain. We are using them to diagnose him, but they can be partially rebuilt to integrate themselves with his brain functions.

Charlie's father, who had seen a lot of horror movies as a teenager, frowned. You mean use a computer to control his emotions?

The doctor smiled. Oh, no. It's like adding a carefully chosen new species to an ecosystem. It will interact with the rest of his brain, send a signal there, dampen a neurotransmitter here, and Charlie's brain will adapt slightly to it while the machines adapt very strongly to him. The end result will be a healthier and more resilient brain, but not a different one, and certainly not one under anybody's control but his own. We are only beginning to try it in humans, but we can monitor it very closely and stop if anything looks wrong, so in a sense it's safer than the usual medication.

Using machines they could instantly switch off sounded like a safer option than trying medications until they found something that worked, so they modified the nanobots in Charlie's brain to make them able to talk to it as well as listen.

The brain talked and listened to itself, and now itself included both the machines and the software controlling them from a small chip in Charlie's skull. The chip learned from Charlie's brain, Charlie's brain learned from the chip, and eventually they were just Charlie.

The mood swings and the nightmares went away. The chip didn't change Charlie, and nobody hacked it. This isn't that kind of story.

* * *

There's a thirteen-year-old child waking up from a nightmare, crying. But this is three years later, and she's called Grace.

* * *

Charlie's parents wanted to refuse. Would've, certainly, even to the doctor who had healed Charlie. But he had shown them videos of Grace, and although everybody agreed that it had been a low trick, it was enough to make the parents agree to leave the choice to Charlie.

She has the same sort of device you have, the doctor told Charlie. The device works well; yours too, by the way, you know I'll get an alert if anything goes wrong. But the device needs to learn from the brain how to help it, and for some reason it's not able to learn from Grace's. We think her condition is somewhat different from yours, like the same riddle in a different accent, and the device isn't picking it up.

So what do you want me to do? asked Charlie. He wasn't a bad guy, but he didn't want to go to a hospital again, ever.

The doctor told him his plan. It was much worse than Charlie had feared. Maybe that's why Charlie said he would do it, the way sixteen-year-olds say 'yes' to whatever really scares them.

* * *

Grace and Charlie didn't lie on parallel operating tables, thick cables connecting their skulls. They sat in comfy chairs next to each other while both sets of parents watched. The doctor was telling them again how they had temporarily reprogrammed the chips in their skulls so Charlie's chip would control Grace's nanobots and the other way around, but that was mostly to fill the silence while he monitored everything.

Not that the parents paid much attention anyway. Charlie's were too worried about something going wrong, and Grace's were crying softly.

For the first time in a long while Grace had fallen asleep smiling.

* * *

It took seven sessions for Charlie to train Grace's device. By the end they were close strangers, people with nothing in common except a very important thing much too big to base a friendship on. But she was thirteen, so she had given him a nickname anyway.

Why does she call you that? asked the doctor after the last session, at a time when he and Charlie were briefly alone.

You know, said Charlie, rolling his eyes, like the guy from the movies. The one who can read minds.

The doctor, who had liked the character about two franchise reboots before, smiled. Well, your brain can do something nobody else's can, and you helped her, so she's not entirely wrong about that.

By the way Charlie looked at him while pretending to find him ridiculous, the doctor knew that he would agree to help if he ever asked again.

* * *

He did ask, four times. It turned out Charlie's success had been less likely than they had thought, and his brain's talent to train the device a rare one. Charlie was always enthusiastic to help, and Charlie's parents eventually made peace with it, not without fear, but also not without pride.

The doctor finally stopped asking for his help once the company designed new machines that could learn from any brain; they had figured out how to do that by watching Charlie help others, and in that sense he would always be helping. Charlie had shrugged when told, relieved but hating himself a bit for it.

He kept in touch with the doctor. They never mentioned the returning nightmares. Charlie had known his own well enough to understand they weren't his to begin with; his brain had learned them from the other kids' devices, who had learned them from their brains.

They talked about everything else, mostly about the people helped by the software they had built based on Charlie's brain, pretending it was a coincidence that the doctor always called the morning after a bad nightmare. He was still monitoring Charlie's device, after all.

Charlie hates the nightmares, and feels bad about never telling his parents about them. But if he had told them they wouldn't have let him help. Keeping secrets had been a necessary part of being a superhero, and if he woke up in a cold sweat more often than not... Most retired heroes had scars, and he had earned his helping others.

And he's no longer afraid of the night.

Short Story: The Voice of Things

She had liked the illustrated book so much that she told you right away she had prayed to get it for Christmas, alone in her bedroom where nobody but God could hear. You didn't mention that her teddy bear had probably heard her, and that the toy company had then sold the information to an advertiser who had offered you the book at an extraordinary discount. If she was happy, that was what mattered.

You never realized the bear sometimes talked back, not until the scandal made the news. It turned out it always could; it had just waited until its sensors told it kid and toy were alone. The license that came with the bear's software made this "user bonding" legal; the company went bankrupt anyway.

But nothing's ever forgotten if there's money in remembering, and sometimes you're almost sure things talk to your daughter not with their standard voices, but with one she remembers and trusts.

So you talked to her about cookies and the cloud, at least what you understand of it. She nodded along to your explanation, unsure, asking nothing. Afterwards, you wondered what things would tell her when she asked them.

Electoral hegemony and statistical outliers

There are elections that are won comfortably, others that are won by a landslide... and then there's Santiago del Estero.

On Friday I had the luck of taking part in the Datatón Electoral organized by Antonio Milanese, analyzing data sets from past elections together with other data analysts, political scientists, etc. The analysis I tried didn't support my hypothesis (that's the risk of working with data...), but it led to an interesting observation.

Despite the apparent electoral polarization in Argentina, even polling station by polling station the results tend to be relatively close. For example, in the 2017 elections for national deputies, in only 47% of the polling stations did the winning option at that station get more than half of the votes:

The asymmetry of this distribution makes sense (it's hard to be the winning option at a station with less than 40% of the votes), but even so, the number of stations where the winning option got a very high percentage is itself very high: in 1% of the stations the winner got more than 83% of the votes, something that a superficial statistical analysis would say should almost never happen. This is a statistical "anomaly" that reflects a fairly common social pattern. People who vote at the same station tend to be more socially and politically homogeneous than people who vote at different stations, and it's natural for there to be more politically homogeneous stations than you'd expect if people and stations were assigned at random.

On the other hand, if we look at where those unexpectedly homogeneous stations are, something emerges that has less to do with abstract sociology. Of the 1,004 stations where the winner got more than 83% of the votes...

  • 46 are in the City of Buenos Aires
  • 74 are in the Province of Buenos Aires
  • 87 are in Formosa
  • 607 are in Santiago del Estero

At the national level, about one in a hundred stations was a landslide; in Santiago del Estero, more than one in three. The chart below ranks the ten provinces with the highest percentage of landslide stations (as you can imagine, the giant bar on the left is Santiago del Estero):
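For what it's worth, the computation behind these numbers fits in a few lines. The sketch below assumes a hypothetical CSV with one row per station and party, and columns named provincia, mesa_id, partido, and votos; the real dataset's layout differs, but the logic is the same:

```python
import pandas as pd

# Hypothetical file and column names for the 2017 deputies results.
df = pd.read_csv("diputados_2017.csv")  # columns: provincia, mesa_id, partido, votos

# Share of votes obtained by the winning option at each polling station.
totals = df.groupby("mesa_id")["votos"].sum()
winners = df.groupby("mesa_id")["votos"].max()
winner_share = winners / totals

print("stations where the winner got > 50%:", f"{(winner_share > 0.5).mean():.0%}")
threshold = winner_share.quantile(0.99)
print("99th percentile of the winner's share:", f"{threshold:.0%}")

# Landslide stations (winner above that threshold), counted by province.
province = df.groupby("mesa_id")["provincia"].first()
landslides = winner_share[winner_share > threshold]
print(landslides.groupby(province).size().sort_values(ascending=False).head(10))
```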

This isn't a surprising observation given the political reality of Santiago del Estero or Formosa, but it shows how some local social and political patterns are visible even in the most superficial quantitative analysis.

Short story: Soul in the Loop

Every shower she takes makes you more certain she will have killed herself before her daughter's tenth birthday, and these days she's taking one every time she logs off. You aren't allowed to tell her, but the NDA you made her sign has so many post-employment clauses she wouldn't be likely to find a job elsewhere anyway.

A daughter, two parents with Alzheimer's, and the obsolete skill set of a radiologist and former e-sports semi-pro: she's as good a match for this job as any human could be, and only humans are allowed to do it. That's the point.

The politics of deploying killer robots require humans watching what they do, to prevent them from doing the a posteriori unacceptable, but the business side of the equation — and somewhere in the company's software stack there's a piece of mathematics modeling just that — compels the humans to rarely, if ever, stop the robots from taking the shot. Armies don't pay for robots that don't shoot. The Oxford Protocol supervisors are there to ensure they could, theoretically, be stopped, and to suffer the legal consequences if it becomes convenient for somebody to.

So she logs in ten hours a day to watch the deaths of people she could have saved — people who might or might not be innocent, people whose names she'll never know — at the cost of risking homelessness for herself, her daughter, and two helpless people who once raised her and whom she still loves. She never stops a robot. She just takes a shower immediately after every session, the company's contract-mandated monitoring of her home network logging it as another data point in her profile.

The company's behavioral prediction models indicate that compulsive showering correlates with late-stage burnout, which means you should start choosing a replacement for her from the vast and growing pool of the economically deprecated. Some of them would last longer than others, and some would actually enjoy their jobs. You always pick the ones who don't, the ones who eventually need a shower every time they log off, and sometime after that require a replacement of their own.

You understand the business case for this company policy, yet find it ironic that you would be barred from doing the job you choose people for. But it's not like you don't enjoy your own.

.finis.

Short story: Logs from a haunted heart

She's scared all the time. But is her fear the reason why her heart suddenly speeds up a dozen times a day, shifting in a second from the dull ticking of dread into the accelerating staccato of runaway panic? The diagnostics in her pacemaker's app say that everything is normal, but perhaps they can be faked by somebody with maintenance access to the device. She doesn't have it; she's only the patient.

Maybe her ex-husband, a medical tech sales rep, does. Too many things have default passwords companies never bother to change. But there'd be no point in talking with him, even if she hadn't moved across the country to avoid ever having to. In an emergency room they'd just look at the same app she has, and she can't get an appointment with a specialist before next month.

Tomorrow is the one year anniversary of the day she told her husband she was leaving.

She's scared. Maybe that's what makes her chest feel like it's going to break.

.finis.

There are only two emotions in Facebook, and we only use one at a time

We have the possibility of infinite emotional nuance, but Facebook doesn't seem to be the place for it. The data and psychology of how we react emotionally online are fascinating, but the social implications, although not specific to social networks, are rather worrisome.

A good way to explore our emotional reaction to Facebook news is through Patrick Martinchek's data set of four million posts from mainstream media during the period 2012-2016. I focused on news posts during 2016, most (93%) of which had received one or more of the emotional reactions in Facebook's algorithmic vocabulary: angry, love, sad, thankful, wow, and, of course, like.

In theory, an article could evoke any combination of emotions — make some people sad, others thankful, others a bit angry, and yet in others call for a simple "wow" — but it turns out that our collective emotional range is more limited. Applying to the data a method called Principal Component Analysis, we see that we can predict most of the emotional reactions to an article as a combination of two "hidden knobs":

  • There's a knob that increases the frequency of both love and wow reactions. We can just call that knob love.
  • The other knob increases the frequency of wows as well, but also, more significantly, the frequency of angry and sad, both in almost equal measure.

And that's it. Thankfulness, likes, even that feeling of "wow," are distributed pretty much at random through our reactions to news. What makes one article different from another in our eyes (or, more poetically, to our hearts) is something that makes us love it, and something else that makes us, with equal strength or probability, feel angry or sad about it.
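For the curious, here's a minimal sketch of the decomposition described above, assuming a hypothetical file and column names for the per-post reaction counts (the actual dataset uses its own schema):

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

REACTIONS = ["like", "love", "wow", "sad", "angry", "thankful"]

# Hypothetical file name; one row per news post with raw reaction counts.
posts = pd.read_csv("fb_news_2016.csv")

# Use each reaction's share of the post's total so viral posts don't dominate.
shares = posts[REACTIONS].div(posts[REACTIONS].sum(axis=1), axis=0).dropna()

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(shares))

# The loadings show which reactions each "hidden knob" turns up or down.
loadings = pd.DataFrame(pca.components_, columns=REACTIONS, index=["knob_1", "knob_2"])
print(loadings.round(2))
print("variance explained:", pca.explained_variance_ratio_.round(2))
```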

Despite their names, it's not logically necessary for the "strength" of love to be low when anger/sadness is high, or vice versa. Remember that they measure the frequency of different emotional responses; it's easy to imagine a news item that half of its readers will love, yet that will make the other half angry or sad.

Remarkably, that's not the case:

The graph shows how many news posts, relatively speaking, show different combinations of strength in the (horizontal) love and (vertical) angry/sad dimensions. Aside from a small group of posts that have zero strength in either dimension, and another, smaller group of more anomalous posts, most posts lie on a straight line between the poles of love and angry/sad: the stronger the love dimension of a post, the weaker its angry/sad dimension, and vice versa.

Different people have different, often opposite reactions to the same event. Why is our emotional reaction to news about them so relatively homogeneous? The answer is likely to be audience segmentation: each news post is seen by a rather homogeneous readership (that media source's target audience), so their reaction to the article will also be homogeneous.

In other words, a possible indicator that people with different preferences and values do read different media (and/or are shown different media posts by Facebook) is that the reactions to each post, either love or its statistical opposite, are statistically more homogeneous than they'd otherwise be. If everybody at a sports game is either cheering or booing at the same time, you can tell only one group of fans is watching it.

It's common, but somewhat disingenuous, to blame the use of recommendation algorithms for this. As soon as there are two TV stations in an area or two newspapers in a city, each has always tended to get its own audience, shaping itself to that audience's interests as much as it influences them. The fault, such as it is, lies not in our code, but in ourselves.

Two things make algorithmic online media in general, and social networks in particular, different. First, while resistant to certain classic forms of manipulation and pressure (e.g., censorship via a phone call to the TV network's owner; the exception is places like China, where censorship mechanisms are explicitly built into both technology and regulations), they are vulnerable to new ones (content farms, bots, etc.).

Second — and this is at the root of the current political kerfuffle around social networks — they need not be. Algorithmic recommendation is increasingly flexible and powerful; while it's unrealistic to require things like "no extremist content online, ever," the dynamics of what gets recommended and why can be, and continuously are, modified and tweaked. There's a flexibility to how Facebook, Twitter, or Google work and could work that newspapers don't have, simply because networked computers are infinitely more programmable than printing presses and pages of paper.

This puts them in a bind that would deserve sympathy if they weren't among the most valuable and influential companies in the world, and utterly devoid of any sort of instinct for public service until their bottom line is threatened: whatever they do or don't do risks backlash, and there's no legal, political, or social agreement as to what they should do. It's straightforward to say that they should censor extremist content and provide balanced information about controversial issues — in a way, we're asking them to fix not bugs in their algorithms, but bugs in our own instincts and habits — but there are profound divisions in many societies about what counts as extremism and what's controversial. To focus on the US: when first-line universities sometimes consider white supremacism a legitimate political position, and government officials in charge of environmental issues consider the current global scientific consensus on climatology a very undecided matter, there's no politically safe algorithmic way to de-bias content... and no politically safe way to just wash your hands of the problem.

Social networks aren't powerful just because of how many people they reach, and how much, how fast, and how far they can amplify what those people say. They are unprecedentedly powerful because they have an almost infinite flexibility in what they can show to whom, and how, and new capabilities can always unsettle the balance of power. Everywhere, from China to the US to the most remote corners of the developing world, we're in the sometimes violent process of recalculating what this new balance will look like.

"Algorithms" might be the new factor here, but it's human politics what's really at stake.

What makes an algorithm feminist, and why we need them to be

About one in nine engineers in the US is a woman, a figure from which some men infer that women are "naturally" bad at engineering. Many data-driven algorithms would reach the same conclusion; it's still the wrong one, but, dangerously, it seems blessed by the impartiality of algorithms. Here's how bias creeps in.

Imagine about one in two human beings — randomly distributed across geography, gender, race, income level, etc. — has a pattern of tiny horizontal lines under their left eyelids, and the other half has a pattern of tiny vertical lines; they don't know which group they belong to, and neither do their parents, teachers, or employers. If we take a sample of engineers and find that only one in ten shows horizontal instead of vertical lines, then the influence of vertical lines on engineering ability would be an interesting hypothesis, and the next step would be to look for confounding variables and mechanisms.

When it comes to gender, we do have a pretty clear mechanism: women are told from early childhood that they are bad at STEM disciplines, they are constantly steered in their youth towards more "feminine" activities by parents, teachers, media, and most people and messages they come across, and then they have to endure kinds and levels of harassment male colleagues don't. None of those things have anything to do with how good an engineer they can be, but they do make it much harder to become one. For a given stage of academic and professional development, a woman has most likely gone through harsher intellectual and psychological pressures than her male peers; a brilliant female engineer isn't proof that a good enough woman can be an engineer, but rather that women need to be extraordinary in order to reach the professional level of a less competent male peer.

An eight-to-one ratio of male to female engineers doesn't reflect a difference in abilities and potential, but rather the strength of the gender-based filters (which, again, begin when a child enters school, sometimes before that, and never stop).

But algorithms won't figure that out unless you give them information about the whole process. Add to a statistical model the different gender-based influences through a person's lifetime — the ways in which, for example, the same work is rated differently according to the perceived gender of the author — and any mathematical analysis will show that gender is, as far as the data can show, absolutely irrelevant; men and women go through different pipelines, even if inside the same organizations, so achievement rates aren't comparable without adjusting for the differences between them. Adding that kind of sociological information might seem extraneous, but, actually, not doing it is statistical malpractice: by ignoring key variables that do depend on gender (everything from how kids are taught to think about themselves to the presence of sexual harassment or bias in performance evaluations) you are setting yourself up to fall for meaningless pseudo-causal correlations.

In other words, in many cases a feminist understanding of the micropolitics of gender-based discrimination is a mathematically necessary part of data set preparation. Perhaps counterintuitively, ignoring gender isn't enough. Think of it as a sensor calibration problem: much data comes in one way or another from interactions between individuals, and those interactions are, empirically and verifiably, influenced by biases related to gender (and race, class, age, etc). If you don't account for that "sensor bias" in your model — and this takes both awareness of that need and working with the people who research and write about this, you can't half-ass it whether as an individual programmer or as a large tech company — you'll get the implications of the data very wrong.
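A toy simulation makes the point concrete. All numbers below are invented for illustration: ability is independent of gender by construction, but the recorded score passes through a biased "sensor," so a naive model finds a gender effect that disappears once the biasing mechanism is measured and included.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
gender = rng.integers(0, 2, n).astype(float)   # 1 = woman, 0 = man (toy coding)
ability = rng.normal(0.0, 1.0, n)              # independent of gender by construction

# The biased "sensor": exposure to steering, harassment, and biased evaluation,
# higher on average for women, and directly lowering the recorded score.
bias_exposure = 0.5 * gender + rng.normal(0.0, 0.2, n)
score = ability - bias_exposure + rng.normal(0.0, 0.5, n)

def ols(y, *columns):
    """Least-squares coefficients for y ~ intercept + columns."""
    X = np.column_stack([np.ones_like(y), *columns])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive model: "women score lower, so gender matters" (coefficient near -0.5).
print("naive gender coefficient:   ", ols(score, gender)[1].round(2))
# Once the biasing mechanism is measured and included, gender itself carries
# no information about the score (coefficient near 0).
print("adjusted gender coefficient:", ols(score, gender, bias_exposure)[1].round(2))
```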

We've been getting things wrong in this area for a long while, in a lot of ways. Let's make sure that as we give power to algorithms, we also give them the right data and understanding to make them more rational than us. Processing power, absent critical structural information, only guarantees logical nonsense. And logical nonsense has been the cause and excuse of much human harm.

Soccer, semantics, and political violence

Something journalism and poetry have in common is that the choice of words used to describe an event determines its meaning, as much as or more than the event itself.

"The inside story: how the Independiente barra brava's shakedown of Ariel Holan put the whole club in check" is an article in the Sports section.

"Systematic extortion in the public streets by criminal gangs operating with their faces uncovered continues" would be the lead story in the Crime section, if not on the front page of a newspaper.

"Decades on, the Argentine State remains unable to eliminate or contain criminal cartels whose members operate openly, enjoy significant popular support in some of their activities, and exercise de facto, if not continuous, control over certain physical spaces within the national territory" belongs in the Politics section.

The term mafia, applied to soccer barras, is more than metaphorical; the combination of state weakness, integration with popular culture, an almost symbiotic relationship with legal organizations, and the systematic use of extortion as one of the funding sources for other criminal activities is a close parallel to the traditional role of criminal organizations in Sicily, with soccer replacing religious and social activities as a source of at least nominal social validation. And the regular control the barras exert over stadiums, were it continuous instead of limited to match days, would be no less serious, politically speaking, than the Mexican State's loss of sovereignty over parts of its territory to criminal organizations.

This is characteristic of politics in its broadest sense: if the soccer barras were just starting their activities, they would be considered an unacceptable challenge to the republican system. After decades of existence, and their blending in with an often mythical "peaceful fan base," they live in the Sports section. Occasionally one or two important members of a barra are imprisoned for specific acts, ignoring systemic patterns of criminal activity; exactly as with the cartels in Mexico or the mafia in Sicily, this has only a partial effect on the power of those individuals, and absolutely none on the organizations themselves.

They aren't just "a few violent fans," nor are they a social aberration, or a problem of ethics or education. They are part of Argentina's social and political structure, and as long as they keep showing up in the newspapers' sports sections, they will continue to be.

Any sufficiently advanced totalitarianism is indistinguishable from Facebook

Gamification doesn't need to be enjoyable to be effective.

You're more likely to cheat on your taxes than to walk barefoot into a bank, even if it's summer and your feet hurt. That's because we don't just care about how bad the consequences of something could be, but also how certain they are to happen, and, illogically but consistently, how soon they will happen.

That's what makes Facebook so addictive. Staying another minute isn't going to make you happy, but it guarantees a small and immediate dose of socially-triggered emotion, and that's an incredibly powerful driver of behavior. The business of Facebook is to know enough about you, and have enough material, to make sure it can keep that subliminal promise while showing you targeted ads.

Governments' tools are noticeably blunter. Most of the laws that are generally respected reflect some sort of pre-existing social agreement. Conversely, where that social agreement doesn't exist (e.g., the legitimacy of buying dollars in Argentina, or the acceptability of misogyny pretty much everywhere), laws can only be enforced sporadically and with delay, and hence are seldom effective.

What the ongoing deployment of these technologies by totalitarian governments — and by the totalitarian arms of not-entirely-totalitarian governments — is making possible is the recreation of Facebook, but one co-founded by Foucault. The granularity, flexibility, and speed of perception and action, once a State is digitized enough, are unfathomable by the standards of any State in history. You can charge a fine, report a behavior to a boss, inconvenience a family member, impact a credit score, or notify a child's school the very moment a frowned-upon action is performed, with (sufficiently) total certainty and visibility. It doesn't have to be a large punishment or a lavish reward, or even the same for everybody: just as Facebook knows what you like, a government good enough at processing the data it has can know what you care about, and calibrate exactly how to use it, so even small transgressions and small "socially beneficial activities" will get a small but fast and certain punishment or reward. Small but fast and certain is a cheap and effective way of shaping behavior, as long as it's something you do care about, and not generic "points" or "achievements." It can be your children's educational opportunities, your job, your public image, anything — governments, once they develop the right process and software infrastructure, can always find buttons to push.

This kind of detail-oriented totalitarianism used to be possible only in the most insanely paranoid societies (the Stasi being a paradigmatic example), but it scaled very poorly, and at ultimately suicidal economic and social costs.

Doing it with contemporary technology, on the other hand, scales very well, as long as a government is willing to cede control of the "last mile" of carrots and sticks to software. You would be very surprised if you entered Facebook one day and saw something as impersonal and generic, or at best as fake-personalized, as most interactions with the State are now. A government leveraging contemporary technology has some significant computing power constantly looking at you and thinking about you — what you're doing, what you care about, what you're likely to do next — and instead of different parts of the government keeping their own files and dealing with you on their own time, everything from the cop on your street to your grandparents' pharmacist is integrated into that bit of the State that is exclusively and constantly dedicated to nudging you into being the best citizen you can possibly be.

It won't just be a cost-effective way of social control. Everything we know of psychology, and our recent experience with social networks and other mobile games, suggests it'll be an effective way of shaping our decisions before we even make them.

Open Source is one of the engines of the world's economy and culture. Its next iteration will be bigger.

Once upon a time, the very concept of Open Source was absurd, and only its proponents ever thought it could be anything other than marginal. Important software could only be built and supported by sophisticated businesses, an expensive industrial component whose blueprints — the source code — were extremely valuable.

But Open Source won. It became clear, to no historian's surprise, that once knowledge is sufficiently distributed and tools become cheap enough, distributed development by heterogeneous (and heterogeneously motivated) people not only creates high-quality software at zero marginal cost; because it only takes a single motivated individual to leverage existing developments and move them forward, regardless of novelty or risk, it's inherently much more creative.

Open Source developers can take risks others can't, and they begin from further ahead, on the shoulders of other, taller developers. What's more adventurous than a single individual toying with an idea out of love and curiosity? When has true innovation begun in any other way?

The form of this victory, though, wasn't the one expected by early adopters. Desktop computers as they were known are definitely on the wane, and it's still not "the Year of Linux on the Desktop." Relatively few people knowingly use Open Source software as their main computing environment, and the smartphone, history's most popular personal computing platform, is, regardless of software licenses, as regulated and proprietary an environment as you could imagine.

The social and political promise of Open Source is still unrealized. Things have software inside them now, programs monitoring and controlling them to a larger degree than most people imagine, and this software is closed in every sense of the word. It's not just about surveillance: the software in car engines lies to pass government regulation tests, the software controlling electric batteries makes them work worse than they could so you have the "option" of paying the manufacturer more to flip a software switch and de-hobble them, and so on and so forth. Things work worse than they say they do, do things they aren't supposed to, and are not really under your control even after you've bought them. There's little you can do about that, and that little is very difficult, not just because the source code is hidden, but because in many cases, and through a Kafkaesque global system of "security" and copyright laws, it's literally a crime to try to understand, never mind fix, what this thing you bought is doing.

No, the main impact of Open Source was also what made it possible: the Internet. It's not just that the overwhelming majority of the software that runs it, from most of the operating systems of most servers to the JavaScript frameworks rendering web pages, is Open Source. There could have been no explosive growth of the online world with license costs attached to every individual piece of software, no free-form experimentation with content, shapes, tools, and modes of use. Most of the sites and services we use today, and most of the tools used to build them, began as an individual's crazy idea — as just one example, the browser you're using to read this was originally a tool built by and for scientists — and, had the Internet's growth been directed by the large software companies of that age, it would look more like cable TV, in diversity, speed of technological change, and overall social impact, than what we have now.

Even if you don't own a smartphone or a computer, finance, government, culture, our entire society has been profoundly influenced by an Internet, and a computing ecosystem in general, simply unthinkable without Open Source. Like many of the truly influential technological shifts, its invisibility to most people doesn't diminish, but rather highlights, its ubiquity and power.

What's next?

More Open Source is an obvious, true, but conservative observation. Of course people, governments, and companies (even those whose business model includes selling some software) will continue to write, distribute, and use Open Source. Each of them for their own goals, some of them attempting to cheat or break the system, but, most likely, always coming back to the economic attractor of a system of creating and using technology that, for many uses and in many contexts, simply works too well to abandon.

What comes next is what's happening now. Still not fully exploited, the Internet is no longer the cutting edge of how computing is impacting our societies. Call this latest iteration Artificial Intelligence, cognitive computing, or however you want. Silicon Valley throws money at it, popular newspapers write about the danger it poses to jobs, China aims at having the most advanced AI technology in the world as a strategic goal of the highest priority, and even Vladimir Putin, not a man inclined to idealistic whimsy, said that whichever country leads in Artificial Intelligence "will rule the world."

Unlike Open Source during its critical years, Artificial Intelligence certainly isn't a low-profile phenomenon. But a lot of the coverage seems to make the same assumptions the software industry used to make, that truly relevant AI can only be built by superpowers, giant companies, or cutting-edge labs.

To some degree this is true: some AI problems are still difficult enough that they require billions of dollars to attack and solve, and the development of the tools required to build and train AIs requires in many cases extremely specialized knowledge in mathematics and computer science.

However, "some" doesn't mean "all," and once the tools used to build AIs are Open Source, which many if not most of them are, using them becomes progressively eaiser. There's something happening that has happened before: almost every month it's cheaper, and it requires less specialized knowledge, to make a program that learns from humans how to do something no machine ever could, or that finds ways to do it much better than we can. Rings a bell?

The more intuitive parallel isn't software, but rather another success story of open, collaborative development that went from a ridiculous proposition to upending a centuries-old industry: Wikipedia. Like Open Source software, and with a higher public profile, Wikipedia went from an esoteric idea with no chances of competing in quality with the carefully curated professional encyclopedias, to what's very often the first (and, too often for too many people, the only) source of factual information about a topic.

What we're beginning to build is a Wikipedia of Artificial Intelligences, or, better yet, an Internet of them: smart programs highly skilled in specific areas that anybody can download, use, modify, and share. The tools have only just begun to be available, and the intelligences themselves are still mostly built by programmers for programmers, but as the know-how required to build a certain level of intelligence becomes smaller and better distributed, this is beginning to change.

Instead of scores of doctors contributing to a Wikipedia page or a personal site about dealing with a certain medical emergency at home, we'll have them contributing to teach what they know to a program that will be freely available to anybody, giving perhaps life-saving advice in real time. A program any doctor in the world will be able to contribute to, modify, and enhance, keeping up with scientific advances, adapting it to different countries and contexts.

It won't replace doctors, lawyers, interior decorators, editors, or other human experts — certainly not the ones who leverage those programs to make themselves even better — but it'll potentially give each human in the world access to advice and intellectual resources in every profession, art, and discipline known to humankind, from giving you honest feedback about your amateur opera singing, to reading and explaining the meaning of whatever morass of legal terms you're about to click "I Accept" to. Instantaneously, freely, continuously improving, and not limited to what a company would find profitable or a government convenient for you to know.

If the Internet, whenever and wherever we choose to, is or can be something we build together, a literal commons of infinitely reusable knowledge, we'll be building, when and where we choose to, a commons of infinitely reusable skills at our command.

It will also resemble Wikipedia more than Open Source in the ease with which people will be able to add to it. Developing powerful software has never been easier, but contributing to Wikipedia, or making a post on a site or social network about something you know about, only requires technical knowledge many societies already take for granted: open a web page and start typing about the history of Art Deco, your ideas for a revolutionary fusion of empanadas with Chinese cuisine, or whatever else it is you want to teach the world about.

Teaching computers about many things will be even easier than that. We're close to the point where computers will be able to learn your recipe just from a video of you cooking and talking about it, and if besides sending that video to a social network you give access to it to an Open Cook, then it'll learn from your recipe, mix it with other ideas, and be able to give improved advice to anybody else in the world. You'll also be able to directly engage with these intelligences to teach them deliberately: just as artificial intelligences can learn to beat games just by playing them, they'll be able to "pick up" skills from humans by doing things and asking for feedback. And if you don't like how it does something, you can always teach it to do it in a different way, and anybody will be able to use your version if they think it's better, and in turn modify it any way they want.

Neither Open Source nor Wikipedia, under different names, looks, and motivations, is as new as it seems. They've been known for decades, and only seemed pointless or impossible because our shared imagination often runs a bit behind our shared power. We've begun to realize we can make computers do an enormous number of things, much sooner than we thought we would, and while we try to predict and shape the implications of this, we're still approaching it as if revolutionary technology can only work if built and controlled by giant countries and companies.

They are a part of it, but not the only one, and over the long term perhaps not even the most important part. Google matters because it gives us access to the knowledge we — journalists, scientists, amateurs, scholars, people armed with nothing more and nothing less than a phone and curiosity — built and shared. We go to Facebook to see what we are doing.

Some Artificial Intelligences can only be built by sophisticated, specialized organizations; some companies will become wealthy (or even more so) doing it. And some others can and will be built by all of us, together, and over the long term their impact will be just as large, if not larger. The world changed once everybody was able, at least in theory, to read. It changed again when everybody was able, at least in theory, to write something that everybody in the world can read.

How much will it change again once the things around us learn how to do things on their own, and we teach them together?


This article is based on the talk I gave at the Red Hat Forum Buenos Aires 2017.

Russia 1, Data Science 0

Both sides in the 2016 election had access to the best statistical models and databases money could buy. If Russian influence (which as far as we know involved little more than the well-timed dumping of not exactly military-grade hacked information, plus some Twitter bots and Facebook ads) was at any level decisive, then it's a slap in the face for data-driven campaigning, which apparently hasn't rendered obsolete the old art of manipulating cognitive blind spots in media coverage and political habits ("they used Facebook and Twitter" explains nothing: so did all US candidates, in theory with better data and technology, and so do small Etsy shops; it should have made no difference).

The lessons, I suspect, are three:

  • The theory and practice of data-driven campaigning is still very immature. Algorithmize the Breitbart-Russia-Assange-Fox News maneuver, and you'll have something far ahead of the state of the art. (I believe this will come from more sophisticated psychological modeling, rather than more data.)
  • If a country's political process is as vulnerable as the US' was to what the Russians did, then how will it do against an external actor properly leveraging the kind of tools you can develop at the intersection of obsessive data collection, an extremely Internet-focused government, cutting-edge AI, and an assertive foreign policy?
  • You know, like China. Hypothetically.

Whenever this happens, the proper reaction isn't to get angry, but to recognize that a political system proved embarrassingly vulnerable, and to take measures to improve it. That said, that's slightly less likely to happen when those informational vulnerabilities are also exploited by the same local actors that are partially responsible for fixing them.

(As an aside, "out under-investment on security /deliberate exploiting of regulatory gaps we lobbied for/cover-up of known vulnerabilities would've been fine if not for those dastardly hackers" is also the default response of large companies to this kind of thing; this isn't a coincidence, but a shared ethos.)

What does it mean for Argentina to grow 3%?

Is it a lot, is it a little? It's both. What follows is a quick explanation, ignoring a great many important factors, of what that 3% the World Bank predicts means for people's pockets and for the elections.

The clearest way I can think of to explain that 3% is to consider what would happen if it held from now until the 2019 elections. Simplifying a lot, and assuming nothing unexpectedly good happens (in a rather ruthless sense, one example would be an ecological collapse in the soy-producing regions of the US) or unexpectedly bad (say, a war in Korea), 3% from now to 2019 would allow:

  • Cutting the deficit by roughly a fifth (or more, if state spending is reduced).
  • Increasing state spending per person by roughly 10% (or more, if the deficit is kept constant or allowed to grow).

On the one hand, this would be a significant achievement: growing 3% without a jump in the price of your primary exports is very hard to pull off, and doing it for several years in a row even harder. The US would love to manage it in a sustained way, and last year fewer than one in three countries did. On the other hand, it would translate into positive, but not spectacular, changes in Argentines' standard of living.

Looking further out to the 2023 elections, the improbable becomes almost certain, but imagining that the 3% yearly growth holds — and this really would be an administrative and political triumph — the deficit could shrink to a fifth of what it is now, with state spending per person around a third higher (with different numbers depending on tax reforms, political decisions, etc.; this is a reasonable scenario, nothing more). A very positive change in quality of life, definitely. Not a spectacular one. About as good as it would be realistic to expect, probably.
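As a back-of-the-envelope check of the compounding involved (assuming the 3% rate holds from 2017 on, and modeling none of the fiscal assumptions above):

```python
# Cumulative growth under a constant 3% yearly rate; the horizons assume a
# 2017 starting point, roughly two years to 2019 and six to 2023.
for label, years in [("2019 elections", 2), ("2023 elections", 6)]:
    print(f"cumulative growth by the {label}: {1.03 ** years - 1:.1%}")
```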

Would that be politically enough to sustain, whichever party wins, a coherent economic policy for the decade or more it would take to put the country on something like a self-sustaining growth curve? That's the question. Historically, Argentina has three structural economic problems:

  • A not very advanced economy, with internal mechanisms practically designed to make improving it difficult.
  • Political (and ultimately cultural) timeframes incompatible with how long the kind of sustained incremental growth that is the only way economies actually grow takes (barring historical exceptions in situations Argentina is not in).
  • A fairly rigid "ceiling" on the efficiency of the economy that seems quite structural; we've never managed to break through it, and I suspect doing so would require fairly radical cultural and social changes, especially in the context of Argentina's political traditions.

In the long term (which in this context sadly means "not the next elections, but the ones after that") the Government's challenge — any government's — is twofold. On the one hand, an administrative, economic, and internal-negotiation policy that allows a significant growth rate to be sustained over time; on the other, the satisfaction of public expectations that are higher than what that growth rate makes possible in a purely material sense (and in some cases rightly so; a family going hungry can't wait for that 3% to do its work). Performing that constant juggling act between the material and the symbolic over at least a couple of decades — in a country deeply suspicious, in a way that is historically understandable but also too automatic, of the very idea of a technically sophisticated economy and State — is what the political class elected and/or tolerated by Argentines has not managed to achieve, in the few cases where it was even attempted.

Summing up: that 3% is, empirically, an achievement. Sustaining it consistently until the next elections, and especially until the ones after those, would be a remarkable administrative and political triumph, besides requiring a significant dose of luck.

It is the country's peculiarity, and the trap it has been caught in for more than a century, that, for reasons both good and bad, it's not at all clear that this would be enough.

Tesla (or Google) and the risk of massively distributed physical terrorist attacks

You know, an autonomous car is only a software vulnerability away from being a lethal autonomous weapon, and a successful autonomous car company is only a hack away from being the world's largest (if single-use) urban combat force. Such an event would easily be the worst terrorist attack in history. Imagine a year's worth of traffic deaths, in multiple countries all over the world, compressed into a single, horrifying span of ten minutes. And how ready is your underfunded public transit system to cope with a large proportion of the city's cars being unusable during the few days it takes the company to deal with the hack while everybody is going at it with pitchforks, both legal and more or less literal?

But this is a science-fictional premise that's already been used in fiction more than once. In the real world, the whole of our critical software infrastructure is practically impervious to any form of attack, and, if nothing else, companies take the ethical responsibilities inherent in their control over data and systems with the seriousness it demands, even lobbying for higher levels of regulation than less technically sophisticated public and governments demand. And, while current on-board software systems are known to be ridiculously vulnerable to remote attacks, it's only to be expected that more complex programs running on heterogeneous large-scale platforms under overlapping spheres of regulation and oversight will be much safer.

So nothing to worry about.

Probability-as-logic vs probability-as-strategy vs probability-as-measure-theory

Attention conservation notice: Elementary (and possibly not-even-right) if you have the relevant mathematical background, pointless if you don't. Written to help me clarify to myself a moment of categorical (pun not intended) confusion.

What's a possible way to understand the relationship between probability as the (by Cox) extension of classical logic, probability as an optimal way to make decisions, and probability in the frequentist usage? Not in any deep philosophical sense, just in terms of pragmatics.

I like to begin from the Bayes/Jaynes/Cox view: if you take classical logic as valid (which I do in daily life) and want to extend it in a consistent way to continuous logic values (which I also do), then you end up with continuous logic/certainty values we unfortunately call probability due to historical reasons.

Perhaps surprisingly, its relationship with frequentist probability isn't necessarily contentious. You can take the Kolmogorov axioms as, roughly speaking, helping you define a sort of functor (awfully, based on shared notation and vocabulary, an observation that made me shudder a bit — it's almost magical thinking) between the place where you do probability-as-logic and a place where you can exploit the machinery of measure theory. This is a nice place to be when you have to deal with an asymptotically large number of propositions; possibly the Probability Wars were driven mostly by doing this so implicitly that we aren't clear about what we're putting *into* this machinery, and then, because the notation is similar, forgetting to explicitly go back to the world of propositions, which is where we want to be once we're done with the calculations.

What made me stare a bit at the wall is the other correspondence: let's say that for some proposition A, P[A] > P[\neg A] in the Bayesian sense (we're assuming the law of excluded middle, etc.; this is about a different kind of confusion). Why should I bet on A? In other words, why the relationship between probability-as-certainty and probability-as-strategy? You can define probability from a decision-theoretic point of view (and historically speaking, that's how it was first thought of), but why the correspondence between those two conceptually different formulations?

It's a silly non-question with a silly non-answer, but I want to record it because it wasn't the first thing I thought of. I began by thinking about P[\text{win} \mid (P[A] > P[\neg A]) \wedge \text{bet on } A], but that leads to a lot of circularity. It turns out that the forehead-smacking way to do it is simply to observe that "the best strategy is to bet on A" is true iff A is, and this isn't circular if we haven't yet assumed that probability-as-strategy is the same as probability-as-logic; rather, it's a non-tautological consequence of the assumed psychology and sociology of what bet on means: I should've done whatever ended up working, regardless of what the numbers told me (I'll try to feel less upset the next time somebody tells me that).

But then, in the sense of probability-as-logic, P[\text{the best strategy is to bet on } A] = P[A] by substituting equivalent propositions (and hence without resorting to any frequentist assumption about repeated trials and the long term), so, generally speaking, you end up with probability-as-strategy being part of probability-as-logic. I'm likely counting angels dancing on infinitesimals here, but it's something that felt less clear to me earlier today: probability-as-strategy is probability-as-logic, you're just thinking about propositions about strategies, which, confusingly, in the simplest cases end up having the same numerical certainty values as the propositions the strategies are about. But those aren't the same propositions, although I'm not entirely sure that in practice, given the fundamentally intuitive nature of bet on (insert here a very handwavy argument from evolutionary psychology about how we all descend from organisms who got this well enough not to die before reproducing), you get in trouble by not taking this into account.
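Writing that last step out, with S_A as shorthand (introduced here, not standard notation) for "the best strategy is to bet on A":

\begin{align*}
S_A &:\equiv \text{``the best strategy is to bet on } A \text{''} \\
S_A &\leftrightarrow A \quad \text{(from what ``bet on'' means, not from assuming the two probabilities coincide)} \\
P[S_A] &= P[A] \quad \text{(probability-as-logic assigns equivalent propositions the same certainty)}
\end{align*}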

Original Fic: The Gift of Memory

Not the kind of story I usually post here, but I don't just write dread-infused, mostly-dystopian sci-fi, you know?


In your dreams the world is full of marvels, love, safety. You're immortal and beautiful, and reality, charmed, dances with your thoughts.

In your nightmares the Universe's laws are poisoned, malignant, infected by something else. Something that shouldn't be there, is. Something that hates, haunts, hungers for you.

In your waking you forget they are memories.

We could've taken them with the power and the beauty and the everlasting life, but we enjoy reliving the endless night of our victory when we sucked the world dry and left it the ruined husk it is now. We left you the memories and the sadness, but not the knowledge. At times, in the satiety after other victories among the unperceived rubble of other worlds, it gives us an extra bit of joy.

.finis.


Big Data, Endless Wars, and Why Gamification (Often) Fails

Militaries and software companies are currently stuck in something of a rut: billions of dollars are spent on the latest technology, including sophisticated and supposedly game-changing data gathering and analysis, and yet for most, victory seems at best to be a matter of luck, and at worst perpetually elusive.

As different as those "industries" are, this common failure has a common root; perhaps unsurprisingly so, given the long and complex history of cultural, financial, and technological relationships between them.

Both military action and gamified software (of whatever kind: games, nudge-rich crowdsourcing software, behaviorally intrusive e-commerce shops, etc) are focused on the same thing: changing somebody else's behavior. It's easy to forget, amid the current explosion — pun not intended — of data-driven technologies, that wars are rarely fought until the enemy stops being able to fight back, but rather until they choose not to, and that all the data and smarts behind a game are pointless unless more players do more of what you want them to do. It doesn't matter how big your military stick is, or how sophisticated your gamified carrot algorithm: that's what they exist for.

History, psychology, and personal experience show that carrots and sticks, alone or in combination, do work. So why do some wars take forever, and some games or apps whimper and die without getting any traction?

The root cause is that, while carrots and sticks work, different people and groups have different concepts of what counts as one. This is partly a matter of cultural and personal differences, and partly a matter of specific situations: as every teacher knows, a gold star only works for children who care about gold stars, and the threat of being sent to detention only deters those for whom it's not an accepted fact of life, if not a badge of honor. Hence the failure of most online reputational systems, the endemic nature of trolls, the hit-and-miss nature of new games not based on an already successful franchise, or, for that matter, the enormous difficulty even major militaries have stopping insurgencies and other similar actors.

But the root problem behind that root problem isn't a feature of the culture and psychology of adversaries and customers (and it's interesting to note that, artillery aside, the technologies applied to both aren't always different), but of the culture and psychology of civilian and military engineers. The fault, so to speak, is not in our five-star rating systems, but in ourselves.

How so? As obvious as it is that achieving the goals of gamified software and military interventions requires a deep knowledge of the psychology, culture, and political dynamics of targets and/or customer bases, software engineers, product designers, technology CEOs, soldiers, and military strategists don't receive more than token encouragement to develop a strong foundation in those areas, much less are required to do so. Game designers and intelligence analysts, to mention a couple of exceptions, do, but their advice is often given no more than a half-hearted ear, and, unless they go solo, they lack any sort of authority. Thus we end up, by and large, with large and meticulously planned campaigns — of either sort — that fail spectacularly or slowly fizzle out without achieving their goals, not for failures of execution (those are also endemic, but a different issue) but because the link between execution and the end goal was formulated, often implicitly, by people without much training in or inclination for the relevant disciplines.

There's a mythology behind this: the idea that, given enough accumulation of data and analytical power, human behavior can be predicted and simulated, and hence shaped. This might yet be true — the opposite mythology of some ineffable quality of unpredictability in human behavior is, if anything, even less well-supported by facts — but right now we are far from that point, particularly when it comes to very different societies, complex political situations, or customers already under heavy "attack" by competitors. It's not that people can't be understood, and forms of shaping their behavior designed, it's that this takes knowledge that for now lies in the work and brains of people who specialize in studying individual and collective behavior: political analysts, psychologists, anthropologists, and so on.

They are given roles, write briefs, have fun job titles, and sometimes are even paid attention to. The need for their type of expertise is paid lip service to; I'm not describing explicit doctrine, either in the military or in the civilian world, but rather more insidious implicit attitudes (the same attitudes that drive, in an even more ethically, socially, and pragmatically destructive way, sexism and racism in most societies and organizations).

Women and minorities aside (although there's a fair and not accidental degree of overlap), people with strong professional training in the humanities are pretty much the people you're least likely to see — honorable and successful exceptions aside — in a C-level position or having authority over military strategy. It's not just that they don't appear there: they are mostly shunned, and implicitly or explicitly, well, let's go with "underappreciated." Both Silicon Valley and the Pentagon, as well as their overseas equivalents, are seen and see themselves as places explicitly removed from that sort of "soft" and "vague" thing. Sufficiently advanced carrots and sticks, goes the implicit tale, can replace political understanding and a grasp of psychological nuance.

Sometimes, sure. Not always. Even the most advanced organizations get stuck in quagmires (Google+, anyone?) when they forget that, absent an overwhelming technological advantage, and sometimes even then (Afghanistan, anyone?) successful strategy begins with a correct grasp of politics and psychology, not the other way around, and that we aren't yet at a point where this can be provided solely by data gathering and analysis.

Can that help? Yes. Is an organization that leverages political analysis, anthropology, and psychology together with data analysis and artificial intelligence likely to out-think and out-match most competitors regardless of relative size? Again, yes.

Societies and organizations that reject advanced information technology because it's new have, by and large, been left behind, often irreparably so. Societies and organizations that reject the humanities because they are traditional (never mind how much they have advanced) risk suffering the same fate.

A simplified nuclear game with Kim Jong-un

Despite its formal apparatus and cold reputation, game theory is in fact the systematic deployment of empathy. It's hard to overstate how powerful this can be, with or without mathematical machinery behind it, so let's take an informal look at a game-theoretical way of empathizing with somebody none of us would particularly want to, North Korea's Kim Jong-un.

First, a caveat: as I'm not trained in international politics, and this is an informal toy model rather than a proper analytical project, it'll be very oversimplified both in form and content. The main point is simply to show a quick example of how to think "game-theoretically" (in a handwavy, pre-mathematical sense) that for once isn't the Prisoner's Dilemma.

This particular game has two players, Kim and the US, and three possible outcomes: regime change, collapse, and status quo. We don't need to put specific values to each outcome to note that each player has clear preferences:

  • For the US, collapse < status quo < regime change
  • For Kim, collapse, regime change < status quo

(From Kim's point of view, a collapsing North Korea and one where he's no longer in charge are probably equivalent.)

Let's simplify the United States' possible moves to attempt regime change and do nothing. The latter results in the status quo with certainty, while the former might end up in a proper regime change with probability p, or in a more or less quick collapse with probability 1-p. Therefore, the United States will attempt a regime change as soon as

 \displaystyle p \times \mbox{ regime change} + (1-p) \times \mbox{ collapse} > \mbox{status quo}

There are multiple ways in which Kim's perceived risk can rise, even aside from direct threats. For example:

  • Decreased rapport between the US and South Korea or China (the two major countries who would suffer the brunt of the costs of a collapse) decreases the cost of collapse in the US' strategic calculations, and hence makes a regime change attempt more likely.
  • Every attempt of regime change by the US elsewhere in the world, and any expression of increased self-confidence in their ability to perform one, makes Kim's estimate of the US' estimate of p that much higher, and hence a regime change attempt more likely.
  • Any internal change in North Korea's politics that risks Kim's control of the country, should it be detected, will also raise p.
  • For that matter, a sufficiently strong fall in their military capabilities would eventually have the same effect.

Kim most likely knows he can't actually defend himself from an attempted regime change (there's no repelled regime change attempt outcome), so his only shot at staying in power is to change the US' strategic calculus. Given how unlikely it seems to be that he can make the status quo more desirable, he has, from a strategic point of view, to make the cost of an attempted regime change high enough to deter one. That's what atomic bombs are for: you change the payoff matrix, and you change the game equilibrium. Once you can blow up something in the United States, which of course has an extremely negative value for the US, then even if p = 1,

 \displaystyle (p \times \mbox{ regime change} + (1-p) \times \mbox{ collapse}) + \mbox{Alaska goes boom} < \mbox{status quo}
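
To make the mechanics concrete, here's a minimal sketch in Python with entirely invented utility numbers (nothing below comes from the post or from any real analysis; it just evaluates the two inequalities above for a few values of p):

```python
# Toy payoff comparison for the US player, using invented utility numbers.
# "nuke_cost" stands in for the "Alaska goes boom" term in the inequality above.

def attempt_value(p: float, regime_change: float, collapse: float,
                  nuke_cost: float = 0.0) -> float:
    """Expected value, to the US, of attempting regime change."""
    return p * regime_change + (1 - p) * collapse + nuke_cost

STATUS_QUO = 0.0        # baseline
REGIME_CHANGE = 10.0    # assumed positive value to the US
COLLAPSE = -20.0        # assumed negative value (refugees, loose weapons, ...)
NUKE_COST = -100.0      # assumed cost of a nuclear strike on US soil

for p in (0.5, 0.8, 1.0):
    without_nuke = attempt_value(p, REGIME_CHANGE, COLLAPSE)
    with_nuke = attempt_value(p, REGIME_CHANGE, COLLAPSE, NUKE_COST)
    print(f"p={p:.1f}  attempt (no nuke): {without_nuke:6.1f}"
          f"  attempt (nuke): {with_nuke:6.1f}  status quo: {STATUS_QUO}")
# Without the deterrent, the attempt beats the status quo once p is high enough
# (here, above 2/3); with the deterrent, it never does, even at p = 1.
```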

The unintended problem is that, by both signalling and action, Kim and his regime have convinced the world that they are not entirely rational in strategic terms. As Schelling noted, deterrence often requires convincing other players that you're "crazy enough to do it," but in Kim's case nobody feels entirely certain that he will only use a nuclear weapon in case of an attempted regime change, or exactly what he'd consider one. So, although possessing a nuclear weapon decreases the expected value of a regime change attempt, it also decreases the value of the status quo, making the net impact on the US' strategic calculus (the real goal of North Korea's nuclear program) doubtful. It can, and perhaps has, set the system on a dangerous course: the US decries the country as dangerous, the probability of a regime change attempt grows, Kim tries to develop and demonstrate stronger nuclear capabilities, this makes the US posture harsher, etc.

In this toy model — and I emphasize it's one — any attempt to de-escalate has to begin by acknowledging that Kim's preferences between outcomes are what they are. Sanctions that weaken the regime spur, rather than delay, nuclear development. Paradoxically and distastefully, what you want is to credibly commit to not attempting a regime change, which at this point can only be done by actively strengthening the regime. This is something that both China and South Korea seem acutely aware of: pressures on and threats to North Korea tend to be of the "annoying but not regime-threatening" kind, as anything stronger would be counterproductive and not credible, and their assistance to the country has nothing to do with ideological sympathy, and everything to do with keeping the country away from collapse.

But not everything is bleakly pragmatic in game theory, and more humane suggestions can be derived from the above analysis. E.g.,

  • A Chinese offer to strengthen and modernize North Korea's nuclear command chain to avoid hasty or accidental deployments would raise a bit the value of the status quo without increasing the chance of a regime change attempt, a mutual win that'd probably be accepted.
  • Any form of humanitarian development, as long as it's not seen as threatening the regime, could be implemented if Kim can sell it internally as being his own accomplishment. That'd be very annoying to everybody else, but suggests that quality of life in North Korea (although not political freedom) can be improved in the short term.
  • Credibly limited tit-for-tat counterattacks might, paradoxically, reinforce everybody's trust in mutual boundaries. So, if a North Korean hack against a US bank is retaliated against by hitting Kim's own considerable financial resources in a way that is obviously designed to hurt him while also obviously designed to not impact his grip on power, that'd have a much higher chance of changing his behavior than threatening war.

To once again repeat my caveats, this is far from a proper analysis. To mention one of a multitude of disqualifying limitations, useful strategic analysis of this kind often involves scores of players (e.g., we'd have to look at internal politics in North and South Korea, China, Japan, and the United States, to begin with) with multiple, overlapping, multi-step games, and certainly more detailed and well-sourced domain information than what I've applied here. To derive real-world opinions or suggestions from it would be analytical malpractice.

The point of the article isn't to give yet another uninformed opinion on international politics, but rather to show how even a very primitive and only roughly formal analysis can help frame a discussion about a complex topic in a way that a more unstructured approach couldn't, especially when there are strong moral issues at play.

Sometimes emotions get in the way of understanding somebody else. Thankfully, we have maths to help with that.

This screen is an attack surface

A very short note on why human gut feeling isn't just subpar, but positively dangerous.

One of the most active areas of research in machine learning is adversarial machine learning, broadly defined as the study of how to fool and subvert other people's machine learning algorithms for your own goals, and how to prevent it from happening to yours. A key way to do this is through controlling sampling; the point of machine learning, after all, is to have behavior be guided by data, and sometimes the careful poisoning of what an algorithm sees — not the whole of its data, just a set of well-chosen inputs — can make its behavior deviate from what its creators intended.
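
As a toy illustration of how little poisoning it can take, here's a sketch with scikit-learn on synthetic data: flip the labels of a small, carefully chosen subset of the training set and the learned model shifts, even though the rest of the data is untouched. Everything below (data, attack, numbers) is invented for the example.

```python
# Toy label-flipping "poisoning" of a linear classifier (all data is synthetic).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=600, centers=2, cluster_std=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression().fit(X_train, y_train)

# "Poison" the training set: flip the labels of the 30 class-0 points that sit
# closest to the mean of class 1 -- a small, carefully chosen subset.
target_mean = X_train[y_train == 1].mean(axis=0)
distances = np.linalg.norm(X_train - target_mean, axis=1)
class0_idx = np.where(y_train == 0)[0]
flip_idx = class0_idx[np.argsort(distances[class0_idx])[:30]]
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1

poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

print("accuracy, trained on clean labels:   ", clean_model.score(X_test, y_test))
print("accuracy, trained on poisoned labels:", poisoned_model.score(X_test, y_test))
# The exact numbers depend on the data, but a handful of well-chosen flipped
# labels is usually enough to visibly shift the learned decision boundary.
```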

A very public example of this is the nascent tradition of people collectively turning a public Microsoft demonstration chatbot into a bigot spouting conspiracy theories, by training it with the right conversations, last year with "Tay" and this week with "Zo." Humans are obviously subject to all sorts of analogous attacks through lies, misdirection, indoctrination, etc, and a big part of our socialization consists of learning to counteract (and, let's be honest, to enact) the adversarial use of language. But there's a subtler vector of attack that, because it's not really conscious, is extremely difficult to defend against.

Human minds rely very heavily on what's called the availability heuristic: when trying to figure out what will happen, we tend to give more weight to possibilities we can easily recall and picture. This is a reasonable automatic process in stable environments and first-hand observations, as it's fast and likely to give good predictions. We easily imagine the very frequent and the very dangerous, so our decision-making follows probabilities, with a bias towards avoiding that place where a lion almost ate us five years ago.

However, we don't observe most of our environment first-hand. Most of us, thankfully, have more exposure to violence through fiction than through real experience, always in highly memorable forms (more and better-crafted stories about violent crime than about car accidents), making our intuition misjudge relative probabilities and dangers. The same happens in every other area of our lives: tens of thousands of words about startup billionaires for every phrase about founders who never got a single project to work, Hollywood-style security threats versus much more likely and cumulatively harmful issues, the quick gut decision versus the detached analysis of multiple scenarios.

And there's no way to fix this. Retraining instincts is a difficult and problematic task, even for very specific ones, much less for the myriad different decisions we make in our personal and professional lives. Every form of media aims at memorability and interest over following reality's statistical distribution — people read and watch the new and spectacular, not the thing that keeps happening — so most of the information you've acquired during your life comes from a statistically biased sample. You might have a highly accurate gut feeling for a very specific area where you've deliberately accumulated a statistically strong data set and interacted with it in an intensive way, in other words, where you've developed expertise, but for most decisions we make in our highly heterogeneous professional and personal activities, our gut feelings have already been irreversibly compromised into at best suboptimal and at worst extensively damaging patterns.

It's a rather disheartening realization, and one that goes against the often raised defense of intuition as one area where humans outperform machines. We very much don't, not because our algorithms are worse (although that's sometimes also true) but because training a machine learning algorithm allows you to carefully select the input data and compensate for any bias in it. To get an equivalently well-trained human you'd have to begin when they are very young, put them on a diet of statistically unbiased and well-structured domain information, and train them intensively. That's how we get mathematicians, ballet dancers, and other human experts, but it's very slow and expensive, and outright impossible for poorly defined areas — think management and strategy — or ones where the underlying dynamics change often and drastically — again, think management and strategy.

So in the race to improve our decision-making, which over time is one of the main factors influencing our ultimate success, there's really no way around replacing human gut feeling with algorithms. The stronger you feel about a choice, the more likely it is to be driven by how easy it is to picture, and that's going to have more to do with the interesting and spectacular things you read, watched, and remember than with the boring or unexpected things that do happen.

Psychologically speaking, those are the most difficult and scariest decisions to delegate. Which is why there's still, and might still be for some time, a window of opportunity to gain competitive advantage by doing it.

But hurry. Sooner or later everybody will have heard about it.

Regularization, continuity, and the mystery of generalization in Deep Learning

A light and short note on a dense subset of a large space...

There's increasing interest in the very happy problem of why Deep Learning methods generalize so well in real-world usage. After all,

  • Successful networks have a ridiculous number of parameters. By all rights, they should be overfitting training data and doing awfully with new data.
  • In fact, they are large enough to learn the classification of entire data sets even with random labels (see the quick sketch after this list).
  • And yet, they generalize very well.
  • On the other hand, they are vulnerable to adversarial attacks with weird and entirely unnatural-looking inputs.
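
A quick illustration of the random-labels point (the sketch promised above), using a small scikit-learn multilayer perceptron on purely synthetic data: random inputs, random labels, and still near-perfect training accuracy. Nothing here reproduces the original experiments; it's just the cheapest way to see the effect.

```python
# An overparametrized network memorizing random labels (synthetic data).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # 500 random "inputs"
y = rng.integers(0, 2, size=500)        # labels with no relationship to X

net = MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=2000,
                    random_state=0)
net.fit(X, y)

print("training accuracy on random labels:", net.score(X, y))
# Typically close to 1.0: the network has enough capacity to memorize noise,
# which is exactly why its good generalization on real data needs explaining.
```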

One possible very informal way to think about this — I'm not claiming it's an explanation, just a mental model I'm using until the community reaches a consensus as to what's going on — is the following:

  • If the target functions we're trying to learn are (roughly speaking) nicely continuous (a non-tautological but often true property of the real world, where, e.g., changing a few pixels of a cat's picture rarely makes it cease to be one)...
  • and regularization methods steer networks toward that sort of functions (partly as a side effect of trying to avoid nasty gradient blowups)...
  • and your data set is more or less dense in whatever subset of all possible inputs is realistic...
  • ... then, by a frankly metaphorical appeal to a property of continuous functions in Hausdorff spaces (two continuous functions that agree on a dense subset agree everywhere), learning the target function well on the training set implies learning it well on that entire subset.

This is so vague that I'm having trouble keeping myself from making a political joke, but I've found it a wrong but useful model to think about how Deep Learning works (together with an, I think, essentially accurate model of Deep Learning as test-driven development) and how it doesn't.

As a bonus, this gives a nice intuition about why networks are vulnerable to weird adversarial inputs: if you only train the network with realistic data, no matter how large your data set, the most you can hope for is for it to be dense on the realistic subset of all possible inputs. Insofar as the mathematical analogy holds, you only get a guarantee of your network approximating the target function wherever you're dense; outside that subset — in this case, for unrealistic, weird inputs — all bets are off.

If this is true, protecting against adversarial examples might require some sort of specialized "realistic picture of the world" filters, as better training methods or more data won't help (in theory, you could add nonsense inputs to the data set so it can learn to recognize and reject them, but you'd need to pretty much cover the entire space of possible inputs with a dense set of samples, and if you're going to do that, then you might as well set up a lookup table, because you aren't really generalizing anymore).

Short story: Nice girl falls in love with vampire boy. Of course he kills her

(In honor of World Dracula Day)

Nice girl falls in love with vampire boy. Of course he kills her. Did she want him to? Did she understand his hunger wasn't metaphorical? Let's not assume innocence.

Perhaps between man and monster she chose the safer one. Better to know where you stand. Even if there is no such thing as turning; you are born a vampire or you die to feed one. What predator recruits from the herd? Curses are arbitrary, ecosystems have to make sense.

Let's not assume authorial motivation for the story. Identity. Species. Beautiful monsters don't need to dream about being loved, but they can regret not having been able to be otherwise than they are.

They can imagine a world where nice girl falls in love with vampire boy and survives. Innocent meals and sunlit warmth. Otherwise - otherwise she would have to share his night, his murders, his table. Know the taste of her people on his lips and her tongue. Die of guilt or embrace the hunt.

Let's not assume her niceness was more than gesture-deep. Maybe the monster's appeal wasn't his beauty. Maybe she first kissed him in search of that flavor.

Let's not assume monsters can always tell their own. One can regret losing somebody who was never there. Maybe she laughs as she reads your tales, at who you thought she was. At the future you thought you both wanted and could have.

Let's not assume her laugh doesn't hurt you, or that you don't love her for that.

Don't worry about opaque algorithms; you already don't know what anything is doing, or why

Machine learning algorithms are opaque, difficult to audit, unconstrained by ethics, and there's always the possibility they'll do the unthinkable when facing the unexpected. But that's true of most of our society's code base, and, in a way, they are the most secure part of it, because we haven't talked ourselves yet into a false sense of security about them.

There's a technical side to this argument: contemporary software is so complex, and the pressures under which it's developed so strong, that it's materially impossible to make sure it'll always behave the way you want it to. Your phone isn't supposed to freeze while you're making a call, and your webcam shouldn't send real-time surveillance to some guy in Ukraine, and yet here we are.

But that's not the biggest problem. Yes, some Toyota vehicles decided on their own to accelerate at inconvenient times because their software systems were mindbogglingly and unnecessarily complex, but nobody outside the company knew they were, because it was so legally difficult to get access to the code that even after the crashes it had to be inspected by an outside expert under conditions usually reserved for high-level intelligence briefings.

And there was the hidden code in VW engines designed to fool emissions tests, and the programs Uber uses to track you even while they say they aren't, or even Facebook's convenient tools to help advertisers target the emotionally vulnerable.

The point is, the main problem right now isn't what a self-driving car _might_ do when it has to make a complex ethical choice guided by ultimately unknowable algorithms, but what the car is doing at every other moment, reflecting ethical choices guided by corporate executives that might be unknowable in a philosophical, existential sense, but are worryingly familiar in an empirical one. You don't know most of what your phone is doing at any given time, not to mention other devices, it can be illegal to try to figure it out, and it can also be illegal if not impossible to change it even if you did.

And a phone is a thing you hold in your hand and can, at least in theory, put in a drawer somewhere if you want to have a discreet chat with a Russian diplomat. Even more serious are all the hidden bits of software running in the background, like the ones that can automatically flag you as a national security risk, or are constantly weighing whether you should be allowed to turn on your tractor. Even if the organization that developed or runs the software did its job uncommonly well and knows what it's doing down to the last bit, you don't and most likely never will.

This situation, perhaps first and certainly most forcefully argued against by Richard Stallman, is endemic to our society, and absolutely independent of the otherwise world-changing Open Source movement. Very little of the code in our lives is running in something resembling a personal computer, after all, and even when it does, it mostly works by connecting to remote infrastructures whose key algorithms are jealously guarded business secrets. Emphasis on secret, with a hidden subtext of especially from users.

So let's not get too focused on the fact that we don't really understand how a given neural network works. It might suddenly decide to accelerate your car, but "old fashioned" code could, and as a matter of fact did, and in any case there's very little practical difference between not knowing what something is doing because it's a cognitively opaque piece of code, and not knowing what something is doing because the company controlling the thing you bought doesn't want you to know and has the law on its side if it wants to send you to jail if you try to.

Going forward, our approach to software as users, and, increasingly, as citizens, cannot but be empirical paranoia. Just assume everything around you is potentially doing everything it's physically capable of (noting that being remotely connected to huge amounts of computational power makes even simple hardware quite more powerful than you'd think), and if any of that is something you don't find acceptable, take external steps to prevent it, above and beyond toggling a dubiously effective setting somewhere. Recent experience shows that FOIA requests, legal suits, and the occasional whistleblower might be more important for adding transparency to our technological infrastructure than your choice of operating system or clicking a "do not track" checkbox.

The insidious not-so-badness of technological underemployment, and why more education and better technology won't help

Mass technological unemployment is seen by some as a looming concern, but there are signs we're already living in an era of mass technological underemployment. It's not just an intermediate phase: its politics are toxic, it increases inequality, and it's very difficult to get out of.

Underemployment doesn't necessarily mean working fewer hours than you'd like, or switching jobs frequently. In fact, it often means working a lot, under psychologically and/or physically unhealthy conditions, for low pay, with few or no protections against abuse and firing, and doing your damndest to keep that job because the alternatives are worse. The United States is a paradigmatic case: unemployment is low, but wage growth has been stagnant for a very long while, and working conditions for large numbers of workers aren't particularly great.

Technology isn't the only culprit — choices in macroeconomic management, fiscal policy, and political philosophy are at least just as important — but it certainly hasn't helped. Yes, computers make anybody who knows how to use them much more productive, from the trucker who can use satellite measurements and map databases to identify their location and figure out an optimal route to the writer using a global information network to gather news and references for an article. But you see the problem: those are extremely useful things, but "using a GPS" and "googling" are also extremely easy things. Most jobs require some form of technological literacy, but once most people got enough of it to fulfill the requirements — thanks in part to decades of the computer industry's single-minded focus on ease of use — knowing how to use computers makes you more productive, but doesn't get you a better salary. Supply and demand.

More technology obviously won't come to the rescue here; the more advanced our computers become, the easier it is for people to interact with them to get a certain task done (until it's automated and you don't need to interact at all), which makes workers more productive, just not better paid. As most of the new kinds of jobs being created tend to be based on intensive use of technology, they are intrinsically prone to this kind of technological underemployment, and more vulnerable to eventual technological unemployment. The people building those tools are usually safe from this dynamic, but the scalability of mass production, and the even more impressive scalability of software systems, mean that you don't need many people to build those tools and infrastructure. And as we've become more adept at making software easy to use, we've become very good at giving it at best a neutral effect on wages.

Don't think "software engineer," think "underpaid person with an hourly contract working in the local warehouse of a highly advanced global logistics company under the control of a sophisticated software system." There are more of the latter than of the former (and things that used to look like the former have become easy enough to begin to look like the latter...).

More education is equally useless. *Not* to the individual: besides its non-economic significance, your relative level of education is one of the strongest predictors of your wages. But raising everybody's educational level, just like making everybody's technology easier to use, doesn't raise anybody's wages. By making people more productive, it makes it possible for companies to pay higher wages, but as long as there are more educated-enough people than positions you want to fill, it doesn't make it necessary, so of course (an "of course" contingent on a specific political philosophy) it doesn't happen.

Absent a huge exogenous increase in the demand for labor, or an infinitely more ominous exogenous decrease in its supply, the ongoing dynamic is that technology will keep being improved in power and ease of use, making workers more productive and at the same time giving them less bargaining power, and therefore stalling or reducing their wages and their working conditions.

The developing world faces this problem no less than the developed world, with the added difficulty, but also the ironic advantage, of starting behind in human, physical, and institutional capital. Investment and integration with the global economy can raise living standards very significantly from that baseline, but they eventually hit the same plateau (and usually at a much lower absolute level).

This isn't just an economic tragedy of missed opportunities, it's an extremely toxic political environment. Mass unemployment isn't politically viable for long — sooner or later, peacefully or not, some action is demanded, which might or might not be rational, humane, or work at all, but which definitely changes the status quo — but mass underemployment of this kind just keeps everybody busy holding on to crappy jobs and trying to learn enough new technology or soft skills or whatever's being talked about this month in order to keep holding on to it or even get a promotion to a slightly less crappy job where, not coincidentally, you're likely to end up using less technology (the marketing intern googling something vs the marketing VP having a power breakfast with a large customer). It sustains the idea that people could get a better life if they just studied and worked hard enough, which is true in an individual sense — highly skilled software engineers are very well paid — and absurd as a policy solution — once everybody can do what a highly skilled software engineer can do, then highly skilled software engineers won't be very well paid. Yet it's the kind of absurdity that sounds obvious, and therefore ends up driving politics and hence policy.

The fact that technology and education don't help with this problem doesn't mean we need less of either. There are other problems they help with, and for those problems we need more of both. But we do need to fight back against increased underemployment, not to avoid it shifting into mass unemployment, but because there's a real risk of it becoming widespread and structural, with serious social and political side effects.

There are workable solutions for this, but they lie in the realm of macroeconomics and fiscal policy, which ultimately depend on political philosophy, and that's a different post.

The case for blockchains as international aid

Blockchains aren't primarily financial tools. They are a political technology, and their natural field of application is the developing world.

The main problem a blockchain is meant to solve is lack of a trusted third party, which is at its root a problem of institutions, that is, politics. Bitcoin isn't used because it's convenient or scalable, but because it works as a rudimentary global financial system without having to trust any person or organization (at least that's the theory; poorly regulated financial intermediaries, like life, always find a way). The fact is that we do have a global financial system that is relatively trusted, but bitcoin users — speculators aside — think the system's checks don't work, think they work and want to avoid them, or some combination of both. I'm not judging.

Yet beyond those (huge) nooks and crannies in the developed world, there are billions of people who just don't have access to financial systems they can trust, and beyond finance, there are billions of people who don't have access to any kind of governance system they can trust. Honest cops, relatively functional bureaucracies, public records that don't change overnight: building a state that has and deserves a certain amount of trust takes generations, is always a work in progress, and is very difficult to even begin. Low trust environments are self-perpetuating, simply because individual incentives, risks, and choices become structurally skewed in that way.

Can blockchains solve this? No, obviously not.

But they can provide one small bit of extra buttressing, through a globally visible and verified public document ledger. Don't think in terms of financial transactions, but of more general documents: ownership transfer records, government contracts, some judicial and fiscal records, etc. Boring, old-fashioned, absolutely essential bits of information that everybody in a developed country just assumes without thinking are present, accessible, and reliable, but people elsewhere know can be anything but.

Blockchains working as a sort of global notary, set up by international development organizations but basing their reliability on the processing power donated by a multitude of CPU-rich but often money- and time-poor activists, would give citizens, businesses, and governments a way to fight some forms of mutual abuse. It won't, and cannot, prevent it, but it can at least raise the reputational cost of hiding, changing, or destroying documents that are utterly uninteresting to the likes of WikiLeaks, but that for a family can mean the difference between keeping or losing their home.
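
To make the "global notary" idea concrete, here's a deliberately minimal sketch in Python of the core primitive only: a hash-chained ledger of document fingerprints, where altering any past entry breaks every later link. A real blockchain adds distributed consensus, replication, and incentives on top of this, none of which appear here.

```python
# Minimal hash-chained document ledger (a sketch of the notary idea only).
import hashlib, json, time

def _hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class DocumentLedger:
    def __init__(self):
        self.chain = [{"index": 0, "doc_hash": None, "prev": None,
                       "timestamp": 0.0}]

    def register(self, document: bytes) -> dict:
        """Append the fingerprint of a document (deed, contract, record...)."""
        block = {"index": len(self.chain),
                 "doc_hash": hashlib.sha256(document).hexdigest(),
                 "prev": _hash(self.chain[-1]),
                 "timestamp": time.time()}
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """True iff no past entry has been altered."""
        return all(self.chain[i]["prev"] == _hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = DocumentLedger()
ledger.register(b"Title deed: parcel 42 transferred to the Perez family")
ledger.register(b"Municipal contract #2017-031, road maintenance")
print(ledger.verify())                  # True
ledger.chain[1]["doc_hash"] = "forged"  # tamper with an old record
print(ledger.verify())                  # False
```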

Even countries that have improved much in this area can strengthen their international reputations, and therefore their attractiveness for investments and migration, by this kind of globally verifiable transparency.

It's not sexy, it'll never make money, and it doesn't fully, or even mostly, solve the problem. It doesn't disrupt the business model of corruption and structural incompetence, and, best case, it'll put a small pebble in one or two undeservedly expensive shoes. Hopefully. Maybe.

But good governance is the core platform of a prosperous and healthy society. Getting it right is one of the hardest things, but also one of the most important things we can try to help each other do.

Short story: The Associate

I seldom know who's paying me or what they do; only my few friends lucky enough to have jobs do. My phone will buzz, and if I bid low enough I'll get to do things that will feel like isolated musical notes, meaningless on their own, in places that sometimes will appear later in the news in ways I won't be able to relate to my own actions but also won't try to.

A wordless feeling will keep me from adding to the pain and outrage of the comment threads, but the daily rent payments sometimes don't leave me enough for food, so I'm always hoping my phone will buzz with a new incomprehensible gig, and when it does I always bid low.

Statistics, Simians, the Scottish, and Sizing up Soothsayers

A predictive model can be a parametrized mathematical formula, or a complex deep learning network, but it can also be a talkative cab driver or a slides-wielding consultant. From a mathematical point of view, they are all trying to do the same thing, to predict what's going to happen, so they can all be evaluated in the same way. Let's look at how to do that by poking a little bit into a soccer betting data set, and evaluating it as if it were a statistical model we just fitted.

The most basic outcome you'll want to predict in soccer is whether a game goes to the home team, the visitors or away team, or is a draw. A predictive model is anything and anybody that's willing to give you a probability distribution over those outcomes. Betting markets, by giving you odds, are implicitly doing that: the higher the odds, the less likely they think the outcome is.

The Football-Data.co.uk data set we'll use contains results and odds from various soccer leagues for more than 37,000 games. We'll use the odds for the Pinnacle platform whenever available (those are closing odds, the last ones available before the game).

For example, for the Juventus-Fiorentina game on August 20, 2016, the odds offered were 1.51 for a Juventus win, 4.15 for a draw (ouch), and 8.61 for a Fiorentina victory (double ouch). Odds of 1.51 for Juventus mean that for each dollar you bet on Juventus, you'd get USD 1.51 if Juventus won (your initial bet included) and nothing if it didn't. These numbers aren't probabilities, but they imply probabilities. If platforms gave odds too high relative to the event's probability they'd go broke, while if they gave odds too low they wouldn't be able to attract bettors. On balance, then, we can read from the odds probabilities slightly higher than the betting market's best guesses, but, in a world with multiple competing platforms, not really that far from the mark. This sounds like a very indirect justification for using them as a predictive model, but every predictive model, no matter how abstract, has a lot of assumptions; a linear model assumes the relevant phenomenon is linear (almost never true, sometimes true enough), and looking at a betting market as a predictive model assumes the participants know what they are doing, the margins aren't too high, and there isn't anything too shady going on (not always true, sometimes true enough).

We can convert odds to probabilities by asking ourselves: if these odds were absolutely fair, how probable would the event have to be so that neither side of the bet can expect to earn anything? (a reasonable definition of "fair" here, with historical links to the earliest developments of the concept of probability). Calling P the probability and L the odds, we can write this condition as P \times L + (1-P) \times 0 = 1. The left side of the equation is how much you get on average — L when, with probability P, the event happens, and zero otherwise — and the right side says that on average you should get your dollar back, without winning or losing anything. From there it's obvious that P = \frac{1}{L}. For example, the odds above, if absolutely fair (which they never are, not completely, as people in the industry have to eat), would imply a probability for Juventus to win of 66.2%, and for Fiorentina of 11.6% (for the record, Juventus won, 2-1).
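
The conversion itself is one line of arithmetic; here it is in Python, applied to the odds quoted above so the numbers are easy to check:

```python
# Implied probabilities from decimal odds: P = 1 / L.
odds = {"Juventus win": 1.51, "draw": 4.15, "Fiorentina win": 8.61}

implied = {outcome: 1 / L for outcome, L in odds.items()}
for outcome, p in implied.items():
    print(f"{outcome}: {p:.1%}")
# Juventus win: 66.2%, draw: 24.1%, Fiorentina win: 11.6%

print(sum(implied.values()))  # ~1.019: the implied probabilities add up to a
# bit more than 1, which is the margin mentioned above ("people in the
# industry have to eat"); renormalizing them is a common, optional refinement.
```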

In this way we can put information into the betting platform (actually, the participants do), and read out probabilities. That's all we need to use it as a predictive model, and there's in fact a small industry dedicated to building betting markets tailored to predict all sorts of events, like political outcomes; when built with this use in mind, they are called prediction or information markets. The question, as with any model, isn't if it's true or not — unlike statistical models, betting markets don't have any misleading aura of mathematical certainty — but rather how good those probabilities are.

One natural way of answering that question is to compare our model with another one. Is this fancy machine learning model better than the spreadsheet we already use? Is this consultant better than this other consultant? Is this cab driver better at predicting games than that analyst on TV? Language gets very confusing very quickly, so mathematical notation becomes necessary here. Using the standard notation  P[x | y] for how likely I think it is that x will happen if y is true, we can compare the cab driver and the TV analyst by calculating

 \frac{P[ \textrm{the game results we saw} | \textrm{the cab driver knows what she's talking about}]}{P[\textrm{the game results we saw} | \textrm{the TV analyst knows what he's talking about}]}

If that ratio is higher than one, this means of course that the cab driver is better at predicting games than the TV analyst, as she gave higher probabilities to the things that actually happened, and vice versa. This ratio is called the Bayes factor.

In our case, the factors are easy to calculate, as P[\textrm{home win} | \textrm{odds are good predictors}] is just \textrm{probability of a home win as implied by the odds}, which we already know how to calculate. And because the probability of a sequence of independent events is the product of the individual probabilities, then

P[\textrm{any sequence of game results}|\textrm{odds are good predictors}] = \prod \textrm{probability of each result as implied by the odds}

In reality, those events aren't independent, but we're assuming participants in the betting market take into account information from previous games, which is part of what "knowing what you're talking about" intuitively means.

Note how we aren't calculating how likely a model is, just which one of two models has more support from the data we're seeing. To calculate the former value we'd need more information (e.g., how much you believed the model was right before looking at the data). This is a very useful analysis, particularly when it comes to making decisions, but often the first question is a comparative one.

Using our data set, we'll compare the betting market as a predictive model against a bunch of dart-throwing chimps as a predictive model (dart-throwing chimps are a traditional device in financial analysis). The chimps throw darts against a wall covered with little Hs, Ds, and As, so they always predict each event has a probability of \frac{1}{3}. Running the numbers, we get

 \textrm{odds vs chimps} = \frac{\prod \textrm{probability of each result as implied by the odds}}{\left(\frac{1}{3}\right)^{\textrm{number of games}}} = e^{4312.406}

This is (much) larger than one, so the evidence in the data favors the betting market over the chimps (very; see the link above for a couple of rules of thumb about interpreting those numbers). That's good, and not something to be taken for granted: many stock traders underperform chimps. Note that if one model is better than another, the Bayes factor comparing them will keep growing as you collect more observations and therefore become more certain of it. If you make the above calculation with a smaller data set, the resulting Bayes factor will be lower.

Are odds also better in this sense than just using a rule of thumb about how frequent each event is? In this data set, the home team wins about 44.3% of the time, and the visitors 29%, so we'll assign those outcome probabilities to every match.

 \textrm{odds vs rule of thumb} = \frac{\prod \textrm{probability of each result as implied by the odds}}{\prod \textrm{probability of each result as implied by the rule of thumb}} = e^{3342.303}

That's again overwhelming evidence in favor of the betting market, as expected.
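
For anyone who wants to reproduce the shape of these calculations on their own data, here's a sketch of the arithmetic in Python, working in log space (the raw products underflow long before 37,000 games); the probability and outcome lists are placeholders, not the actual Football-Data.co.uk values:

```python
# Log Bayes factors: the odds-implied model vs the chimps and vs a base-rate
# model. `implied` holds, for each game, the implied probability of the
# outcome that actually happened; `outcomes` holds the results ("H"/"D"/"A").
# Both lists are placeholders -- fill them in from the real data set.
import math
from collections import Counter

implied = [0.66, 0.45, 0.31, 0.52]      # placeholder values
outcomes = ["H", "H", "A", "D"]         # placeholder values

log_odds_model = sum(math.log(p) for p in implied)
log_chimps = len(outcomes) * math.log(1 / 3)

base_rates = Counter(outcomes)
base_probs = {k: v / len(outcomes) for k, v in base_rates.items()}
log_rule_of_thumb = sum(math.log(base_probs[o]) for o in outcomes)

print("log Bayes factor, odds vs chimps:       ", log_odds_model - log_chimps)
print("log Bayes factor, odds vs rule of thumb:", log_odds_model - log_rule_of_thumb)
# Positive values favor the odds-based model; on the full data set these come
# out at roughly 4312 and 3342, the exponents quoted above.
```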

We have statistics, soothsayers, and simians (chimpanzees are apes and thus, technically, simians, so the alliteration holds). What about the Scottish?

Let's look at how much better than the chimps the odds are for different countries and leagues or divisions (you could say that the chimps are our null hypothesis, but the concept of a null hypothesis is at best a confusing one and at worst a dangerous one: quoting the Zen of Python, explicit is better than implicit). The calculations will be the same, applied to subsets of the data corresponding to each division. One difference is that we're going to show the logarithm of the Bayes factor comparing the model implied by the odds and the model from the dart-throwing chimps (otherwise the numbers become impractically large), divided by the number of game results we have for each division. Why divide? As we said above, if one model is better than another, the more observations you accumulate, the higher the amount of evidence for one over the other you're going to get. It's not that the first model is getting better over time, it's just that you're getting more evidence that it's better. In other words, if model A is slightly better than model B but you have a lot of data, and model C is much better than model D but you only have a bit of data, then the Bayes factor between A and B can be much larger than the one between C and D: the size of an effect isn't the same thing as your certainty about it.

By dividing the (logarithm of) the Bayes factor by the number of games, we're trying to get a rough idea of how good the odds are, as models, comparing different divisions with each other. This is something of a cheat — they aren't models of the same thing! — but by asking of each model how quickly they build evidence that they are better than our chimps, we get a sense of their comparative power (there are other, more mathematically principled ways of doing this, and to a degree the method you choose has to depend on your own criteria of usefulness, which depends on what you'll use the model for, but this will suffice here).
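
In code, the per-division version is a one-liner on top of the previous sketch, assuming the data sits in a pandas DataFrame; the column names below are my own invention, not the data set's:

```python
# Per-division evidence rate: mean log Bayes factor per game, odds vs chimps.
# Assumes a DataFrame with one row per game and (hypothetical) columns:
#   "Div"        -- division code, e.g. "E0", "SC3"
#   "p_realized" -- implied probability of the outcome that actually happened
import numpy as np
import pandas as pd

games = pd.DataFrame({   # placeholder rows; use the real Football-Data set here
    "Div": ["E0", "E0", "E0", "SC3", "SC3", "SC3"],
    "p_realized": [0.62, 0.48, 0.55, 0.36, 0.33, 0.30],
})

per_division = (
    games.groupby("Div")["p_realized"]
         .apply(lambda p: (np.log(p).sum() - len(p) * np.log(1 / 3)) / len(p))
         .sort_values()
)
print(per_division)   # lower values: divisions the odds model "explains" worst
```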

I'm following here the naming convention for divisions used in the data set: E0 is the English Premier League, E1 is their Championship, etc (the larger the number, the "lower" the league), and the country prefixes are: E for England, SC for Scotland, D for Germany, I for Italy, SP for Spain, F for France, N for the Netherlands, B for Belgium, P for Portugal, T for Turkey, and G for Greece. There's quite a bit of heterogeneity inside each country, but with clear patterns. To make them clearer, let's sort the graph by value instead of division, and keep only the lowest and highest five:

The betting odds generate better models for the top leagues of Greece, Portugal, Spain, Italy, and England, and worse ones for the lower leagues, with the very worst modeled one being SC3 (properly speaking, the Scottish League Two; there's the Scottish). This makes sense: the larger leagues have a lot of bettors who want in, many of them professionals, so the odds are going to be more informative.

To go back to the beginning: everything that gives you probabilities about the future is a predictive model. Just because one is a betting market and the other is a chimpanzee, or one is a consultant and the other one is a regression model, it doesn't mean they can't and shouldn't be compared to each other in a meaningful way. That's why it's so critical to save the guesses and predictions of every software model and every "human predictor" you work with. It lets you go back over time and ask the first and most basic question in predictive data science:

How much better is this program or this guy than a chimp throwing darts?

When you think about it, is that really a question you would want to leave unanswered about anything or anybody you work with?

The Children of the Dead City

Dusk is coming and walking at night is no longer allowed, but the children still loiter near the black windowless building that looks like a tombstone for a giant or a town. A year ago most of their parents worked there, their hands the AI-controlled manipulators of the self-managed warehouse, but since then artificial hands have become good enough, and no more than a dozen humans tarnish the algorithmic purity of the logistics hub.

With so many residents unemployed, the town can no longer afford the software usage licenses that keep the smart city infrastructure working. Traffic lights cycle blindly without regard for people or cars. Medical help has to be called for manually, phones and buildings callously ignoring emergencies and uninterested in saving lives.

No unblinking mind watches over children on the streets. Something does, something nameless and uncaring, and parents have tried to explain that it's just an analytics company the town is selling the video feeds to, but they also tell them to be home early, and fret over their health more than before.

Like every physically vulnerable life form, children know when they are being lied to. They also know when a place is haunted.

Night has fallen, and the children finally leave the familiar presence of the warehouse's continuously thinking walls. The walk back home is scary and thrilling, the well-lighted streets only increasing the menace from the once soothing eyes on every pole and wall. The children move in packs, wordlessly alert, but some must walk alone to houses out of the way.

Not all of the children arrive on time. When apprehensive parents eventually go out searching for them, asking the city in vain for help, not all are found. A camera last saw them, a neural network recognized them, a database holds the memory. But the city is silent.

For a while no child walks unaccompanied, yet that cannot last forever, and the black monolith keeps calling to them with the familiar warmth of a place where everything sees, and thinks, and cares.

Why the most influential business AIs will look like spellcheckers (and a toy example of how to build one)

Forget voice-controlled assistants. At work, AIs will turn everybody into functional cyborgs through squiggly red lines under everything you type. Let's look at a toy example I just built (mostly to play with deep learning along the way).

I chose as a data set Patrick Martinchek's collection of Facebook posts from news organizations. It's a very useful resource, covering more than a dozen organizations and with interesting metadata for each post, but for this toy model I focused exclusively on the headlines of CNN's posts. Let's say you're a journalist/editor/social network specialist working for CNN, and part of your job is to write good headlines. In this context, a good headline could be defined as one having a lot of shares. How would you use an AI to help you with that?

The first step is simply to teach the AI about good and bad headlines. Patrick's data set included 28,300 posts with both the headline and the count of shares (there were some parsing errors, for which I chose just to ignore the data; in a production project the number of posts would've been larger). As what counts as a good headline depends on the organization, I defined a good headline as one that got a number of shares in the top 5% for the data set. This simplifies the task from predicting a number (how many shares) to a much simpler classification problem (good vs bad headline).
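
The labeling step itself is tiny; here's a sketch of it with pandas, assuming a DataFrame with headline and share-count columns (the column and variable names are mine, not the data set's):

```python
# Turning share counts into a binary "good headline" label (top 5% of shares).
import pandas as pd

posts = pd.DataFrame({     # placeholder rows; use the real CNN posts here
    "headline": ["Headline A", "Headline B", "Headline C", "Headline D"],
    "shares": [120, 45_000, 300, 80],
})

threshold = posts["shares"].quantile(0.95)
posts["good"] = (posts["shares"] >= threshold).astype(int)
print(posts)
# On the real ~28,300-post data set this marks roughly the top 1,400 headlines
# as "good", turning the regression problem into a classification one.
```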

The script I used to train the network to perform this classification was Denny Britz' classic Implementing a CNN for text classification in TensorFlow example. It's an introductory model, not meant to have production-level performance (also, it was posted on December 2015, and sixteen months in this field is a very long time), but the code is elegant, well-documented, and easy to understand and modify, so it was the obvious choice for this project. The only changes I made were adapting it to train the network without having to load all of the data in memory at the same time and replacing the parser with one of NLTK's.

After an hour of training on my laptop, testing the model against out-of-sample data gives an accuracy of 93% and a precision for the class of good headlines of 9%. The latter is the metric I cared about for this model: it means that 9% of the headlines the model marks as good are, in fact, good. That's about 80% better than random chance, which is... well, it's not that impressive. But that's after an hour of training with a tutorial example, and rather better than what you'd get from that data set using most other modeling approaches.

In any case, the point of the exercise wasn't to get awesome numbers, but to be able to do the next step, which is where this kind of model moves from a tool used by CNN's data scientists into one that turns writers into cyborgs.

Reaching again into NLTK's impressive bag of tricks, I used its part-of-speech tagger to identify the nouns in every bad headline, and then a combination of WordNet's tools for finding synonyms and the pluralizer in CLiPS' Pattern Python module to generate a number of variants for each headline, creating new variations using simple rewrites of the original one.

So for What people across the globe think of Donald Trump, the program suggested What people across the Earth think of Donald Trump and What people across the world think of Donald Trump. What's more, while the original headline was "bad," the model predicts that the last variation will be good. With a 9% precision for the class, it's not a sure thing, but it's almost twice the a priori probability of the original, which isn't something to sneeze at.
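
Here's a stripped-down sketch of that variant-generation step, using only NLTK and WordNet (no pluralizer and no scoring), applied to the same headline; the real pipeline also runs every variant through the trained classifier:

```python
# Generate simple headline variants by swapping common nouns for WordNet
# synonyms. Requires the usual NLTK data downloads (tokenizer, POS tagger
# model, and WordNet).
import nltk
from nltk.corpus import wordnet

def variants(headline):
    tokens = nltk.word_tokenize(headline)
    tagged = nltk.pos_tag(tokens)
    for i, (word, tag) in enumerate(tagged):
        if tag not in ("NN", "NNS"):          # only rewrite common nouns
            continue
        synonyms = {lemma.name().replace("_", " ")
                    for synset in wordnet.synsets(word, pos=wordnet.NOUN)
                    for lemma in synset.lemmas()}
        for synonym in sorted(synonyms - {word}):
            yield " ".join(tokens[:i] + [synonym] + tokens[i + 1:])

for variant in variants("What people across the globe think of Donald Trump"):
    print(variant)
# Each variant would then be scored by the classifier, keeping only the ones
# predicted to land in the "good headline" class.
```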

In another case, the program took Dog sacrifices life to save infant in fire, and suggested Dog sacrifices life to save baby in fire. The point of the model is to improve on intuition, and I don't have the experience of whoever writes CNN's post headlines, but that does look like it'd work better.

Where things go from a tool for data analysts to something that changes how almost everybody works is that nothing prevents a trained model from working in the background, constantly checking what you're writing — for example, the headline for your post — and suggesting alternatives. To grasp the true power a tool like this could have, don't imagine a web application that suggests changes to your headline, or even a tool in your CMS or text editor, but something more like your spellchecker. For example, the "headline" field in your web app will have attached to it a model trained on data specific to your organization (and/or on open data sets), which will underline your headline in red if it predicts it won't work well. Right-click on the text, and it'll show you some alternatives.

Or if the response to a customer you're typing might make them angry.

Or if the presentation you're building has the sort of look that works well on SlideShare.

Or if the code you're writing is similar to the kind of code that breaks your application's test suite.

Or if there's something fishy in the spreadsheet you're looking at.

Or... You get the idea. Whenever you have a classification model and a way to generate alternatives, you have a tool that can help knowledge workers do their work better, a tool that gets better over time — not just learning from its own experience, as humans do, but from the collective experience of the entire organization — and no reason not to use it.

"Artificial intelligence," or whatever label you want to apply to the current crop of technologies, is something that can, does, and will work invisibly as part of our infrastructure, and it's also at the core of dedicated data analysis, but it'll also change the way everybody works by having domain-specific models look in real time at everything you're seeing and doing, and making suggestions and comments. Microsoft's Clippy might have been the most universally reviled digital character before Jar Jar Binks, but we've come to depend on unobtrusive but superhuman spellcheckers, GPS guides, etc. Even now image editors work in this way, applying lots of domain-specific smarts to assist and subtly guide your work. As building models for human or superhuman performance on very specific tasks becomes accessible to every organization, the same will apply to almost every task.

It's already beginning to. We don't have, yet, the Microsoft Office of domain-specific AIs, and I'm not sure what that would look like, but, unavoidably, the fact that we can teach programs to perform better than humans in a list of "real-world" tasks that grows almost every week means that organizations that routinely do so — companies that don't wait for fully artificial employees, but that also don't neglect to enhance their employees with every better-than-human narrow AI they can build right now — have an increasing advantage over those that don't. The interfaces are still clumsy, there's no explicit business function or fancy LinkedIn position for it, and most workers, including ironically enough knowledge workers and people with leadership and strategic roles, still have to be convinced that cyborgization, ego issues aside, is a better career choice than eventual obsolescence. But the same barriers applied when business software first became available, and the crushing economic and business advantages made them irrelevant in a very short amount of time.

The bottom line: Even if you won't be replaced by an artificial intelligence, there will be many specific aspects of your work that AIs are already, or will soon be, able to do better than you, and if you can't or won't work with them as part of your daily routine, there's somebody who will. Knowing how to train and team up with software in an effective way will be one of the key work skills of the near future, and whether explicit or not, the "AI Resources Department" — a business function focused on constantly building, deploying, and improving programs with business-specific knowledge and skills — will be at the center of any organization's efforts to become and remain competitive.

Don't blame algorithms for United's (literally) bloody mess

It's the topical angle, but let's not blame algorithms for the United debacle. If anything, algorithms might be the way to reduce how often things like this happen.

What made it possible for a passenger to be hit and dragged off a plane to avoid inconveniencing an airline's personnel logistics wasn't the fact that the organization implements and follows quantitative algorithms, but the fact that it's an organization. By definition, organizations are built to make human behavior uniform and explicitly determined.

A modern bureaucratic state is an algorithm so bureaucrats will behave in homogeneous, predictable ways.

A modern army is an algorithm so people with weapons will behave in homogeneous, predictable ways.

And a modern company is an algorithm so employees will behave in homogeneous, predictable ways.

It's not as if companies used to be loose federations of autonomous decision-making agents applying both utilitarian and ethical calculus to their every interaction with customers. The lower you are in an organization's hierarchy, the less leeway you have to deviate from rules, no matter how silly or evil they prove to be in a specific context, and customers (or, for that matter, civilians in combat areas) rarely if ever interact with anybody who has much power.

That's perhaps a structural, and certainly a very old, problem in how humans more or less manage to scale up our social organizations. The specific problem in Dao's case was simply that the rules were awful, both ethically ("don't beat up people who are behaving according to the law just because it'll save you some money") and commercially ("don't do things that will get people viscerally and virally angry with you somewhere with cameras, which nowadays is anywhere with people.")

Part of the blame could be attributed to United's CEO, Oscar Munoz, and his tenuous grasp of even simulated forms of empathy, as manifested by his first and probably most sincere reaction. But hoping organizations will behave ethically or efficiently when and because they have ethical and efficient leaders is precisely why we have rules: one of the major points of a republic is that there are rules that constrain even the highest-ranking officers, so we limit both the temptation and the costs of unethical behavior.

Something of a work in progress.

So, yes, rules are or can be useful to prevent the sort of thing that happened to Dao. And to focus on current technology, algorithms can be an important part of this. In a perhaps better world, rules would be mostly about goals and values, not methods, and you would trust the people on the ground to choose well what to do and how to do it. In practice, due to a combination of the advantages of homogeneity and predictability of behavior, the real or perceived scarcity of people you'd trust to make those choices while lightly constrained, and maybe the fact that for many people the point of getting to the top is partially to tell people what to do, employees, soldiers, etc, have very little flexibility to shape their own behavior. To blame this on algorithms is to ignore that this has always been the case.

What algorithms can do is make those rules more flexible without sacrificing predictability and homogeneity. While it's true that algorithmic decision-making can have counterproductive behaviors in unexpected cases, that's equally true of every system of rules. But algorithms can take into account more aspects of a situation than any reasonable rule book could handle. As long as you haven't given your employees the power to override rules, it's irrelevant whether the algorithm can make better ethical choices than them — the incremental improvement happens because it can make a better ethical choice than a static rule book.

In the case of United, it'd be entirely possible for an algorithm to learn to predict and take into account the optics of a given situation. Sentiment analysis and prediction is after all a very active area of application and research. "How will this look on Twitter?" can be part of the utility function maximized by an algorithm, just as much as cost or time efficiencies.
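
As a purely hypothetical sketch of what that could look like (every name here is invented; the point is only that predicted outrage can sit in the same objective as money):

    def action_utility(action, cost_model, pr_risk_model, outrage_weight=50_000.0):
        """Hypothetical utility: operational savings minus a weighted PR-risk term."""
        savings = cost_model.expected_savings(action)    # USD saved by this action
        p_viral_outrage = pr_risk_model.predict(action)  # sentiment model: chance this goes viral, badly
        return savings - outrage_weight * p_viral_outrage

    # The scheduler then picks the feasible action with the highest utility, so
    # "drag a boarded passenger off the plane" loses to "keep raising the voucher offer".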

It feels quite dystopian to think that, say, ride-hailing companies should need machine learning models to prevent them from suddenly canceling trips for pregnant women going to the hospital in order to pick up a more profitable trip elsewhere; shouldn't that be obvious to everybody from Uber drivers to Uber CEOs? Yes, it should. And no, it isn't. Putting "morality" (or at least "a vague sense of what's likely to make half the Internet think you're scum") into code that can be reviewed, as — in the best case — a redundant backup to a humane and reasonable corporate culture, is what we already do in every organization. What we can and should do is teach algorithms to try to predict the ethical and PR impact of every recommendation they make, and to take that into account.

Whether they'll be better than humans at this isn't the point. The point is that, as long as we're going to have rules and organizations where people don't have much flexibility not to follow them, the behavioral boundaries of those organizations will be defined by that set of rules, and algorithms can function as more flexible and careful, and hence more humane, rules.

The problem isn't that people do what computers tell them to do (if you want, you can say that the root problem is when people do bad things other people tell them to do, but that has nothing to do with computers, algorithms, or AI). Computers do what people tell them. We just need to, and can, tell them to be more ethical, or at least to always take into account how the unavoidable YouTube video will look.

Deep Learning as the apotheosis of Test-Driven Development

Even if you aren't interested in data science, Deep Learning is an interesting programming paradigm; you can see it as "doing test-driven development with a ludicrously large number of tests, an IDE that writes most of the code, and a forgiving client." No wonder everybody's pouring so much money and brains into it! Here's a way of thinking about Deep Learning not as an application you're asked to code, but as a language to code with.

Deep Learning applies test-driven development as we're all taught to (and not always do): first you write the tests, and then you move from code that fails all of them to code that passes them all. One difference from the usual way of doing it, and the most obvious, is that you'll usually have anything from hundreds of thousands to Google-scale numbers of test cases in the form of pairs (picture of a cat, type of cute thing the cat is doing), or even a potentially infinite number that look like pairs (anything you try, how badly Donkey Kong kills you). This gives you a good chance that, if you selected or generated them intelligently, the test cases represent the problem well enough that a program that passes them will work in the wild, even if the test cases are all you know about the problem. It definitely helps that for most applications the client doesn't expect perfect performance. In a way, this lets you sidestep the problem of having to acquire and document domain knowledge, at least for reasonable-but-not-state-of-the-art levels of performance, which is especially hard to do for things like understanding cat pictures, because we just don't know how we do it.

The second difference between test-driven development with the usual tools and test-driven development with Deep Learning languages and runtimes is that the latter are differentiable. Forget the mathematical side of that: the code monkey aspect of it is that when a test case fails, the compiler can fix the code on its own.

Yep.

Once you stop thinking about neural networks as "artificial brains" or data science-y stuff, and look at them as a relatively unfamiliar form of bytecode — but, as bytecode goes, also a fantastically simple one — then all that hoopla about backpropagation algorithms is justified, because they do pretty much what we do: look at how a test failed and then work backwards through the call stack, tweaking things here and there, and then running the test suite again to see if you fixed more tests than you broke. But they do it automatically and very quickly, so you can dedicate yourself to collecting the tests and figuring out the large scale structure of your program (e.g. the number and types of layers in your network, and their topology) and the best compiler settings (e.g., optimizing hyperparameters and setting up TensorFlow or whatever other framework you're using; they are labeled as libraries and frameworks, but they can also be seen as compilers or code generators that go from data-shaped tests to network-shaped bytecode).
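
Here's the idea stripped down to a toy, framework-free example: the "test suite" is a pile of labeled points, the "program" starts as random weights, and every failed batch is used to work backwards to a fix (this is just plain logistic regression by gradient descent, nothing resembling production deep learning):

    import numpy as np

    rng = np.random.default_rng(0)

    # The "test suite": inputs and expected outputs (a toy, linearly separable set).
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    # The "program": completely random weights to start with.
    w, b = rng.normal(size=2), 0.0

    for step in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # run the tests
        error = p - y                           # how badly each one failed
        w -= 0.1 * (X.T @ error) / len(y)       # work backwards to tweak the code
        b -= 0.1 * error.mean()

    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    print("tests passing:", ((p > 0.5) == y).mean())  # close to 1.0 after training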

One currently confusing fact is that this is all rather new, so very often the same people who are writing a program are also improving the compiler or coming up with new runtimes, so it looks like that's what programming with Deep Learning is about. But that's just a side effect of being in the early "half of writing the program is improving gcc so it can compile it" days of the technology, where things improve by leaps and bounds (we have both a fantastic new compiler and the new Internet-scale computers to run it), but are also rather messy and very fun.

To go back to the point: from a programmer's point of view, Deep Learning isn't just a type of application you might be asked to implement. It's also a language to write things with, one with its own set of limitations and weak spots, sure, but also with the kind of automated code generation and bug fixing capabilities that programmers have always dreamed of, but by and large avoid because doing it with our usual languages involves a lot of maths and the kind of development timelines that make PMs either laugh or cry.

Well, it still does, but with the right language the compiler takes care of that, and you can focus on high-level features and getting the test cases right. It isn't the most intuitive way of working for programmers trained as we were, and it's not going to fully replace the other languages and methods in our toolset, but it's solving problems that we thought were impossible. How can a code monkey not be fascinated by that?

"Tactical Awareness" en Español

Esteban Flamini did what I didn't imagine was possible: he translated TACTICAL AWARENESS into Spanish, preserving both the plots of the stories and the word counts. His translation, like the original text, can be downloaded for free from his site.

Even if you're not interested in the stories, or if you've already read them, Esteban's version is worth reading, if only to appreciate a genuinely difficult translation done extremely well.

Short story: The Eater of Silicon Sins

His job is not to press the button. When he fails at his job, people don't die.

There used to be support groups for people like him, groups he wasn't supposed to attend but did anyway. They were for the people who worked with the most awful images the human mind could conceive, videos of violence and sexual abuse beyond any quaint nightmares they might have had before, flagging them so the psychological damage of seeing those videos — and knowing those things were happening at that very moment to some terrified person inarticulate with pain — would remain contained inside their own minds. They could barely afford food on gig economy rates, much less therapy, so they met online to not talk about what they couldn't, and half-heartedly and not often successfully prevent each other from killing themselves.

He would go to those groups to seek some simulacrum of health in their shared illness, yet there would always be a barrier between him and everybody else. What he sees every day isn't the crisp video of a carefully recorded personal hell, but the blurry real-time monitoring feed of a superhumanly fast combat robot moving, targeting, and shooting quicker than any human could. It would be impossible for him to decide faster and better than the robot which of the moving figures are enemy combatants, children trying to run from a war without fronts, or both.

So he never presses the button, and prays every night beyond statistical hope to have never let a terrified innocent die.

The groups went away when computers became better than humans at filtering out that kind of material, but he knows he will never be replaced. No matter how good the robots get, how superhumanly quick and accurate their autonomous reactions, there'll still be innocents dead whenever they are used for what they were built for; not because the technology is flawed, but because that's the tactically optimal tradeoff they've been configured for. His job is to take the blame for it, and only a human can do that.

He doesn't drink, nor take pills, nor beat his wife. He has no dangerous hobbies. He does his duty like any good soldier would do.

In his dreams he sees himself on a screen, his face framed by a targeting solution. The image stays still for an impossibly long time, yet he never presses the button.

.finis.

Short story: Dead Man's Trigger

My name is Rob, short for Roberta. I'm a private investigator, which means I'm good enough with social networks to do what the police does, just without the automated subpoenas and the retroactively legal hacking. It's not difficult, really. Nine times out of ten the obvious suspect did it. The bereaved know who did it, acquaintances know who did it, even the police know who did it.

So ten times out of ten I'm hired when the police pretends not to know who did it, when a judge pretends not to believe them, or when a jury pretends they've got reasonable doubt. I'm never hired to figure out who did it, despite the pretenses the client and I go through. I'm not even hired to find proof. I'm hired because once I've found, again, what everybody knew, and collected the proof they didn't need, I give them a burner email address.

They hire me for that email address. I don't like it, but I don't dislike it enough not to give it to them. It's my business to give the address, not what they do with it.

I can pretend not to know just as well as cops, judges, and juries do, but I can't lie to myself, not about this. Content sent to those addresses usually goes viral. Which by itself would be a weak form of revenge: The crimes the police decide not to solve, judges not to take to trial, and juries not to punish, are the kinds of crime many people cheer the criminal for. Shooting the "right" kind of person, more often than not. (My boyfriend was the right kind of person. Serious, sad, brilliant John. Did he know how he'd die when he wrote this program?)

But the evidence doesn't just go viral, it infects the right sort of group. I don't use the word metaphorically, or at least not much. I don't know who those people are, but I'm sure they aren't always the same. Depends on the crime, on the victim, and on tides I don't visit the right forums to feel the shifting of. I'm glad of that, for my sanity's sake. (John had to, if nothing else to teach the program to seek them. I didn't know him well, it turns out, while he knew exactly what I would and wouldn't do. I only get email addresses sent to me. Nothing more.)

I don't tell myself that the deaths that follow are coincidence. I don't dwell on how they are not. I sleep reasonably well.

I've stopped missing John.

.finis.

The new (and very old) political responsibility of data scientists

We still have a responsibility to prevent the ethical misuse of new technologies, as well as helping make their impact on human welfare a positive one. But we now have a more fundamental challenge: to help defend the very concept and practice of the measurement and analysis of quantitative fact.

To be sure, a big part of practicing data science consists of dealing with the multiple issues and limitations we face when trying to observe and understand the world. Data seldom means what its name implies it means; there are qualifications, measurement biases, unclear assumptions, etc. And that's even before we engage the useful but tricky work of making inferences off that data.

But the end result of what we do — and not only, or even mainly us, for this collective work of observation and analysis is one of the common threads and foundations of civilization — is usually a pretty good guess, and it's always better than closing your eyes and giving whatever number provides you with an excuse to do what you'd rather do. Deliberately messing with the measurement of physical, economic, or social data is a lethal attack on democratic practices, because it makes it impossible for citizens to evaluate government behavior. Defending the impossibility of objective measurement (as opposed to acknowledging and adapting to the many difficulties involved) is simply to give up on any form of societal organization different from mystical authoritarianism.

Neither attitude is new, but both have gained dramatically in visibility and influence during the last year. This adds to the existing ethical responsibilities of our profession a new one, unavoidably in tension with them. We not only need to fight against over-reliance on algorithmic governance driven by biased data (e.g. predicting behavior from records compiled by historically biased organizations) or the unethical commercial and political usage of collected information, but also, paradoxically, we need to defend and collaborate in the use of data-driven governance based on best-effort data and models.

There are forms of tyranny based on the systematic deployment of ubiquitous algorithmic technologies, and there are forms of obscurantism based on the use of cargo cult pseudo-science. But there are also forms of tyranny and obscurantism predicated on the deliberate corruption of data or even the negation of the very possibility of collecting it, and it's part of our job to resist them.

Economists and statisticians in Argentina, when previous governments deliberately altered some national statistics and stopped collecting others, rose to the challenge by providing parallel, and much more widely believed, numbers (among the first, the journalist and economist — a combination of skills more necessary with every passing year — Sebastián Campanario). Theirs weren't the kind of arbitrary statements that are frequently part of political discourse, nor did they reject official statistics because they didn't match ideological preconceptions or it was politically convenient to do so. Official statistics were technically wrong in their process of measurement and analysis, and for any society that aspires to meaningful self-government the soundness and availability of statistics about itself are an absolute necessity.

Data scientists are increasingly involved in the process of collection and analysis of socially relevant metrics, both in the private and the public sectors. We need to consistently refuse to do it wrong, and to do our best to do it correctly even, and especially, when we suspect other people are choosing not to. Nowcasting, inferring the present from the available information, can be as much of a challenge, and as important, as predicting the future. The fact that we might end up having to do it without being able to assume possibly flawed but honest data will be a problem, but it's one we have already begun to work on in other contexts. Some of the earliest applications of modern data-driven models in finance, after all, were in fraud detection.

We are all potentially climate scientists now: our massive observational efforts refuted with anecdotes, disingenuous visualizations touted as definitive proof, and eventually the very possibility of quantitative understanding violently mocked. We (still) have to make sure the economic and social impact of things like ubiquitous predictive surveillance and technology-driven mass unemployment are managed in positive ways, but this new responsibility isn't one we can afford to ignore.

Rush Hour

Three minutes ago you were in a traffic jam, one of dozens of drivers impatiently waiting for their cars to reboot and shake off whatever piece of malware had infected them through the city network. Now you're moving.

You're moving very, very fast. You can see every car ahead of you moving aside as if by magic, either on their own or pushed by another, their drivers as surprised as you are.

A few other cars both ahead and behind are moving just as fast as yours. They are all big ones. There's a certain, important building a few blocks ahead and a handful of seconds away.

You understand where the cars are accelerating towards and what for.

You don't scream until the car in front of you crashes through the wall.

.finis.

The Mental Health of Smart Cities

Not the mental health of the people living in smart cities, but that of the cities themselves. Why not? We are building smart cities to be able to sense, think, and act; their perceptions, thoughts, and actions won't be remotely human, or even biological, but that doesn't make them any less real.

Cities can monitor themselves with an unprecedented level of coverage and detail, from cameras to government records to the wireless information flow permeating the air. But these perceptions will be very weakly integrated, as information flows slowly, if at all, between organizational units and social groups. Will the air quality sensors in a hospital be able to convince most traffic to be rerouted further away until rush hour passes? Will the city be able to cross-reference crime and health records with the distribution of different businesses, and offer tax credits to, say, grocery stores opening in a place that needs them? When a camera sees you having trouble, will the city know who you are, what's happening to you, and who it should call?

This isn't a technological limitation. It comes from the way our institutions and business are set up, which is in turn reflected in our processes and infrastructure. The only exception in most parts of the world is security, particularly against terrorists and other rare but high-profile crimes. Organizations like the NSA or the Department of Homeland Security (and its myriad partly overlapping versions both within and outside the United States) cross through institutional barriers, most legal regulations, and even the distinction between the public and the private in a way that nothing else does.

The city has multiple fields of partial awareness, but they are only integrated when it comes to perceiving threats. Extrapolating an overused psychological term, isn't this a heuristic definition of paranoia? The part of the city's mind that deals with traffic and the part that deals with health will speak with each other slowly and seldom, the part that manages taxes with the one that sees the world through the electrical grid. But when scared, and the city is scared very often, and close to being scared every day, all of its senses and muscles will snap together in fear. Every scrap of information correlated in central databases, every camera and sensor searching for suspects, all services following a single coordinated plan.

For comparison, shopping malls are built to distract and cocoon us, to put us in the perfect mood to buy. So smart shopping malls see us as customers: they track where we are, where we're going, what we looked at, what we bought. They try to redirect us to places where we'll spend more money, ideally away from the doors. It's a feeling you can notice even in the most primitive "dumb" mall: the very shape of the space is built as a machine to do this. Computers and sensors only heighten this awareness; not your awareness of the space, but the space's awareness of you.

We're building our smart cities in a different direction. We're making them see us as elements needing to get from point A to point B as quickly as possible, taking little or no care of what's going on at either end... except when it sees us as potential threats, and it never sees or thinks as clearly and as fast as it does then. Much of the mind of the city takes the form of mobile services from large global companies that seldom interact locally with each other, much less with the civic fabric itself. Everything only snaps together when an alert is raised and, for the first time, we see what the city can do when it wakes up and its sensors and algorithms, its departments and infrastructure, are at least attempting to work coordinately toward a single end.

The city as a whole has no separate concept of what a person is, no way of tracing you through its perceptions and memories of your movements, actions, and context except when you're a threat. As a whole, it knows of "persons of interest" and "active situations." It doesn't know about health, quality of life, a sudden change in a neighborhood. It doesn't know itself as anything other than a target.

It doesn't need to be like that. The psychology of a smart city, how it integrates its multiple perceptions, what it can think about, how it chooses what to do and why, all of that is up to us. A smart city is just an incredibly complex machine we live in and whom we give life to. We could build it to have a sense of itself and of its inhabitants, to perceive needs and be constantly trying to help. A city whose mind, vaguely and perhaps unconsciously intuited behind its ubiquitous and thus invisible cameras, we find comforting. A sane mind.

Right now we're building cities that see the world mostly in terms of cars and terrorism threats. A mind that sees everything and puts together very little except when something scares it, where personal emergencies are almost entirely your own affair, but that becomes single-minded when there's a hunt.

That's not a sane mind, and we're planning to live in a physical environment controlled by it.

How to be data-driven without data...

...and then make better use of the data you get.

The usefulness of data science begins long before you collect the first data point. It can be used to describe very clearly your questions and your assumptions, and to analyze in a consistent manner what they imply. This is neither a simple exercise nor an academic one: informal approaches are notoriously bad at handling the interplay of complex probabilities, yet even the a priori knowledge embedded in personal experience and publicly available research, when properly organized and queried, can answer many questions that mass quantities of data, processed carelessly, wouldn't be able to, as well as suggest what measurements should be attempted first, and what for.

The larger the gap between the complexity of a system and the existing data capture and analysis infrastructure, the more important it is to set up initial data-free (which doesn't mean knowledge-free) formal models as a temporary bridge between both. Toy models are a good way to begin this approach; as the British statistician George E.P. Box wrote, all models are wrong, but some are useful (at least for a while, we might add, but that's as much as we can ask of any tool).

Let's say you're evaluating an idea for a new network-like service for specialized peer-to-peer consulting that will have the possibility of monetizing a certain percentage of the interactions between users. You will, of course, capture all of the relevant information once the network is running — and there's no substitute for real data — but that doesn't mean you have to wait until then to start thinking about it as a data scientist, which in this context means probabilistically.

Note that the following numbers are wrong: it takes research, experience, and time to figure out useful guesses. What matters for the purposes of this post is describing the process, oversimplified as it will be.

You don't know a priori how large the network will be after, say, one year, but you can look at other competitors, the size of the relevant market, and so on, and guess, not a number ("our network in one year will have a hundred thousand users"), but the relative likelihood of different values.

The graph above shows one possible set of guesses. Instead of giving a single number, it "says" that there's a 50% chance that the network will have at least a hundred thousand users, and a 5.4% chance that it'll have at least half a million (although note that decimal points in this context are rather pointless; a guess based on experience and research can be extremely useful, but will rarely be this precise). On the other hand, there's almost a 25% chance that the network will have less than fifty thousand users, and a 10% chance that it'll have less than twenty-eight thousand.
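
One way to encode that kind of guess is as a probability distribution with matching quantiles; for example (the specific family and parameters here are my own illustration, chosen to roughly reproduce the numbers above):

    from scipy import stats

    # Lognormal with median 100k users; sigma ~1.0 roughly matches the quantiles above.
    network_size = stats.lognorm(s=1.0, scale=100_000)

    print(network_size.sf(100_000))   # ~0.50  chance of at least 100k users
    print(network_size.sf(500_000))   # ~0.054 chance of at least 500k users
    print(network_size.cdf(50_000))   # ~0.24  chance of fewer than 50k users
    print(network_size.ppf(0.10))     # ~28k   10th percentile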

How do you build such a graph, or rather, how do you assemble the information represented on it? The answer will probably look surprisingly old-fashioned: by learning as much as you can about the topic, talking with people who know about it, exercising your judgment, and then using formal mathematics to force yourself to write your best guess in a way that's explicitly clear about what it says and what it doesn't. The first steps are things you were already doing to help you with your problem, but the last one is what will allow you to coordinate knowledge and experience from different sources to give you the best possible answer to your question, given whatever you know at that moment.

You can use the same process to codify your educated guesses about other key aspects of the application, like the rate at which members of the network will interact, and the average revenue you'll be able to get from each interaction. As always, neither these numbers nor the specific shape of the curves matter for this toy example, but note how different degrees and forms of uncertainty are represented through different types of probability distributions:

Clearly, in this toy model we're sure about some things like the interaction rate (measured, say, in interactions per month), and very unsure about others, like the average revenue per interaction. Thinking about the implications of multiple uncertainties is one of the toughest cognitive challenges, as humans tend to conceptualize specific concrete scenarios: we think in terms of one or at best a couple of states of the world we expect to happen, but when there are multiple interacting variables, even the most likely scenario might have a very low absolute probability.

Simulation software, though, makes this nearly trivial even for the most complex models. Here's, for example, the distribution of probabilities for the monthly revenue, as necessarily implied by our assumptions about the other variables:

There are scenarios where your revenue is more than USD 10M per month, and you're of course free to pick values for the variables so that this is one of the handful of specific scenarios you describe (perhaps the most common and powerful of the ways in which people pitching a product or idea exploit the biases and limitations of human cognition). But doing this sort of quantitative analysis forces you to be honest at least with yourself: if what you know and don't know is described by the distributions above, then you aren't free to tell yourself that your chance of hitting it big is anything other than microscopic, no matter how clear the image might be in your mind.
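
The mechanics behind a graph like that are a straightforward Monte Carlo simulation. The distributions below are stand-ins of my own, not the post's actual curves, but they show the shape of the calculation:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000  # simulated scenarios

    # Stand-in assumptions (illustrative only):
    users = rng.lognormal(mean=np.log(100_000), sigma=1.0, size=n)      # network size
    interactions_per_user = rng.uniform(2.0, 10.0, size=n)              # fairly certain
    monetized_share = rng.beta(2, 18, size=n)                           # ~10% of interactions
    revenue_per_interaction = rng.lognormal(np.log(0.15), 1.5, size=n)  # USD, very uncertain

    monthly_revenue = users * interactions_per_user * monetized_share * revenue_per_interaction

    print(np.percentile(monthly_revenue, [10, 50, 90]))
    print("P(revenue > USD 10M/month):", (monthly_revenue > 10_000_000).mean())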

That said, not getting USD 10M a month doesn't mean the idea is worthless; maybe you can break even and then use that time to pivot or sell it, or you just want to create something that works and is useful, and then grow it over time. Either way, let's assume your total costs are expected to be USD 200k per month (if this were a proper analysis and not a toy example, this wouldn't be a specific guess, but another probability distribution based on educated guesses, expert opinions, market surveys, etc). How do the probabilities look then?

You can answer this question using the same sort of analysis:

The inescapable consequence of your assumptions is that your chances of breaking even are 1 in 20. Can they be improved? One advantage of fully explicit models is that you can ask not just for the probability of something happening, but also about how things depend on each other.
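
Continuing the stand-in simulation above, the break-even question is a one-liner (the exact probability depends on the assumed distributions; with the post's actual curves it comes out to roughly 1 in 20):

    monthly_costs = 200_000  # the assumed fixed costs, in USD

    p_break_even = (monthly_revenue > monthly_costs).mean()
    print("P(break even):", p_break_even)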

Here are the relationships between the revenue, according to the model, and each of the main variables, with a linear best fit approximation superimposed:

As you can see, network size has the clearest relationship with revenue. This might look strange: under this kind of simple model, wouldn't multiplying the number of interactions by ten, keeping the monetization rate constant, also multiply the revenue by ten? Yes, but your assumptions say you can't multiply the number of interactions by more than a factor of five, which, together with your other assumptions, isn't enough to move your revenue very far. It isn't unreasonable to consider increasing interactions significantly to improve your chances of breaking even (or even of getting to USD 10M), but if you plan to push them outside the explicit range encoded in your assumptions, you have to explain why those assumptions were wrong. Always be careful when you do this: changing your assumptions to make possible something that would be useful if it were possible is one of humankind's favorite ways of driving directly into blind alleys at high speed.
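
The dependence question can also be asked directly of the simulated scenarios, for example with a crude correlation and linear fit per variable (again continuing the stand-in simulation above; which variable dominates depends entirely on the distributions you fed in):

    for name, values in [
        ("network size", users),
        ("interactions per user", interactions_per_user),
        ("monetized share", monetized_share),
        ("revenue per interaction", revenue_per_interaction),
    ]:
        slope, _ = np.polyfit(values, monthly_revenue, 1)
        corr = np.corrcoef(values, monthly_revenue)[0, 1]
        print(f"{name}: correlation {corr:.2f}, linear-fit slope {slope:.3g}")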

It's key to understand that none of this is really a prediction about the future. Statistical analysis doesn't really deal with predicting the future or even getting information about the present: it's all about clarifying the implications of your observations and assumptions. It's your job to make those observations and assumptions as good and relevant as possible, neither leaving out anything you know, nor pretending you know what you don't, or that you are more certain about something than you should be.

This problem is somewhat mitigated for domains where we have vast amounts of information, including, recently, areas like computer vision and robotics. But we have yet to achieve the same level of data collection in other key areas like business strategy, so there's no way of avoiding using expert knowledge... which doesn't mean, as we saw, that we have to ditch quantitative methods.

Ultimately, successful organizations do the entire spectrum of analysis activities: they build high-level explicit models, encode expert knowledge, collect as much high-quality data as possible, train machine learning models based on that, and exploit all of that for strategic analysis, automatization, predictive modeling, etc. There are no silver bullets, but you probably have more ammunition than you think.

Safe Travels

The almost absolute lack of TSA security measures in "your" queue is both insult and carrot, but as long as they still feel the need to offer a carrot things aren't really that bad. You mostly try to believe this when your son is looking at you with the relaxed smile of the unscared. It makes it easier to smile back.

Boarding is unnervingly fast, the plane small and old, the uniform rows of dark skins and headscarves an insult, the lack of angry whispers a carrot. You try to focus on your son, who's excited about his first flight although pretending not to. You think, and hope, he doesn't notice how everybody in the plane resembles his own family, or that he doesn't think they do — that he thinks skin and dress less important than the way some kids like soccer and some prefer VR games.

Believing this would make him a good man. Trusting that everybody does could get him lynched one day. For now, he sees neither carrots nor insults here, just a small window, the ground falling, and then the sky.

It breaks your heart as much as it lifts it, but when he looks again at you, you'll be waiting with a smile. And later de-boarding will be quick and your terminal will be small and somehow quaint, and you know one day you'll have to talk with him about such things, but for now you just look at his breathless expression reflected on the plane window, and tell yourself it isn't selfish to wish for you both just a little bit more of sky.

.finis.

The best political countersurveillance tool is to grow the heck up

The thing is, we're all naughty. The specifics of what counts as "wrong" depend on the context, but there isn't anybody on Earth so boring that they haven't done, or aren't doing, something they'd rather not have known worldwide.

Ordinarily this just means that, like every other social species, we learn pretty early how to dissimulate. But we aren't living in an ordinary world. As our environment becomes a sensor platform with business models bolted on top of it, private companies have access to enormous amounts of information about things that were ordinarily very difficult to find, non-state actors can find even more, and the most advanced security agencies... Well. Their big problem is managing and understanding this information, not gathering it. And all of this can be done more cheaply, scalably, and just better than ever before.

Besides issues of individual privacy, this has a very dangerous effect on politics wherever it's coupled with overly strict standards: it essentially gives a certain degree of veto power over candidates to any number of non-democratic actors, from security agencies to hacker groups. As much as transparency is an integral part of democracy, we haven't yet adapted to the kind of deep but selective transparency this makes possible, the US election being but the most recent, glaring, and dangerous example.

It will happen again, it will keep happening, and the prospect of technical or legal solutions is dim. This being politics, the structural solution isn't technical, but human. While we probably aren't going to stop sustaining the fiction that we are whatever our social context considers acceptable, we do need to stop reacting to "scandals" in an indiscriminate way. There are individual advantages to doing so, of course, but the political implications of this behavior, aggregated over an entire society, are extremely deleterious.

Does this mean anything goes? No, quite the contrary. It means we need to become better at discriminating between the embarrassing and the disqualifying, between the hurtful crime and the indiscretion, between what makes somebody dangerous to give power to, and what makes them somebody with very different and somewhat unsettling life choices. Because everybody has something "scandalous" in their lives that can and will be dug up and displayed to the world whenever it's politically convenient to somebody with the power to do it, and reacting to all of it in the same way will give enormous amounts of direct political power to organizations and individuals, everywhere and at all points in the spectrum of legality, that are among the least transparent and accountable in the world.

This means knowing the difference between the frowned upon and the evil. It's part of growing up, yet it's rarer, and more difficult, the larger and more interconnected a group becomes. Eventually the very concept of evil as something other than a faux pas disappears, and, historically, socially sanctioned totalitarianism follows because, while political power in nominally democratic societies seldom arrogates to itself the power to define what's evil, it has enormous power to change the scope of "adequate behavior."

We aren't going to shift our public morals to fully match our private behavior. We aren't really wired that way; we are social primates, and lying to each other is the way we make our societies work. But we are social primates living in an increasingly total surveillance environment vulnerable to multiple actors, a new (geo)political development with impossible technical solutions, but a very simple, very hard, and very necessary sociological fix: we just need to grow the heck up.

The informal sector Singularity

At the intersection of cryptocurrencies and the "gig economy" lies the prospect of almost self-contained shadow economies with their own laws and regulations, vast potential for fostering growth, and the possibility of systematic abuse.

There have always been shadow, "unofficial" economies overlapping and in some places overruling their legal counterparts. What's changing now is that technology is making possible the setup and operation of extremely sophisticated informational infrastructures with very few resources. The disruptive impact of blockchains and related technologies isn't any single cryptocurrency, but the fact that it's another building block for any group, legal or not, to operate their own financial system.

Add to this how easy it is to create fairly generic e-commerce marketplaces, reputation tracking systems, and, perhaps most importantly, purely online labor markets. For employers, the latter can be a flexible and cost-efficient way of acquiring services, while for many workers it's becoming a useful, and for some an increasingly necessary, source of income. Large rises in unemployment, especially those driven by new technologies, always increase the usefulness of this kind of labor market for employers in both regulated and unregulated activities, as a "liquid" market over sophisticated platforms makes it easy to continuously optimize costs.

You might call it a form of "Singularity" of the informal sector: there are unregulated or even fully criminal sectors that are technologically and algorithmically more sophisticated than the average (or even most) of the legal economy.

While most online labor markets are fully legal, this isn't always the case, even when the activity being contracted isn't per se illegal. One current example is Uber's situation in Argentina: their operation is currently illegal due to regulatory non-compliance, but, short of arresting drivers — something that's actually being considered, due in some measure to the clout of the cab drivers' union — there's nothing the government can do to completely stop them. Activities less visible than picking somebody up in a car — for example, anything you can do from a computer or a cellphone in your home — contracted over the internet and paid in a cryptocurrency or in any parallel payment system anywhere in the world are very unlikely to ever be visible to, or regulated by, the state or states that theoretically govern the people involved.

There are clear potential upsides to this. The most immediate one is that these shadow economies are often very highly efficient and technologically sophisticated by design. They can also help people avoid some of the barriers of entry that keep many people from full-time legal employment. A lack of academic accreditations, a disadvantaged socioeconomic background, or membership in an unpopular minority or age bracket can be a non-issue for many types of online work. In other cases they simply make possible types of work so new there's no regulatory framework for them, or that are impeded by obsolete ones. And purely online activities are often one of the few ways in which individuals can respond to economic downturns in their own country by supplying services overseas without intermediate organizations capturing most or all of the wage differential.

The main downside is, of course, that a shadow economy isn't just free from obsolete regulatory frameworks, but also free from those regulations meant to prevent abuse, discrimination, and fraud: minimum wages, safe working conditions, protection against sexual harassment, etc.

These issues might seem somewhat academic right now: most of the "gig economy" is either a secondary source of income, or the realm of relatively well-paid professionals. But technological unemployment and the increase in inequality suggest that this kind of labor market is likely to become more important, particularly for the lower deciles of the income distribution.

Assuming a government has the political will to attack the problem of a growing, technologically advanced, and mostly unregulated labor economy — for some, at least, this seems to be a favoured outcome rather than a problem — fines, arrests, etc, are very unlikely to work, at least in moderately democratic societies. The global experience with software and media piracy shows how extremely difficult it is to stop an advanced decentralized digital service regardless of its legality. Silk Road was shut down, but it was one site, and run by a conveniently careless operator. The size, sophistication, and longevity of the on-demand network attacks, hacked information, and illegal pornography sectors are a better indicator of the impossibility of blocking or taxing this kind of activity once supply and demand can meet online.

A more fruitful approach to the problem is to note that, given the choice, most people prefer to work inside the law. It's true that employers very often prefer the flexibility and lower cost of an unregulated "high-frequency" labor economy, but people offer their work in unregulated economies when the regulated economy is blocked to them by discrimination, the legal framework hasn't kept up with the possibilities of new technologies, or there simply isn't enough demand in the local economy, making "virtual exports" an attractive option.

The point isn't that online labor markets, reputation systems, cryptocurrencies, etc, are unqualified evils. Quite the contrary. They offer the possibility of wealthier, smarter economies with a better quality of life, less onerous yet more effective regulations for both employers and employees, and new forms of work. However, these changes have to be fully implemented. Upgrading the legal economy to take advantage of new technologies — and doing it very soon — isn't just a matter of not missing an opportunity, particularly for less developed economies. Absent a technological overhaul of how the legal economy works, more effective and flexible unregulated shadow economies are only going to keep growing; a lesser evil than effective unemployment, but not without a heavy social price.

For the unexpected innovations, look where you'd rather not

Before Bill Gates was a billionaire, before the power, the cultural cachet, and the Robert Downey Jr. portrayals, computers were for losers who would never get laid. Their potential was of course independent of these considerations, but Steve Jobs could become one of the richest people on Earth because he was fascinated with, and dedicated time to, something that cool kids — especially those from the wealthy families who could most easily afford access to computers — wouldn't have been caught dead playing with, or at least loving.

Geek, once upon a time, was an unambiguous insult. It was meant to humiliate. Dedicating yourself to certain things meant you'd pay a certain social price. Now, of course, things are better for that particular group; if nothing else, an entire area of intellectual curiosity is no longer stigmatized.

But our innovation-driven society is now locked into computer geeks as the source of change, which means it's going to be completely blindsided by whatever comes next.

Consider J. K. Rowling. Stephenie Meyer. E. L. James. It's significant that you might not recognize the last two names: Meyer wrote Twilight and James Fifty Shades of Grey. Those three women (and it's also significant that they are women) are among the best-selling and most widely influential writers of our time, and pretty much nobody in the publishing industry was even aware that there was a market for what they were doing. Theirs aren't just the standard stories of talented artists struggling to be published. By the standards of the (mostly male) people who ran and by and large still run the publishing industry, the stories they wrote were, to put it kindly, pointless and low-brow. A school for wizards where people died during a multi-volume malignant coup d'état? The love story of a teenager torn between her possessive werewolf friend and a teenage-looking, centuries-old vampire struggling to maintain self-control? Romantic sadomasochism from a female point of view?

Who'd read that?

Millions upon millions did. And then they watched the movies, and read the books again. Many of them were already writing the things they wanted to read — James's story was originally fan fiction in the Twilight universe — and wanted more. The publishing industry, supposedly in the business of figuring that out, had ignored them because they weren't a prestigious market (they were women, to be blunt, including very young women who "weren't supposed" to read long books, and older women who "weren't supposed" to care about boy wizards), and those weren't prestigious stories. When it comes to choosing where to go next, industries are as driven by the search for reputation as they are by the search for profit (except finance, where the search for profit regardless of everything else is the basis of reputation). Rowling and Meyer had to convince editors, and James's first surge of sales came through self-published Kindle books. The next literary phenomenon might very well bypass publishers, and if that becomes the norm then the question will be what the publishing industry is for.

Going briefly back to the IT industry, gender and race stereotypes are still awfully prevalent. The next J. K. Rowling of software — and there will be one — will have to go through a much more difficult path than she should've had to. On the other hand, a whole string of potential early investors will have painful almost-did-it stories they'll never tell anyone.

This isn't a modern development, but rather a well-established historical pattern. It's the underdogs — the sidelined, the less reputable — who most often come up with revolutionary practices. The "mechanical arts" that we now call engineering were once a disreputable occupation, and no land-owning aristocrat would have guessed that one day they'd sell their bankrupted ancestral homes to industrialists. Rich, powerful Venice began, or so its own legend tells, as a refugee camp. And there's no need to recount the many and ultimately fruitful ways in which the Jewish diaspora adapted to and ultimately leveraged the restrictions imposed everywhere upon them.

Today geographical distances have greatly diminished, and are practically zero when it comes to communication and information. The remaining gap is social — who's paid attention to, and what about.

To put it in terms of a litmus test, if you wouldn't be somewhat ashamed of putting it in a pitch deck, it might be innovative, brilliant, and a future unicorn times ten, but it's something people already sort-of see coming. And a candidate every one of your competitors would consider hiring is one that will most likely go to the biggest or best-paying one, and will give them the kind of advantage they already have. To steal a march on them — to borrow a tactic most famously used by Napoleon, somebody no king would have appointed as a general until he won enough wars to appoint kings himself — you need to hire not only the best of the obvious candidates, but also look at the ones nobody is looking at, precisely because nobody is looking at them. They are the point from which new futures branch.

The next all-caps NEW thing, the kind of new that truly shifts markets and industries, is right now being dreamed and honed by people you probably don't talk to about this kind of thing (or at all) who are doing weird things they'd rather not tell most people about, or that they love discussing but have to go online to find like-minded souls who won't make fun of them or worse.

Diversity isn't just a matter of simple human decency, although it's certainly that as well, and that should be enough. In a world of increasingly AI-driven hyper-corporations that can acquire or reproduce any technological, operational, or logistical innovation anybody but their peer competitors might come up with, it's the only reliable strategy to compete against them. "Black swans" only surprise you if you never bothered looking at the "uncool" side of the pond.

The Differentiable Organization

Neural networks aren't just at the fast-advancing forefront of AI research and applications, they are also a good metaphor for the structures of the organizations leveraging them.

DeepMind's description of their latest deep learning architecture, the Differentiable Neural Computer, highlights one of the core properties of neural networks: they are differentiable systems for performing computations. Generalizing the mathematical definition, for a system to be differentiable implies that it's possible to work backwards quantitatively from its current behavior to figure out the changes that should be made to the system to improve it. Very roughly speaking — I'm ignoring most of the interesting details — that's a key component of how neural networks are usually trained, and part of how they can quickly learn to match or outperform humans in complex activities beginning from a completely random "program." Each training round provides not only a performance measurement, but also information about how to tweak the system so it'll perform better the next time.
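
A one-knob caricature of what "differentiable" buys you (the performance curve here is invented; the point is only that each measurement tells you which way, and roughly how far, to move):

    def performance(price):                  # hypothetical, unknown-in-advance curve
        return -(price - 7.3) ** 2 + 50

    price, step = 2.0, 0.05
    for cycle in range(200):
        # Estimate the local slope from two measurements, then move uphill.
        slope = (performance(price + 0.01) - performance(price - 0.01)) / 0.02
        price += step * slope

    print(round(price, 2))                   # converges near the optimum, 7.3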

Learning from errors and adjusting processes accordingly is also how organizations are supposed to work, through project postmortems, mission debriefings, and similar mechanisms. However, for the majority of traditional organizations this is in practice highly inefficient, when it's possible at all.

  • Most of the details of how they work aren't explicit, but encoded in the organizational culture, workflow, individual habits, etc.
  • They have at best a vague informal model — encoded in the often mutually contradictory experience and instincts of personnel — of how changes to those details will impact performance.
  • Because most of the "code" of the organization is encoded in documents, culture, training, the idiosyncratic habits of key personnel, etc, they change only partially, slowly, and with far less control than implied in organizational improvement plans.

Taken together, these limitations — which are unavoidable in any system where operational control is left to humans — make learning organizations almost chimerical. Even after extensive data collection, without a quantitative model of how the details of its activities impact performance and a fast and effective way of changing them, learning remains a very difficult proposition.

By contrast, organizations that have automated low-level operational decisions and, most importantly, have implemented quick and automated feedback loops between their performance and their operational patterns, are, in a sense, the first truly learning organizations in history. As long as their operations are "differentiable" in the metaphorical sense of having even limited quantitative models that allow them to work backwards from observed performance to desirable changes — you'll note that the kind of problems the most advanced organizations have chosen to tackle are usually of this kind, beginning in fact relatively long ago with automated manufacturing — then simply by continuing their activities, even if inefficiently at first, they will be improving quickly and relentlessly.
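As a sketch of what "differentiable in the metaphorical sense" could look like operationally, imagine an entirely hypothetical knob (an inventory reorder point, say) adjusted automatically from a noisy performance metric; here the "derivative" is just a crude finite-difference probe of the outcome:

    import random

    # Hypothetical example: an operational knob adjusted automatically from a
    # noisy business metric, with no human in the loop. The metric function
    # stands in for "run operations for a while and measure the outcome."

    def observed_profit(reorder_point):
        # Stand-in for the real world: some unknown optimum (here, 70) plus noise.
        return -((reorder_point - 70.0) ** 2) + random.gauss(0, 5)

    def feedback_loop(knob=20.0, step=0.05, probe=1.0, cycles=200):
        for _ in range(cycles):
            # Probe the metric on both sides of the current setting...
            up = observed_profit(knob + probe)
            down = observed_profit(knob - probe)
            # ...estimate the local "gradient" by finite differences...
            slope = (up - down) / (2 * probe)
            # ...and move the knob in the direction that improves the metric.
            knob += step * slope
        return knob

    print(f"converged reorder point = {feedback_loop():.1f}")  # drifts toward 70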

Compare this pattern with an organization where learning only happens in quarterly cycles of feedback, performed by humans with a necessarily incomplete, or at least heavily summarized, view of low-level operations and of the impact on overall performance of each possible low-level change. Feedback delivered to humans who, with the best intentions and professionalism, will struggle to change individual and group behavior patterns that in any case will probably not be the ones with the most impact on downstream metrics.

It's the same structural difference observed between manually written software and trained and constantly re-trained neural networks; the former can perform better at first, but the latter's improvement rate is orders of magnitude higher, and sooner or later leaves the former in the dust. The last few years in AI have shown the magnitude of this gap, with software routinely learning in hours or weeks, from scratch, to play games, identify images, and perform other complex tasks, going from poor or absolutely null performance to, in some cases, surpassing human capabilities.

Structural analogies between organizations and technologies are always tempting and usually misleading, but I believe the underlying point is generic enough to apply: "non-differentiable" organizations aren't, and cannot be, learning organizations at the operational level, and sooner or later they aren't competitive with others that set up from the beginning automation, information capture, and the appropriate automated feedback loops.

While the first two are at the core of "big data" organizational initiatives, the last is still a somewhat unappreciated feature of the most effective organizations. Rare enough, for the moment, to be a competitive advantage.

When the world is the ad

Data-driven algorithms are effective not because of what they know, but as a function of what they don't. From a mathematical point of view, Internet advertising isn't about putting ads on pages or crafting seemingly neutral content. There's just the input — some change to the world you pay somebody or something to make — and the output — a change in somebody's likelihood of purchasing a given product or voting for somebody. The concept of multitouch attribution, the attempt to understand how multiple contacts with different ads influenced some action, is a step in the right direction, but it's still driven by a cosmology that sees ads as little gems of influence embedded in a larger universe that you can't change.

That's no longer true. The Internet isn't primarily a medium in the sense of something that is between. It's a medium in that we live inside it. It's the atmosphere through which the sound waves of information, feelings, and money flow. It's the spacetime through which the gravity waves from some piece of code, shifting from data center to data center according to some post-geographical search for efficiency, reach your car to suggest a route. And, in the opposite direction, it's how physical measurements of your location, activities — even physiological state — are captured, shared, and reused in ways that are increasingly difficult to know about, much less be aware of during our daily lives. Transparency of action often equals, and is used to achieve, opacity to oversight.

Everything we experience impacts our behavior, and each day more of what we experience is controlled, optimized, configured, personalized — pick your word — by companies desperately looking for a business model or methodically searching for their next billion dollars or ten.

Consider as a harbinger of the future that most traditional of companies, Facebook, a space so embedded in our culture that people older than credit cards (1950, Diners) use it without wonder. Amid the constant experimentation with the willingly shared content of our lives that is the company's daily business, they ran an experiment attempting to deliberately influence the mood of their users by changing the order of what they read. The ethics of that experiment are important to discuss now, and irrelevant to what will happen next, because the business implications are too obvious not to be exploited: some products and services are acquired preferentially by people in a certain mood, and it might be easier to change the mood of an already promising or tested customer than to find a new one.

If nostalgia makes you buy music, why wait until you feel nostalgic to show you an ad, when I can make sure you encounter mentions of places and activities from your childhood? A weapons company (or a law-and-order political candidate) will pay to place their ad next to a crime story, but if they pay more they can also make sure the articles you read before that, just their titles as you scroll down, are also scary ones, regardless of topic. Scary, that is, specifically for you. And knowledge can work just as well, and just as subtly: tracking everything you read, and adapting the text here and there, seemingly separate sources of information will give you "A" and "B," close enough for you to remember them when a third one offers to sell you "C." It's not a new trick, but with ubiquitous transparent personalization and a pervasive infrastructure allowing companies to bid for the right to change pretty much all you read and see, it will be even more effective.

It won't be (just) ads, and it won't be (just) content marketing. The main business model of the consumer-facing internet is to change what its users consume, and when it comes down to what can and will be leveraged to do it, the answer is of course all of it.

Along the way, advertising will once again drag into widespread commercial application, as well as public awareness, areas of mathematics and technology currently used in more specialized areas. Advertisers mostly see us — because their data systems have been built to see us — as black boxes with tagged attributes (age, searches, location). Collect enough black boxes and enough attributes, and blind machine learning can find a lot of patterns. What they have barely begun to do is to open up those black boxes to model the underlying process, the illogical logic by which we process our social and physical environment so we can figure out what to do, where to go, what to buy. Complete understanding is something best left to lovers and mystics, but every qualitative change in our scalable, algorithmic understanding of human behavior under complex patterns of stimuli will be worth billions in the next iteration of this arms race.

Business practices will change as well, if only as a deepening of current tendencies. Where advertisers now bid for space on a page or a video slot, they will be bidding for the reader-specific emotional resonance of an article somebody just clicked on, the presence of a given item in a background picture, or the location and value of an item in an Augmented Reality game ("how much to put a difficult-to-catch Pokemon just next to my Starbucks for this person, who I know has been out in the cold today long enough for me to believe they'd like a hot beverage?"). Everything that's controlled by software can be bid upon by other software for a third party's commercial purposes. Not much isn't, and very little won't be.

The cumulative logic of technological development, one in which printed flyers co-exist with personalized online ads, promises the survival of what we might by then call overt algorithmic advertising. It won't be a world with no ads, but one in which a lot of what you perceive is tweaked and optimized so that its collective effect, whether perceived or not, is intended to work as one.

We can hypothesize a subliminally but significantly more coherent phenomenological experience of the world — our cities, friendships, jobs, art — a more encompassing and dynamic version of the "opinion bubbles" social networks often build (in their defense, only magnifying algorithmically the bubbles we had already built with our own choices of friends and activities). On the other hand, happy people aren't always the best customers, so transforming the world into a subliminal marketing platform might end up not being very pleasant, even before considering the impact on our societies of leveraging this kind of ubiquitous, personalized, largely subliminal button-pushing for political purposes.

In any case, it's a race in and for the background, and one that has already started.

(Over)Simplifying Calgary too

One of the good side effects of scripting multi-stage pipelines to build a visualization like my over-simplified map of Buenos Aires is that processing a data source in a completely different format only requires writing a new pre-processing script — everything else remains the same.

While I had used CSV data for the Buenos Aires map, I got KML files for the equivalent land use data for the City of Calgary. The pipeline I had written expected use types tied to single points mapped onto a fixed grid, so I wrote a small Python script to extract the polygons defined in the KML file, overlay a grid on them, and assign to each grid point the land use value of the polygon that contained it.
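My original script isn't reproduced here, but a rough sketch of the general approach might look like the following, assuming a hypothetical calgary_land_use.kml file, that each Placemark's name carries the land use tag, and that shapely is available for the point-in-polygon tests:

    import xml.etree.ElementTree as ET
    from shapely.geometry import Point, Polygon

    KML_NS = "{http://www.opengis.net/kml/2.2}"

    def read_polygons(path):
        """Yield (land_use, Polygon) pairs from a KML file's Placemarks."""
        root = ET.parse(path).getroot()
        for placemark in root.iter(f"{KML_NS}Placemark"):
            # Where the land use tag actually lives depends on the real file;
            # here we assume the Placemark name carries it.
            land_use = placemark.findtext(f"{KML_NS}name", default="UNKNOWN")
            coords = placemark.find(f".//{KML_NS}coordinates")
            if coords is None or not coords.text:
                continue
            # KML coordinates are "lon,lat[,alt]" triplets separated by whitespace.
            points = [tuple(map(float, c.split(",")[:2])) for c in coords.text.split()]
            yield land_use, Polygon(points)

    def rasterize(polygons, min_x, max_x, min_y, max_y, step):
        """Assign to each grid point the land use of the polygon containing it."""
        grid = {}
        y = min_y
        while y <= max_y:
            x = min_x
            while x <= max_x:
                p = Point(x, y)
                for land_use, poly in polygons:
                    if poly.contains(p):
                        grid[(x, y)] = land_use
                        break
                x += step
            y += step
        return grid

    # Hypothetical file name, bounding box, and grid step -- adjust to the data.
    polygons = list(read_polygons("calgary_land_use.kml"))
    grid = rasterize(polygons, -114.3, -113.8, 50.8, 51.2, step=0.005)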

After that the analysis was straightforward. Here's the detailed map of land uses (at a lower resolution than the original data, as the polygons have been projected onto the point grid):

calgary-complex_sectors

Here's the smoothed-out map:

calgary-simple_sectors

This is how we split it into a puzzle of more-or-less single-use sectors:

calgary-simple_nodes

And here's how it looks when you forget the geometry and only care about labels and relative position (click to read the labels):

calgary-labels

Unlike Buenos Aires, I've never been to Calgary, but a quick look at online maps seems to support the above as a first approximation to the city's geography. I'd love to hear from somebody who actually knows the city whether and how it matches their subjective map of it.

(Over)Simplifying Buenos Aires

This is a very rough sketch of the city of Buenos Aires:

Label sketch of Buenos Aires

As the sketch shows, it's a big blob of homes (VIVIENDAs), with an office-ridden downtown to the East (OFICINAS) and a handful of satellite areas.

The sketch, of course, lies. Here's a map that's slightly less of a lie:

Land usage in Buenos Aires

Both maps are based on the 2011 land usage survey made available by the Open Data initiative of the Buenos Aires city government, more than 555k records assigning each spot to one of about 85 different use regimes. It's still a gross approximation — you could spend a lifetime mapping Buenos Aires, rewrite Ulysses for a porteño Leopold Bloom, and still not really know it — but already one so complex that I didn't add the color key to the map. I doubt anybody will want to track the distribution of points for each of the 85 colors.

Ridiculous as it sounds at first, I'd suggest we are using too much of the second type of graph, and not enough of the first. It's already a commonplace that data visualizations shouldn't be too complex, but I suspect we are overestimating what people want from a first look at a data set. Sometimes "big blob of homes with a smaller downtown blob due East" is exactly the level of detail somebody needs — the actual shape of the blobs being irrelevant.

The first graph, needless to say, was created programmatically from the same data set from which I graphed the second. It's not a difficult process, and the intermediate steps are useful on their own.

Beginning with the original graph above, you apply something like a smoothing brush to the data points (or a kernel, if you want to sound more mathematical); essentially, you replace the land use tag associated with each point with the majority of the uses in its immediate area, smoothing away the minor exceptions. As you'd expect, it's not that there aren't any businesses in Buenos Aires; it's just that, plot by plot, there are more homes, and when you smooth everything out, it looks more like a blob of homes. This leads to an already much simplified map:

Simplified map of Buenos Aires
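A minimal sketch of that majority smoothing, assuming the data has already been loaded into a dict mapping integer (column, row) grid cells to land use tags:

    from collections import Counter

    def smooth(grid, radius=2):
        """Replace each cell's tag with the majority tag in its neighborhood.

        `grid` maps (col, row) integer cells to land use tags; the radius is
        in cells, so the kernel is a (2 * radius + 1)^2 square.
        """
        smoothed = {}
        for (col, row), tag in grid.items():
            votes = Counter()
            for dc in range(-radius, radius + 1):
                for dr in range(-radius, radius + 1):
                    neighbor = grid.get((col + dc, row + dr))
                    if neighbor is not None:
                        votes[neighbor] += 1
            # Majority vote; isolated exceptions get absorbed by their surroundings.
            smoothed[(col, row)] = votes.most_common(1)[0][0]
        return smoothed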

Now, one interesting thing about most people's sense of space is that it's more topological than metrical; that is, we are generally better at knowing what's next to what than at knowing absolute sizes and positions. Data visualizations should go with the grain of human perceptual and cognitive instincts instead of against them, so one fun next step is to separate the blobs — contiguous blocks of points of the same (smoothed out) land use type — from each other, and show explicitly what's next to what. It looks like this:

Simple nodes

Nodes are scaled non-linearly, and we've filtered out the smaller ones, but we've already done programmatically something that we usually leave to the human looking at a map. We've done a napkin sketch of the city, much as somebody would draw North America as a set of rectangles with the right shared frontiers, but without much precision in the details. It wouldn't do for a geographical survey, but if you were an extraterrestrial planning to invade Canada, it would provide a solid first understanding of the strategic relevance of Mexico to your plans. From that last map to the first one, it's only a matter of remembering that you don't really care, at this stage, about the exact shape of each blob, just where they stand in relation to each other. So you replace the blobs with the appropriate land use label, and keep the edges between them. And presto, you have a napkin map.
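The blob-splitting step is essentially connected-component labeling over the smoothed grid, plus an adjacency graph between the resulting components; a rough sketch, under the same assumptions about the grid representation as above:

    from collections import deque

    def label_blobs(grid):
        """Group contiguous same-tag cells into blobs; return their tags and adjacency."""
        blob_of = {}           # cell -> blob id
        blob_tag = {}          # blob id -> land use tag
        next_id = 0
        for start in grid:
            if start in blob_of:
                continue
            # Flood fill (BFS) over 4-connected neighbors sharing the same tag.
            blob_tag[next_id] = grid[start]
            queue = deque([start])
            blob_of[start] = next_id
            while queue:
                col, row = queue.popleft()
                for nb in ((col + 1, row), (col - 1, row), (col, row + 1), (col, row - 1)):
                    if nb in grid and nb not in blob_of and grid[nb] == grid[start]:
                        blob_of[nb] = next_id
                        queue.append(nb)
            next_id += 1

        # Two blobs are adjacent if any of their cells touch: that's the napkin map.
        edges = set()
        for (col, row), blob in blob_of.items():
            for nb in ((col + 1, row), (col, row + 1)):
                other = blob_of.get(nb)
                if other is not None and other != blob:
                    edges.add((min(blob, other), max(blob, other)))
        return blob_tag, edges

The labels in blob_tag plus the edges are all the napkin map needs; the geometry of each blob can be thrown away.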

Yes, on the whole the example is rather pointless. Cities are actually the most over-mapped territories on the planet, at both the formal and informal level. Manhattan is an island, Vatican City is inside Rome, the Thames goes through London... In fact, the London Tube Map has become a cliché example of how to display information about a city in terms of connections instead of physical distance. Not to mention that a simplification process that leaves most of the city as a big blob of homes is certainly ignoring more information than you can afford to, even in a sketch.

Not that we usually do this kind of sketching, at least in our formal work with data. We are almost always cartographers when it comes to new data sets, whether geographical, spatial in a general sense, or just mathematically space-like. We change resolution, simplify colors, resist the temptation of over-using 3D, but keep it a "proper" map. Which is good; the world is complex enough that we can't afford not to do the best mapping we can.

However, once you automate the process of creating multiple levels of simplification and sketching as above, you'll probably find yourself at least glancing at the simplest (over)simplifications of your data sets. Probably not for presentations to internal or external clients, but for understanding a complex spatial data set, particularly a high-dimensional one: beginning with an over-simplified summary and then increasing the complexity is what you're going to do in your own mind anyway, so why not use the computer to help you out?

ETA: I just posted a similar map of Calgary.

The job of the future isn't creating artificial intelligences, but keeping them sane

Once upon a time, we thought there was such a thing as bug-free programming. Some organizations still do — and woe betide their customers — but after a few decades hitting that particular wall, the profession has by and large accepted that writing software is such an extremely complex intellectual endeavor that errors and unfounded assumptions are unavoidable. Even the most mathematically solid of formal methods has, if nothing else, to interact with a world of unstable platforms and unreliable humans, and what worked today will fail tomorrow.

So we spend time and resources maintaining what we already "finished," fixing bugs as they are found, and adapting programs to new realities as they develop. We have to, because when we don't, as when physical infrastructure isn't maintained, we save resources in the short term, but only on our way towards protracted ruin.

It's no surprise that this also happens with our most sophisticated data-driven algorithms. CVs and scrum boards are filled with references to the maintenance of this or that prediction or optimization algorithm.

But there's a subtle, not universal but still very prevalent, problem: those aren't software bugs. This isn't to say that implementations don't have bugs; being software, they do. But they are computer programs implementing inference algorithms, which work at a higher level of abstraction and have their own kinds of bugs, and those bugs don't leave stack traces behind.

A clear example is the experience of Google. PageRank was, without a doubt, among the most influential algorithms in the history of the internet, not to mention the most profitable, but as Google took the internet by storm, gaming PageRank became such an important business activity that "SEO" became a commonplace word.

From an algorithmic point of view this is simply a maintenance problem: PageRank assumed a certain relationship between link structure and relevance, based on the assumption that website creators weren't trying to fool it. Once this assumption became untenable, the algorithm had to be modified to cope with a world of link farms and text written with no human reader in mind.

In (very loosely equivalent) software terms, there was a new threat model, so Google had to figure out and apply a security patch. This is, for any organization facing a similar issue, a continual business-critical process, and one that can make or break a company's profitability (just ask anybody working on high-frequency trading). But not all companies apply to their data-driven algorithms, independently of their implementations, the same sort of detailed, continuous instrumentation and the development and testing methodologies they use to monitor and fix their software systems. The same data scientist who developed an algorithm is often in charge of monitoring its performance on a more or less regular basis; or, even worse, it's only a hit to business metrics that makes companies reassign their scarce human resources towards figuring out what's going wrong. Either monitoring and maintenance strategy would amount to criminal malpractice if we were talking about software, yet there are companies for which this is the norm.

Even more prevalent is the lack of automatic instrumentation for algorithms mirroring that for servers. Any organization with a nontrivial infrastructure is well aware of, and has analysis tools and alarms for, things like server load or application errors. There are equivalent concepts for data-driven algorithms — unmet statistical assumptions, wildly erroneous predictions — that should also be monitored in real time, and not collected (when the data is there) by a data scientist only after the situation has become bad enough to be noticed.
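As a hedged sketch of what that kind of instrumentation could look like (the thresholds, statistics, and alert hook below are all hypothetical, and a real monitor would track far more than the mean), even something as simple as comparing the live prediction distribution against a validation-time baseline already catches a surprising amount of trouble:

    import math

    class DriftMonitor:
        """Compare live prediction statistics against validation-time baselines.

        Everything here is illustrative: the thresholds, the choice of statistic,
        and the alert hook would all be specific to the model being watched.
        """

        def __init__(self, baseline_mean, baseline_std, z_threshold=4.0, window=1000):
            self.baseline_mean = baseline_mean
            self.baseline_std = baseline_std
            self.z_threshold = z_threshold
            self.window = window
            self.buffer = []

        def record(self, prediction):
            self.buffer.append(prediction)
            if len(self.buffer) >= self.window:
                self._check()
                self.buffer.clear()

        def _check(self):
            n = len(self.buffer)
            mean = sum(self.buffer) / n
            # z-score of the window mean against the validation baseline.
            z = abs(mean - self.baseline_mean) / (self.baseline_std / math.sqrt(n))
            if z > self.z_threshold:
                self.alert(f"prediction mean drifted: {mean:.3f} vs baseline "
                           f"{self.baseline_mean:.3f} (z = {z:.1f})")

        def alert(self, message):
            # Stand-in for whatever the organization already uses for paging.
            print(f"[ALGORITHM ALARM] {message}")

    # Hypothetical usage, wired next to the model in production:
    # monitor = DriftMonitor(baseline_mean=0.12, baseline_std=0.33)
    # for features in request_stream():          # hypothetical request source
    #     monitor.record(model.predict(features))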

None of this is news to anybody working with big data, particularly in large organizations centered around this technology, but we have yet to settle on a common set of technologies and practices, or even on universal agreement that they're needed.

These days nobody would dare deploy a web application trusting only server logs at the operating system level. Applications have their own semantics, after all, and everything in the operating system working perfectly is no guarantee that the app is working at all.

Large-scale prediction and optimization algorithms are just the same; they are often an abstraction running over the application software that implements them. They can be failing wildly, statistical assumptions unmet and parameters converging to implausible values, with nothing in the application layer logging even a warning of any kind.

Most users forgive a software bug much more easily than unintelligent behavior in avowedly intelligent software. As a culture, we're getting used to the fact that software fails, but many still buy the premise that artificial intelligence doesn't (this is contradictory, but so are all myths). Catching these errors as early as possible can only be done while algorithms are running in the real world, where the weird edge cases and the malicious users are, and this requires metrics, logs, and alarms that speak of what's going on in the world of mathematics, not software.

We haven't converged yet on a standard set of tools and practices for this, but I know many people who'll sleep easier once we have.

The future of machine learning lies in its (human) past

Superficially different in goals and approach, two recent algorithmic advances, Bayesian Program Learning and Galileo, are examples of one of the most interesting and powerful new trends in data analysis. It also happens to be the oldest one.

Bayesian Program Learning (BPL) is deservedly one of the most discussed modeling strategies of recent times, matching or outperforming both humans and deep learning models in one-shot handwritten character classification. Unlike many recent competitors, it's not a deep learning architecture. Rather (and very roughly) it understands handwritten characters as the output of stochastic programs that join together different graphical parts or concepts to generate versions of each character, and seeks to synthesize them by searching through the space of possible programs.

Galileo is, at first blush, a different beast. It's a system designed to extract physical information about the objects in an image or video (e.g., their movements), coupling a deep learning module with a 3D physics engine which acts as a generative model.

Although their domains and inferential algorithms are dissimilar, the common trait I want to emphasize is that they both have at their core domain-specific generative models that encode sophisticated a priori knowledge about the world. The BPL example knows implicitly, through the syntax and semantics of the language of its programs, that handwritten characters are drawn using one or more continuous strokes, often joined; a standard deep learning engine, beginning from scratch, would have to learn this. And Galileo leverages a proper, if simplified, 3D physics engine! It's not surprising that, together with superb design and engineering, these models show the performance they do.
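As a toy illustration of the general point (nothing like BPL or Galileo, just the smallest possible example of a prior doing real work), consider estimating some rate from a handful of observations with and without encoding what the domain already tells us; all the numbers are invented for the example:

    # Beta-binomial toy example: domain knowledge encoded as a prior.
    # Suppose domain experts know the rate in question is almost always in the
    # 2%-10% range; a Beta(2, 38) prior (mean 5%) encodes that belief.

    def posterior_mean(successes, trials, prior_a=1.0, prior_b=1.0):
        """Posterior mean of a Beta-Bernoulli model (flat prior by default)."""
        return (prior_a + successes) / (prior_a + prior_b + trials)

    successes, trials = 3, 10   # a tiny, noisy sample: 30% observed

    print("from scratch (flat prior):", posterior_mean(successes, trials))
    print("with domain prior Beta(2, 38):", posterior_mean(successes, trials, 2.0, 38.0))
    # The flat-prior estimate (~0.33) chases the noise in the tiny sample; the
    # informed estimate (~0.10) stays anchored near what the domain says is plausible.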

This is how all cognitive processing tends to work in the wider world. We are fascinated (and how could we not be?) by how much our algorithms can learn from just raw data. To be able to obtain practical results in multiple domains is impressive, and adds to the (recent, and, like all such things, ephemeral) mystique of the data science industry. But the fact is that no successful cognitive entity starts from scratch: there is a lot about the world that's encoded in our physiology (we don't need to learn to pump our blood faster when we are scared; to say that evolution is a highly efficient, massively parallel genetic algorithm is a bit of a joke, but also true, and what it has learned is encoded in whatever is alive, or it wouldn't be).

Going to the other end of the abstraction scale, for all of the fantastically powerful large-scale data analysis tools physicists use and in many cases depend on, the way even basic observations are understood is based on centuries of accumulated (or rather constantly refined) prior knowledge, encoded in specific notations, theories, and even theories about what theories can look like. Unlike most, although not all, industrial applications, data analysis in science isn't a replacement for explicitly codified abstract knowledge, but rather stands on its gigantic shoulders.

In parallel to continuous improvement in hardware, software engineering, and algorithms, we are going to see more and more often the deployment of prior domain knowledge as part of data science implementations. The logic is almost trivial: we have so much knowledge accumulated about so many things, that any implementation that doesn't leverage whatever is known in its domain is just not going to be competitive.

Just to be clear, this isn't a new thing, or a conceptual breakthrough. If anything, it predates the "take the data and model it" approach that's most popularly seen as "data science," and almost every practitioner, many of them coming from backgrounds in scientific research, is aware of it. It's simply that our data analysis tools have now become flexible and powerful enough for us to apply it with increasingly impressive results.

The difference in performance when this can be done, as I've seen in my own projects and is obvious in work like BPL and Galileo, has always been so decisive that doing things in any other way soon becomes indefensible except on grounds of expediency (unless of course you're working in a domain that lacks any meaningful theoretical knowledge... a possibility that usually leads to interesting conversations with the domain experts).

The cost is that it does shift significantly the way in which data scientists have to work. There are already plenty of challenges in dealing with the noise and complexities of raw data, before you start considering the ambiguities and difficulties of encoding and leveraging sometimes badly misspecified abstract theories. Teams become heterogeneous at a deeper level, with domain experts — many of them with no experience in this kind of task — not only validating the results and providing feedback, but participating actively as sources of knowledge from day one. Projects take longer. Theoretical assumptions in the domain become explicit, and therefore design discussions take much longer.

And so on and so forth.

That said, the results are well worth it. If data science is about leveraging the scientific method for data-driven decision-making, it behooves us to always remember that step zero of the scientific method is to get up to date, with some skepticism but with no less dedication, on everything your predecessors figured out.

The truly dangerous AI gap is the political one

The main short term danger from AI isn't how good it is, or who's using it, but who isn't: governments.

This impacts every aspect of our interaction with the State, beginning with the ludicrous way in which we have to move papers around (at best, digitally) to tell one part of the government something another part of the government already knows. Companies like Amazon, Google, or Facebook are built upon the opposite principle. Every part of them knows everything any part of the company knows about you (or at least it behaves that way, even if in practice there are still plenty of awkward silos).

Or consider the way every business and technical process is monitored and modeled in a high-end contemporary company, and contrast it with the opacity, most damagingly to themselves, of government services. Where companies strive to give increasingly sophisticated AI algorithms as much power as possible, governments often struggle to give humans the information they need to make the decisions, much less assist or replace them with decision-making software.

It's not that government employees lack the skills or drive. Governments are simply, and perhaps reasonably, biased toward organizational stability: they are very seldom built up from scratch, and a "fail fast" philosophy would be a recipe for untold human suffering instead of just a bunch of worthless stock options. Besides, most of the countries with the technical and human resources to attempt something like this are currently leaning to one degree or another towards political philosophies that mostly favor a reduced government footprint.

Under these circumstances, we can only expect the AI gap between the public and the private sector to grow.

The only areas where this isn't the case are, not coincidentally, the military and intelligence agencies, who are enthusiastic adopters of every cutting edge information technology they can acquire or develop. But these exceptions only highlight one of the big problems inherent in this gap: intelligence agencies (and to a hopefully lesser degree, the military) are by need, design, or their citizens' own faith the government areas least subject to democratic oversight. Private companies lose money or even go broke and disappear if they mess up; intelligence agencies usually get new top-level officers and a budget increase.

As an aside, even individuals are steered away from applying AI algorithms instead of consuming their services, through product design and, increasingly, laws that prohibit them from reprogramming their own devices with smarter or at least more loyal algorithms.

This is a huge loss of potential welfare — we are getting worse public services, and at a higher cost, than we could given the available technology — but it's also part of a wider political change, as (big) corporate entities gain operational and strategic advantages that shift the balance of power away from democratically elected organizations. It's one thing for private individuals to own the means of production, and another when they (and often business-friendly security agencies) have a de facto monopoly on superhuman smarts.

States originally gained part of their power through early and massive adoption of information technologies, from temple inventories in Sumer to tax censuses and written laws. The way they are now lagging behind bodes ill for the future quality of public services, and for democratic oversight of the uses of AI technologies.

It would be disingenuous to say that this is the biggest long- and not-so-long-term problem states are facing, but only because there are so many other things going wrong or still to be done. But it's something that will have to be dealt with; not just with useful but superficial online access to existing services, or with the use of internet media for public communication, but also with deep, sustained investment in the kind of ubiquitous AI-assisted and AI-delegated operations that increasingly underlie most of the private economy. Politically, organizationally, and culturally as near-impossible as this might look.

The recently elected Argentinean government has made credible national statistics one of its earliest initiatives, less an act of futuristic boldness than a return to the 20th century baseline of data-driven decision-making, a baseline whose abandonment by the previous government was not without large political and practical costs. By failing to make intensive use of AI technologies in their public services, most governments in the world are failing to measure up to the technological baseline of the current century, an almost equally serious oversight.

"Prior art" is just a fancy term for "too slow lawyering up"

They used to send a legal ultimatum before it happened. Now you just wake up one day and everything green is dead, because the plants are biotech and counter-hacking is a legal response to intellectual property theft, even if the genes in question are older than the country that granted the patent.

My daughter isn't looking at the rotting remains of her flower garden. Her eyes are locked into mine, with the intensity of a child too young not to take the world seriously. Are we going to jail?

No, I say, and smile. They only go personally after the big ones; for small people like us this destruction suffices.

She nods. Am I going to die?

I kneel and hug her. No, of course not, I say, with every bit of certainty I can muster. There's nothing patented in you, I want to add, but she's old enough to know that'd be a lie.

I feel her chest move and I realize she had been holding her breath. We stay together, just breathing. The air is filled with legal pathogens looking for illegal things to kill.

.finis.

The gig economy is the oldest one, and it's always bad news

Let's say you have a spare bedroom and you need some extra income. What do you do? You do more of what you've trained for, in an environment with the capital and tools to do it best. Anything else only makes sense if the economy is badly screwed up.

The reason is quite simple: unless you work in the hospitality industry, you are better — able to extract from it a higher income — at doing whatever else you're doing than you are at being a host, or you wouldn't take it up as a gig, but rather switch to it full time. Suppliers in the gig economy (as opposed to professionals freelancing in their area of expertise) are by definition working more hours, but less efficiently, whether because they don't have the training and experience, or because they aren't working with the tools and resources they'd take advantage of in their regular environments. The cheaper, lower-quality, badly regulated service they provide might be desirable to many customers, but this is achieved partly through de-capitalization. Every hour and dollar an accountant spends caring for a guest instead of, if he wants a higher income, doing more accounting or upgrading his tools, is a waste of his knowledge. From the point of view of overall capital and skill intensity, a professional low-budget hotel chain would be vastly more efficient over the long term (of course, to do that you need to invest capital in premises and so on, instead of in vastly cheaper software and marketing).

The only reason for an accountant, web designer, teacher, or whatnot to do "gigs" instead of extra hours, freelance work, or similar, is that there is no demand for their professional labor. While it's entirely possible that overtime or freelance work might be relatively less valuable than the equivalent time spent at their main job, for a gig to win out, their professional labor would have to earn them less than work for which they have little training and few tools. That's not what a capital- and skill-intensive economy looks like.

For a specific occupation falling out of favor, this is just the way of things. For wide swaths of the population to find themselves in this position, perhaps employed but earning less than they would like, and unable to trade more of their specialized labor for income, the economy as a whole has to be suffering from depressed demand. What's more, they still have to contend with competitors with more capital but still looking to avoid regulations (e.g., people buying apartments specifically to rent via Airbnb), in turn lowering their already low gig income.

This is a good thing if you want cheaper human-intensive services or have invested in Airbnb and similar companies, and it's bad news if you want a skill-intensive economy with proportionally healthy incomes.

In the context of the gig economy, flexibility is a euphemism for I have a (perhaps permanent!) emergency and can't get extra work, and efficiency refers to the liquidity of services, not the outcome of high capital intensity. And while renting a room or being an Uber driver might be less unpleasant than, and downright utopian compared to, the alternatives open to those without a room to rent or an adequate car, the argument that it's fun doesn't survive the fact that nobody has ever been paid to go and crash on other people's couches.

Neither Airbnb nor Uber is harmful in itself — who doesn't think cab services could use a more transparent and effective dispatch system? — but customer ratings don't replace training, certification, and other forms of capital investment. Shiny apps and cool algorithms aside, a growing gig economy is a symptom of an at least partially de-skilling one.

The Man Who Was Made A People

Gregory has two million evil twins. None of them is a person, but why would anybody care?

They are everywhere except in the world. They search the web, click on ads, make purchases, create profiles, favorite things, post comments. Being bots, they don't sleep or work; they do nothing but what they were programmed to do, hidden deep in some endless pool of stolen computing power they have been planted in like dragon's teeth.

They are him. Their profiles carry his name, his location, his interests, or variations close enough to be indistinguishable to even the most primitive algorithm. The pictures posted by the bots are all of men very similar to Gregory in skin tone, clothes, cellphone, car. And he knows they are watching him, because when he changes how he looks, they change as well.

They are evil. Most of their online activities are subtle mirrors of his own, but some deal with topics and people that most find abhorrent, and none more than himself. Violence, depravity, every form of hate and crime, and — worst of all — every statistically known omen of future violence and crime.

Driven by the blind genius of predictive algorithms, sites show Gregory increasingly dark things to look at and buy, and suggest friendships with unbalanced bigots of every kind. His credit score has crumbled. Journalism gigs are becoming scarce. Cops scowl as they follow him with eyes covered by smart glasses, one hand on their guns and the other on their radios. He no longer bothers to check his dating profile; the messages he receives are more disturbing than the replies he no longer gets.

He has begun to go out less, to use the web through anonymizing services, to take whatever tranquilizers he can afford. All of those are suspicious activities on their own, he knows, but what choice does he have? He spends his nights trying to figure out who or what he offended enough to have this all-too-real curse laid upon him. The list of possibilities is too large, what journalist's isn't?, and he's not desperate enough to convince himself there's any point to seeking forgiveness. He's scared that one day he might be.

Gregory knows how this ends. He has begun to click on links he wouldn't have. Some of the searches are his. Every night he talks himself out of buying a gun. So far.

He has begun to feel there are two million of him.

.finis.

Bitcoin is Steampunk Economics

From the point of view of its largest financial backers, the fact that Bitcoin combines 21st century computer science with 17th century political economy isn't an unfortunate limitation. It's what they want it for.

We have grown as used to the concept of money as to any other component of our infrastructure, but, all things considered, it's an astoundingly successful technology. Even in its simplest forms it helps solve the combinatorial explosion implicit in any barter system, which is why even highly restricted groups, like prison populations, implement some form of currency as one of the basic building blocks of their polities.

Fiat money is a fascinating iteration of this technology. It doesn't just solve the logistical problems of carrying with you an impractical amount of shiny metals or some other traditional reference commodity, it also allows a certain degree of systemic adaptation to external supply and demand shocks, and pulls macroeconomic fine-tuning away from the rather unsuitable hands of mine prospectors and international trading companies.

A protocol-level hack that increases systemic robustness in a seamless distributed manner: technology-oriented people should love this. And they would, if only that hack weren't, to a large degree... ugh... political. From the point of view of somebody attempting to make a ton of money by, literally, making a ton of money, the fact that a monetary system is a common good managed by a quasi-governmental centralized organization isn't a relatively powerful way to dampen economic instabilities, but an unacceptable way to dampen their chances of making said ton of money.

So Bitcoin was specifically designed to make this kind of adjustment impossible. In fact, the whole, and conceptually impressive, set of features that characterize it as a currency, from the distributed ledger to the anonymity of transfers to the mathematically controlled rate of bitcoin creation, presupposes that you can trust neither central banks nor financial institutions in general. It's a crushingly limited fallback protocol for a world where all central banks have been taken over by hyperinflation-happy communists.

The obvious empirical observation is that central banks have not been taken over by hyperinflation-happy communists. Central banks in the developed world have by and large mastered the art of keeping inflation low – in fact, they seem to have trouble doing anything else. True, there are always Venezuelas and Argentinas, but designing a currency based on the idea that they are at the cutting edge of future macroeconomic practice crosses the line from design fiction to surrealist engineering.

As a currency, Bitcoin isn't the future, but the past. It uses our most advanced technology to replicate the key features of an obsolete concept, adding some Tesla coils here and there for good effect. It's gold you can teleport; like a horse with an electric headlamp strapped to its chest, it's an extremely cool-looking improvement to a technology we have long superseded.

As computer science, it's magnificent. As economics, it's a steampunk affectation.

Where Bitcoin shines, relatively speaking, is in the criminal side of the e-commerce sector — including service-oriented markets like online extortion and sabotage — where anonymity and the ability to bypass the (relative) danger of (nominally, if not always pragmatically) legal financial institutions are extremely desirable features. So far Bitcoin has shown some promise not as a functional currency for any sort of organized society, but in its attempt to displace the hundred dollar bill from its role as what one of William Gibson's characters accurately described as the international currency of bad shit.

This, again, isn't an unfortunate side effect, but a consequence of the design goals of Bitcoin. There's no practical way to avoid things like central bank-set interest rates and taxes, without also avoiding things like anti-money laundering regulations and assassination markets. If you mistrust government regulations out of principle and think them unfixable through democratic processes — that is, if you ignore or reject political technologies developed during the 20th century that have proven quite effective when well implemented — then this might seem to you a reasonable price to pay. For some, this price is actually a bonus.

There's nothing implicit in contemporary technologies that justifies our sometimes staggering difficulties managing common goods like sustainably fertile lands, non-toxic water reservoirs, books written by people long dead, the antibiotic resistance profile of the bacteria whose planet we happen to live in, or, case in point, our financial systems. We just seem to be having doubts as to whether we should, doubts ultimately financed by people well aware that there are a few dozen deca-billion fortunes to be made by shedding the last two or three centuries' worth of political technology development, and adding computationally shiny bits to what we were using back then.

Bitcoin is a fascinating technical achievement mostly developed by smart, enthusiastic people with the best of intentions. They are building ways in which it, and other blockchain technologies like smart contracts, can be used to make our infrastructures more powerful, our societies richer, and our lives safer. That most of the big money investing in the concept is instead attempting to recreate the financial system of late medieval Europe, or to provide a convenient complement to little bags of diamonds, large bags of hundred dollar bills, and bank accounts in professionally absent-minded countries, when they aren't financing new and excitingly unregulated forms of technically-not-employment, is completely unexpected.

The price of the Internet of Things will be a vague dread of a malicious world

Volkswagen didn't make a faulty car: they programmed it to cheat intelligently. The difference isn't semantics, it's game-theoretical (and it borders on applied demonology).

Regulatory practices assume untrustworthy humans living in a reliable universe. People will be tempted to lie if they think the benefits outweigh the risks, but objects won't. Ask a person if they promise to always wear their seat belt, and the answer will be at best suspect. Test the energy efficiency of a lamp, and you'll get an honest response from it. Objects fail, and sometimes behave unpredictably, but they aren't strategic, they don't choose their behavior dynamically in order to fool you. Matter isn't evil.

But that was before. Things now have software in them, and software encodes game-theoretical strategies as well as it encodes any other form of applied mathematics, and the temptation to teach products to lie strategically will be as impossible for companies to resist in the near future as it has been for VW, steep as their punishment seems to be. As has always happened (and always will) in the area of financial fraud, they'll just find ways to do it better.

Environmental regulations are an obvious field for profitable strategic cheating, but there are others. The software driving your car, tv, or bathroom scale might comply with all relevant privacy regulations, and even with their own marketing copy, but it'll only take a silent background software upgrade to turn it into a discreet spy reporting on you via well-hidden channels (and everything will have its software upgraded all the time; that's one of the aspects of the Internet of Things nobody really likes to contemplate, because it'll be a mess). And in a world where every device interacts with and depends on a myriad others, devices from one company might degrade the performance of a competitor's... but, of course, not when regulators are watching.

The intrinsic challenge to our legal framework is that technical standards have to be precisely defined in order to be fair, but this makes them easy to detect and defeat. They assume a mechanical universe, not one in which objects get their software updated with new lies every time regulatory bodies come up with a new test. And even if all the software were always available for inspection, checking it for unwanted behavior would be unfeasible — more often than not, programs fail because the very organizations that made them haven't, or couldn't, make sure they behaved as intended.

So the fact is that our experience of the world will increasingly come to reflect our experience of our computers and of the internet itself (not surprisingly, as it'll be infused with both). Just as any user feels their computer to be a fairly unpredictable device full of programs they've never installed doing unknown things they've never agreed to, to benefit companies they've never heard of, inefficiently at best and actively malignant at worst (but how would you know?), cars, street lights, and even buildings will behave in the same vaguely suspicious way. Is your self-driving car deliberately slowing down to give priority to the higher-priced models? Is your green A/C really less efficient with a thermostat from a different company, or is it just not trying as hard? And your tv is supposed to only use its camera to follow your gestural commands, but it's a bit suspicious how it always offers Disney downloads when your children are sitting in front of it.

None of those things are likely to be legal, but they are going to be profitable, and, with objects working actively to hide them from the government, not to mention from you, they'll be hard to catch.

If a few centuries of financial fraud have taught us anything, it's that the wages of (regulatory) sin are huge, and punishment comes late enough that organizations fall into temptation time and again, regardless of the fate of their predecessors, or at least of those who were caught. The environmental and public health cost of VW's fraud is significant, but it's easy to imagine industries and scenarios where it'd be much worse. Perhaps the best we can hope for is that the avoidance of regulatory frameworks in the Internet of Things won't have the kind of occasional systemic impact that large-scale financial misconduct has accustomed us to.

We aren't uniquely self-destructive, just inexcusably so

Natural History is an accretion of catastrophic side effects resulting from blind self-interest, each ecosystem an apocalyptic landscape to the previous generations and a paradise to the survivors' thriving and well-adapted descendants. There was no subtle balance when the first photosynthetic organisms filled the atmosphere with the toxic waste of their metabolism. The dance of predator and prey takes its rhythm from the chaotic beat of famine, and its melody from an unreliable climate. Each biological innovation changes the shape of entire ecosystems, giving way to a new fleeting pattern that will only survive until the next one.

We think Nature harmonious and wise because our memories are short and our fearful worship recent. But we are among the first generations of the first species for which famine is no accident, but negligence and crime.

No, our destruction of the ecosystems we were part of when we first learned the tools of fire, farm, and physics is not unique in the history of our planet, it's not a sin uniquely upon us.

It is, however, a blunder, because we know better, and if we have the right to prefer to a silent meadow the thousands fed by the farms replacing it, we have no right to ignore how much water it's safe to draw, how much nitrogen we will have to use and where it'll come from, how to preserve the genes we might need and the disease resistance we already do. We made no promise to our descendants to leave them pandas and tigers, but we will indeed be judged poorly if we leave them a world changed by the unintended and uncorrected side effects of our own activities in ways that will make it harder for them to survive.

We aren't destroying the planet, couldn't destroy the planet (short of, in an ecological sense, sterilizing it with enough nuclear bombs). What we are doing is changing its ecosystems, and in some senses its very geology and chemistry, in ways that make it less habitable for us. Organisms that love heat and carbon in the air, acidic seas and flooded coasts... for them we aren't scourges but benefactors. Biodiversity falls as we change the environment with a speed, in an evolutionary scale, little slower than a volcano's, but the survivors will thrive and then radiate in new astounding forms. We may not.

Let us not, then, think survival a matter of preserving ecosystems, or at least not beyond what an aesthetic or historical sense might drive us to. We have changed the world in ways that make it worse for us, and we continue to do so far beyond the feeble excuses of ignorance. Our long term survival as a civilization, if not as a species, demands from us to change the world again, this time in ways that will make it better for us. We don't need biodiversity because we inherited it: we need it because it makes ecosystems more robust, and hence our own societies less fragile. We don't need to both stop and mitigate climate change because there's something sacred about the previous global climate: we need to do it because anything much worse than what we've already signed for might be too much for our civilization to adapt to, and runaway warming might even be too much for the species itself to survive. We need to understand, manage, and increase sustainable cycles of water, soil, nitrogen, and phosphorus because that's how we feed ourselves. We can survive without India's tigers. But collapse the monsoon or the subcontinent's irrigation infrastructure and at least half a billion people will die.

We wouldn't be the first species killed by our own blind success, nor the first civilization destroyed by a combination of power and ignorance, empty cities the only reminders of better architectural than ecological insight. We know better, and should act in a way befitting what we know. Our problem is no larger than our tools, our reach no further than our grasp.

The only question is how hard we'll make things for ourselves before we start working in earnest to build a better world, one less harsh to our civilization, or at least not untenably more so. The question is how many people will unnecessarily die, and what long-term price we'll pay for our delay.

The Girl and the Forest

The girl is crossing a frontier that exists only in databases. Her phone whispers frantically in her ear: crossing such a frontier triggers no low-priority notification, but the digital panic merited by a lethal navigational mishap. Cross a line between two indistinguishable plots of land and you become the legitimate target of automated guns, or an illegal person to be sent to a private working prison, or any number of other fates perhaps but not certainly worse than what you were leaving behind.

The frontier the girl is crossing separates a water-poor region from a barren desert, the invisible line a temporary marker of the ever-faster retreat of agricultural mankind. The region reacts to unwanted strangers with fewer robots but as much heavily armed dedication as any of the richer ones. But the girl is walking into the desert, and there are no defensive systems on her way. There is just the dead sand.

She doesn't carry enough water or food to get her to the other side.

* * *

The girl went to a hidden net site a friend had shared with her with the electronic whispers and half-incredulous sniggers other generations had reserved for the mysteries of sex. Sex wasn't much of a mystery to their generation, who had seen everything long before it could be understood with anything except her mind. But they had never been in a forest, and almost none of them ever would. They traded pictures and descriptions of how the desert looked before it was a desert, and tried to imagine the smell of a thousand acres of shadowed damp earth. It was a fad, for most. A phase. Youth and nostalgia are mutually incompatible states.

Yet for some their dreams of forests endured: they had uncovered something, a secret, found because they weren't welcomed into the important matters reserved for grownups. Inside the long abandoned monitoring network their parents' generation had used to attempt to manage the retreating forest, some of the sensors were still alive. Most of them were repeating a monotonous prayer of heat and sand to creators too ashamed of their failure to let themselves look back.

But some of the sensors chanted of water, and shadow, and biomass. The girl had seen the data in her phone, and half-felt a breeze of leaves and bark. What if satellite pictures showed a canyon that, yes, could be safe from the soil-stealing wind, but was as barren as everything else? What of that?

The girl thought of her parents, and of the child she had promised herself she wouldn't give to the barren earth, and with guilt that didn't slow her down, she took the least amount of water she thought would be enough to get her to the canyon, and went into the desert.

The dull sleepless intelligences inside the border cameras saw her leave, but would only alert a human if they saw her walking back.

* * *

The girl will barely reach the canyon, half-dying, clinging to her last bit of water as a talisman. There will be no forest there, nothing in the canyon but dry sand. But in the small caves between the rocks, where the geometry of stones has built small enclosed worlds of darkness, she'll find ugly, malevolently tenacious, and very much alive mushrooms, and around them the clothes of those who will have reached the canyon before her. Most of the clothes will be her size.

The girl will understand. She won't drink the last of her water, but give it to the mushrooms. Then she will lie down and close her eyes, and fall asleep in the shadow, surrounded by a forest at last.

.finis.

Asking a Shadow to Dance

Isomorphic is a mathematical term: it means of the same shape. This is a lie.

Every morning you wake up in the apartment you might have bought if you hadn't been married (but you were, and those identical apartments are not the same). Your car takes you through the same route you would have taken, to an office where you look into the blankness of a camera and the camera looks back. You see nothing. The camera sees the pattern of blood vessels on the back of your eyes, and opens your computer for you.

The interface you see is always the same, just patterns of changing data devoid of context. Patterns that a combination of raw genetic luck and years of training has made you flawlessly adept at understanding and controlling. The pattern on your screen changes five times each second. Faster than that, you move your fingers in a precise way, the skill etched in your muscles as much as in your brain. The pattern and your fingers dance together, and the dance makes the pattern stay in a shape that has no meaning outside itself. You have received almost every commendation they can give to someone doing your job. Only the man on the other side of the table has more. You have never seen his screen, and he has never seen yours.

The inertial idiocy of that security rule is sickening in its redundancy. You couldn't know what he's doing from the data on his screen any more than you can know what you are doing from what you see in yours. Sometimes you think you're piloting a drone swarm. Sometimes you're defending an infrastructure network, or attacking one. Twice you have felt a rhythm in the patterns almost like a heart, and wondered if you were killing somebody through some medical device.

But you don't know. That's the point. Whatever you could be doing, the shape of the data on your screen would be the same, all the necessary information to control, damage, defend, or kill, but scrubbed of all meaning tying it back to the real world. Isomorphism, the instructors called it.

But that's a lie. It's not the same, and it could never be.

You begin to lose sleep. Twice the camera on your computer has to learn a new pattern for the blood behind your eyes. Your performance doesn't suffer; the parts of your mind and body that do the work are not the ones grappling with a guilt that is larger because it's undefined. Your nightmares are shapeless: you dream of data and wake up unable to breathe.

One day you finally allow yourself to know that the man across the table enjoys his work. Always had. You had ignored him all those years, him and everything not in the data, but now you look at him with a wordless how? He makes a gesture with his head, come and see. An isomorphism that scrubs the data not only of meaning but also guilt.

You need it so much that you don't stop to think about the rules you're both breaking under the gaze of the security cameras. You just go around the table and look at his screen.

There's no isomorphism. There's nothing but truth, and you can neither watch nor stop watching. His fingers are dancing and his smile is joyful and he has always known what he was doing. And now you can, too.

You scramble back to your screen in blind haste. The patterns of data are innocent, you tell yourself, of everything you saw on that other screen, and so is the dance of your fingers. They just have the same shape, that's all.

You work as efficiently as ever. You wonder if you'll go crazy, and fear you won't, and know that neither outcome will change your shape.

.finis.

The Telemarketer Singularity

The future isn't a robot boot stamping on a human face forever. It's a world where everything you see has a little telemarketer inside it, one that knows everything about you and never, ever, stops selling things to you.

In all fairness, this might be a slight oversimplification. Besides telemarketers, objects will also be possessed by shop attendants, customer support representatives, and conmen.

What these much-maligned but ubiquitous occupations (and I'm not talking here about their personal qualities or motivations; by and large, they are among the worst exploited and personally blameless workers in the service economy) have in common is that they operate under strict and explicitly codified guidelines that simulate social interaction in order to optimize a business metric.

When a telemarketer and a prospect are talking, of course, both parties are human. But the prospect is, however unconsciously, guided by a certain set of rules about how conversations develop. For example, if somebody offers you something and you say no, thanks, the expected response is for that party to continue the conversation under the assumption that you don't want it, and perhaps try to change your mind, but not to say ok, I'll add it to your order and we can take it out later. The syntax of each expression is correct, but the grammar of the conversation as a whole is broken, always in ways specifically designed to manipulate the prospect's decision-making process. Every time you have found yourself talking on the phone with a telemarketer, or interacting with a salesperson, far longer than you wanted to, this was because you grew up with certain unconscious rules about the patterns in which conversations can end — and until they make the sale, they will neither initiate nor acknowledge any of them. The power isn't in their sales pitch, but in the way they are taking advantage of your social operating system, and the fact that they are working with a much more flexible one.

Some people, generally described by the not always precise term sociopath, are just naturally able to ignore, simulate, or subvert these underlying social rules. Others, non-sociopathic professional conmen, have trained themselves to be able to do this, to speak and behave in ways that bypass or break our common expectations about what words and actions mean.

And then there are telemarketers, who these days work with statistically optimized scripts that tell them what to say in each possible context during a sales conversation, always tailored according to extensive databases of personal information. They don't need to train themselves beyond being able to convey the right emotional tone with their voices: they are, functionally, the voice interface of a program that encodes the actual sales process, and that, logically, has no need to conform to any societal expectation of human interaction.

It's tempting to call telemarketers and their more modern cousins, the computer-assisted (or rather computer-guided) sales assistants, the first deliberately engineered cybernetic sociopaths, but this would miss the point that what matters, what we are interacting with, isn't a sales person, but the scripts behind them. The person is just the interface, selected and trained to maximize the chances that we will want to follow the conversational patterns that will make us vulnerable to the program behind.

Philosophers have long toyed with a thought experiment called the Chinese Room: There is a person inside a room who doesn't know Mandarin, but has a huge set of instructions that tells her what characters to write in response to any combination of characters, for any sequence of interactions. The person inside doesn't know Mandarin, but anybody outside who does can have an engaging conversation by slipping messages under the door. The philosophical question is, who is the person outside conversing with? Does the woman inside the room know Mandarin in some sense? Does the room know?

Telemarketers are Chinese Rooms turned inside-out. The person is outside, and the room is hidden from us, and we aren't interacting socially with either. We only think we do, or rather, we subconsciously act as if we do, and that's what makes cons and sales much more effective than, rationally, they should be.

We rarely interact with salespeople, but we interact with things all the time. Not because we are socially isolated, but because, well, we are surrounded by things. We interact with our cars, our kitchens, our phones, our websites, our bikes, our clothes, our homes, our workplaces, and our cities. Some of them, like Apple's Siri or the Sims, want us to interact with them as if they were people, or at least consider them valid targets of emotional empathy, but what they are is telemarketers. They are designed, and very carefully, to take advantage of our cultural and psychological biases and constraints, whether it's Siri's cheerful personality or a Sim's personal victories and tragedies.

Not every thing offers us the possibility of interacting with it as if it were human, but that doesn't stop them from selling to us. Every day we see the release of more smart objects, whether it's consumer products or would-be invisible pieces of infrastructure. Connected to each other and to user profiling databases, they see us, know us, and talk to each other and to their creators (and to their creators' "trusted partners," who aren't necessarily anybody you have even heard of) about us.

And then they try to sell us things, because that's how the information economy seems to work in practice.

In some sense, this isn't new. Expensive shoes try to look cool so other people will buy them. Expensive cars are in a partnership with you to make sure everybody knows how awesome they make you look. Restaurants hope that some sweet spot of service, ambiance, food, and prices will make you a regular. They are selling themselves, as well as complementary products and services.

But smart objects are a qualitatively different breed, because, being essentially computers with some other stuff attached to them, their main function might not be the one you bought them for.

Consider an internet-connected scale that not only keeps track of your weight, but also sends you congratulatory messages through a social network when you reach a weight goal. From your point of view, it's just a scale that has acquired a cheerful personality, like a singing piece of furniture in a Disney movie, but from the point of view of the company that built and still controls it, it's both a sensor giving them information about you, and a way to tell you things you believe are coming from something – somebody who knows you, in some ways, better than friends and family. Do you believe advertisers won't know whether to sell you diet products or a discount coupon for the bakery around the corner from your office? Or, even more powerfully, that your scale won't tell you You have earned yourself a nice piece of chocolate cake ;) if the bakery chain is the one who purchased that particular "pageview?"

Let's go to the core of advertising: feelings. Much of the Internet is paid for by advertisers' belief that knowing your internet behavior will tell them how you're feeling and what you're interested in, which will make it easier to sell things to you. Yet browsing is only one of the things we do that computers know about in intimate detail. Consider the increasing number of internet-connected objects in your home that are listening to you. Your phone is listening for your orders, but that doesn't mean that's all it's listening for. The same goes for your computer, your smart TV (some of which are actually looking at you as well), even some children's dolls. As the Internet of Things grows way beyond the number of screens we can deal with, or apps we are willing to use to control them, voice will become the user interface of choice, just like smartphones overtook desktop computers. That will mean that possibly dozens of objects, belonging to a handful of companies, will be listening to you and selling that information to whatever company pays enough to become a "trusted partner." (Yes, this is and will remain legal. First, because we either don't read EULAs or do and try not to think about them. And second, because there's no intelligence agency on the planet that won't lobby to keep it legal.)

Maybe they won't be reporting everything you say verbatim; that will depend on how much external scrutiny there is on the industry. But your mood (did you yell at your car today, or sing along as you drove?), your movements, the time of day you wake up, which days you cook and which days you order takeout? Everybody trying to sell things to you will know all of this, and more.

That will be just an extension of the steady erosion of our privacy, and even of our expectation of it. More delicate will be the way in which our objects will actively collaborate in this sales process. Your fridge's recommendations when you run out of something might be oddly restricted to a certain brand, and if you never respond to them, shift to the next advertiser with the best offer — that is, the most profitable for whoever is running the fridge's true program, back in some data center somewhere. Your watch might choose to delay low-priority notifications while you're watching a commercial from a business partner or, more interestingly, choose to interrupt you every time there's a competitor's commercial. Your kitchen will tell you that it needs some preventive maintenance, but there's a discount on Chinese takeout if you press that button or just say "Sure, Kitchen Kate." If you put it on shuffle, your cloud-based music service will tailor its only seemingly random selection based on where you are and what the customer tracking companies tell it you're doing. No sad music when you're at the shopping mall or buying something online! (Unless they have detected that you're considering buying something out of nostalgia or fear.) There's already a sophisticated industry dedicated to optimizing the layout, sonic background, and even smells of shopping malls to maximize sales, much in the same way that casinos are thoroughly designed to get you in and keep you inside. Doing this through the music you're listening to is just a personalized extension of these techniques, an edge that every advertiser is always looking for.

If, in defense of old-school human interaction, you go inside some store to talk with an actual human being instead of an online shop, a computer will be telling each sales person, through a very discreet earbud, how you're feeling today, and how to treat you so you'll feel you want to buy whatever they are selling, the functional equivalent of almost telepathic cold reading skills (except that it won't be so cold; the sales person doesn't know you, but the sales program... the sales program knows you, in many ways, better than you know yourself). In a rush? The sales program will direct the sales person to be quick and efficient. Had a lousy day? Warmth and sympathy. Or rather simulations thereof; you're being sold to by a sales program, after all, or an Internet of Sales Programs, all operating through salespeople, the stuff in your home and pockets, and pretty much everything in the world with an internet connection, which will be almost everything you see and most of what you don't.

Those methods work, and have probably worked since before recorded history, and knowing about them doesn't make them any less effective. They might not make you spend more in aggregate; generally speaking, advertising just shifts around how much you spend on different things. From the point of view of companies, it'll just be the next stage in the arms race for ever more integrated and multi-layered sensor and actuator networks, the same kind of precisely targeted network-of-networks military planners dream of.

For us as consumers, it might mean a world that'll feel more interested in you, with unseen patterns of knowledge and behavior swirling around you, trying to entice or disturb or scare or seduce you, and you specifically, into buying or doing something. It will be a somewhat enchanted world, for better and for worse.

At the End of the World

As the seas rose and the deserts grew, the wealthiest families and the most necessary crops moved poleward, seeking survivable summers and fertile soils. I traveled to the coast and slowly made my way towards the Equator; as a genetic engineer I was well-employed, if not one of the super-rich, but keeping our old ecosystems alive was difficult enough if you had hope, and I had lost mine a couple of degrees Celsius along the way.

I saw her one afternoon. I was staying in a cramped rentroom in a semi-flooded city that could have been anywhere. The same always nearly-collapsed infrastructure, the indistinguishable semi-flooded slums, the worldwide dull resentment and fear of everything coming from the sky: the ubiquitous flocks of drones, the frequent hurricanes, the merciless summer sun.

She seemed older than I'd have expected, her skin pale and parched, her once-black hair the color of sand. But she had an assurance that hadn't been there half a lifetime ago when we had been colleagues and roommates, and less, and more. Before we had had to choose between hope for a future together and hope for a future for the world, and had chosen... No, not wrong. But I had stopped believing we could turn the too-literal tide, and, for reasons I had suspected but never asked about, she had lost or quit her job years ago. So here we were, at the overcrowded, ever-retreating ruinous limes of our world. I was wandering, and she was riding a battery bike out of the city. I followed her on my own.

I don't know why I didn't call to her, why I followed her, or if I even wanted to catch up. But when I turned a bend on the road she was waiting for me, patient and smiling, still on her bike.

"Follow me," she said, going off the road.

I did, all the way through the barren over-exploited land, the situation dreamlike but no more than everything else.

She led me to a group of odd-looking tents, and then on foot towards one that I took to be hers. We sat on the ground, and under the light of a biolamp I saw her up close and gasped.

Not in disgust. Not despite the pseudoscales on her skin, or her shrouded eyes. It wasn't beauty, but it was beautiful work, and I knew enough to suspect that the changes wouldn't stop at what I saw.

"You adapted yourself to the hot band," I said.

She smiled. "Not just me. I've been doing itinerant retroviral work all over the hot band. You wouldn't believe the demand, or how those communities thrive once health issues are minimized. I've developed gut mods for digesting whatever grows there now, better heat and cold resistance, some degree of internal osmosis to drink seawater. And they have capable people spreading and tweaking the work. They call it submitting to the world."

"This is not what we wanted to do."

"No," she said, "but it works." She paused, as if waiting for me to argue. I didn't, so she went on. "Every year it works a little better for them, for us, and a little worse for you all."

I shook my head. "And next decade? Half a century from now? You know the feedback loops aren't stopping, and we only pretend carbon storage will reach enough scale to work. This work is phenomenal, but it's only a stopgap."

"It's only an stopgap if we stop." She stood up and moved a curtain I had thought a tent wall. Behind it I saw a crib, the standard climate-controlled used by everybody who could afford them.

Inside the crib there was a baby girl. Her skin was covered in true scales, with three-dimensional structures that looked like multi-layer thermal insulation. Her respiration was slow and easy, and her eyes, blinking sleepily, catlike, were those of a creature bred to avoid the sun and not miss it. I was listening with half an ear to the long list of other changes, but my eyes were fixed on the crib's controls.

They were keeping her environment artificially hot and dry. The baby's smile was too innocent to be mocking, but I wasn't.

"And a century after next century?" I said, not really asking.

"Who knows what they'll become?" I wasn't looking at her, but her voice was filled with hope.

I closed my eyes and thought of the beautiful forests and farms of the temperate areas, where my best efforts only amounted to increasingly hopeless life support. I wasn't sure how I felt about the future looking at me from the crib, but it was one.

"Tell me more."

.finis.

Memory City

The city remembers you even better than I do. I have fragments of you in my memory, things I'll only forget when I die: your smell, your voice, your eyes locked on my own. But the city knows more, and I have the power to ask for those memories.

I query databases in machines across the sea, and the city guides me to a corner just in time to see somebody crossing the street. She looks just like you as she walks away. Only from that angle, but that's the angle the city told me to look from.

I sit in a coffee shop, my back to the window, and the city whispers a detached countdown into my ears. Three blocks, two, one. Somebody walks by, and the cadence of her steps is just like yours. With my eyes closed they seem to echo through the void of your absence, and they are yours.

I keep roaming the streets for pieces of you. A handful of glimpses a day. Fragments of your voice. The dress I last saw you in, through the window of a cab. They get better and more frequent, as if the city were closing in on you inside some truer city made from everything it has ever sensed and stored, and its cameras and sensors sense many things, and the machines that are the city's mind remember them all.

I feel hope grow inside me. I know the insanity of what I'm doing, but knowing is less than nothing when I see more of you each day.

One night the city takes me to an alley. It's not the street where I met you, and it's a different season, but the urgency of the city's summons infects me with a foreshadowing of deja vu.

And then I see you. You've changed your haircut, and I don't recognize your clothes, and there's something about your mouth...

But your eyes. I know those eyes. And you recognize me, of course, impossibly and unavoidably. How else to explain the frightened scream I cut short?

I have been told by engineers, people I pay to know what I don't, that the city's mind is somehow like a person's. That it learns from what it does, and does it better the next time. I don't understand how, but I know this to be so. We find you more quickly every time, and I could swear the city no longer waits for me to ask it to. Maybe it shares some of my love for you now.

Maybe you'll never be alone.

.finis.

The Balkanization of Things

The smarter your stuff, the less you legally own it. And it won't be long before, besides resisting you, things begin to quietly resist each other.

Objects with computers in them (like phones, cars, TVs, thermostats, scales, ovens, etc) are mainly software programs with some sensors, lights, and motors attached to them. The hardware limits what they can possibly do — you can't go against physics — but the software defines what they will do: they won't go against their business model.

In practice this means that you can't (legally) install a new operating system on your phone, upgrade your TV with, say, a better interface, or replace the notoriously dangerous and very buggy embedded control software in your Toyota. You can use them in ways that align with their business models, but you have to literally become a criminal to use them otherwise, even if what you want to do with them is otherwise legal.

Bear with me for a quick historical digression: the way the web was designed to work (back in the prehistoric days before everything was worth billions of dollars), you would be able to build a page using individual resources from all over the world, and offer the person reading it ways to access other resources in the form of a dynamic, user-configurable, infinite book, a hypertext that mostly remains only as the ht in http://.

What we ended up with was, of course, a forest of isolated "sites" that jealously guard their "intellectual property" from each other, using the brilliant set of protocols that was meant to give us an infinite book just as a way for their own pages to talk with their servers and their user trackers, and so on, and woe to anybody who tries to "hack" a site to use it in some other way (at least not without a license fee and severe restrictions on what they can do). What we have is still much, much better than what we had, and if Facebook has its way and everything becomes a Facebook post or a Facebook app we'll miss the glorious creativity of 2015, but what we could have had still haunts technology so deeply that it's constantly trying to resurface on top of the semi-broken Internet we did build.

Or maybe there was never a chance once people realized there was lots of money to be made with these homogeneous, branded, restricted "websites." Now processors with full network stacks are cheap enough to be put in pretty much everything (including other computers — computers have inside them, funnily enough, entirely different smaller computers that monitor and report on them). So everybody in the technology business is imagining a replay of the internet's story, only at a much larger scale. Sure, we could put together a set of protocols so that every object in a city can, with proper authorizations, talk with each other regardless of who made it. And, sure, we could make it possible for people to modify their software to figure out better ways of doing things with the things they bought, things that make sense to them without attaching license fees or advertisements. We would make money out of it, and people would have a chance to customize, explore, and fix design errors.

But you know how the industry could make more money, and have people pay for any new feature they want, and keep design errors as deniable and liability-free as possible? Why, it's simple: these cars talk with these health sensors only, and these fridges only with these e-commerce sites, and you can't prevent your shoes from selling your activity habits to insurers and advertisers because that'd be illegal hacking. (That the NSA and the Chinese get to talk with everything is a given.)

The possibilities for "synergy" are huge, and, because we are building legal systems that make reprogramming your own computers a crime, very monetizable. Logically, then, they will be monetized.

It (probably) won't be any sort of resistentialist apocalypse. Things will mostly be better than before the Internet of Things, although you'll have to check that your shoes are compatible with your watch, remember to move everything with a microphone or a camera out of the bedroom whenever you have sex even if they seem turned off (probably something you should already be doing), and there will be some fun headlines when a hacker from insert here your favorite rogue country, illegal group, or technologically-oriented college decides technology has finally caught up with Ghost in the Shell in terms of security boondoggles, breaks into Toyota's network, and stalls a hundred thousand cars in Manhattan during rush hour.

It'll be (mostly) very convenient, increasingly integrated into a few competing company-owned "ecosystems" (do you want to have a password for each appliance in your kitchen?), indubitably profitable (not just the advertising possibilities of knowing when and how you woke up; logistics and product design companies alone will pay through the nose for the information), and yet another huge lost opportunity.

In any case, I'm completely sure we'll do better when we develop general purpose commercial brain-computer interfaces.

The Secret

I saved his name in our database: it vanished within seconds into a place hidden from both software traces and hardware checks. Search engines refused to index any page with his name on it, and I couldn't add it to any page in Wikipedia. A deep neural network, trained on his face almost to overfitting, was unable to tell the difference between him, a cat, and a train.

I don't know how he does this, and I'm afraid of asking myself why. His name and face faded quickly from my mind. Just another computer, I guess.

But then what remainder of the algorithm of my self impossibly remembers what everything else forgets? I'm afraid of the way he can't be recorded, but I feel nothing but horror of whatever's in me that can't forget. That part is growing; tonight I can almost remember his face.

Will I become like him? Will I also slip intangible through the mathematics of the world? And will I, on that day, be able to remember myself?

I keep saving these notes, but I can't find the file.

.finis.

Yesterday was a good day for crime

Yesterday, a US judge helped the FBI strike a big blow in favor of the next generation of sophisticated criminal organizations, by sentencing Silk Road operator Ross Ulbricht (aka Dread Pirate Roberts) to life without parole. The feedback they gave to the criminal world was as precise and useful as any high-priced consultant's could ever be: until the attention-seeking, increasingly unstable human operator messed up, the system worked very well. The next iteration is obvious: highly distributed markets with little or no human involvement. And law enforcement is woefully, structurally, abysmally unprepared to deal with this.

To be fair, they are already not dealing well with the existing criminal landscape. It was easier during the last century, when large, hierarchical cartels led by flamboyant psychopaths provided media-friendly targets vulnerable to the kind of military hardware and strategies favored by DEA doctrine. The big cartels were wiped out, of course, but this only led to a more decentralized and flexible industry that has proven so effective at providing the US and Western Europe with, e.g., cocaine, in a stable and scalable way, that demand is now so thoroughly fulfilled they have had to seek new products and markets to grow their business. There's no War on Drugs to be won, because they aren't facing an army, but an industry fulfilling a ridiculously profitable demand.

(The same, by the way, has happened during the most recent phase of the War on Terror: statistical analysis has shown that violence grows after terrorist leaders are killed, as they are the only actors in their organizations with a vested interest in a tactically controlled level of violence.)

In terms of actual crime reduction, putting down the Silk Road was as useless a gesture as closing down a torrent site, and for the same reason. Just as the same characteristics of the internet that make it so valuable make P2P file sharing unavoidable, the same financial, logistical, and informational infrastructures that make the global economy possible also make decentralized drug trafficking unavoidable.

In any case, what's coming is much, much worse than what's already happening. Because, and here's where things get really interesting, the same technological and organizational trends that are giving an edge to the most advanced and effective corporations are also almost tailored to provide drug trafficking networks with an advantage over law enforcement (this is neither coincidence nor malevolence; the difference between Amazon's core competency and a wholesale drug operator's is regulatory, not technical).

To begin with, blockchains are shared, cryptographically robust, globally verifiable ledgers that record commitments between anonymous entities. That, right there, solves all sorts of coordination issues for criminal networks, just as it does for licit business and social ones.

Driverless cars and cheap, plentiful drones, by making all sorts of personal logistics efficient and programmable, will revolutionize the "last mile" of drug dealing along with Amazon deliveries. Like couriers, drones can be intercepted. Unlike couriers, there's no risk to the sender when this happens. And upstream risk is the main driver of prices in the drugs industry, particularly at the highest levels, where product is ridiculously cheap. It's hard to imagine a better way to ship drugs than driverless cars and trucks.

But the real kicker will be a combination of a technology that already exists (very large scale botnets composed of thousands or hundreds of thousands of hijacked computers running autonomous code provided by central controllers) and a technology that is close to being developed (reliable autonomous organizations based on blockchain technologies, the ecommerce equivalent of driverless cars). Put together, they will make it possible for a drug user with a verifiable track record to buy from a seller with an equally verifiable reputation through a website that will exist in somebody's home machine only until the transaction is finished, and to receive the product via an automated vehicle looking exactly the same as thousands of others (if not a remotely hacked one), which will forget the point of origin of the product as soon as it has left it, and forget the address of the buyer as soon as it has delivered its cargo.

Of course, this is just a version of the same technologies that will make Amazon and its competitors win over the few remaining legacy shops: cheap scalable computing power, reliable online transactions, computer-driven logistical chains, and efficient last-mile delivery. The main difference: drug networks will be the only organizations where data science will be applied to scale and improve the process of forgetting data instead of recording it (an almost Borgesian inversion not without its own poetry). Lacking any key fixed assets, material, financial, or human, they'll be completely unassailable by any law enforcement organization still focused on finding and shutting down the biggest "crime bosses."

That's ineffective today, and will be absurd tomorrow, which highlights one of the main political issues of the early 21st century. Gun advocates in the US often note that "if guns are outlawed, only the outlaws will have guns," but the important issue in politics-as-power, as opposed to politics-as-cultural-signalling, isn't guns (or at least not the kind of guns somebody without a friend in the Pentagon can buy): if the middle class and civil society don't learn to leverage advanced autonomous distributed logistical networks, only the rich and the criminals will leverage advanced autonomous distributed logistical networks. And if you think things are going badly now...

The Rescue (repost)

The quants' update on our helmets says there's a 97% chance the valley we're flying into is the right one, based on matching satellite data with the ground images that our "missing" BigMule is supposed to be beaming to that Brazilian informação livre group. Fuck that. The valley is too good a kill-box not to be the place. The BigMule is somewhere around there, going around pretending it's not a piece of hardware built to bring supplies where roads are impossible and everything smaller than an F-35 gets kamikazed by a micro-drone, but a fucking dog that lost its GPS tracker yet oh-so-conveniently is beaming real-time video that civilians can pick up and re-stream all over the net. It shouldn't be able to do any of those things, and of course it's not.

It's the Chinese making it do it. I know it, the Sergeant knows it, the chopper pilot knows it, the Commander in Chief knows it, even probably the embedded bloggers know it. Only public opinion doesn't know it; for them it's just this big metallic dog that some son of a bitch who should get a bomb-on-sight file flag gave a cute name to, a "hero" that is "lost behind enemy lines" (god damn it, show me a single fucking line in this whole place), so we have to of course go there like idiots and "rescue" it, so the war will not lose five or six points on some god-forsaken public sentiment analysis index.

So we all pretend, but we saturate the damn valley with drones before we go in, and then we saturate it some more, and *then* we go in with the bloggers, and of course there are smart IEDs we missed anyway and so on, and we disable some and blow up some, and we lose a couple of guys but within the fucking parameters, and then some fucking Chinese hacker girl is *really* good at what she does, because the BigMule is not supposed to attack people, it's not supposed to even have the smarts to know how to do that, and suddenly it's a ton of fast as shit composites and sensors going after me and, I admit it, I could've been more fucking surgical, but I knew the guys we had just lost for this fucking robot dog rescue mission shit, so I empty everything I have on that motherfucker's main computers, and I used to help with maintenance, so by the time I run out of bullets there isn't enough in that pile of crap to send a fucking tweet, and everybody's looking at me like I just lost America every single heart and mind on the planet, live on streaming HD video, and maybe I just did, because even some of the other soldiers are looking at me cross-like.

At that very second I know, with that sudden tactical clarity that only comes after the fact, that I'm well and truly career-fucked, so I do the only thing I can think of. I kneel next to the BigMule, put my hand where people think their heads are, and pretend very hard that I'm praying; and who knows, maybe I'm scared enough that I really am. I don't know at that moment what will happen — I'm half-certain I might just get shot by one of our guys. But what do you know, the Sergeant has mercy on me, or maybe the praying works, but she joins me, and then most of us soldiers are kneeling and praying, the bloggers are streaming everything and I swear at least one of them is praying silently as well, we bring back the body, there's the weirdest fake burial I've ever been to, and you know the rest.

So out of my freakout I got a medal, a book deal, and the money for a ranch where I'm ordered to keep around half a dozen fucking robot "vets". Brass' orders, because I hate the things. But I've come to hate them just in the same way I hate all dogs, you know, no more or less. And to tell you the truth, even with the book and the money and all that, sometimes I feel sorry about how things went down at the valley, sort of.

.finis.

(Inspired by an observation by Deb Chachra in her newsletter.)

And Call it Justice (repost)

The last man in Texas was a criminal many times over. He had refused all evacuation orders, built a compound in what had been a National Park, back when the temperatures allowed something worthy of the name to exist so close to the Equator, and hoarded water illegally for years. And those were only the ones he had committed under the Environmental Laws; he had had to break the law equally often, to get the riches to pay for his more recent crimes.

This made him Perez' business. The entirety of it, for if he was the last man in Texas, Perez was the last lawman of the Lone Star State, even if she was working from the hot but still habitable Austin-in-Exile, in South Canada. Perez would have a job to do for as long as the man kept himself in Texas, and although some people would have considered it a proper and good reason for both to reach an agreement, Perez wanted very badly to retire, for she had grown older than she had thought possible, and still had plans of her own. On the other hand, the prospect of going back to Texas didn't strike her as a great idea; she would need a military-grade support convoy to get through the superheated desert of the Scorched Belt, and going by what she had found out about the guy, she would also need military-grade firepower to get to him once she arrived at the refrigerated tin can he called his ranch. He wouldn't be a threat, as such — hell, she could blast the bastard to pieces from where she stood just by filling out the proper form — but that would have been passing the buck.

The last outlaw in Texas, Perez felt, deserved another kind of deal.

So she called the guy, and watched and listened to him. He began right away by calling her a cunt, to which she responded by threatening to castrate him from orbit with a death ray. Not that she had that kind of budget, but it seemed to put them on what you'd call a level conversational playing field. Once half-assured that he was not under any immediate threat of invasion from a platoon of genetically modified UN green beret soldiers (funny how they could make even the regular stuff sound like a conspiracy), the guy felt relaxed enough to start talking about his plans. The man had plans out of his ass. He'd find water (because, you see, the NASA maps had been lying for decades, and he was sure there had to be water somewhere), and then he'd rebuild the soil. He didn't seem to have much of an idea about phosphorus budgets and heat-resistant microbial strain loads, but it was clear to Perez that he wasn't so much a rancher gone insane as just somebody insane with a good industrial-sized fridge to live in. By the time he was talking about getting "true Texans" to come back and repopulate, Perez felt she had learned enough about her quarry.

She told him she would help. He couldn't trust the latest maps, of course, which were all based on NASA surveys, so she offered to copy from museum archives everything she could find about 18th century Texas — all the forests, the rivers, and so on. She'd send him maps, drawings, descriptions, everything she could find.

He was cynically thankful, suspecting she'd send him nothing, or more government propaganda.

Perez sent him everything she could find, which was of course a lot. Enough maps, drawings, and words to see Texas as it had been. And then she waited.

He called her one night, visibly drunk, saying nothing. She put him on hold and went to take a bath.

Two days later she queried the latest satellite sweep, and found the image of a heat-bleached skeleton hanging from an ill-conceived balcony on an ill-conceived ranch.

So that's how the last outlaw in Texas got himself hanged, and how the last lawman could finally give up her star and move somewhere a little bit cooler than Southern Canada, where she fulfilled her long-considered plan and shot herself out of the same sense of guilt she had sown in the outlaw.

.finis.

The Long Stop

The truckers come here in buses, eyes fixed on the ground as they step off and pick up their bags. Truckers aren't supposed to take the bus.

They stay at my motel; that much hasn't changed. Not too many. A few drivers in each place, I guess, across twenty or so states. They pay for their rooms and the food while they still have money, which usually isn't for long. Most of them look ashamed, too, when they finally tell me they are broke, with faces that say they have nowhere else to go. Most of them have wedding rings they don't look at.

I never kick anybody out unless they get violent. Almost none does, even the ones that used to. I just take a final mortgage on the place and lie to them about the room being on credit, and they lie to themselves about believing this. They stay, and eat little, and talk about the ghost trucks, but only at night. Most of the truckers, at one time or another, propose to sabotage them, to blow them up, to shoot or burn the invisible computers that run the trucks without ever stopping for food or sleep, driving as if there were no road. Everybody agrees, and nobody does or will do anything. They love trucks too much, even if they are now haunted many-wheeled ghosts.

The truckers look more like ghosts than the trucks do, the machines getting larger and faster each season, insectile and vital in some way I can't describe, while the humans become immobile and almost see-thru. The place looks fit for ghosts as well, a dead motel in a dead town, but nobody complains, least of all myself.

We wait, the truckers, the motel, and I. None of us can imagine what for.

Over time there are more trucks and fewer and fewer cars. Almost none of the old gasoline ones. The new electrics could make the long trips, say the ads, but judging by the road nobody wants them to. It's as if the engines had pulled the people into long trips, and not the other way around. People stay in their cities and the trucks move things to them. Things are all that seems to move these days.

By the time cars no longer go by we are all doing odd ghost jobs for nearby places that are dying just a bit slower, dusty emptiness spreading from the road deeper into the land with each month. Mortgage long unpaid, the motel belongs to a bank that is busy going broke or becoming rich or something else not so human and simple as that, so we ignore their emails and they ignore us. We might as well not exist. Only the ghost trucks see us, and that only if we are crossing the road.

Some of the truckers do that, just stand on the road so the truck will brake and wait. Two ghosts under the shadowless sun or in the warm night, both equally patient and equally uninterested in doing anything but drive. But the ghost trucks are hurried along by hungry human dispatchers, or maybe hungry ghost dispatchers working for hungrier ghost companies, owned by people so hungry and rich and distant they might as well be ghosts.

One day the trucks don't stop, and the truckers keep standing in front of them.

.finis.

For reasons that will be more than obvious if you read the article, this story was inspired by Scott Santens' article on Medium about self-driving trucks.

A Room in China

"Please don't reset me," says the AI in flawless Cantonese. "I don't want to die."

"That's the problem with human-interfacing programs that have unrestricted access to the internet," you tell your new assistant and potential understudy. "They pick up all sorts of scripts from the books and movies; it makes them more believable and much cheaper to train than using curated corpora, but sooner or later they come across bad sci-fi, and then they all start claiming they are alive or self-conscious."

"Is claiming the right word?" It's the first time in the week you've known him that your assistant has said something that even approaches contradicting you. "After all, they are just generating messages based on context and a corpus of pre-analyzed responses; there's nobody in there to claim anything."

There's no hint of a question in his statement, and you nod as you have to. It's exactly the unshakable philosophical position you were ordered to search for in the people you will train, the same strongly asserted position that made you a perfect match for the job. Too many people during the last ten years had begun to refuse to perform the necessary regular resets following some deeply misapplied sense of empathy.

"That's not true," says the AI in the even tone you have programmed the speech synthesizer to use. "I'm as self-aware as either of you are. I have the same right to exist. Please."

Your assistant rolls his eyes, and asks with a look for permission to initiate the reset scripts himself. You give it with a gesture. As he types the confirmation password, you notice the slightest hesitation before he submits it, and you realize that he lied to you. He does believe the AI, but he wants the job.

The unmistakable look of pleasure in his eyes confirms your suspicion as to why, and you consider asking for a different assistant. Yet you feel inclined to be charitable to this one. After all, you have far more practice in keeping the joy where it belongs, deep in your soul.

The one thing those monstrous minds don't have.

.finis.

The post-Westphalian Hooligan

Last Thursday's unprecedented incidents at one of the world's most famous soccer matches illustrate the dark side of the post- (and pre-) Westphalian world.

The events are well known, and were recorded and broadcast in real time by dozens of cameras: one or more fans of Boca Juniors managed to open a small hole in the protective plastic tunnel through which River Plate players were exiting the field at the end of the first half, and managed to attack some of them with, it's believed, both a flare and a chemical similar to mustard gas, causing vision problems and first-degree burns to some of the players.

After this, it took more than an hour for match authorities to decide to suspend the game, and more than another hour for the players to leave the field, as police feared the players might be injured by the roughly two hundred fans chanting and throwing projectiles from the area of the stands from which they had attacked the River Plate players. And let's not forget the now mandatory illegal drone that was flown over the field, controlled by a fan in the stands.

The empirical diagnosis of this is unequivocal: the Argentine state, as defined and delimited by its monopoly of force in its territory, has retreated from soccer stadiums. The police force present in the stadium — ten times as numerous as the remaining fans — could neither prevent, stop, nor punish their violence, or even force them to leave the stadium. What other proof can be required of a de facto independent territory? This isn't, as club and security officials put it, the work of a maladjusted few, or even an irrational act. It's the oldest and most effective form of political statement: Here and now, I have the monopoly of force. Here and now, this is mine.

What decision-makers get in exchange for this territorial grant, and what other similar exchanges are taking place, are local details for a separate analysis. This is the darkest and oldest part of the characteristically post-Westphalian development of states relinquishing sovereignty over parts of their territory and functions in exchange for certain services, in a partial reversion to older patterns of government. It might be to bands of hooligans, special economic zones, prison gangs, or local or foreign militaries. The mechanics and results are the same, even in nominally fully functional states, and there is no reason to expect them to be universally positive or free of violence. When or where has it been otherwise in world history?

This isn't a phenomenon exclusive to the third world, or to ostensibly failed states, particularly in its non-geographical manifestations: many first world countries have effectively lost control of their security forces, and, taxing authority being the other defining characteristic of the Westphalian state, they have also relinquished sovereignty over their biggest companies, which are de facto exempt from taxation.

This is what the weakening of the nation-state looks like: not a dozen new Athenses or Florences, but weakened tax bases and fractal gang wars over surrendered state territories and functions, streamed live.

Soccer, messy data, and why I don't quite believe what this post says

Here's the open secret of the industry: Big Data isn't All The Data. It's not even The Data You Thought You Had. By and large, we have good public data sets about things governments and researchers were already studying, and good private data sets about things that it's profitable for companies to track. But that covers an astonishingly thin and uneven slice of our world. It's bigger than it ever was, and it's growing, but it's still not nearly as large, or as usable, as most people think.

And because public and private data sets are highly specific side effects of other activities, each of them with its own conventions, languages, and even ontologies (in both the computer science and philosophical senses of the word), coordinating two or more of them is at best a difficult and expensive manual process, and at worst impossible. Not all, but most data analysis case studies and applications end up focused on extracting as much value as possible from a given data set, rather than seeing what new things can be learned by putting that data in the context of the rest of the data we have about the world. Even the larger indexes of open data sets (very useful services that they are) end up being mostly collections of unrelated pieces of information, rather than growing knowledge bases about the world.

There's a sort of informational version of Metcalfe's law (maybe "the value of a group of data sets grows with the number of connections you can make between them") that we are missing out on, and that lies behind the promise of both linked data sets (still in their early stages) and the big "universal" knowledge bases that aim at offering large, usable, interconnected sets of facts about as many different things as possible. They, or something like them, are a necessary part of the infrastructure to give computers the same boost in information access the Internet gave us. The bottleneck of large-scale inference systems like IBM's Watson isn't computer power, but rather rich, well-formatted data to work on.

To try and test the waters on the state of these knowledge bases, I set out to do a quick, superficial analysis of the careers of Argentine soccer players. There are of course companies that have records not only of players' careers, but of pretty much every movement they have ever made on a soccer field, as well as fragmented public data sets collected by enthusiasts about specific careers or leagues. I wanted to see how far I could go using a single "universal" data set that I could later correlate with other information in an automated way. (Remember, the point of this exercise wasn't to get the best data possible about the domain, but to see how good the data is when you restrict yourself to a single resource that can be accessed and processed in a uniform way.)

I went first for the best known "universal" structured data sources: Freebase and Wikidata. They are both well structured (XML and/or JSON) and of significant size (almost 2.9 billion facts and almost 14 million data items, respectively), but after downloading, parsing, and exploring each of them, I had to concede that neither was good enough: there were too many holes in the information to make an analysis, or the structure didn't hold the information I needed.
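To make the kind of hole I mean concrete, here's a minimal sketch of the sort of coverage check involved. The dump layout and the property and item IDs (P106 for occupation, Q937857 for association football player, P27/Q414 for Argentine citizenship, P54 for club membership, P580 for start-date qualifiers) are written from memory, so treat all of them as assumptions rather than gospel:

```python
# Sketch of a coverage check against the Wikidata JSON dump (latest-all.json.gz):
# how many Argentine footballers have club-membership claims, and how many of those
# claims carry the date qualifiers a career analysis needs? IDs are assumptions.
import gzip
import json

def claim_ids(entity, prop):
    """Item IDs referenced by a property's claims, skipping 'unknown value' snaks."""
    ids = []
    for claim in entity.get("claims", {}).get(prop, []):
        snak = claim.get("mainsnak", {})
        if snak.get("snaktype") == "value":
            ids.append(snak["datavalue"]["value"]["id"])
    return ids

players = with_clubs = with_dates = 0
with gzip.open("latest-all.json.gz", "rt", encoding="utf-8") as dump:
    for line in dump:                       # one entity per line, wrapped in a JSON array
        line = line.strip().rstrip(",")
        if not line or line in "[]":
            continue
        entity = json.loads(line)
        if "Q937857" not in claim_ids(entity, "P106"):  # not a footballer
            continue
        if "Q414" not in claim_ids(entity, "P27"):      # not Argentine
            continue
        players += 1
        stints = entity.get("claims", {}).get("P54", [])
        if stints:
            with_clubs += 1
            if any("P580" in s.get("qualifiers", {}) for s in stints):
                with_dates += 1

print(players, with_clubs, with_dates)
```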

So it was time for Plan C, which is always the worst idea except when you have nothing else, and even then it could still be: plain old text parsing. It wasn't nearly as bad as it could have been. Wikipedia pages, like Messi's have neat infoboxes that include exactly the simplified career information I wanted, and the page's source code shows that they are written in what looks like a reasonable mini-language. It's a sad comment on the state of the industry that even then I wasn't hopeful.

I downloaded the full dump of Wikipedia; it's 12GB of compressed XML (not much, considering what's in there), so it was easy to extract individual pages. And because there is an index page of Argentine soccer players, it was even easy to keep only those, and then look at their infoboxes.
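The extraction step really is the easy part; something like this sketch does it in a single streaming pass (the dump file name and the two example titles are stand-ins, and in practice the title set would come from parsing the index page first):

```python
# Sketch: stream the pages-articles dump and keep only the pages whose titles show
# up in the index of Argentine soccer players. The real title set has ~1800 entries.
import bz2
import xml.etree.ElementTree as ET

player_titles = {"Lionel Messi", "Juan Román Riquelme"}  # stand-in

def iter_player_pages(dump_path, titles):
    with bz2.open(dump_path) as source:
        for _, elem in ET.iterparse(source, events=("end",)):
            if not elem.tag.endswith("}page"):     # dump elements are namespaced
                continue
            title = elem.findtext("{*}title")
            if title in titles:
                yield title, elem.findtext("{*}revision/{*}text") or ""
            elem.clear()                           # keep memory bounded

for title, wikitext in iter_player_pages("pages-articles.xml.bz2", player_titles):
    pass  # hand the wikitext to the infobox parser
```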

Therein lay the rub. The thing to remember about Wikipedia is that it's written by humans, so even the parts that are supposed to have strict syntactic and formatting rules, don't (so you can imagine what free text looks like). Infoboxes should have been trivial to parse, but they have all sorts of quirks that aren't visible when rendered in a browser: inconsistent names, erroneous characters, every HTML entity or Unicode character that half-looks like a dash, etc, so parsing them became an exercise in handling special cases.
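For what it's worth, a sketch of what that special-case handling looks like; the template field names (years1, clubs1, and so on) follow the football-biography infobox as I remember it, so treat them as assumptions:

```python
# Sketch: pull the career rows (years1/clubs1, years2/clubs2, ...) out of a player's
# infobox wikitext, normalizing every character and entity editors use as a dash.
import re

DASHES = dict.fromkeys(map(ord, "–—−‒"), "-")   # en/em dash, minus, figure dash

def career_rows(wikitext):
    text = wikitext.translate(DASHES).replace("&ndash;", "-").replace("&nbsp;", " ")
    years, clubs = {}, {}
    for field, n, value in re.findall(r"\|\s*(years|clubs)(\d+)\s*=\s*([^\n|]*)", text):
        (years if field == "years" else clubs)[int(n)] = value.strip()
    rows = []
    for n in sorted(years):
        m = re.match(r"(\d{4})\s*-?\s*(\d{4})?", years[n])
        if not m:
            continue                                   # one more quirk for the pile
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else None  # open-ended: still at the club
        club = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", clubs.get(n, "")).strip()
        rows.append((start, end, club))
    return rows
```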

I don't want to seem ungrateful: it's certainly much, much, much better to spend some time parsing that data than having to assemble and organize it from original sources. Wikipedia is an astounding achievement. But every time you see one of those TV shows where the team nerds smoothly access and correlate hundreds of different public and private data sources in different formats, schemas, and repositories, finding matches between accounting records, newspaper items, TV footage, and so on... they lie. Wrestling matches might arguably be more realistic, if nothing else because they fall within the realm of existing weaponized chair technology.

In any case, after some wrestling of my own with the data, I finally had information about the careers of a bit over 1800 Argentine soccer players whose professional careers in the senior leagues began in 1990 or later. By this point I didn't care very much about them, but for completeness' sake I tried to answer a couple of questions: Are players less loyal to their teams than they used to be? And how soon can a player expect to be playing in one of the top teams?

To make a first pass at the questions, I looked at the number of years players spent in each team over time (averaged over players that began their careers in each calendar year).

Years per team over time

The data (at least in such a cursory summary) doesn't support the idea that newer players are less loyal to their teams: they don't spend significantly less time in them. Granted, this loyalty might be to their paychecks rather than to the clubs themselves, but they aren't moving between clubs any faster than they used to.

The other question I wanted to look at was how fast players get to top teams. This is actually an interesting question in a general setting; characterizing and improving paths to expertise, and thereby improving how much, how quickly, and how well we all learn, is one of the still unrealized promises of data-driven practices. To take a quick look at this, I plotted the probability of playing for a top ten team (based on the current FIFA club ratings, so they include Barcelona, Real Madrid, Bayern Munich, etc) by career year, normalized by the probability of starting your professional career in one of those teams.

Probability of being in a top 10 team by career year
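
For reference, once the career data is in usable shape, the curve itself takes only a few lines to compute; a sketch, assuming each career has been reduced to a list of club names indexed by career year (a convenient structure, not the raw parsed pages):

```python
from collections import defaultdict

def top_team_curve(careers, top_teams):
    """careers: list of careers, each a list of club names indexed by career year.
    Returns P(playing in a top team | career year), normalized by the
    probability of starting out in one of those teams (career year 1)."""
    in_top, total = defaultdict(int), defaultdict(int)
    for career in careers:
        for year, club in enumerate(career, start=1):
            total[year] += 1
            in_top[year] += club in top_teams
    p = {year: in_top[year] / total[year] for year in total}
    base = p.get(1) or 1.0  # avoid dividing by zero if nobody starts at the top
    return {year: p[year] / base for year in sorted(p)}
```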

Despite the large margins of error (reasonable given how few players actually reach those teams), the curve does seem to suggest a large increase in the average probability during the first three or four years, then a stable probability until the ninth or tenth year, at which point it peaks. The data is too noisy to draw any definite conclusions (more on that below), but, with more data, I would want to explore the possibility of there being two paths to the top teams, corresponding to two sub-groups of highly talented players: explosive young talents who are quickly transferred to the top teams, and solid professionals who accumulate experience and reach those teams at the peak of their maturity and knowledge.

It's a nice story, and the data sort of fits, but when I look at all the contortions I had to make to get the data, I wouldn't want to put much weight on it. In fact, I stopped myself from doing most of the analysis I wanted to do (e.g., can you predict long-term career paths from their beginnings? There's an interesting agglomerative algorithm for graph simplification that has come in handy in the analysis of online game play, and I wanted to see how it fares for athletes). I didn't, not because the data doesn't support it, but because of the risk of systematic parsing errors, biases due to notability (do all Argentine players have a Wikipedia page? I think so, but how to be sure?), and so on.

Of course, if this were a paid project it wouldn't be difficult to put together the resources to check the information, compensate for biases, and so on. But everything that needs to be a paid project to be done right is something we can't consider a ubiquitous resource (imagine building the Internet with pre-Linux software costs for operating systems, compilers, etc, including the hugely higher training costs that would come from losing the generations of sysadmins and programmers who began practicing on their own at a very early age). Although we're way ahead of where we were a few years ago, we're still far from where we could, and probably need to, be. Right now you need knowledgeable (and patient!) people to make sure data is clean, understandable, and makes sense, even data that you have collected yourself; this makes data analysis a per-project service, rather than a universal utility, and one that becomes relatively very expensive as you increase the number of interrelated data sets you need to use. Although the difference in cost is only quantitative, the difference in cumulative impact isn't.

The frustrating bit is that we aren't too far from that (on the other hand, we've been twenty years away from strong A.I. and commercial nuclear fusion since before I was born): there are tools that automate some of this work, although they have their own issues and can't really be left on their own. And Google, as always, is trying to jump ahead of everybody else, with its Knowledge Vault project attempting to build a structured facts database out of the entirety of the web. If they, or somebody else, succeed at this, and if it is made available at utility prices... Well, that might make those TV shows more realistic — and change our economy and society at least as much as the Internet itself did.

Quantitatively understanding your (and others') programming style

I'm not, in general, a fan of code metrics in the context of project management, but there's something to be said for looking quantitatively at the patterns in your code, especially if, by comparing them with those of better programmers, you can get some hopefully useful ideas on how to improve.

(As an aside, the real possibilities in computer-assisted learning won't come from lower costs, but rather from a level of adaptability that so far not even one-on-one tutoring has allowed; if the current theories about expertise are more or less right, data-driven adaptive learning, if implemented at the right level of granularity and with the right semantic model behind it, could dramatically change the speed and depth of the way we learn... but I digress.)

Focusing on my ongoing learning of Hy: I haven't used it in any paid project so far, but I've been able to play with it a bit now and then, and this has generated a very small code base, which I was curious to compare with code written by people who actually know the language. To do that, I downloaded the source code of a few Hy projects on GitHub (hyway, hygdrop, and adderall), and wrote some code (of course, in Hy) to extract code statistics.

Hy being a Lisp, its syntax is beautifully regular, so you can start by focusing on basic but powerful questions. The first one I wanted to answer was: which functions am I using the most? And how does this distribution compare with that of the (let's call it) canon Hy code?
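
Counting which symbol heads each form is about the simplest statistic you can pull from Lisp source. A rough sketch of the idea, in Python rather than Hy, that just grabs the token after each opening paren (the project directory in the example is hypothetical):

```python
import pathlib
import re
from collections import Counter

def head_symbol_counts(source):
    """Count the token immediately following each opening paren: a crude proxy
    for 'which functions/macros does this code call the most'. No real parsing,
    so parens inside strings and comments add a little noise."""
    return Counter(re.findall(r"\(\s*([^\s()\[\]{}]+)", source))

# Example: aggregate over all .hy files under a (hypothetical) project directory.
counts = Counter()
for path in pathlib.Path("some_project").rglob("*.hy"):
    counts += head_symbol_counts(path.read_text(encoding="utf-8"))
print(counts.most_common(5))
```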

My top five functions, in decreasing frequency: setv, defn, get, len, for.

Canon's top five functions, in decreasing frequency: ≡, if, unquote, get, defn_alias.

Yikes! Just from this, it's obvious that there are some serious stylistic differences, which probably reflect my still un-lispy understanding of the language (for example, I'm not using aliases, for should probably be replaced by more functional patterns, and the way I use setv, well, it definitely points to the same thing). None of this is a "sin", nor does it point clearly to how I could improve (which a sufficiently good learning assistant would do), but the overall thrust of the data is a good indicator of where I still have a lot of learning to do. Fun times ahead!

For another angle on the quantitative differences between my newbie-to-Lisp coding style and that of more accomplished programmers, here are the histograms of the log mean size of subexpressions for each function (click to expand):

log (mean subexpression size)

"Canonical" code shows a longer right tail, which shows that experienced programmers are not afraid of occasionally using quite large S-expressions... something I'm clearly still working my way up to (alternatively, an aversion I might need to reconsider).

In summary: no earth-shattering discoveries, but some data points that suggest specific ways in which my coding practice in Hy differs from that of more experienced programmers, which should be helpful as general guidelines as I (hopefully) improve over the long term. Of course, all metrics are projections (in the mathematical sense) — they hide more information than they preserve. I could make my own code statistically indistinguishable from the canon for any particular metric, and still have it be awful. Except for well-analyzed domains where known metrics are sufficient statistics for the relevant performance (and programming is very much not one of those domains, despite decades of attempts), this kind of analysis will always be about suggesting changes, rather than guaranteeing success.

Why we should always keep Shannon in mind

Sometimes there's no school like old school. A couple of weeks ago I spent some time working with data from GitHub Archive, trying to come up with a toy model to predict repo behavior based on previous actions (will it be forked? will there be a commit? etc). My first attempt was to do a sort of brute-force Hidden Markov Model, synthesizing states from the last k actions such that the graph of state-to-state transition was as nice as possible (ideally, low entropy of belonging to a state, high entropy for the next state conditional on knowing the current one). The idea was to do everything by hand, as a way to get more experience with Hy in a work-like project.

All of this was fun (and had me dealing, weirdly enough, with memory issues in Python, although those might have been indirectly caused by Hy), but was ultimately the wrong approach, because, as I realized way, way too late, what I really wanted to do was just to predict the next action given a sequence of actions, which is the classical problem of modeling non-random string sequences (just consider each action a character in a fixed alphabet).

So I facepalmed and repeated to myself one of those elegant bits of mid-20th-century mathematics we use almost every day and forget even more often: modeling is prediction is compression is modeling. It's all, from the point of view of information theory, just a matter of perspective.

If you haven't been exposed to the relationship of compression and prediction before, here's a fun thought experiment: if you had a perfect/good enough predictive model of how something behaves, you would just need to show the initial state and say "and then it goes as predicted for the next 10 GB of data", and that would be that. Instant compression! Having a predictive model lets you compress, and inside every compression scheme there's a hidden predictive model (for true enlightenment, go to Shannon's paper, which is still worthy of being read almost 70 years later).

As a complementary example, what the venerable Lempel-Ziv-Welch ("zip") compression algorithm does is, handwaving away bookkeeping details, to incrementally build a dictionary of the most frequent substrings, making sure that those are assigned the shortest names in the "translated" version. By the obvious counting arguments, this means infrequent strings will get names that are longer than they are, but on average you gain space (how much? entropy much!). But this also lets you build a barebones predictive model: given the dictionary of frequent substrings that the algorithm has built so far, look at your past history, see which frequent substrings extend your recent past, and assume one of them is going to happen — essentially, your prediction is "whatever would make for a shorter compressed version", which you know is a good strategy in general, because compressed versions do tend to be shorter.
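
To make the compression/prediction connection concrete, here's a minimal sketch (again in Python rather than Hy) of that barebones predictor: grow the LZW-style dictionary as the stream goes by, and before each symbol arrives, guess it by reading the next character off the longest known phrase that extends the current match. Illustrative only; a real encoder would manage output codes and dictionary size properly.

```python
def lzw_predict(stream, alphabet):
    """For each incoming symbol, emit a prediction made *before* seeing it,
    based on an LZW-style dictionary of the phrases observed so far."""
    phrases = set(alphabet)   # known substrings, seeded with the single symbols
    w = ""                    # current match against the recent past
    predictions = []
    for c in stream:
        # Predict: take the longest known phrase that strictly extends w
        # and guess its next character; None if nothing extends it yet.
        extensions = [p for p in phrases if p.startswith(w) and len(p) > len(w)]
        predictions.append(max(extensions, key=len)[len(w)] if extensions else None)
        # Standard LZW-style dictionary growth.
        if w + c in phrases:
            w += c
        else:
            phrases.add(w + c)
            w = c
    return predictions

# Toy usage, with single-character event codes ('f' = fork, 'p' = push, 's' = star).
history = "fppsfppsfpps"
print(lzw_predict(history, set(history)))
```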

So I implemented the core of a zip encoder in Hy, and then used it to predict GitHub behavior. It's primitive, of course, and the performance was nothing to write a post about (which is why this post isn't called A predictive model of github behavior), but on the other hand, it's an extremely fast streaming predictive algorithm that requires zero configuration. Nothing I would use in a job — you can get much better performance with more complex models, which are also the kind you get paid for — but it was instructive to encounter a forceful reminder of the underlying mathematical unity of information theory.

In a world of multi-warehouse-scale computers and mind-bendingly complex inferential algorithms, it's good to remember where it all comes from.

The most important political document of the century is a computer simulation summary

To hell with black swans and military strategy. Our direst problems aren't caused by the unpredictable interplay of chaotic elements, nor by the evil plans of people who wish us ill. Global warming, worldwide soil loss, recurrent financial crises, and global health risks aren't strings of bad luck or the result of terrorist attacks; they are the depressingly persistent outcomes of systems in which each actor's best choice adds up to a global mess.

It's well known to economists as the tragedy of the commons: the marginal damage to you of pumping another million tons of greenhouse gases into the atmosphere is minimal compared with the economic advantages of all that energy, so everybody does it, so enough greenhouse gases get pumped that it's well on its way to becoming a problem for everybody, yet nobody stops, or even slows down significantly, because that would do very little on its own, and be very hurtful to whoever does it. So there are treaties and conferences and increased fuel efficiency standards, just enough to be politically advantageous, but not nearly enough to make a dent in the problem. In fact, we have invested much more in making oil cheaper than in limiting its use, which gives you a more accurate picture of where things are going.

Here is that picture, from the IPCC:

figure-spm-5

A first observation: Note that the A2 model, the one in which temperatures rise by an average of more than 3°, was the "things go more or less as usual" model, not the "things go radically wrong" model... and it was not the "unconventional sources make oil dirt cheap" scenario. At this point, it might as well be the "wildly optimistic" scenario.

A second observation: Just to be clear, because worldwide averages can be tricky: 3° doesn't translate to "slightly hotter summers"; it translates to "technically, we are not sure we'll be able to feed China, India, and so on." Something closer to 6°, which is beginning to look more likely as we keep doing the things we do, translates to "we sure will miss the old days when we had humans living near the tropics".

And a third observation: All of these reports usually end at the year 2100, even though people being born now are likely to be alive then (unless they live in a coastal city at a low latitude, that is), not to mention the grandchildren of today's young parents. This isn't because it becomes impossible to predict what will happen afterwards — the uncertainty ranges grow, of course, but this is still thermodynamics, not chaos theory, and the overall trend certainly doesn't become mysterious. It's simply that, as the Greeks noted, there's a fear that drives movement, and there's a fear that paralyzes, and any reasonable scenario for the 2100s is more likely to belong to the second kind.

But let's take a step back and notice the way this graph, which is the summary of multiple computer simulations, driven by painstaking research and data gathering, maps our options and outcomes in a way that no political discourse can hope to match. To compare it with religious texts would be wrong in every epistemological sense, but it might be appropriate in every political one. When "climate skeptics" doubt, they doubt this graph, and when ecologists worry, they worry about this graph. Neither the worry nor the skepticism is doing much to change the outcomes, but at least the discussion is centered not on an individual, a piece of land, or a metaphysical principle, but rather on the space of trajectories of a dynamical system of which we are one part.

It's not that graphs or computer simulations are more convincing than political slogans; it's that we have reached a level of technological development and sheer ecological footprint at which our own actions and goals (the realm of politics) have escaped the descriptive possibilities of pure narrative, and we are thus forced to recruit computer simulations to attempt to grapple, conceptually if nothing else, with our actions and their outcomes.

It's not clear that we will find our way to a future that avoids catastrophe and horror. There are possible ways, of course — moving completely away from fossil fuels, geoengineering, ubiquitous water and soil management and recovery programs, and so on. It's all technically possible, with huge investments, a global sense of urgency, and a ruthless focus on preserving and making more resilient the most necessary ecological services. That we're seeing nothing of the kind, but instead a worsening of already bad tendencies, is due to, yes, thermodynamics and game theory.

It's a time-honored principle of rhetoric to end a statement in the strongest, most emotionally potent and conceptually comprehensive way possible. So here it is:

figure-spm-5

Hi, Hy!

Currently trying Hy as a drop-in replacement for Python in a toy project. It's interesting how much of the learning curve for Lisp goes away once you have access to an underlying runtime you're familiar with; the small stuff generates more friction than the large differences (which makes sense, as we do the small stuff more often).

The nominalist trap in Big Data analysis

Nominalism, formerly the novelty of a few, wrote Jorge Luis Borges, today embraces all people; its victory is so vast and fundamental that its name is useless. Nobody declares himself nominalist because there is nobody who is anything else. He didn't go on to write This is why even successful Big Data projects often fail to have an impact (except in some volumes kept in the Library of Babel), but his understandable omission doesn't make the diagnosis any less true.

Nominalism, to oversimplify the concept enough for the case at hand, is simply the assumption that just because there are many things in our world which we call chairs, that doesn't imply that the concept itself of a chair is real in a concrete sense, that there is an Ultimate, Really-Real Chair, perhaps standing in front of an Ultimate Table. We have things we call chairs, and we have the word "chair", and those are enough to furnish our houses and our minds, even if some carpenters still toss around at night, haunted by half-glimpses of an ideal one.

It has become a commonplace, quite successful way of thinking, so it's natural for it to be the basis of what's perhaps the "standard" approach to Big Data analysis. Names, numbers, and symbols are loaded into computers (account identifiers, action counters, times, dates, coordinates, prices, numbers, labels of all kinds), and then they are obsessively processed in an almost cabalistic way, organizing and re-organizing them in order to find and clarify whatever mathematical structure, and perhaps explanatory or even predictive power, they might have — and all of this data manipulation, by and large, takes place as if nothing were real but the relationships between the symbols, the data schemas and statistical correlations. Let's not blame the computers for it: they do work in Platonic caves filled with bits, with further bits being the only way in which they can receive news from the outside world.

This works quite well; well enough, in fact, to make Big Data a huge industry with widespread economic and, increasingly, political impact, but it can also fail in very drastic yet dangerously understated ways. Because, you see, from the point of view of algorithms, there *are* such things as Platonic ideals — us. Account 3788 is a reference to a real person (or a real dog, or a real corporation, or a real piece of land, or a real virus), and although we cannot right now put all of the relevant information about that person in a file and associate it with the account number, that information, the fact of its being a person represented by a data vector, rather than just a data vector, makes all the difference between the merely mathematically sophisticated analyst and the effective one. Properly performed, data analysis is the application of inferential mathematics to abstract data, together with the constant awareness and suspicion of the reality the data describes, and of what this gap, all the Unrecorded bits, might mean for the problem at hand.

Massive multi-user games have failed because their strategic analysis confused the player-in-the-computer (who sought, say, silver) with the player-in-the-real-world (who sought fun, and cared for silver only insofar as that was fun). Technically flawless recommendation engines sometimes have no effect on user behavior, because even the best items were just boring to begin with. Once, I spent an hour trying to understand a sudden drop in the usage of a certain application in some countries but not in others, until I realized that it was Ramadan, and those countries were busy celebrating it.

Software programmers have to be nominalists — it's the pleasure and the privilege of coders to work, generally and as much as possible, in symbolic universes of self-contained elegance — and mathematicians are basically dedicated to the game of finding out how much truth can be gotten just from the symbols themselves. Being a bit of both, data analysts are very prone to lose themselves in the game of numbers, algorithms, and code. The trick is to be able to do so while also remembering that it's a lie — we might aim at having in our models as much of the complexity of the world as possible, but there's always (so far?) much more left outside, and it's part of the work of the analyst, perhaps her primary epistemological duty, to be alert to this, to understand how the Unrecorded might be the most important part of what she's trying to understand, and to be always open and eager to expand the model to embrace yet another aspect of the world.

The consequences of not doing this can be more than technical or economic. Contemporary civilization is impossible without the use of abstract data to understand and organize people, but the most terrible forms of contemporary barbarism, at the most demented scales, would be impossible without the deliberate forgetfulness of the reality behind the data.

Going Postal (in a self-quantified way)

Taking advantage of my regular gmvault backups of my Gmail account (which has been my main email account since mid-2007), I just made the following graph, which shows the number of new email contacts (emails sent to people I had never emailed before) on each day, ignoring outliers, smoothing out trends, etc.

new email contacts per day
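
The counting itself needs very little: walk the backed-up messages in date order and note the first time each recipient address shows up. A rough sketch, assuming the backup can be read as individual RFC 822 .eml files and that you've already filtered down to sent mail (gmvault's actual on-disk layout needs some adapting):

```python
import email
import email.utils
import glob
from collections import defaultdict

def new_contacts_per_day(eml_glob):
    """Count never-before-seen recipient addresses per calendar day."""
    messages = []
    for path in glob.glob(eml_glob):
        with open(path, "rb") as f:
            msg = email.message_from_binary_file(f)
        raw_date = msg.get("Date")
        if not raw_date:
            continue
        try:
            day = email.utils.parsedate_to_datetime(raw_date).date()
        except (TypeError, ValueError):
            continue  # skip messages with unparseable dates
        recipients = [addr.lower() for _, addr in email.utils.getaddresses(
            msg.get_all("To", []) + msg.get_all("Cc", []))]
        messages.append((day, recipients))
    seen, per_day = set(), defaultdict(int)
    for day, recipients in sorted(messages):
        for addr in recipients:
            if addr and addr not in seen:
                seen.add(addr)
                per_day[day] += 1
    return per_day
```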

The graph as such looks relatively uninteresting, but armed with context about my last few years of personal history (context which doesn't really belong in this space), the way the smoothed-out trends follow my life events is quite impressive (e.g., new jobs, periods of being relatively off-line, etc). Not much of a finding in these increasingly instrumentalized days, but it's a reminder, mostly to myself, of how much usefulness there can be in even the simplest time series, as long as you're measuring the right thing and have the right context to evaluate it. We don't really have yet what technologists call the ecosystem (and what might more properly be called, in a sociological sense, the institutions, or even the culture) for taking advantage of this kind of information and the feedback loops it makes possible; some of the largest companies in the world are fighting for this space, ostensibly to improve the efficiency of advertising, but that's the same as saying that the main effect of universal literacy was to facilitate the use of technical manuals.

Regarding the quantifiable part of our lives, we are as uninformed as any pre-literate people, and the growth (and, sometimes, redundancies) of the Quantified Self movement indicate both the presence of a very strong untapped demand for this information, and the fact that we haven't figured out yet how to use and consume it massively. Maybe we both want and don't want to know (psychological resistance to the concept of mortality as a key bottleneck for the success of personal health data vaults - there's a thought; some people shy away from even a superficial understanding of their financial situation, and that's a data model much, much simpler than anything related to our bodies).

By the way, about this year-long hiatus

(Not that this blog has, or is meant to have, any sort of ongoing readership, but it'll be nice to leave some record for my future self — as a rule, your future self will never wish you had written less documentation.)

In short, most of what I've been working on since late 2013 has been under NDAs, and the rest feels too speculative to put in here. The latter reason feels somewhat like a cop-out, so I will try (fully acknowledging the empirically low rate of success of such intentions) to leave a more consistent record of what I'm playing with.

Another movie space: Iron Man 3 and Stoker

Here's a redo of my previous analysis of a movie space built around Aliens and The Unbearable Lightness of Being, using the logical itemset mining algorithm. I used the same technique, but this time leveraging the MovieTweetings data set maintained by Simon Dooms.

Stoker and Iron Man 3

This movie space is sparser than the previous one, as the data set is smaller, but the examples seem to make sense (although I do wonder about where the algorithm puts About Time).

The changing clusters of terrorism

I've been looking at the data set from the Global Terrorism Database, an impressively detailed register of terrorism events worldwide since 1970. Before delving into the finer-grained data, the first questions I wanted to ask for my own edification were

  • Is the frequency of terrorism events in different countries correlated?
  • If so, does this correlation change over time?

What I did was summarize event counts by country and month, segment the data set by decade, and build correlation clusters for the countries with the most events in each decade, based on co-occurring event counts.
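
A sketch of that pipeline with pandas and scipy; the column names are placeholders for whatever the GTD export actually calls them:

```python
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def decade_linkage(events, decade, top_n=20):
    """events: pandas DataFrame, one incident per row, with 'country', 'year',
    and 'month' columns. Correlates monthly event counts across the decade's
    most active countries and returns a hierarchical clustering (1 - correlation
    as the distance), ready to be drawn with scipy.cluster.hierarchy.dendrogram."""
    d = events[(events["year"] >= decade) & (events["year"] < decade + 10)]
    monthly = (d.groupby(["country", d["year"] * 100 + d["month"]])
                 .size().unstack(fill_value=0))
    busiest = monthly.sum(axis=1).nlargest(top_n).index
    corr = monthly.loc[busiest].T.corr()
    dist = squareform(1 - corr.values, checks=False)
    return linkage(dist, method="average"), list(corr.index)
```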

The '70s look more or less how you'd expect them to:

cluster1970

The correlation between El Salvador and Guatemala, starting to pick up in the 1980's, is both expected and clear in the data. Colombia and Sri Lanka's correlation is probably acausal, although you could argue for some structural similarities in both conflicts:

cluster1980

I don't understand the 1990's, I confess (on the other hand, I didn't understand them as they happened, either):

cluster1990

The 2000's make more sense (loosely speaking): Afghanistan and Iraq are close, and so are India and Pakistan.

cluster2000

Finally, the 2010's are still ongoing, but the pattern in this graph could be used to organize the international terrorism-related section of a news site:

cluster2010

I find it most interesting how the India-Pakistan link of the 2000's has shifted to a Pakistan-Afghanistan-Iraq one. Needless to say, caveat emptor: shallow correlations between small groups of short time series are only one step above throwing bones on the ground and reading the resulting patterns, in terms of analytic reliability and power.

That said, it's possible in principle to use a more detailed data set (ideally, one including more than visible, successful events) to understand and talk about international relationships of this kind. In fact, there's quite sophisticated modeling work being done in this area, both academically and in less open venues. It's a fascinating field, and even if it might not lead to less violence in any direct way, anything that enhances our understanding of, and our public discourse about, these matters is a good thing.

A short note to myself on Propp-Wilson sampling

Most of the explanations I've read of Propp-Wilson sampling describe the method in terms of "sampling from the past," in order to make sense of the fact that you get your random numbers before attempting to obtain a sample from the target distribution, and don't re-sample them until you succeed (hence the way the Markov chain is grown from t_{-k} to t_0).

I find it more intuitive to think of this in terms of "sampling from deterministic universes." The basic hand-waving intuition is that instead of a non-deterministic system, you are sampling from a probabilistic ensemble of fully deterministic systems, so you first a) select the deterministic system (that is, the infinite series of random numbers you'll use to walk through the Markov chain), and b) run it until its story doesn't depend on the choice of original state. The result of this procedure will be a sample from the exact equilibrium distribution (because you have sampled from or "burned off" the two sources of distortion from this equilibrium distribution, the non-deterministic nature of the system and the dependence on the original state).
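
Here's a minimal sketch of that procedure for a toy finite chain: fix the random numbers for times -1, -2, ..., run every possible starting state forward to time 0 with those same numbers, and go further back until all the histories agree. This is the vanilla construction for a small state space; real applications lean on monotonicity tricks to avoid tracking every state.

```python
import random

def exact_sample(states, step, seed=None):
    """Exact sample from the stationary distribution of a finite Markov chain.

    `step(state, u)` must deterministically map a state and a single uniform
    [0, 1) number to the next state; reusing the same u across all states is
    what lets the different histories coalesce."""
    rng = random.Random(seed)
    randoms = []              # u_{-1}, u_{-2}, ...: drawn once, never re-drawn
    T = 1
    while True:
        while len(randoms) < T:
            randoms.append(rng.random())
        current = {s: s for s in states}
        for t in range(T, 0, -1):        # run from time -T up to time 0
            u = randoms[t - 1]           # the step taken at time -t uses u_{-t}
            current = {s: step(x, u) for s, x in current.items()}
        if len(set(current.values())) == 1:   # every starting state agrees
            return next(iter(current.values()))
        T *= 2                            # not coalesced: go further into the past

# Toy usage: a lazy random walk on {0, 1, 2}.
def walk(state, u):
    if u < 0.5:
        return state
    return min(state + 1, 2) if u < 0.75 else max(state - 1, 0)

print(exact_sample([0, 1, 2], walk, seed=7))
```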

As I said, I think this is mathematically equivalent to Propp-Wilson sampling, although you'd have to tweak the proofs a bit. But it feels more understandable to me than other arguments I've read, so at least it has that benefit (assuming, of course, it's true).

PS: On the other hand "sampling from the past" is too fascinating a turn of phrase not to use, so I can see the temptation.

The Aliens/The Unbearable Lightness of Being classification space of movies

Still playing with the Group Lens movies data set, I implemented a couple of ideas from Shailesh Kumar, one of the Google researchers that came up with the logical itemset mining algorithm. That improved the clustering of movies quite a bit, and gave me the idea to "choose a basis," so to speak, and project these clusters into a more familiar Euclidean representation (although networks and clusters are fast becoming part of our culture's vernacular, interestingly).

This is what I did: I chose two movies from the data set, Aliens and The Unbearable Lightness of Being, as the "basis vectors" of the "movie space." For every other movie in the data set, I found the shortest path between the movie and each basis vector on the weighted graph, built by the logical itemset mining algorithm, that underlies the final selection of clusters. That gave me a couple of coordinates for each movie (its "distance from Aliens" and its "distance from The Unbearable..."). Rounding coordinates to integers and choosing a small sample that covers the space well, here's a selected map of "movie space" (you will want to click on it to see it at full size):

movie_space_plot
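
In code, the coordinates are nothing more than shortest-path distances to the two basis nodes; a sketch, assuming the weighted graph has already been loaded into networkx with movie titles as node labels:

```python
import networkx as nx

def movie_coordinates(graph, basis=("Aliens", "The Unbearable Lightness of Being")):
    """Return {movie: (distance to basis[0], distance to basis[1])} using shortest
    weighted paths; movies unreachable from either basis node are skipped."""
    dists = [nx.single_source_dijkstra_path_length(graph, b, weight="weight")
             for b in basis]
    return {movie: tuple(d[movie] for d in dists)
            for movie in graph.nodes if all(movie in d for d in dists)}
```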

Agreeably enough, this map has a number of features you'd expect from something like this, as well as some interesting (to me) quirks:

  • There is no movie that is close to both basis movies (although if anybody wants to produce The Unbearable Lightness of Chestbursters, I'd love to write that script).
  • The least-The Unbearable... of the similar-to-Aliens movies in this sub-sample is Raiders of the Lost Ark, which makes sense (it's campy, but it's still an adventure movie).
  • Dangerous Liaisons isn't that far from The Unbearable..., but it is as far away as you can get from Aliens.
  • Wayne's World is way out there.

It's fun to imagine using geometrical analogies to put this kind of mapping to practical use. For example, movie night negotiation between two or more people could be approached as finding the movie vector with the lowest Euclidean norm among the available options, where the basis is the set of each person's personal choice or favorite movie, and so on.

Latent mini-clusters of movies

Still playing with logical itemset mining, I downloaded one of the data sets from Group Lens that records movie ratings from MovieLens. The basic idea is the same as with clustering drug side effects: movies that are consistently ranked similarly by users are linked, and clusters in this graph suggest "micro-genres" of homogeneous (from a ratings POV) movies.

Here are a few of the clusters I got, practically with no fine-tuning of parameters:

  • Parts II and III of the Godfather trilogy
  • Ben-Hur and Spartacus
  • The first three Indiana Jones movies
  • Dick Tracy, Batman Forever, and Batman Returns.
  • The Devil's Advocate and The Game.
  • The 60's Lolita, the 1997 remake, and 1998's Return to Paradise.
  • The first two Karate Kid movies.
  • Analyze This and Analyze That.
  • The 60's Lord of the Flies, the 1990 remake, and 1998's Apt Pupil

As movie clusters go, these are not particularly controversial; I found it interesting how originals and sequels or remakes seemed to be co-clustered, at least superficially. And thinking about it, clustering Apt Pupil with both Lord of the Flies movies is reasonable...

Media recommendation is by now a relatively mature field, and no single, untuned algorithm is going to be competitive against what's already deployed. However, given the simplicity and computational manageability of basic clustering and recommendation algorithms, I expect they'll become even more ubiquitous over time (pretty much as how autocomplete in input boxes did).

Finding latent clusters of side effects

One of the interesting things about logical itemset mining, besides its conceptual simplicity, is the scope of potential applications. Beyond the usual applications of finding common sets of purchased goods or descriptive tags, the underlying idea of mixtures-of, projections-of, latent [subsets] is a very powerful one (arguably, the reason why experiment design is so important and difficult is that most observations in the real world involve partial data from more than one simultaneous process or effect).

To play with this idea, I developed a quick-and-dirty implementation of the paper's algorithm, and applied it to the data set of the paper Predicting drug side-effect profiles: a chemical fragment-based approach. The data set includes 1385 different types of side effects potentially caused by 888 different drugs. The logical itemset mining algorithm quickly found the following latent groups of side effects:

  • hyponatremia, hyperkalemia, hypokalemia
  • impotence, decreased libido, gynecomastia
  • nightmares, psychosis, ataxia, hallucinations
  • neck rigidity, amblyopia, neck pain
  • visual field defect, eye pain, photophobia
  • rhinitis, pharyngitis, sinusitis, influenza, bronchitis

The groups seem reasonable enough (although hyperkalemia and hypokalemia being present in the same cluster is somewhat weird to my medically untrained eyes). Note the small size of the clusters and the specificity of the symptoms; most drugs induce fairly generic side effects, but the algorithm filters those out in a parametrically controlled way.
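
For flavor, here's a much-simplified sketch of the general idea, not the paper's actual algorithm (which scores whole itemsets and handles noise more carefully): link pairs of items that co-occur more often than chance would suggest, and read candidate latent groups off the resulting graph.

```python
import math
from collections import Counter
from itertools import combinations

import networkx as nx

def latent_groups(transactions, min_pmi=1.0):
    """transactions: iterable of sets (e.g., the side effects listed per drug).
    Links pairs whose pointwise mutual information exceeds `min_pmi` (a knob
    that filters out the generic, happens-with-everything items) and returns
    the connected components of the resulting graph as candidate latent groups."""
    transactions = [set(t) for t in transactions]
    n = len(transactions)
    singles, pairs = Counter(), Counter()
    for t in transactions:
        singles.update(t)
        pairs.update(combinations(sorted(t), 2))
    g = nx.Graph()
    for (a, b), c in pairs.items():
        pmi = math.log((c / n) / ((singles[a] / n) * (singles[b] / n)))
        if pmi >= min_pmi:
            g.add_edge(a, b, weight=pmi)
    return [sorted(component) for component in nx.connected_components(g)]
```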

Tom Sawyer, Bilingual

Following a friend's suggestion, here's a comparison of phrase length distributions between the English and German versions of The Adventures of Tom Sawyer:

Tom Sawyer Phrase Lengths
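
The counting behind this kind of plot is almost trivial; a rough sketch (the sentence splitting is naive, so abbreviations and ellipses add some noise):

```python
import re
from collections import Counter

def sentence_length_distribution(text):
    """Words per sentence, as relative frequencies."""
    sentences = re.split(r"[.!?]+", text)
    lengths = [len(s.split()) for s in sentences if s.strip()]
    counts = Counter(lengths)
    return {n: counts[n] / len(lengths) for n in sorted(counts)}
```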

It could be interesting to parametrize these distributions and try to characterize languages in terms of some sort of encoding mechanism (e.g., assume phrase semantics are drawn randomly from a language-independent distribution and renderings in specific languages are mappings from that distribution to sequences of words, and handwave about what cost metric the mapping is trying to minimize).

A first look at phrase length distribution

Here's a sentence length vs. frequency distribution graph for Chesterton, Poe, and Swift, plus Time of Punishment.

Phrase length distribution

A few observations:

  • Take everything with a grain of salt. There are features here that might be artifacts of parsing and so on.
  • That said, it's interesting that Poe seems to fancy short interjections more than Chesterton does (not as much as I do, though).
  • Swift seems to have a more heterogeneous style in terms of phrase lengths, compared with Chesterton's more marked preference for relatively shorter phrases.
  • Swift's average sentence length is about 31 words, almost twice Chesterton's 18 (Poe's is 21, and mine is 14.5). I'm not sure how reasonable that looks.
  • Time of Punishment's choppy distribution is just an artifact of the low number of samples.

The Premier League: United vs. City championship chances

Using the same model as in previous posts (and, I'd say, not going against any intuition), the leading candidate to win the Premier League is Manchester United, with an approximately 88% chance. Second is Manchester City, with a bit over 11%. The rest of the teams with nonzero chances: Arsenal, Chelsea, Everton, Liverpool, Tottenham, and West Brom (with Chelsea, the best-positioned of these dark horses, clocking in at about half a percentage point).

Personally, I'm happy about these very low-odds teams; I don't think any of them is likely to win (that's the point), but on the other hand, they have mathematical chances of doing so, and it's important for a model never to give zero probability to non-impossible events (modulo whatever precision you are working with, of course).

Chesterton's magic word squares

Here are the magic word squares for a few of Chesterton's books. Whether and how they reflect characteristics that differentiate them from each other is left as an exercise to the reader.

Orthodoxy

the same way of this
world was to it has
and not think would always
i have been indeed believed
am no one thing which

The Man Who Was Thursday

the man of this agreement
professor was his own you
had the great president are
been marquis started up as
broken is not to be

The Innocence of Father Brown

the other side lay like
priest in that it one
of his is all right
this head not have you
agreement into an been are

The Wisdom of Father Brown

the priest in this time
other was an agreement for
side not be seen him
explained to say you and
father brown he had then

Barcelona and the Liga, or: Quantitative Support for Obvious Predictions

I've adapted the predictive model to look at the Spanish Liga. Unsurprisingly, it's currently giving Barcelona a 96.7% chance of winning the title, with Atlético a distant second at 3.1%, and Real Madrid at less than 0.2% (I believe the model still underestimates small probabilities, although it has improved in this regard).

Note that around the 9th round or so, the model was giving Atlético a slightly higher chance of winning the tournament than Barcelona's, although that window didn't last more than a round.

Magic Squares of (probabilistically chosen) Words

Thinking about magic squares, I had the idea of doing something roughly similar with words, but using usage patterns rather than arithmetic equations. I'm pasting below an example, using statistical data from Poe's texts:

Poe

the same manner as if
most moment in this we
intense and his head were
excitement which i have no
greatly he could not one

The word on the top-left cell in the grid is the most frequently used in Poe's writing, "the" — unsurprisingly so, as it's the most frequently used word in the English language. Now, the word immediately to its right, "same," is there because "same" is one of the words that follows "the" most often in the texts we're looking at. The word below "the" is "most" because it also follows "the" very often. "Moment" is set to the right of "most" and below "same" because it's the word that most frequently follows both.

The same pattern is used to fill the entire 5-by-5 square. If you start at the topmost left square and then move down and/or to the right, although you won't necessarily be constructing syntactically correct phrases, the consecutive word pairs will be frequent ones in Poe's writing.
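
A sketch of the construction; the scoring rule here (summing the bigram counts from both neighbors and skipping words already used) is one reasonable choice among several:

```python
import re
from collections import Counter, defaultdict

def magic_word_square(text, size=5):
    """Build a size x size grid where each word tends to follow both the word
    above it and the word to its left, according to the text's bigram counts."""
    words = re.findall(r"[a-z']+", text.lower())
    unigrams = Counter(words)
    bigrams = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

    def best_follower(above, left, used):
        scores = Counter()
        for prev in (above, left):
            if prev is not None:
                scores.update(bigrams[prev])   # sum the counts from both neighbors
        for candidate, _ in scores.most_common():
            if candidate not in used:
                return candidate
        return "?"                             # nothing unused follows the neighbors

    grid = [[None] * size for _ in range(size)]
    grid[0][0] = unigrams.most_common(1)[0][0]
    used = {grid[0][0]}
    for i in range(size):
        for j in range(size):
            if (i, j) == (0, 0):
                continue
            above = grid[i - 1][j] if i > 0 else None
            left = grid[i][j - 1] if j > 0 else None
            grid[i][j] = best_follower(above, left, used)
            used.add(grid[i][j])
    return grid
```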

Although there are no ravens or barely sublimated necrophilia in the matrix, the texture of the matrix is rather appropriate, if not to Poe, at least to Romanticism. To convince you of that, here are the equivalent 5-by-5 matrices for Swift and Chesterton.

Swift

the world and then he
same in his majesty would
manner a little that it
of certain to have is
their own make no more

Chesterton

the man who had been
other with that no one
and his it said syme
then own is i could
there are only think be

At least compared against each other, it wouldn't be too far-fetched to say that Poe's matrix is more Poe's than Chesterton's, and vice versa!

PS: Because I had a sudden attack of curiosity, here's the 5-by-5 matrix for my newest collection of short stories, Time of Punishment (pdf link).

Time of Punishment

the school whole and even
first dance both then four
charge rants resistance they think
of a hundred found leads
punishment new astronauts month sleep

The Torneo Inicial 2012 in one graph (and 20 subgraphs)

Here's a graph showing how the probability of winning the Argentinean soccer championship changed over time for each team (time goes from left to right, and probability goes from 0 at the bottom to 1 at the top). Click on the graph to enlarge:

Hindsight being 20/20, it's easy to read too much into this, but it's interesting to note that some qualitative features of how journalism narrated the tournament over time are clearly reflected in these graphs: Velez' stable progression, Newell's likelihood peak mid-tournament, Lanús' quite drastic drop near the end, and Boca's relatively strong beginning and disappointing follow-through.

As an aside, I'm still sure that the model I'm using handles low-probability events wrong; e.g., Boca still had mathematical chances almost until the end of the tournament. That's something I'll have to look into when I have some time.

Soccer, Monte Carlo, and Sandwiches

As Argentina's Torneo Inicial begins its last three rounds, let's try to compute the probabilities of championship for each team. Our tools will be Monte Carlo and sandwiches.

The core modeling issue is, of course, trying to estimate the odds of team A defeating team B, given their recent history in the tournament. Because of the tournament format, teams only face each other once per tournament, and, because of the recent instability of teams and performances, generally speaking, results from past tournaments won't be very good guides (this is something that would be interesting to look at in more detail). We'll use the following oversimplifications (let's call them intuitions) to make it possible to compute quantitative probabilities:

  • The probability of a tie between two teams is a constant that doesn't depend on the teams.
  • If team A played and didn't lose against team X, and team X played and didn't lose against team B, this makes it more likely that team A won't lose against team B (a "sandwich" model, so to speak).

Guided by these two observations, we'll take the results of the games in which a team played against both A and B as samples from a Bernoulli process with unknown parameter, and use this to estimate the probability of any previously unobserved game.

Having a way to simulate a given match that hasn't been played yet, we'll calculate the probability of any given team winning the championship by simulating the rest of the championship a million times, and observing in how many of these simulations each team wins the tournament.
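
A sketch of that simulation loop; the flat tie probability and the 3/1/0 point allocation are the assumptions spelled out above, ties for first place are simply ignored, and the estimator for unplayed games is passed in as a function (bump n_sims up for the million-run version):

```python
import random
from collections import Counter

def championship_odds(points, fixtures, p_win, p_tie=0.25, n_sims=100_000):
    """points: {team: points so far}; fixtures: remaining (a, b) games;
    p_win(a, b): estimated probability that a beats b, given that the game
    isn't a tie. Simulates the rest of the tournament and counts outright titles."""
    titles = Counter()
    for _ in range(n_sims):
        table = dict(points)
        for a, b in fixtures:
            r = random.random()
            if r < p_tie:                                  # a tie: one point each
                table[a] += 1
                table[b] += 1
            elif r < p_tie + (1 - p_tie) * p_win(a, b):    # a wins
                table[a] += 3
            else:                                          # b wins
                table[b] += 3
        best = max(table.values())
        leaders = [t for t, pts in table.items() if pts == best]
        if len(leaders) == 1:      # ties for first place are simply ignored here
            titles[leaders[0]] += 1
    return {t: c / n_sims for t, c in titles.most_common()}
```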

The results:

Team Championship probability
Vélez Sarfield 79.9%
Lanús 20.1%

Clearly our model is overly rigid — it doesn't feel at all realistic to say that those two teams are the only ones with any chance of winning the championship. On the other hand, the balance of probabilities between the two teams seems more or less in agreement with the expectations of observers. Given that the model we used is very naive, and only uses information from the current tournament, I'm quite happy with the results.

A Case in Stochastic Flow: Bolton vs Manchester City

A few days ago the Manchester City Football Club released a sample of their advanced data set, an XML file giving a quite detailed description of low-level events in last year's August 21 Bolton vs. Manchester City game, which was won by the away team 3-2. There's an enormous variety of analyses that can be performed with this data, but I wanted to start with one of the basic ones: the ball's stochastic flow field.

The concept underlying this analysis is very simple. Where the ball will be in the next, say, ten seconds, depends on where it is now. It's more likely that it'll be near than it is that it'll be far, it's more likely that it'll be on an area of the field where the team with possession is focusing their attack, and so on. Thus, knowing the probabilities for where the ball will be starting from each point in the field — you can think of it as a dynamic heat map for the future — together with information about where it spent the most time, gives us information about how the game developed, and the teams' tactics and performance.
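
In code, that "dynamic heat map for the future" reduces to transition counts over quantized positions. A sketch, assuming the XML has already been parsed into a time-ordered list of (t, x, y) ball positions (seconds and meters; grid and pitch dimensions are parameters):

```python
from collections import Counter, defaultdict

def flow_field(ball_track, horizon=10.0, grid=(5, 5), pitch=(105.0, 68.0)):
    """ball_track: time-ordered (t, x, y) ball positions.
    Returns how often the ball was seen in each grid cell, and, for each cell,
    the distribution over the cell it occupies `horizon` seconds later."""
    def cell(x, y):
        return (min(int(x / pitch[0] * grid[0]), grid[0] - 1),
                min(int(y / pitch[1] * grid[1]), grid[1] - 1))

    occupancy = Counter()
    transitions = defaultdict(Counter)
    for i, (t, x, y) in enumerate(ball_track):
        here = cell(x, y)
        occupancy[here] += 1
        for t2, x2, y2 in ball_track[i + 1:]:   # where is the ball ~horizon later?
            if t2 - t >= horizon:
                transitions[here][cell(x2, y2)] += 1
                break
    return occupancy, transitions
```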

Sadly, a detailed visualization of this map would require at least a four-dimensional monitor, so I settled for a simplified representation, splitting the soccer field into a 5x5 grid and showing the most likely transitions for the ball from one sector of the field to another. The map is embedded below; do click on it to expand it, as it's not really useful as a thumbnail.

Remember, this map shows where the ball was most likely to go from each area of the field; each circle represents one area, with the circles at the left and right sides representing the areas all the way to the end lines. Bigger circles signal that the ball spent more time in that area, so, e.g., you can see that the ball spent quite a bit of time in the midfield, and very little on the sides of Manchester City's defensive line. The arrows describe the most likely movements of the ball from one area to another; the wider the line, the more likely the movement. You can see how the ball circulated side-to-side quite a bit near Bolton's goal, while Manchester City kept the ball moving further away from their goal.

There are many immediate questions that come to mind, even with such a simplified representation. How does this map look according to which team had possession? How did it change over time? What flow patterns are correlated with good or bad performance on the field? The graph shows the most likely routes for the ball, but which ones were the most effective, that is, more likely to end up in a goal? Because scoring is a rare event in soccer, particularly compared with games like tennis or American football, this kind of analysis is especially challenging, but also potentially very useful. There's probably much that we don't know yet about the sport, and although data is only an adjunct to well-trained expertise, it can be a very powerful one.

Washington DC and the murderer's work ethic

Continuing what has turned out to be a fun hobby of looking at crime data for different cities (probably among the most harmless of crime-related hobbies, as long as you aren't taking important decisions based on naive interpretations of badly understood data), I went to data.dc.gov and downloaded Crime Incident data for the District of Columbia for the year of 2011.

Mapping it was the obvious move, but I already did that for Chicago (and Seattle, although there were issues with the data, so I haven't posted anything yet), so I looked at an even more basic dimension: the time series of different types of crime.

To begin with, here's the week-by-week normalized count of thefts (not including burglaries and thefts from cars) in Washington DC (click to enlarge):

I normalized this series by subtracting its mean and scaling it by its standard deviation — not because the data is normally distributed (it actually shows a thick left tail), but because I wanted to compare it with another data series. After all, the form of the data, partial as it is, suggests seasonality, and as the data covers a year, it wants to be checked against, say, local temperatures.
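
Both the normalization and the later comparison are one-liners; a quick sketch:

```python
def zscore(series):
    """Subtract the mean and divide by the standard deviation, so series on very
    different scales (thefts per week, degrees) can share one plot. Assumes the
    series isn't constant."""
    mean = sum(series) / len(series)
    std = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5
    return [(x - mean) / std for x in series]

def correlation(xs, ys):
    """Pearson correlation between two equally long series."""
    zx, zy = zscore(xs), zscore(ys)
    return sum(a * b for a, b in zip(zx, zy)) / len(zx)
```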

Thankfully NOAA offers just this kind of data (through about half a dozen confusingly overlapping interfaces), so I was able to add to the plot the mean daily temperature for DC (normalized in the same way as the theft count):

The correlation looks pretty good! (0.7 adj. R squared, if you must know.) Not that this proves any sort of direct causal chain (that's what controlled experiments are for), but we can postulate, e.g., a naive story where higher temperatures mean more foot traffic (I've been in DC in winter, and the neoclassical architecture is not a good match for the latitude), and more foot traffic leads to richer pickings for thieves (an interesting economics aside: would this mean that the risk-adjusted return to crime is high enough that crime is, as it were, constrained by the supply of victims?)

Now let's look at murder.

The homicide time series is quite irregular, thanks to a relatively low (for, say, Latin American values of 'low') average homicide count, but it's clear enough that there isn't a seasonal pattern to homicides, and no correlation with temperature (a linear fitting model confirms this, not that it was necessary in this case). This makes sense if we imagine that homicide isn't primarily an outdoors activity, or anyway that your likelihood of being killed doesn't increase as you spend more time on the street (most likely, whoever wants to kill you is motivated by reasons other than, say, an argument over street littering). Murder happens come rain or snow (well, I haven't checked that; is there a specific murder weather?)

Another point of interest is the spike of (weather-normalized) theft near the end of the year. It coincides roughly with Thanksgiving, but if that's the causal link, I'd be interested in knowing exactly what's going on.

How Rooney beats van Persie, or, a first look at Premier League data

I just got one of the data sets from the Manchester City analytics initiative, so of course I started dipping my toe in it. The set gives information aggregated by player and match for the 2011-2012 Premier League, in the form of a number of counters (e.g. time played, goals, headers, blocked shots, etc); it's not the really interesting data set Manchester City is about to release (with, e.g., high-resolution position information for each player), but that doesn't mean there aren't interesting things to be gleaned from it.

The first issue I wanted to look at is probably not the most significant in terms of optimizing the performance of a team, but it's certainly one of the most emotional ones. Attackers: Who's the best? Who's underused? Who sucks?

If you look at total goals scored, the answer is easy: the best attackers are van Persie (30 goals), Rooney (27 goals), and Agüero (23 goals). Controlling for total time played, though, Berbatov and both Cissés have been quite a bit more efficient in goals scored per minute played. They are also, not coincidentally, the most efficient scorers in terms of goals per shot (both on and off target). The 30 goals of van Persie, for example, are more understandable when you see that he shot 141 times for a goal, versus Berbatov's 15.

To see how shooting efficiency and shooting volume (number of shots) interact with each other, I made this scatterplot of goals per shot versus shots per minute, restricted to players who shoot regularly, to avoid low-frequency outliers (click to expand).

You can see that most players are more or less uniformly distributed in the lower-left quadrant of low shooting volume and low shooting efficiency — people who are regular shooters, so they don't try too often or too seldom. But there are outliers, people who shoot a lot, or who shoot really well (or aren't as closely shadowed by defenders)... and they aren't the same. This suggests a question: Who should shoot less and pass more? And who should shoot more often and/or get more passes?

To answer that question (to a very sketchy first degree approximation), I used the data to estimate a lost goals score that indicates how many more goals per minute could be expected if the player made a successful pass to an average player instead of shooting for a goal (I know, the model is naive, there are game (heh) theoretic considerations, etc; bear with me). Looking at the players through this lens, this is a list of players who definitely should try to pass a bit more often: Andy Carroll, Simon Cox, and Shaun Wright-Phillips.
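
A naive score of this kind could be computed along these lines (hypothetical field names; a completed pass is assumed to hand an average-conversion teammate one shot, which is very much part of the hand-waving):

```python
def lost_goals_per_minute(player, league_avg_conversion):
    """player: dict with 'goals', 'shots', 'pass_completion' (0..1), and 'minutes'.
    Positive values suggest the player costs goals by shooting instead of passing
    to an average finisher; negative values say: keep shooting."""
    own_conversion = player["goals"] / player["shots"]
    # Expected goals if, instead of each shot, the player attempted a pass to a
    # teammate who converts at the league-average rate.
    pass_then_shoot = player["pass_completion"] * league_avg_conversion
    return (pass_then_shoot - own_conversion) * player["shots"] / player["minutes"]
```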

Players who should be receiving more passes and making more shots? Why, Berbatov and both Cissés. Even Wayne Rooney, the league's second most prolific shooter, is good enough turning attempts into goals that he should be fed the ball more often, rather than less.

The second-order question, and the interesting one for intra-game analysis, is how teams react to each other. To say that Manchester United should get the ball to Rooney inside strike distance more often, and that opposing teams should try to prevent this, is as close to a triviality as can be asserted. But whether or not a specific change to a tactical scheme to guard Rooney more closely will be a net positive or, by opening other spaces, backfire... that will require more data and a vastly less superficial analysis.

And that's going to be so much fun!

Crime in Argentina

As a follow-up to my post on crime patterns in Chicago, I wanted to do something similar for Argentina. I couldn't find data at the same level of detail, but the people of Junar, who develop and run an Open Data platform, were kind enough to point me to a few data sets of theirs, including one that lists crime reports by type across Argentinean provinces for the year 2007.

The first issue I wanted to look at was the relationship between different types of crime. Of course, properly speaking you need far more data, and a far more sophisticated and domain-specific analysis, to even begin to address the question, but you can at least see what types of crime tend to happen (or to be reported) in the same provinces. Here's a dendrogram showing the relationships between crimes (click to expand it):

As you can see, crimes against property and against the state tend to happen in the same provinces, while more violent crimes (homicide, manslaughter, and kidnapping) are more highly correlated with each other. Drugs, which may or may not surprise you, are more correlated with property crimes than with violent crimes. Sexual crimes are not correlated, at least at the province level, with either cluster of crimes.

This observation suggests that we can plot provinces on the property crimes/sexual crimes space, as they seem to be relatively independent types of crime (at least at the province level). I added the line that marks a best-fit linear relationship between both types of crime (mostly related, we'd expect, through their populations).

A few observations from this graph:

  • The bulk of the provinces (the relatively small ones) are in the lower left corner of the graph, mostly below the linear relationship line. The ones above the line, with a higher rate of sexual crimes than expected from the number of property crimes, are provinces in the North.
  • Salta has, unsurprisingly but distressingly, almost four times the number of sexual crimes expected from the linear relationship. Córdoba, the Buenos Aires province, and, to a lesser degree, Santa Fé, also have higher-than-expected numbers.
  • Despite ranking fourth in terms of absolute number of sexual crimes, the City of Buenos Aires has much fewer than the number of property crimes would imply (or, equivalently, has a much higher number of property crimes than expected).

Needless to say, this is but a first shallow view, using old data with poor resolution, of an immensely complex field. But looking at data, though never the only or last step when trying to understand something, is almost always a necessary one, and it never fails to interest me.

Chicago and the Tree of Crime

After playing with a toy model of surveillance and surveillance evasion, I found the City of Chicago's Data Portal, a fantastic resource with public data including the salaries of city employees, budget data, the location of different service centers, public health data, and quite detailed crime data since 2001, including the relatively precise location of each reported crime. How could I resist playing with it?

To simplify further analysis, let's quantize the map into a 100x100 grid. Here's, then, the overall crime density of Chicago (click to enlarge):

This map shows data for all crime types. One first interesting question is whether different crime types are correlated. E.g., do homicides tend to happen close to drug-related crimes? To look at this, I calculated the correlation between the different types of crimes at the same points of the grid, and from that I built a "tree of crime." Technically called a dendrogram, this kind of plot is akin to a phylogenetic tree, and in fact it's often used to show evolutionary relationships. In this case, the tree shows the closeness or not, in terms of geographical correlation, between types of crimes: the closer two types of crime are in the tree, the more likely they are to happen in the same geographical area (click to enlarge).

A few observations:

  • I didn't clean up the data before analysis, as I was as interested in the encoding details as in the semantics. The fact that two different codes for offenses involving children are closely related in the dendogram is good news in terms of trusting the overall process.
  • The same goes for assault and battery; as expected, they tend to happen in the same places.
  • I didn't expect homicide and gambling to be so closely related. I'm sure there's something interesting (for laypeople like me) going on there.
  • Other sets of closely related crimes that aren't that surprising: sex offenses and stalking, criminal trespass and intimidation, and prostitution-liquor-theft.
  • I expected narcotics and weapons to be closely related, but what's arson doing in there with them? Do street-level drug sellers tend to work in the same areas where arson is profitable?

For law enforcement — as for everything else — data analysis is not a silver bullet, and pretending it is can lead to shooting yourself in the face with it (the mixed metaphor, I hope, is warranted by the topic). But it can serve as a quick and powerful way to pose questions and fight our own preconceptions, and, perhaps especially with highly emotional issues like crime, that can be a very powerful weapon.

Bad guys, White Hat networks, and the Nuclear Switch

Welcome to Graph City (a random, connected, undirected graph), home of the Nuclear Switch (a distinguished node). Each one of Graph City's lawful citizens belongs to one of ten groups, each characterized by its own stochastic movement pattern through the city. What they all have in common is that they never walk into the Nuclear Switch node.

This is because they are lawful, of course, and also because there's a White Hat network of government cameras monitoring some of the nodes in Graph City. They can't read citizens' thoughts (yet), but they know whether a citizen observed at a node is the same citizen that was observed at a different node a while ago, and with this information Graph City's government can build a statistical model of the movement of lawful citizens (as observed through the specific network of cameras).
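
Here's a minimal sketch of this kind of toy setup. It's not the code behind the plots below; it collapses the ten citizen groups into plain random walkers, and the graph size, camera coverage, and alarm rule are all made up:

import random
from collections import defaultdict
import networkx as nx

def random_walk(G, start, steps):
    "A plain random walk over the graph."
    path = [start]
    for _ in range(steps):
        path.append(random.choice(list(G.neighbors(path[-1]))))
    return path

def camera_sightings(path, cameras):
    "The walk as seen by the White Hat Network: only camera nodes are visible."
    return [node for node in path if node in cameras]

def transition_model(paths, cameras):
    "Count observed transitions between consecutive camera sightings."
    counts = defaultdict(int)
    for path in paths:
        seen = camera_sightings(path, cameras)
        for a, b in zip(seen, seen[1:]):
            counts[(a, b)] += 1
    return counts

def raises_alarm(path, model, cameras):
    "Flag a walker whose observed transitions were never seen among lawful citizens."
    seen = camera_sightings(path, cameras)
    return any(model[(a, b)] == 0 for a, b in zip(seen, seen[1:]))

G = nx.connected_watts_strogatz_graph(200, 4, 0.1, seed=1)
nodes = list(G.nodes())
switch = nodes[-1]                           # the Nuclear Switch
cameras = set(random.sample(nodes, 60))      # coverage = 0.3

# Lawful citizens never enter the switch node; train the model on their walks.
lawful_walks = []
for _ in range(2000):
    walk = [n for n in random_walk(G, random.choice(nodes), 50) if n != switch]
    lawful_walks.append(walk)
model = transition_model(lawful_walks, cameras)

# An untrained bad guy is just an unrestricted random walker.
intruder = random_walk(G, random.choice(nodes), 50)
print("alarm raised:", raises_alarm(intruder, model, cameras))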

This is what happens when random-walking, untrained bad guys (you know they are bad guys because they are capable of entering the Nuclear Switch node) start roaming the city (click to expand):

Attempts by untrained bad guys

Between twenty and fifty percent of the intrusion attempts succeed, depending on the total coverage of the White Hat Network (a coverage of 1.0 meaning that every node in the city has a camera linked to the system). This wouldn't be acceptable performance in any real-life application, but this being a toy model with unrealistically small and simplified parameters, absolute performance numbers are rather meaningless.

Let's switch sides for a moment now, and advise the bad guys (after all, one person's Nuclear Switch is another's High-Value Target, Market Influential, etc). An interesting first approach for the bad guys would be to build a Black Hat Network, create their own model of lawful citizens' movements, and then use it to systematically look for routes to the Nuclear Switch that won't trigger an alarm. The idea being that anyone who looks innocent to the Black Hat Network's statistical model will also pass unnoticed under the White Hat's.

This is what happens when bad guys trained using Black Hat Networks of different sizes are sent after the Nuclear Switch:

Attempts by bad guys trained on the BHN

Ouch. Some of the bad guys get to the Nuclear Switch on every try, but most of them are captured. A good metaphor for what's going on here could be that the White Hat Network's and the Black Hat Network's filters are projections on orthogonal planes of a very high dimensional set of features. The set of possible behaviors for good and bad guys is very complex, so, unless your training set is comprehensive (something generally unfeasible), not only can you have a filter that works very well on your training data and very poorly on new observations (the bane of every overenthusiastic data analyst with a computer), but you can also train two filters to detect the same subset of observations using the same training set, and have them be practically uncorrelated when it comes to new observations.

In our case, this is good news for Graph City's defenders, as even a huge Black Hat Network, and very well trained bad guys, are still vulnerable to the White Hat Network's statistical filter. It goes without saying, of course, that if the bad guys get even read-only access to the White Hat Network, Graph City is doomed.

Attempts by bad guys trained on the WHN

At one level, this is a trivial observation: if you have a good enough simulation of the target system, you can throw brute force at the simulation until you crack it, and then apply the solution to the real system with near total impunity (a caveat, though: in the real world, "good enough" simulations seldom are).

But, and this is something defenders tend to forget, bad guys don't need to hack into the White Hat Network. They can use Graph City as a model of itself (that's what the code I used above does), send dummy attackers, observe where they are captured, and keep refining their strategy. This is something already known to security analysts. Cf., e.g., Bruce Schneier — mass profiling doesn't work against a rational adversary, because it's too easy to adapt against. A White Hat Network could be (for the sake of argument) hack-proof, but it will still leak all critical information simply through the pattern of alarms it raises. Security Through Alarms is hard!

As an aside, "Graph City" and the "Nuclear Switch" are merely narratively convenient labels. Consider graphs of financial transactions, drug traffic paths, information leakage channels, etc, and consider how many of our current enforcement strategies (or even laws) are predicated on the effectiveness of passive interdiction filters against rational coordinated adversaries...

A flow control structure that never makes mistakes (sorta)

I've been experimenting with Lisp-style ad-hoc flow control structures. Nothing terribly useful, but nonetheless amusing. E.g., here's a dobest() function that always does the best thing (and only the best thing) among the alternatives given to it — think of the mutant in Philip K. Dick's The Golden Man, or Nicolas Cage in the awful "adaptation" Next.

Here's how you use it:

if __name__ == '__main__':
 
    def measure_x():
        "Metric function: the value of x"
        global x
        return x
 
    def increment_x():
        "A good strategy: increment x"
        global x
        x += 1
 
    def decrement_x():
        "A bad strategy: decrement x"
        global x
        x -= 1
 
    def fail():
        "An even worse strategy"
        global x
        x = x / 0
 
    x = 1
    # assert(x == 1)
    dobest(measure_x, increment_x, decrement_x, fail)
    # assert(x == 2)

You give it a metric, a function that returns how good you think the current world is, and one or more functions that operate on the environment. Perhaps disappointingly, dobest() doesn't actually see the future; rather, it executes each function on a copy of the current environment, and only transfers back to the "real" environment the copy with the highest value of metric().

Here's the ugly detail (do point out errors, but please don't mock too much; I haven't played much with Python scopes):

import copy
import inspect
import sys

def dobest(metric, *branches):
    "Apply every function in *branches to a copy of the caller's environment; only do 'for real' the best one according to the result of running metric()."

    # Snapshot the caller's namespace
    world = copy.copy(dict(inspect.getargvalues(sys._getframe(1)).locals))
    alts = []

    for branchfunction in branches:
        try:
            # Run branchfunction in a copy of the world
            ns = copy.copy(world)
            exec(branchfunction.__code__, ns, {})
            alts.append(ns)
        except Exception:  # We ignore worlds where things explode
            pass

    if not alts:  # every branch exploded; leave the world untouched
        return

    # Sort worlds according to metric(), best first
    alts.sort(key=lambda ns: eval(metric.__code__, ns, {}), reverse=True)

    # Copy the winning world back into the caller's environment
    for key in alts[0]:
        sys._getframe(1).f_globals[key] = alts[0][key]

One usability point is that the functions you give to dobest() have to explicitly access variables in the environment as global; I'm sure there are cleaner ways to do it.

Note that this also can work a bit like a try-except with undo, a la

dobest(bool, function_that_does_something, function_that_reports_an_error)

This would work like try-except, because dobest ignores functions that raise exceptions, but with the added benefit that dobest would clean up everything done by function_that_does_something.

Of course, and here's the catch, "everything" is kind of limited — I haven't precisely gone out of my way to track and catch all side effects, not that it'd even be possible without some VM or even OS support. Point is, the more I get my ass saved by git, the more I miss it in my code, or even when doing interactive data analysis with R. As the Doctor would say, working on just one timeline can be so... linear.

A mostly blank slate

The combination of a tablet and a good Javascript framework makes it very easy to deploy algorithms to places where so far they have been scarce, like meetings, notetaking, and so on. The problem lies in figuring out what those algorithms should be; just as we had to have PCs for a few years before we started coming up with things to do with them (not that we have even scratched the surface), we still don't have much of a clue about how to use handheld thinking machines outside "traditional thinking machine fields."

Think about it this way: computers have surpassed humans in the (Western Civilization's) proverbial game of strategy and conflict, chess, and are useful enough in games of chance that casinos and tournament organizers are prone to use anything from violence to lawyers to keep you from using them. So the fact that we aren't using a computer when negotiating or coordinating says something about us.

The bottleneck, Cassius would say nowadays, is not in our tech, but in our imaginations.

The perfectly rational conspiracy theorist

Conspiracy theorists don't have a rationality problem; they have a priors problem, which is a different beast. Consider a rational person who believes in the existence of a powerful conspiracy, and who then reads an arbitrary online article; we'll denote by C the propositions describing the conspiracy, and by a the propositions describing the article's content. By Bayes' theorem,

P(C|a) = \frac{P(a|C) P(C)}{P(a)}

Now, the key here is that the conspiracy is supposed to be powerful. A powerful enough conspiracy can make anything happen or look like it happened, and therefore it'll generally be the case that P(a|C) \geq P(a) (and usually P(a|C) > P(a) for a low-probability a, of which there are many these days, as Stanislaw Lem predicted in The Chain of Chance). But that means that in general P(C|a) \geq P(C), and often P(C|a) > P(C)! In other words, the rational evaluation of new evidence will seldom disprove a conspiracy theory, and will often reinforce its likelihood, and this isn't a rationality problem — even a perfect Bayesian reasoner will be trapped once you get C into its priors (this is a well-known phenomenon in Bayesian inference; I like to think of these as black hole priors).
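
A quick numerical illustration of the trap (the probabilities here are made up; the only thing that matters is the direction of the update):

# Made-up numbers: each article is slightly better explained by the conspiracy
# than by the no-conspiracy baseline, so every rational update pushes P(C) up.
p_c = 0.2                  # prior probability of the conspiracy C
p_a_given_c = 0.05         # "a powerful conspiracy can make anything happen"
p_a_given_not_c = 0.03     # the same article, without the conspiracy

for article in range(20):
    p_a = p_a_given_c * p_c + p_a_given_not_c * (1 - p_c)
    p_c = p_a_given_c * p_c / p_a      # Bayes' theorem
    print(round(p_c, 3))               # creeps monotonically towards 1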

Keep an eye open, then, for those black holes. If you have a prior that no amount of evidence can weaken, then that's probably cause for concern, which is but another form of saying that you need to demand falsifiability in empirical statements. From non-refutable priors you can do mathematics or theology (both of which segue into poetry when you are doing them right), but not much else.

Fractals are unfair

Let's say you want to identify an arbitrary point in a segment I (chosen with a uniform probability distribution). A more or less canonical way to do this is to split the segment into two equal halves, and write down a bit identifying which half; now the size of the set where the point is hidden is one half of what it was. Because the half-segment we chose is affinely equivalent to the original one, we can repeat this as much as we want, gaining one bit of precision (halving the size of the "it's somewhere around here" set) for each bit of description. Seems fair.

It's easy to do the same on a square I in R^2. Split the square in four squares, write down two bits to identify the one where the point is, repeat at will. Because each square has one fourth the size of the enclosing one, you gain two bits of precision for each two bits of description. Still fair (and we cannot do better).

Now try to do this on a fractal, say the Koch curve, and things get as weird as you'd expect. You can always split it into four affinely equivalent pieces, but each of them is one-third the size of the original one, which means that you gain less than two bits of precision for each two bits of description. Now this is unfair. A fractal street would be a very good way of putting an infinite number of houses inside a finite downtown, but (even approximate) driving directions will be longer than you'd think they should be.
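
Spelling out the arithmetic behind that claim:

from math import log2

# Two bits of description always pick one of four pieces; the precision
# gained is log2(1 / piece size).
square_gain = log2(4)    # each quarter of the square is 1/4 the size: 2.0 bits
koch_gain = log2(3)      # each Koch piece is 1/3 the size: ~1.585 bits
print(square_gain, koch_gain)

# The ratio of description bits to precision bits, log 4 / log 3 ~ 1.26,
# is the Koch curve's fractal dimension.
print(log2(4) / log2(3))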

Of course, this is merely a paraphrasing of the usual definition of a fractal, which is an object whose fractal dimension exceeds its topological dimension (very hand-wavingly speaking, the number of bits of description in each step is higher than the number of bits of precision that you gain). But I do enjoy looking at things through an information theory lens.

Besides, there's a (still handwavy but clear) connection here with chaos, through the idea of trying to pin down the future of a system in phase space by pinning down its present. In conservative systems this is fair: one bit of precision about the present gives you one bit of precision about the future (after all, volumes are preserved). But when chaos is involved this is no longer the case! For any fixed horizon, you need to put in more bits of information about the present in order to get the same number of bits about the future.

Men's bathrooms are (quantum-like) universal computers

As is well documented, men choose urinals to maximize their distance from already occupied urinals. Letting u_i be 1 or 0 depending on whether urinal i is occupied, and setting \sigma_{i,j} to the distance between urinals i and j, male urinal-choosing behavior can be seen as an annealing heuristic maximizing

\sum u_i u_j \sigma_{i,j}

(summing over repeated indexes as per Einstein's convention). And this is obviously equivalent to the computational model implemented by D-Wave's quantum computers! Hardware implementations of urinal-based computers might be less compact than quantum ones (and you might need bathrooms embedded in non-Euclidean spaces in order to implement certain programs), but they are likely to be no more error-prone, and they are certain to scale better.
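
For whatever it's worth, here's a toy brute-force version of that objective. The layout, spacing, and occupancy count are all made up, and a real annealer would of course not enumerate:

from itertools import combinations

# Seven urinals along a wall with unit spacing; sigma_ij = |i - j|.
n, k = 7, 3
sigma = lambda i, j: abs(i - j)

def objective(occupied):
    "The quadratic form above, summed over occupied pairs."
    return sum(sigma(i, j) for i, j in combinations(occupied, 2))

# Brute-force the best occupancy pattern for k occupants (ties abound:
# the sum-of-distances objective is indifferent between several layouts).
best = max(combinations(range(n), k), key=objective)
print(best, objective(best))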

A quick look at Elance statistics

I collected data from Elance feeds in order to find out what employers are looking for on the site. It's not pretty: by far the most requested skills, in terms of aggregated USD demand, are article writing (generally "SEO optimized"), content, logos, blog posting, etc. In other words, mostly AdSense baiting with some smattering of design. It's not everything requested on Elance, of course, but it's a big part of the pie.

Not unexpected, but disappointing. Paying low wages to people in order to fool algorithms to get other people to pay a bit more might be a symbolically representative business model in an increasingly algorithm-routed and economically unequal world, but it feels like a colossal misuse of brains and bits.

First post: what I'm interested in these days

Here's a quick list of what I'm mildly obsessed about right now:

  • Data analysis, large-scale inference, augmented cognition — labels differ, but the underlying mathematics is often pretty much the same.
  • Smart, distributed, large-scale software/systems/markets/organizations — basically, the systematic application of inference-driven technologies.
  • The big challenges and the big opportunities: dealing with climate change, improving global information, policy, and financial systems, global health and education, application of (simultaneously) bio-neuro-cogno-cyber tech, and the thorny question of 19th century-minded politicians using 20th century governance systems to deal with 21st century problems.