# 2019 Should be the end of Big Attention

If your business or cultural strategy was based on attention at scale, the first years of the century felt great, as the leading Internet platforms offered the possibility of nearly limitless, precisely targeted traffic. Then came the troubles.

The first one is economic. If you depend on traffic from any of the dominant platforms, you're less an independent, disruptive entrepreneur than a sharecropper working the lands of one of a handful of oligopolistic landlords. As the sole large-scale suppliers of attention/traffic, big social networks are free to set rules that maximize their control and revenue with no regard for the long-term viability of individual tenants or their communities – just ask the news industry. It's not a coincidence that they are among the biggest and most profitable companies in the world.

But if social networks are in some senses pseudo-feudal lords, they lack even the (extremely) dubious commitment of their historical counterparts to providing at least some semblance of security in their territories. Online platforms weren't built around a concept of community any closer to the real thing than Apple stores are to real town squares. They are built to generate, amplify, and sustain "engagement" at the lowest possible cost. A staggeringly brilliant movie review and a vicious trolling attack are, from the point of view of their systems, core processes, and metrics, indistinguishable.

This led to the second problem, the social one. Large-scale Internet platforms didn't cause the rising levels of virulent authoritarianism, racist nationalism, deliberate obscurantism, and misogyny, but they did much to facilitate their growth, in no small measure by building amplifiers without filters and then expressing surprise at being held responsible for what came out. Much of their scalability was, in fact, due to unpriced negative externalities: building a social network that can't be used, even under adversarial conditions, to amplify fraud, abuse, and hate is inherently hard, expensive, and resistant to scaling, and companies like Facebook and Twitter just kicked that can down the road for as long as they could, and then some. The ongoing Tumblr meltdown, with its over-simplistic technological fix to a predictable platform hijacking problem failing in an even more predictable way, is but the fast-forwarded, somewhat farcical version of what every other social network is also going through.

This worldwide political development is doubtlessly the most critical problem. Unless we push back racist nationalism, misogyny, and deliberate obscurantism, few or none of our other problems will be solvable (including possibly terminal ecological ones).

Part of the solution, though, will probably come from recognizing that our cultural and economic fixation with gaining the largest possible audience, and the platforms this fixation has made so ubiquitous and profitable, is no longer tenable. Simply put, if we can't have viral success without social networks, then we better not depend on having viral success, because social networks are less scalability tools than huge vulnerability surfaces, and neither communities, institutions, nor societies can afford them any more.

This is a difficult thing to give up. The temptation and occasional reality of instantaneous attention at scale, and the riches that can come from it, are intoxicating. But the brief window in which this was feasible in a socially sustainable way, if it ever existed, is now gone. We need to go back to, rebuild, and strengthen the old Internet of millions of nodes at multiple scales, from the homepage to, yes, the huge aggregator, no longer ceding control of the rules under which we interact with each other to organizations unwilling or unable to exercise it responsibly.

# AI for Science and Management in 2019: You haven't seen disruption yet

For all of its explosive growth, AI has yet to make real inroads in management (as opposed to operations) and in theoretical science beyond data analysis. But both frontiers will be pushed back faster and further than most people think, and with unprecedented impact.

The difficulty so far lies in the balance between the complexity of the systems we want to control and the limits to experimentation. Computers are superb at learning how to do even extremely complex things, like Go or chess, as long as they are given freedom to experiment. Even complex physical tasks like driving a car are beginning to be manageable through a combination of massive data sets, good simulators, and very large amounts of money spent having cars drive themselves around.

Companies have much smaller margins of error: you can't test thousands of business strategies until a neural network learns what works for you. Scientific research, for example in medicine, faces a related set of issues. Although it's a profoundly experimental science, our tools are so inelegant compared with the (often underestimated in the popular press) complexity of the human body, that we have comparatively little information about what's going on, and relatively blunt tools with which to try to steer it.

The next breakthrough, or rather what's already moving through the "early adopter" phase, lies precisely in algorithms focused on learning how to do things to systems with a minimum of experimentation. The combination of causal models (a type of probabilistic model with some simple but powerful mathematical extensions) and increasingly flexible probabilistic programming systems is moving us, as described in Judea Pearl's technical report, from tools that let us infer what's going on based on what we see, to tools that let us ask what will happen if we do something to the system, or what would have happened if we had, while integrating in an efficient way both the available data and the conceptual knowledge of human experts.
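
The difference Pearl describes can be illustrated with a toy structural causal model. Everything below is synthetic and illustrative (the variable names and coefficients are invented, not taken from any real system): a naive regression on observational data conflates correlation with causation, while simulating an intervention (Pearl's do-operator) recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(do_spend=None):
    # Hypothetical model: "season" drives both ad spend and sales (a confounder).
    season = rng.normal(0.0, 1.0, n)
    spend = 2.0 * season + rng.normal(0.0, 1.0, n)
    if do_spend is not None:
        # do(spend = x): sever the arrow from season into spend.
        spend = np.full(n, do_spend)
    sales = 1.0 * spend + 3.0 * season + rng.normal(0.0, 1.0, n)
    return spend, sales

# Observational data: a naive regression overstates the effect of spend,
# because season drives both spend and sales...
spend, sales = simulate()
naive_slope = np.cov(spend, sales)[0, 1] / spend.var()

# ...while simulated interventions recover the true effect (1.0 by construction).
_, sales_hi = simulate(do_spend=1.0)
_, sales_lo = simulate(do_spend=0.0)
causal_effect = sales_hi.mean() - sales_lo.mean()
```

The point of the sketch is the asymmetry: both quantities come from the same model, but only the interventional one answers "what will happen if we do something to the system."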

It all sounds significantly more abstract, and much less exciting, than self-driving cars, but its impact will be seismic. These days managers and scientists make decisions using superhuman amounts of information; nobody in a competitive organization is expected, or allowed, to operate using only the information they can hold in their head. In a comparatively short time, the only competitive organizations will be those that make decisions using data-driven causal models that let them simulate the results, and know the uncertainties, of different actions. Despite their apparent simplicity compared with the ambiguous richness of our intuitions, properly built causal models prove more effective, dynamic, and, perhaps above all, scalable than anything a human could master on their own, much less run in their head.

High-level jobs will change, not just in how they are performed but also, to an important degree, in their very nature. Even more significantly, organizations that deploy these technologies will be consistently and qualitatively better at making decisions at all levels, in a cumulative, constantly improving way. Expect the leading business and research organizations to move faster, more effectively, and in stranger ways than ever before, with the gap between winners and losers becoming larger and harder to overcome regardless of money or geography.

That's probably the only constant in the history of the still-ongoing IT revolution: it's not the ability to create new technology or the money to pay for it that defines competitive advantage — after all, most (near-)cutting-edge AI, except in specific industry verticals, is freely available — but the willingness, or rather the eagerness, with which an organization chooses to adopt it.

# Digital anamnesis and other crimes [short story]

A database is a tool for forgetting; it decays on command, faster than nightmares and more thoroughly than graves.

That's why you copied your employers' databases the day the first Chinese aid workers flew in. All talk was of reconstructing India, the unburied bodies blamed on nothing worse than thirst and hunger, the burned villages on the brutish heat of the young century's worst drought. Unrelentingly generous as they had been with their carbon, Western countries had turned misers with their own heat-parched grain. Never mind: in the proud tale of every news site on this side of the country's firewalls, the disappearing monsoon had killed the crops but raised the people's courage and selfless unity.

When your program dumped its indictments online, it was the first time so many individuals had been accused of so many atrocities in so much detail. The first selfie taken by a crazed mob, each face tagged with a name and a history of casual hate turned bloody by fear. The press, built for an age where the resolution of guilt was limited by human patience and the human eye, did not know what to do with it, so they did very little.

You haven't had online access since then, and are unlikely to ever have it again. They've told you what you did was one of the top stories for a week, and then disappeared without impact at the beginning of the season's Amazon dust storms. You hope they're lying, that it's just another facet of the casual, unrecorded torture.

# Short story: "On Agile Management as a Mechanism of Social Control"

No, for God's sake, it's not the Turing Police. They don't look for superhuman AIs, much less the sort that hires teams for convoluted transcension heists. I know that's what you wish we were working on, but they don't think those are anywhere close to being feasible, and they would know.

What they try to anticipate is technology that would put somebody out of the reach of governments without having to pay for anybody's campaign. Terrorists and corporations, in theory, but, come on, as if Buzzfeed didn't have better tech people than ISIS. It's corporations they are worried about. From what I've heard, no, don't ask me where, it's mostly military technologies like whatever might be the nanotech version of a nuke, real versions of the memetics snake oil Cambridge Analytica was selling, encryption-killing number theory breakthroughs, that sort of thing. Supervillain stuff.

Don't worry, they don't go around killing every half-competent programmer. They don't even have to hack into networks to sabotage them, so IT sec doesn't help. They just blackmail key people into floundering about, wasting time. Have you ever met a tech person you didn't suspect of hobbies they'd rather not talk about?

Listen. Point is, I'm not saying I'm sure our PM is deliberately sabotaging our project. I'm just saying I'm not finding that old declassified OSS sabotage manual as funny as I used to. The entire industry can't be this bad at deadlines, can we? And anyway, I think we should just roll with it. The money will always be there, this bubble or the next one, and I know I said they don't kill people very often, but maybe that's because they rarely have to, not because they won't.

And please, really, don't call them the Turing Police. That's one of the keywords they monitor.

(Automated transcript. Audio source: passive Smart TV listening network. Flagged for human review.)

# Mapping the shifting constellations of online debate

Online conversations, especially around contentious topics, are complex and dynamic. Mapping them is not just a matter of gathering enough data and applying sophisticated algorithms. It's critical to adjust the map to the questions you want to answer; like models in general, no map is true, but some are more useful, in some contexts, than others.

This post shows a quick example of a type of map I've found useful in practice. It shows the key semantics of tweets around the Irish abortion referendum of last May, based on data collected by Justin Littman. This is what the main topics of conversation looked like on May 3 — the size of each "star" in this constellation representing the relative weight of mentions of each key term:

Points close to each other represent semantically similar terms (e.g., Dublin, Ireland, and irish), and the size of each "star" is proportional to the relative weight of each term in the set of key terms.

Generating a map like this takes multiple steps, and each step requires choices that can make the map relevant or not to a specific use case. In this case, I wanted to know how what people talked about in this debate shifted through time, and represent it graphically. This final goal shaped the process:

• I applied a key terms extraction algorithm to large sets of tweets from May 3, treating the entire discussion as a text; this is inappropriate, of course, if you want to see how different groups talk, or to compare and contrast hashtags, but it's compatible with my specific goal of drawing a map of the general debate's main semantic contents.
• I then used the well-known GloVe encodings to represent each key term in a high-dimensional vectorial space — for a map's geometry to make sense, geometric relationships between vectors have to be meaningful, and this is one of the key benefits of using this kind of representation.
• After that, I performed an isometric embedding of those key vectors into a two-dimensional plane, a disastrous step if we were to pass the data to another algorithm, but one that allows the drawing of a human-readable map where, if nothing else, nearness between words represents semantic nearness.
• Using those projected vectors as a reference grid, we can finally plot the "constellation" representing how much each key concept was mentioned during that day, in a way that's hopefully more semantically meaningful than a word cloud or list of mentions.

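A minimal sketch of the geometric part of this pipeline, with random vectors standing in for the GloVe rows and made-up terms and weights (the real post used extracted key terms and pretrained embeddings); the two-dimensional projection here is classical multidimensional scaling, one common way to implement the isometric embedding step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the real pipeline (all data here is synthetic): in the post,
# the terms came from a key-term extraction step, the weights from mention
# counts, and the vectors from a pretrained GloVe embedding.
key_terms = ["dublin", "ireland", "irish", "vote", "tomorrow", "home"]
weights = rng.uniform(1.0, 10.0, len(key_terms))   # relative term weights
vectors = rng.normal(size=(len(key_terms), 50))    # stand-in for GloVe rows

def classical_mds(X, k=2):
    """Project rows of X to k dimensions, preserving pairwise distances
    as well as possible (the 'isometric embedding' step of the post)."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # squared distances
    n = d2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                    # centering matrix
    B = -0.5 * J @ d2 @ J                                  # double centering
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:k]                       # largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

coords = classical_mds(vectors)   # one (x, y) point per key term
# A plot would then draw key_terms[i] at coords[i], sized by weights[i].
```

When the vectors are intrinsically low-dimensional, classical MDS recovers their configuration (up to rotation) exactly; for real embeddings it gives the best distance-preserving flat projection it can.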
Every one of these choices, in a very real sense, is about what information to discard, and what possible insights to lose. We retained the information relevant to what we wanted to do — a two-dimensional representation of the main concepts, so we could later see how they changed — but this same pipeline would be worse than useless, for example, for the early detection of new topics.

Even choosing to encode individual words is a potentially questionable choice. We could, for example, use something like Facebook's InferSent or Google's Universal Sentence Encoder to encode the full text of each tweet, and then use standard dimensionality reduction algorithms like PCA or a narrow autoencoder to turn those points into a map. That's potentially very useful for some operations, but it turns out to be less than effective at creating a global map of the debate.
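
For contrast, the sentence-vector route can be sketched with PCA, the simplest of the dimensionality reduction options just mentioned; the vectors below are random stand-ins for per-tweet sentence-encoder outputs, purely to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random stand-ins for per-tweet sentence-encoder vectors (one row per tweet).
tweet_vectors = rng.normal(size=(200, 512))

def pca_2d(X):
    """Project rows of X onto their two main directions of variance."""
    Xc = X - X.mean(axis=0)                       # center the cloud
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                          # top-2 principal components

points = pca_2d(tweet_vectors)                    # one (x, y) point per tweet
```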

This is what the tweets above look like through the eyes of an autoencoder projection to two dimensions of each tweet, encoded using Google's deep learning sentence encoder:

As we can see, the structure this process picks up is almost entirely one-dimensional — and, poking at the individual tweets, with no obvious semantic pattern. Complex models (and sentence encoding models are quite complex, if easy to use once trained) capture a lot of patterns. Blindly applied — asking "what do you think of this?" instead of a specific question — they aren't likely to return anything of immediate use. (This isn't to say that having a human ask the questions is always the right choice; you want a robot in charge of a system to have access to the full picture, not one filtered and simplified enough for a human to be able to follow. Humans viewing a chart and strategic software reacting to a situation have vastly different capabilities, and the last thing you want to do is code for the lowest common denominator — but that's a different discussion.)

Back to our less powerful process, but one better tailored to the human-readable map we want to make, this is how the semantic constellation looked on May 24, one day before the referendum, with the previous size of each key term/star overlapped as a thin red circle for reference:

The weight of most key terms is relatively unchanged, but we see how terms like Dublin, tomorrow, and home gained relative weight (while others like baby, great, and even 8th lost it); this is compatible with both the timing of these tweets (the day before the vote), and the visibility of multiple initiatives to "bring home" people specifically for the referendum.

The point isn't that this particular combination of processing (keyword extraction), geometrical semantics (isometric embedding of a larger GloVe embedding), and change representation (overlapped circles) should be used in every or even most cases. The graph above is one possible answer to one specific question ("how did the things talked about the most in the Twitter debate over the referendum change between those two dates?"), and we have seen how every stage of the processing had to be tailored toward answering it. Different questions — differences between groups of users, fast-emerging topics, links between sentiment and topic — will demand different approaches.

We aren't yet at a point where we can answer everything we want to know about an online conversation using a single modeling approach. Not because of a lack of data or processing power, but perhaps because we have yet to work out the right concepts to think about the problem — as if we had the astronomical observations but had not yet derived the physics. If so, we can look forward to a radical change in marketing, politics, sociological analysis, and every other area dealing with large-scale discourse once we do, and the work we do now, ad hoc as it is, is bit by bit getting us closer to that.

# The politics of crime-fighting software

Call it machine learning, Artificial Intelligence, or simply computational intelligence: countries are rushing to apply new technologies to combat crime, but how they do so — and even what counts as crime — varies among them, and says much about their societies, priorities, and futures.

On one extreme of the arc of possibilities there's China. As the ruler of a single-party state, the Communist Party's overriding focus is to prevent and, if need be, manage popular dissatisfaction; to keep the "mandate of Heaven", in the traditional terms. China boasts a tradition of internal surveillance and bureaucratic record-keeping that's arguably much longer and more intense than that of Western cultures; despite the relatively small size of the Imperial bureaucracy compared with its territory and population, the ambition (and partial success) of its data-intensive political tradition was rarely matched by contemporary states.

This tradition made the country a natural fit for a style of surveillance that mixes ubiquitous visible monitoring with an almost equally ubiquitous algorithmic approach to modulating behavior, one that influences everything from jaywalking to political criticism. It leverages a combination of tens (soon to be hundreds) of millions of cameras linked to facial recognition algorithms and central databases, social media monitoring, and detailed feedback mechanisms of reward and punishment that go beyond a crime/non-crime binary — witness the quantitatively graduated "social credit" schemes, as well as the immediate shaming mechanism of putting your name and face on large screens as soon as you commit a minor infraction. The Chinese government is deploying scalable computational intelligence in a way that mirrors its political traditions and contemporary goals; "crime", in this framework, is any potentially disruptive behavior, and therefore everything is not just up for surveillance, but also a legitimate locus of control.

A much different political tradition is that of the United States. In some senses it's one of extreme distrust of government capabilities; a paradigmatic case is the way the government's own gun ownership records are forbidden, by law, from being stored in an easily searchable electronic database — a fairly unique and symptomatic case of willful stupidity or, if you will, tactical ignorance. There are similar phenomena in the strong, and occasionally successful, support for reduced monitoring of global climate patterns, vulnerabilities in voting machines, or some forms of crime; clearly, the intensity and pattern of deployment of computational intelligence for law enforcement in the United States is at least partly modulated by a desire to keep some state capabilities strictly limited — legislation by software deficiency.

The exception is whatever can be construed as an issue of "national security", an extremely pliable notion at best. American political traditions give the US government carte blanche on matters of counter-terrorism, the military, etc. (including things like immigration and internal surveillance when construed as national security matters); this has led to a somewhat schizophrenic situation in which large and sophisticated internal data acquisition capabilities are used to pursue low-frequency terrorism events, while the more bureaucratic processes of regular law enforcement work on much more primitive principles.

It's not, it must be noted, a matter of rejecting automation as such. Speed cameras — the first "robot cops" in human history — were quickly and enthusiastically adopted by US police forces, sometimes becoming an important component of their budgets. On the other hand, databases recording civil asset forfeitures are notoriously primitive and fragile, even in police forces with budgets and equipment matching those of some national militaries.

Tactical heterogeneity, then — not only in China and the United States, but across both the developed and the developing worlds — is the defining pattern of deployment of computational intelligence in law enforcement; being a new technology with somewhat nebulous possibilities, its usage reflects cultural expectations and para-legal strategies as much as it does technical concerns. Governments in the developing world are generally more limited by budgetary matters — usually deploying computational intelligence either against politically unprotected taxpaying sectors or in matters of internal political security, rather than in more mundane law enforcement activities — but this also reflects long-standing (and politically self-sustaining) patterns of investment and sub-investment as much as it does stringent budgetary constraints.

This is nowhere clearer than in the most speculative and science-fictional aspects of computational law enforcement. In the US context this is often framed as Minority Report-style prediction of future crimes — the idea that enough data capture and analytical power will make it possible to predict, and therefore interdict, high-profile crimes without, paradoxically, having to deal with the contextual conditions shaping their frequency and scale. This is a concept with roots in intelligence analysis — made infinitely more salient by the 9/11 attacks — and it contrasts with the dual imagery of ubiquitous behavior control — reinforcement learning, rather than predictive algorithms — in Chinese approaches to computational law enforcement.

A third paradigm, one tied not to specific governments but to global civil society, is that of data journalism. In this model, leaked databases or open data repositories are mined by assemblages of journalists, domain experts, and data scientists, with the goal of finding, clarifying, and exposing malfeasance. It's not, needless to say, a replacement for law enforcement — it's a variant of journalism rather than of policing — but it illustrates how computational intelligence has a potentially ubiquitous role in law enforcement activities, even when hampered by the non-governmental nature of the actors deploying it.

William Gibson once noted, in a pithy observation likely to echo down the decades as ever more appropriate, that the future was already here, just not evenly distributed. This applies very well to the use of computational intelligence for law enforcement in particular, and to state capabilities in general: few countries in the world are so poor that they use it nowhere, and none, no matter how rich, uses it everywhere. The intensity and pattern of its deployment mirrors and amplifies local patterns of investment (and strategic sub-investment) in cognitive state capabilities. How could they not?

We're more likely to see the direction of influence change over the very long term; technologies of information processing do change cultures, including concepts of what states are and can and should do, but not right away. We have yet to learn what computers will do to the law, its enforcement, and its breaking.

(Based on an interview with Radio DelSol.)

# Using machine learning to read Sherlock Holmes

A while ago I posted about how to use machine learning to understand brand semantics by mining Twitter data — not just to count mentions, but to map the similarities and differences in how people think about brands. But individual tweets are brief snapshots, just a few words written and posted in an instant. We can use the same methods to understand the flow of meaning along longer texts, and to seek patterns in and between stories.

For a quick first example, I downloaded three books of Sherlock Holmes short stories: The Adventures of Sherlock Holmes, The Memoirs of Sherlock Holmes, and The Return of Sherlock Holmes. The main reason is that I like them. Secondary reasons are that Holmes needs no introduction, and that the relatively stable structure of the plots makes it more likely the algorithms will have something to work with.

After extracting the thirty stories included in the books and splitting each story into its component paragraphs, I ran each paragraph through Facebook Research's InferSent sentence embedding library. Similar to its counterparts from Google and elsewhere, it converts a text fragment into a point in an abstract 4096-dimensional space, in such a way that fragments with statistically similar meanings are mapped to points close to each other, and the geometric relationships between points encode their semantic differences (if this sounds a bit vague, the link above works through a concrete example, although it maps individual words instead of longer fragments).

The first question I wanted to ask the data — and the only one in this hopefully short post — is the most basic of any plot: are we at the beginning or at the end of the story? Even ignoring obvious marks like "Once upon a time..." and "The End", readers of any genre quickly develop a fairly robust sense of how plots work, whether it's a detective story or a romantic one. We have expectations about beginnings, middles, and endings that might be subverted by writers, but only because we have them. At the same time, it's a "soft" cultural concept, of the type not traditionally seen as amenable to statistical analysis.

As Holmes would have suggested, I went at it in a methodical manner. The first order of business was, as always, to define the problem. I had between one and three hundred abstract points (one for each paragraph) for each story. To clarify what counts as beginning, middle, and end, I split each story into five segments — the first fifth of paragraphs, the second fifth of paragraphs, and so on — and took the first, third, and last segments as the story's beginning, middle, and end. As usual when using this sort of embedding technique, I summarized each segment simply by taking the average of all the paragraphs/points in it (that's not always the best-performing technique, but it works as a reasonable baseline).
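
The segmentation-and-averaging step can be sketched as follows; the paragraph vectors here are random stand-ins (37 paragraphs of dimension 8, where the real ones were 4096-dimensional InferSent embeddings):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random stand-ins for one story's paragraph vectors, one row per paragraph.
paragraphs = rng.normal(size=(37, 8))

def story_summary_points(paragraph_vectors, n_segments=5):
    """Split a story into fifths and average the paragraphs of the first,
    third, and last fifth: the beginning, middle, and end summary points."""
    segments = np.array_split(paragraph_vectors, n_segments)
    return [segments[i].mean(axis=0) for i in (0, n_segments // 2, n_segments - 1)]

beginning, middle, end = story_summary_points(paragraphs)
```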

So now I had reduced the entire text of each story to three abstract points, each point a collection of 4096 numbers. Watson would doubtless have been horrified. Did all this statistical scrambling leave us with enough information to tell, without cheating, which points are beginnings, middles, and ends?

Patterns in 4096 dimensions aren't easy to eyeball, and a two-dimensional isometric embedding wasn't very promising:

There are thirty points of each type, one for each story, but, as you can see, there isn't any clear pattern that would allow us to say, if it weren't for the color coding, which ones are beginnings, middles, or ends. (Of course, this doesn't prove there isn't an obvious pattern in their original 4096-dimensional space; but we're trying for now to explore human-friendly ways to probe the data.)

There was still hope, though. After all, our intuition for plots isn't based exactly on what's being told at each point, but on how it changes along the story; the same marriage that ends a romantic comedy can be the prologue of a slowly unfolding drama.

Fortunately, we can use the abstract points we extracted to calculate differences between texts. Just as the difference between the vectors for the words "king" and "queen" is similar to the difference between the vectors for "man" and "woman" (and therefore encode, in a way, the semantics of the difference in gender, at least in the culture- and context-specific ways present in the language patterns used to train the algorithm), the difference between the summary vectors for the beginning, middle, and end of stories encode... something, hopefully.
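
The transition vectors are simply differences of the summary points. The sketch below uses synthetic data, constructed so that the two groups of transitions differ, purely to illustrate the computation and a trivial separability check (a nearest-centroid rule; everything here is invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n_stories, dim = 30, 16   # toy sizes; the real vectors had 4096 dimensions

# Synthetic beginning/middle/end summary points for each story, built so
# that the two kinds of transition point in different directions.
beginnings = rng.normal(size=(n_stories, dim))
middles = beginnings + 1.0 + rng.normal(0.0, 0.1, (n_stories, dim))
ends = middles - 2.0 + rng.normal(0.0, 0.1, (n_stories, dim))

# The "red" and "green" points of the plot: differences of summary vectors.
first_half = middles - beginnings    # beginning -> middle transitions
second_half = ends - middles         # middle -> end transitions

# A trivial nearest-centroid check of how well the two groups separate.
c1, c2 = first_half.mean(axis=0), second_half.mean(axis=0)

def is_first_half(v):
    return np.linalg.norm(v - c1) < np.linalg.norm(v - c2)

accuracy = (sum(is_first_half(v) for v in first_half)
            + sum(not is_first_half(v) for v in second_half)) / (2 * n_stories)
```

On real story embeddings the separation is of course noisier, but the computation is the same: cluster or classify the difference vectors rather than the points themselves.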

And, indeed, they do:

Let's be careful to understand what the evidence is telling us. Each point encodes mathematically the difference between the summary meanings of two fragments of text. The red points describe statistically the path between the beginning and middle of a story, and the green ones the path between that middle and the end. The fact that red and green points are pretty well grouped, even when plotted in two instead of their native 4096 dimensions, indicates that, even after we reduced Doyle's prose to a few numbers, there's still enough statistical structure to distinguish the path, as it were, from the beginning of a story to its middle, and thence to its resolution. In other words, just as online advertisers and political analysts do "sentiment analysis" as a matter of course, it's also possible to do "plot analysis."

A final observation. Low-dimensional clusters are seldom perfectly clean, but there's an exception in the graph above that merits a closer look:

The points highlighted correspond to the beginning-to-middle and middle-to-end transitions of The Crooked Man. The latter transition is reasonably positioned in our graph, but why does the first half of the story look, to our algorithm's cold and unsympathetic eyes, like a second half? At first blush it looks like a serious blunder.

It could be; explaining and verifying the output of any non-trivial model requires careful analysis beyond the scope of this quick post. But if you read the story (I linked to it above), you'll note that the middle of it, which begins with Watson's poetic "A monkey, then?", continues with Holmes explaining a series of inferences he made. That's usually how Doyle closes his plots, and in fact the rest of the story is just Holmes and Watson taking a train and interrogating somebody.

This case suggests that our quick analysis wasn't entirely off the mark: the algorithm picked up on a real pattern in Doyle's writing, and raised an alarm when the author strayed from it.

The family of algorithms in this post descends from those that revolutionized computer translation years ago; since then, they have powered a growing set of tools that help computers understand written and spoken language, as well as images, in ways that complement and extend what humans do. Quantitative approaches to text structure will certainly open new opportunities as well.

# The mystical underpinnings of Facebook's anti-fake news algorithms

Imagine you're René Descartes, data scientist at Facebook in charge of saying true things. Problem is, your only input is what people say, and everybody lies.

It's not a moral failure, it's just that you've built Facebook into the world's largest integrated content distribution machine, and convincing people of things that aren't true is where the money is. So, how will you synthesize a truthful news feed about the world from the reports of people who are as likely to be trying to deceive you as not?

You can't. There's no algorithm that'll allow you, on its own, to reverse-engineer truth from an active opponent that controls, directly or indirectly, what you see. The original Descartes got away from this problem through a theological leap of faith, but that's not going to help you here.

For all the quantitative processes that make the practice of science possible and fruitful, its roots are, fundamentally, social. Science doesn't work without scientists, people who, as a group, are socially and personally committed to, basically, not trying to con you into believing something they know is false. Algorithms, heuristics, big data sets, and all of the other machinery are meant to deal with errors, shortcomings, and the occasional bad egg, but not with systematic active deception from an entire sub-community. To try to extract empirical non-mathematical truth from fundamentally suspect numbers is an exercise in numerology in its most mystical sense.

In order to deal with fake news, Facebook has to engage with other actors that can be trusted to have some sort of epistemological commitment to truth. The problem, of course, is that people interested in pushing specific non-true ideas (e.g. that climate change isn't a thing) have managed, after some systematic work amplifying other people's non-factual epistemological commitments, to paint as suspect the entire social machinery of truth-seeking (e.g. by claiming that climatology is a giant worldwide hoax). Facebook can't feed their algorithms with mostly non-adversarial (non-demonic, Descartes would say) data without making an active choice as to which organizations and processes are relatively trustworthy. It can't leave that choice to "society", because for most aspects of the world that matter there's a significantly motivated and funded side of society that has gone off the factual rails.

Can algorithms help? Yes. Can it be done, at the speed and scale required by Facebook's unique position in many societies, without algorithms? No. But algorithms are essentially fast, scalable ways of implementing an epistemological choice (who are you going to believe about what's around Jupiter, philosophy books or your lying eyes?), not magical oracles that make that choice for you.

It's not impossibly hard — it's also what journalists do when they evaluate their sources for both their likelihood of lying and their likelihood of having factual knowledge about what the journalist wants to report on — but it's a choice that was deemed political at the time Galileo made it, and it has never ceased to be political since.

Facebook's position of we're a social network, our job is to help people communicate with each other, not to help them know true things is understandable in the abstract; its concrete moral validity, as is always the case, depends on context and consequences. History is filled with atrocities made possible because people and organizations didn't make choices framed as political — choices that weren't part of their basic business model, but that should've been part of their more basic moral boundaries. Facebook's history already contains atrocities — not just problematic political developments, but literal mass killings — made possible by their choice not to make one. No algorithmic sophistication will get you out of making that choice, or out of the responsibility for the consequences.

# Superman: The Last Son of Prague

Besides everything else, today Superman is an almost satirically strict exemplar of what a "good" immigrant has to look like: if you can pass as a white able-bodied heterosexual male, were raised on a small Kansas farm, got a prestigious job in a traditional professional field, and save the world two or a hundred times, then you, too, might be accepted (or shot on sight on account of what a scared cop or soldier thought you might conceivably do; that might happen too). But he was originally something far uncannier and more politically disruptive.

Raised in an orphanage, he called himself not a crime-fighter or a world-saver, but a "champion of the oppressed." Wikipedia has a fascinating list of his feats in the much-coveted but seldom-read Action Comics #1, where he:

• violently breaks into the Governor's mansion in the middle of the night to deliver a confession to stop an execution
• throws around a man who was beating his wife
• rescues Lois Lane from a man who abducted her after she rejected him (the man's a gangster, but the car about to be thrown in the famous cover isn't "a gangster's car" but "the car of a man who was about to rape a woman who rejected him")
• "forcefully interrogates" a corrupt US Senator to obtain information about his crimes

That's not the national icon who gets a statue from the thankful government of Metropolis. That's the creation of a couple of Jewish immigrants and sons of Eastern-European immigrants in a US that wasn't necessarily welcoming of either group, people who knew first hand that what you needed wasn't protection from alien invasions, but from abusive men and corrupt politicians.

Going slightly back in time, the cultural roots of Superman lie not in Krypton but in the ghetto of Prague, where, in the classic telling of Judah Loew ben Bezalel's legend, the Rabbi created an invulnerable, super-strong, unstoppable golem — using, in a sense, the advanced technology of a long-dead world — to protect the inhabitants of the ghetto from the many forms of formal and informal violence they were subject to. The legend usually ends badly, in ways that would make Lex Luthor nod in approval, but, long before rich and privileged Victor Frankenstein successfully created life and completely flunked his ethical responsibilities about and towards it, there was already a tradition of non-/super-human life created by the knowledgeable oppressed for specifically ethical and political goals of communal survival. The focus on Superman as immigrant, the tale of his assimilation into America, is a later development, as is his deployment as a sort of long-surrendered ideal of what American "hard power" should and should *not* do.

He was, to begin with, created by and for oppressed groups, using the then-new idioms of science fiction to retell the story of a supernatural equalizer called forth when the all-too-natural mechanisms of society are overly stacked against you.

There's no need to remark that all of the crimes Superman fought in Action Comics #1 were and are real and frequent. What might be worth pointing out is that, although our contemporary zeitgeist is one in which the uncanny is becoming increasingly operational and under the control of established powers — where billionaires plan Mars bases, cops in authoritarian countries have cybernetic access to facial recognition databases, and ubiquitously surveilled smart cities are prototyped by companies that are also vying for military contracts for targeting image analysis in, one dreams of being able to hope, completely unrelated developments — there's an older thread of ideas just below the surface, one in which radical technologies aren't just deployed by the powerful, but also as forms of both individual and communal resistance.

Captain America is less a metaphor than an entire category of military R&D grants, and more than one billionaire thinks themselves Tony Stark. But, in the inertia and apparent closure of our current narrative about our increasingly weirding, pre-/would-be posthuman future, let's not forget that the radical application of new technologies can also take place in the political margins. There's agency there, too, and new potential futures, all built with the age-old goals of community and survival, even if using new and stranger clay.

# Cyber-weapons as a form of magic, and why we can't code our way to a safer internet

A "cyber-weapon" isn't a thing, but a skill. It's not an object you can blow up with a missile or send a UN team to inspect, but the technical knowledge of how to identify and exploit a set of systems, written clearly enough that a computer can do it. It's a recipe that can make millions of copies of itself even as it bakes the cake it describes, and even if all copies were to be deleted, all it would take to recreate it is a single person with the technical knowledge writing it down again.

It's a poem that, written down, unread by human eyes, causes havoc in the real world. You might as well apply the concepts of "deterrence" and "arms control" to a rumor. By calling them "weapons," politicians and the military reflect the uses they desire and fear for these tools, but misunderstand their nature. People who, in other contexts and issues, claim it impossible to control the production and distribution of something as solid as an AR-15, attempt to ensure the security of their computational infrastructure by controlling the production and distribution of pure knowledge, in an era where the circuits inside a car's door could drown out the output of any printing press.

It's hopeless. And what's more, the vulnerabilities exploited by these weapons — the grains of truth that make the rumors work — aren't technical problems, any more than a homeless person dying of hunger or cold is a medical issue. Yes, the immediate cause is technical — code written in haste to outpace a hype cycle, architectures built for external control and surveillance, and therefore half-insecure by design — but that's just the short-term rational response of organizations to the business, legal, and political environment in which they find themselves. Safe and private computing, like healthy food, is less an impossibility than an often disingenuous gourmet option on top of a fundamentally different socioeconomic default, one often out of reach for those without the economic resources — unstressed, unharried free time being sometimes the scarcest — to acquire them. It can technically be done; it's just that the systemic incentives — perhaps primarily the psychological ones — aren't there.

Melvin Conway coined Conway's Law, which can be paraphrased as the observation that software systems created by an organization cannot but reflect the structure of the organization itself. Insecure, powerful, an arena of commercial and political manipulation as much as one of interpersonal empathy and shared knowledge, a place we both need and distrust, we don't have any of the possible internets our societies claimed to want to build, but we do have the only one, perhaps, that reflects who we are. That's not a technical problem, and thinking it is (the unspoken axiom that everything is) might be at the root of what ails it.

# Wages, jobs, and unions in a world with AI (among other things)

I was quoted in an article in La Nación by Sebastián Campanario on the impact of artificial intelligence on wages and jobs, and positive and negative ways in which unions can respond to this. Original Spanish version here, and English translation here.

# Applying machine learning to SEC filings to find anomalous companies

Contemporary machine learning algorithms are well-suited to the complex, high-dimensional data associated with accounting records. In this short note we apply a simple unsupervised algorithm to find anomalous companies — those with accounting metrics that don't match the statistical patterns implied by the bulk of the companies.

To do this we leverage the SEC structured financial statements data set, a regularly updated collection of the machine-readable numeric core of the financial disclosures regularly filed to the SEC through its EDGAR system. To avoid (fascinating) technical details that would deviate from the focus of this post, we restrict ourselves to a set of 3,858 electronic filings using a consistent sub-vocabulary of variable tags: Assets, CashAndCashEquivalentsAtCarryingValue, CommonStockSharesAuthorized, LiabilitiesAndStockholdersEquity, NetIncomeLoss, RetainedEarningsAccumulatedDeficit, and StockholdersEquity.

We use the reported company assets as a normalizing factor; while size is of course a variable of interest, we are looking for less obvious, scale-independent patterns and anomalies. The resulting normalized six-dimensional data set cannot be plotted, but a three-dimensional isometric embedding suggests the existence of non-trivial patterns and outliers:
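The normalization and embedding steps can be sketched as follows. This is a minimal stand-in, not the original pipeline: the post doesn't specify the embedding method, so multidimensional scaling (MDS) is used here as a plausible distance-preserving choice, and random numbers stand in for the real EDGAR-derived table.

```python
# Sketch: normalize the six accounting tags by company Assets, then embed
# the six-dimensional points into three dimensions with multidimensional
# scaling (MDS), which tries to preserve pairwise distances.
# Synthetic data stands in for the real EDGAR-derived table.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_companies = 200
assets = rng.lognormal(mean=18, sigma=2, size=(n_companies, 1))
# Six other tags, roughly proportional to assets (placeholder values).
other_tags = assets * rng.normal(loc=0.5, scale=0.2, size=(n_companies, 6))

# Scale-independent features: each tag divided by the company's assets.
normalized = other_tags / assets

embedding = MDS(n_components=3, random_state=0).fit_transform(normalized)
print(embedding.shape)  # one 3-D point per company
```

Any other distance-preserving embedding (e.g. Isomap) would serve the same visualization purpose; only the relative distances matter.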

Note that the axes and values in the graph above are in many ways arbitrary; it's simply a reasonable effort at representing in three dimensions the relative distances between points in the six-dimensional data space of the company filings. What is meaningful is the presence of a dense core of overlapping points, together with a handful of far-away outliers.

We can't perform this visual analysis on the original six-dimensional space, but we can fit a statistical model — in this case, a Gaussian mixture model — to capture the density patterns in the data, and then use that model to select outliers.

Fitting this statistical model and evaluating the implied density at each company, we see that although most companies live in "crowded" high-density neighborhoods, there are significant numbers of filings in very low-density neighborhoods — statistical outliers with few or no statistically similar companies:
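The density-scoring step described above can be sketched like this — a Gaussian mixture is fit to the normalized features, and each company is ranked by the model's log-density at its point. Synthetic data (a dense core plus a few far-away points) stands in for the real normalized filings.

```python
# Sketch: fit a Gaussian mixture model and use the log-density of each
# point to rank companies by how "crowded" their statistical neighborhood
# is; the lowest-density points are the outliers.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
dense_core = rng.normal(0, 1, size=(490, 6))   # the "crowded" bulk
outliers = rng.normal(0, 12, size=(10, 6))     # a few far-away points
X = np.vstack([dense_core, outliers])

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
log_density = gmm.score_samples(X)             # log-likelihood per company

# The most anomalous filings are those with the lowest density.
most_anomalous = np.argsort(log_density)[:5]
print(most_anomalous)
```

The number of mixture components is a modeling choice; in practice it would be selected with a criterion like BIC rather than fixed at three.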

The following are the five companies from the subset of 2017 SEC filings our model identifies as the most anomalous:

Random as they might seem, these companies are all anomalous (or at least interesting) in concrete terms: they have zero revenue, have had their stock prices collapse, present weird numbers of employees, have systematically failed to submit their SEC filings on time, etc. Other statistical outliers include Voltari Corp., DSwiss, Inc. (which had to change both executives and auditors), and Seven Stars Cloud Group, Inc., an AI/bitcoin/content/cloud/etc company that recently dismissed their registered public accounting firm and then slashed their guidance due to "[u]nanticipated personnel issues that led to internal communication and internal administrative oversights that materialized during the Company's 2017 fiscal year."

Note that those events — and the associated stock price correction — happened after the report was filed into the EDGAR system; the filing was statistically anomalous on its own, and therefore a potential red flag.

Not all statistically anomalous companies, or statistically anomalous numbers inside a single company's data, are necessarily red flags; there's such a thing as a positive outlier. But the amount and richness of digitally available financial data makes it possible to detect in fairly automated and scalable ways companies, individuals, and transactions that are, to a mathematical model's eyes, strange, and therefore worth looking into. In a world where analysts, investors, auditors, and regulators need to deal with enormous complexity under increasing time pressures, machine learning methods offer a way to even out the playing field.

# Using Artificial Intelligence to Understand Brands

The Artificial Intelligence techniques behind automated translation and other NLP (Natural Language Processing) applications don't just work at the level of words and phrases; they give us quantitative data about the meaning people assign to different concepts, including brands. We can then measure relationships between different brands in a space, understanding what distinguishes how people talk about, or rather with, them.

As an example, we'll use data from the GloVe project at Stanford — in particular their Twitter- and Wikipedia-derived sets — to look at the main smartphone brands: Samsung, Apple, Nokia, Sony, LG, HTC, Motorola, and Huawei. It's a dictionary, but a very special one: instead of taking a term in English and mapping it to a Chinese one, it takes terms in English, Chinese, etc, and translates them to points in an abstract space, essentially sets of apparently meaningless numbers. But they are far from arbitrary; the software learns to map terms that are used in similar ways, regardless of how they are written, into points that are close in this abstract space. So bicycle, bicicleta, and 自行车 are translated to points that are close to each other, which is how a system like Google Translate knows how to go from the English term to the Spanish one — it just looks for the closest point that comes from a Spanish term (using neural networks to figure out this map is where the real difficulty lies, but that's for a different post).
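The mechanics of "closeness" in that abstract space can be sketched with cosine similarity. The three-dimensional vectors below are hypothetical toy coordinates, not real GloVe values (the real Twitter vectors have 25 to 200 dimensions); the point is only how distance between word points is measured.

```python
# Sketch: word vectors as points, with cosine similarity as the measure
# of how similarly two words are used. Toy coordinates, not real GloVe data.
import numpy as np

vectors = {  # hypothetical illustrative values
    "samsung":  np.array([0.9, 0.1, 0.0]),
    "motorola": np.array([0.8, 0.2, 0.1]),
    "apple":    np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    # 1.0 means "used identically", 0.0 means "unrelated usage".
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Brands used similarly in tweets end up with similar vectors:
print(cosine(vectors["samsung"], vectors["motorola"]))  # high
print(cosine(vectors["samsung"], vectors["apple"]))     # lower
```

With real GloVe files the only extra step is parsing the published plain-text format (one word per line, followed by its coordinates).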

We can leverage this into an intuitive but data-driven way of looking at the relationship between multiple brands. Just as words that have similar meanings are closer to each other than those that have different ones, brands that people think of as similar will be talked about in the same way, and so, because they are similar as words, will end up having close points in the abstract space. It sounds a bit... abstract, but here's how it looks for the main smartphone brands, using the mapping based on GloVe's crawling of about two billion tweets:

What this map is telling us is that, based on the way people actually use the brands as words when tweeting — in some senses, the "real" content of the brand — most smartphone brands are pretty much identical, with Apple (as expected), Huawei, and LG the odd ones out. An algorithm that told us that Samsung and Motorola are identical as brands wouldn't be a very perceptive one, and in fact that's not what's happening here. If we zoom into that cluster of brands, we see they are quite separate from each other:

What we're seeing is that, simply put, compared with Apple (and Huawei and LG) Samsung and Motorola are identical as brands; it's just when you zoom into the area of "major Android brands" that you can see that Samsung and Nokia are quite similar — compared to Motorola. It's all relative, but not in an arbitrary way.

So we've used AI to put in perspective the success of Samsung's branding efforts, or, in a more positive way, to highlight how Apple and a couple of other brands are in a class of their own, each of them separated as words from the pretty much homogeneous (unless you forget the competitors and zoom into them) central core of Android brands. But can we say something about the semantics of that difference?

Surprisingly, we can! Sort of. The biggest surprise of this algorithm — and it caused quite a stir in the AI community when it was first published — is that there's meaning not just in the distance between points, but also in their specific geometric relationships. The best way to explain it is by showing it:

Each of the words king, queen, man, woman, son, daughter gets its own point, as expected. The fascinating thing is that the arrow between king and queen is almost the same in length and direction as the arrow between man and woman, and they are also almost identical to the arrow between son and daughter! Somehow, the algorithm doesn't just learn a way to represent words as points, but also the abstract relationship female version of, which is "translated" into an arrow of specific angle and length; to know what the female version of son is, you just start from that point, do the same jump as you'd do to go from man to woman... and reach the point for daughter.
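The arrow arithmetic described above can be shown in miniature: with toy vectors, the offset between man and woman equals the offset between king and queen, so king − man + woman lands on queen. The coordinates are illustrative, not real embedding values.

```python
# Sketch of analogy arithmetic: queen ≈ king - man + woman.
# Toy vectors chosen so the "female version of" arrow is consistent.
import numpy as np

v = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([0.2, 0.0, 1.0]),
    "queen": np.array([0.2, 1.0, 1.0]),
    "son":   np.array([0.6, 0.0, 0.5]),
}

target = v["king"] - v["man"] + v["woman"]

def nearest(point, vocab):
    # Return the vocabulary word whose vector is closest (cosine) to point.
    # (Real systems also exclude the query words from the candidates.)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(vocab[w], point))

print(nearest(target, v))  # → queen
```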

This is an incredibly powerful capability, because now we can ask not just which brands are comparatively similar or different, but in what way. We took a list of fifty common adjectives in English as our vocabulary, and asked the data

if king is to queen as man is to woman then
the generic Android brand is to Apple as... ? is to ?
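The query above can be sketched as a search over candidate word pairs: among the adjective pairs, pick the one whose offset best matches the offset between the two brand vectors. Everything below uses hypothetical toy vectors; real queries would run over the full GloVe vocabulary.

```python
# Sketch: find the adjective pair whose "arrow" best matches the
# brand-to-brand arrow. Toy vectors, illustrative values only.
import numpy as np

vectors = {
    "android_brand": np.array([0.5, 0.2]),
    "apple":         np.array([0.5, 0.9]),
    "black":         np.array([0.3, 0.1]),
    "white":         np.array([0.3, 0.8]),
    "cheap":         np.array([0.9, 0.4]),
    "fast":          np.array([0.1, 0.3]),
}

brand_offset = vectors["apple"] - vectors["android_brand"]

def best_metaphor(offset, pairs, vocab):
    # The pair whose offset is closest (Euclidean) to the brand offset wins.
    return min(pairs, key=lambda p: np.linalg.norm((vocab[p[1]] - vocab[p[0]]) - offset))

pairs = [("black", "white"), ("cheap", "fast")]
print(best_metaphor(brand_offset, pairs, vectors))  # → ('black', 'white')
```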

The closest metaphors we got?

the generic Android brand is to Apple as black is to white

the generic Android brand is to Apple as international is to national

the generic Android brand is to Apple as special is to great

This doesn't mean people use the word national a lot when talking about Apple. It's subtler and more powerful: the "national-ness" that is the difference between the words international and national is similar to the difference between the way people use the words for the major Android brands, e.g. Samsung, and Apple. The white cases and Apple being a US company (and, arguably, their being great) are part of the difference in meaning between Apple and Samsung.

The most important thing about the above is how unsurprising it is if you pay attention to the brands; it's part of the discourse, yes, but the algorithm automatically teased those differences out and simplified them to the starkest meaningful metaphor. Apple is the white smartphone — you push billions of tweets into one side, process them carefully enough, and out comes a conceptual observation.

Artificial intelligence: it's not just about numbers anymore (and it never was).

Applying the same analysis to compare LG and Huawei against the rest of the Android brands gives us consistently the terms good (for LG) and strong (for Huawei), perhaps an indication of solid if not brilliant conceptual branding (remember, we're quantifying metaphors, not counting mentions — it's not the words in the advertising copy, but the way people use the brand in their own tweets).

But the important aspect of this technique is that we could just as easily have used a completely different vocabulary to query the relationship between brands — feelings instead of adjectives, or terms related to prices, or whatever vocabulary helps answer the specific question we wanted to ask. Brands, like every other word, are deeply multidimensional, and so are their relationships; rather than attempting to oversimplify them through the narrow lens of a specific survey, pulling vast amounts of actual usage data allows us to look at concrete answers to specific questions about the relationship between brands in all the messy complexity of the real world, yet distill that complexity into conceptually usable semantic relationships.

Forget counting retweets and classifying mentions: we now have the tools to look into the living reality of brands as part of our continuously shifting languages, and to apply quantitative methods to elucidate, and eventually shape, their conceptual and emotional overtones. As happened with metrics-driven, quantitatively optimized advertising, leveraging these tools will require expanding the conceptual and strategic toolsets of organizations in ways that to many will feel too alien to attempt, but which will eventually become part of the basic practices of the industry. Marketing, I believe, will become the richer for that, not just through increased transparency and effectiveness, but also by making possible the development of completely new means to achieve the oldest ends.

After all, what has marketing always been, if not the engineering of hidden metaphors?

# Flash fic: Posthumous

The video of my murder had been viral for days. I could understand the initial error, as it was an uncommonly realistic fake; I preferred not to think about how enthusiastically people viewed and shared it even after the news had been debunked. I had watched it myself often enough to make its key moments too familiar to pay attention to: the man in the dark shirt drawing his gun, the frozen security guard, the sudden spot of blood on a green dress.

So when I saw a man in a black shirt with a gun in his hand I felt no fear but incredulous deja vu. He wasn't as smooth as the real one had been in the synthetic video, but the security guard near us was even slower, his hand momentarily caught in the muscle memory of that other cop.

Nobody in the lobby, as the man took agonizingly awkward seconds to aim, looked surprised or tried to stop him.

I know how you feel, I thought without resentment. I wasn't running, because I hadn't. I knew the spell would only last a few seconds more, and that it would be enough to get me killed.

In a final moment of satisfaction, I realized without looking that I was wearing a green dress.

.finis.

# Urban sensors are for the fog of (climate) war

Silicon Valley pitches for smart cities and military descriptions of future battle environments are awfully similar. That's not entirely coincidental.

Deep historical and institutional links aside, you can chalk it up to convergent engineering. Most people live in unevenly urbanized, socioeconomically unstable, and ecologically unsustainable cities. They are variously labelled as users, potential enemy combatants, or citizens by the different types of organizations setting up increasingly dense systems of algorithmic monitoring, prediction, and control, but the technologies, methods, and even goals are fast becoming impossible to distinguish (nowhere faster than in China, perhaps, although not for lack of trying elsewhere).

The main scenario tech companies have in mind is one of a prosperous, gleaming city filled with deeply monetized and continuously engaged inhabitants; for military organizations, large, poor cities populated by the networked restless with nothing to lose. Neither is implausible — they are instead, like every scenario, selective ways of looking at a complex reality.

Here's another selective way of looking at a complex reality: going by the current trends in technology, economics, and geophysics, the typical large city of the mid- to late-21st century is going to be one of lethal heat waves every year, chronic water difficulties, and food insecurity of the kind that shaped, say, Roman politics during the Late Republic and Early Empire (that's on top of, and ignoring for the sake of argument, not at all unrelated issues of economic and social inequality and instability). With three or more degrees Celsius of overall warming, New York will more or less live or die by its storm walls, but high densities in places like India and the South of China will be monstrously difficult to sustain, and require powerful technologies to monitor, predict, and interact with urban populations.

The most likely use case for all those sensors, drones, and programmable (but not end-user-programmable) infrastructure won't be administrating an ecologically and socially healthy urban environment, but rather making the permanent crisis easier to manage.

Getting comprehensive telemetry of massive multi-causal urban collapse, in the not-quite-the-best scenario.

The future, sometimes, works out better than expected (but not without a lot of hard work, and that rarely by the same people who made up the mess in the first place). A combination of a faster than expected transition to a zero-carbon economy, humane and orderly migration away from whatever areas become impossible to maintain, large investments in sustainable infrastructure wherever it's needed: we have or can develop the means to do all of that, and more. The more knowledge, resources, and technology a civilization has, the more ecological disasters become matters of sociopolitical failure rather than capricious fate. We (for far from uniformly responsible values of "we") made the current and oncoming mess, but it's within our ability to handle it with a minimum of ensuing horrors.

But whatever future we end up building, the hardware and software we're threading through our cities — which by then will be as boring when working as sewage or food logistics, and almost as necessary — will have its greatest impact not through the placement of interactive ads or the airborne delivery of pizzas, but as part of the basic toolkit that will define how — and for whose benefit, or following whose vision — we interact with each other and with the world.

# What's the healthiest city in the US? (and what does that even mean?)

(Spoiler alert: it's not Detroit, and it'd be a relatively simple question if it weren't for cancer.)

The human body has multiple ways of not working well; at its broadest, the CDC's 500 Cities project data set lists fourteen different health outcomes, from arthritis, to stroke, to the wonderfully phrased mental health not good. But, just as for an individual having a certain condition raises the probability of others (e.g., high blood pressure and stroke), there's also a high degree of correlation between how prevalent different conditions are in each city.

For example, and unsurprisingly, cities where high blood pressure is more frequent tend to see more strokes:

But we can do more than plot specific pairs of conditions; we can use this data to build a "family tree" of diseases, with closely related diseases tending to show up together in the same city:

Just as high blood pressure and strokes are closely related, so are diabetes and chronic kidney disease, or coronary heart disease and chronic obstructive pulmonary disease. Interestingly, pretty much all conditions are correlated with each other, except cancer. Leaving the prevalence of cancer aside for a minute, the other thirteen conditions in the CDC data set are correlated closely enough that a direct PCA process gives us a main component that explains almost 80% of their variance.
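The "family tree" above can be built with hierarchical clustering, using one minus the correlation between two conditions (across cities) as their distance — this is a plausible reconstruction of the method, not the post's confirmed code, and the synthetic prevalences below stand in for the CDC table.

```python
# Sketch: hierarchical clustering of conditions, with 1 - correlation
# across cities as the distance between two conditions.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
latent = rng.normal(size=(500, 1))   # one shared "general health" factor
conditions = latent * rng.uniform(0.6, 1.0, size=13) \
    + rng.normal(scale=0.4, size=(500, 13))

corr = np.corrcoef(conditions, rowvar=False)  # 13 x 13 correlation matrix
dist = 1 - corr                               # similar conditions -> small distance
np.fill_diagonal(dist, 0)

tree = linkage(squareform(dist, checks=False), method="average")
print(tree.shape)  # (12, 4): one merge per internal node of the tree
```

`scipy.cluster.hierarchy.dendrogram(tree)` would then draw the family tree itself.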

In simpler terms: we can build a single "general health index" number that predicts relatively well the prevalence in a given city of different kinds of health outcomes, from strokes to asthma.

Calculating this number using PCA and rescaling it to have zero mean and unit variance, we get a pleasantly well-distributed index:
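The index construction described above can be sketched directly: take the first principal component of the per-city prevalence table and rescale it. Synthetic correlated prevalences (one latent factor driving thirteen conditions) stand in for the 500 Cities data.

```python
# Sketch: a "general health index" as the first principal component of
# the prevalence table, rescaled to zero mean and unit variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_cities = 500
# One latent factor drives 13 correlated condition prevalences.
latent = rng.normal(size=(n_cities, 1))
prevalences = latent * rng.uniform(0.7, 1.0, size=13) \
    + rng.normal(scale=0.3, size=(n_cities, 13))

pca = PCA(n_components=1)
index = pca.fit_transform(prevalences).ravel()
index = (index - index.mean()) / index.std()  # zero mean, unit variance

# Share of total variance the single component explains:
print(round(pca.explained_variance_ratio_[0], 2))
```

Sorting cities by `index` is then exactly the "general healthiness" ranking below.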

This allows us to sort cities in order of "general healthiness":

• 1. San Ramon (CA)
• 2. Sunnyvale (CA)
• 3. Mountain View (CA)
• ...
• 498. Gary (IN)
• 499. Flint (MI)
• 500. Detroit (MI)

No big surprises there. Twelve of the top fifteen healthiest cities in the US, by this definition, are in California — habits, demography, income, everything helps, and the "California health nut" stereotype has, unlike most, data to back it up. The mirror image of this situation is cities like Detroit and Flint; one corollary of the fact that we do have a large body of public health knowledge is that, unlike in other historical periods, differences in population-level health outcomes are a function of economics and politics (in their broadest senses) rather than, or more than, biology.

However, recall that to build our elegantly simple "general health index" we had to put aside the not-so-small matter of cancer. It turns out that cancer plays by very different rules, as becomes obvious when we plot its (similarly normalized) prevalence against our generic health index:

Cities like San Ramon (CA) and Mountain View (CA) have an average prevalence of cancer with respect to the rest of the country, but higher than much less healthy (in the everything *but* cancer sense) cities like Laredo (TX) and Brownsville (TX). Here youth seems to beat experience, income, and technology: Laredo and Brownsville have median ages of 28.2 and 27.7 years respectively, while San Ramon has a median age of 37.6 years, almost matching the US's overall metric of 37.8 years.

The lack of correlation between the prevalence of cancer and that of most everything else reflects profound differences not only in their physiological mechanisms, but also in the state of our medical technology. We know quite a bit about how to prevent things like diabetes, strokes, high blood pressure, etc. — by and large, they are different expressions of the same set of underlying physiological issues, which is part of the explanation of why they are so closely correlated across cities. Cancer is a different matter. It's less a single disease than a bewildering array of cellular insurrections, one on which we've made astounding strides in recent years, but which remains comparatively poorly understood.

This is a stark example of the difference between technological possibility and political outcomes: given the state of our technology, Flint and Mountain View should be equally healthy, as they differ in things we know how to improve, and are equal on the condition against which we are most powerless.

The state of technology, of course, isn't static: inexcusably late but at last, medical researchers are beginning to approach aging as a root disease, and having practical ways of reversing some forms of basic physiological damage — the common mechanisms behind the "everything-but-cancer syndrome" — will lead to improvements in how we treat and prevent most conditions. And if cancer is the one we know the least about, it's also the one where our improving computational capabilities might help the most. But there's little difference between not having a technology and choosing not to use it; we have reasons to hope for significant improvements in technological possibilities during the next few decades, but a hazier plan for their public health impact.

# The normalcy of online learning: the more you study, the better you do

Online learning, after all, is just a form of learning: time spent studying is one of the best predictors of success.

Both the pattern and the exceptions can be seen quite clearly in the Open University Learning Analytics data set, which collects anonymized data about the personal characteristics and, crucially, the interactions with the Open University's Virtual Learning Environment (as counts of clicks by date) of 32,593 students registered in 22 courses; see the linked entry in Nature for a detailed description of the data set. For this quick exploratory analysis I chose to focus on students who either passed or failed their courses, ignoring those who withdrew along the way; the latter is a very frequent outcome in this kind of setting (31% of cases in the data set), but one that merits a separate analysis.

Of those students who completed the course, 68.6% passed it (13.5% of them with Distinction), and 31.4% failed. To what degree was this a matter of sheer effort?

Here the data supports what teachers and parents have always said. Only about a third of the students who interacted with the learning platform on between 10 and 23 days (the second decile of activity) passed the course, while 94% of those who did so on between 120 and 155 days (the ninth decile) did. This is perhaps an obvious effect, but it's noteworthy that even among the highest deciles of activity, more activity leads to a better result: moving from the eighth to the ninth decile of activity — from, say, 110 days of activity to 140 — raises the probability of passing by an extra five percentage points.
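As a sketch of how this decile analysis could be run: the snippet below assumes a per-student summary table with illustrative columns `days_active` and `passed`; these names are assumptions for the example, not the data set's actual schema.

```python
# Illustrative sketch: pass rate as a function of activity decile.
import pandas as pd

def pass_rate_by_activity_decile(students: pd.DataFrame) -> pd.Series:
    """Fraction of students who passed, within each decile of activity."""
    deciles = pd.qcut(students["days_active"], q=10,
                      labels=[f"D{i}" for i in range(1, 11)])
    return students.groupby(deciles)["passed"].mean()

# Toy data where passing requires sustained activity.
toy = pd.DataFrame({
    "days_active": list(range(1, 101)),
    "passed": [d > 30 for d in range(1, 101)],
})
print(pass_rate_by_activity_decile(toy))
```

With real data, the interesting part is exactly the pattern described above: the pass rate keeps climbing even between the top deciles.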

There are things we can say about the probability of somebody passing the course even before it begins. Most significantly, among students who finish the course, the probability of passing grows strongly with the student's previously achieved educational level (note that this data refers to the UK educational system).

There's nothing mysterious about the mechanics of it. By and large, better-educated students interact more often with the platform, and the extra days explain much of the variability in outcome.

This is the point where reading the data becomes tricky, and domain experience and a healthy dose of skepticism become useful. There's both a correlation and a reasonable mechanism of influence between studying more days and getting a better outcome, which — as a hypothesis to guide interventions — suggests we should attempt to get students to interact with the platform more often. But understanding why they don't already do it on their own is critical to understanding what would help, and that's not necessarily obvious from this data. For example, one possibility is that students simply underestimate how many days of study they'll need in order to have a reasonable chance of passing the course; if that's the case, then explicit, dynamic guidance could be of use (including something like a regular, model-based Estimated Probability of Passing alert).
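In its simplest form, a model-based Estimated Probability of Passing alert could be a logistic regression on days of activity so far. The sketch below is purely illustrative: the toy history, the scaling, and the 0.5 alert threshold are all assumptions, not anything taken from the Open University data.

```python
# Purely illustrative sketch of an "Estimated Probability of Passing" alert:
# logistic regression fit by gradient ascent on a toy history.
import math

# Toy history: (days of platform activity, passed the course?)
history = [(5, 0), (15, 0), (30, 0), (60, 1), (90, 1), (120, 1), (150, 1)]

def fit_logistic(data, lr=0.5, steps=5000):
    """Fit p(pass) = sigmoid(a * days/100 + b); days are scaled to keep
    the gradient steps numerically stable."""
    a = b = 0.0
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for days, y in data:
            x = days / 100.0
            p = 1.0 / (1.0 + math.exp(-(a * x + b)))
            grad_a += (y - p) * x
            grad_b += (y - p)
        a += lr * grad_a / len(data)
        b += lr * grad_b / len(data)
    return a, b

a, b = fit_logistic(history)

def pass_probability(days):
    return 1.0 / (1.0 + math.exp(-(a * days / 100.0 + b)))

def should_alert(days_so_far, threshold=0.5):
    """Suggest nudging the student when the estimated probability is low."""
    return pass_probability(days_so_far) < threshold
```

The point, as argued above, isn't the model itself but whether a low estimated probability reflects something the student can actually change.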

On the other hand, the data does suggest that more exogenous constraints probably play a role. To its credit (and this is something every educational system should attempt to replicate), the Open University data set also includes socio-economic information in the form of the student's approximate Index of Multiple Deprivation, a statistical proxy — based on a ranking comparison between places in England — for issues like crime prevalence, unemployment, education, and income in the place where the student lived during the course.

This index is correlated with the outcome of the course, as would be expected (a higher IMD band indicates a more favorable socio-economic context):

But also with, and arguably through, the number of days students interact with the platform:

So there are other factors at play, which could be cultural, but which we could just as easily imagine are related to constraints in resources, time, energy, support networks, etc., among students living in more deprived areas. If, or to the degree that, the latter is the cause, "gamification" features like the one described above would at best be useless and at worst a mockery. The point of data-driven analysis is to be able to determine what's going on, in order to guide our intuition about what could help; this data set suggests possibilities, but that's as far as we can get with it.

Of course, in this post we're playing at reinventing the wheel — poorly. Education experts are deeply familiar with everything we've discussed so far, from the impact of study time on outcomes to the effect of socioeconomic constraints. The point isn't that we have found anything new, but rather to show how already-known patterns surface very quickly and obviously whenever data is gathered in a sufficiently comprehensive and open way, and the possibilities for personalized diagnostics and scalable assistance that this might offer educational systems.

On the topic of things already well known: we've seen that putting in days of interaction with the platform improves students' chances of passing the course, and that better-educated students have a higher a priori chance of doing so. Is the increased time all there is to it? In other words, do higher educational achievements, besides being correlated with exogenous and endogenous factors related to being able to study more, also enable students to study better? Do students with different educational backgrounds get different amounts of value from a day of interacting with the system?

The data set only offers indirect clues to this, but as far as we can see, this is true pretty consistently. For each intensity of interaction with the platform, students with a higher level of education will, generally speaking, do better (click on the graph for a larger version):

We can't distinguish with this data, of course, the details of the mechanics of how this happens; "better study habits" can include anything from a larger store of previous knowledge to draw relationships from, to a better physical environment in which to study. The often large correlations between different factors are part of what makes research in social sciences both difficult and important. But we see there's a difference, which means there's also potential for improved outcomes.
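The interaction question raised above (do better-educated students get more out of each day of activity?) can be probed with a two-way breakdown; again, the column names (`education`, `days_active`, `passed`) are illustrative, not the data set's real schema.

```python
# Two-way breakdown sketch: pass rate per (education level, activity quartile).
import pandas as pd

def pass_rate_grid(students: pd.DataFrame) -> pd.DataFrame:
    """Rows: education level; columns: activity quartile; cells: pass rate.
    Differences within a column suggest education adds value beyond
    simply enabling more study days."""
    quartile = pd.qcut(students["days_active"], q=4,
                       labels=["Q1", "Q2", "Q3", "Q4"])
    return students.groupby(["education", quartile])["passed"].mean().unstack()

# Toy data: at the same activity level, group "B" does better than "A".
toy = pd.DataFrame({
    "education": ["A"] * 8 + ["B"] * 8,
    "days_active": list(range(10, 90, 10)) * 2,
    "passed": [0, 0, 0, 0, 1, 1, 1, 1] + [0, 0, 1, 1, 1, 1, 1, 1],
})
print(pass_rate_grid(toy))
```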

Online learning isn't, in many ways, a radical departure from traditional education: we can see how the traditional issues of socio-economic context, educational history, and effort continue to play the roles they always have. However, the increased legibility of the online process, and the enormous flexibility it offers for interventions and experiments, make it not just a powerful teaching mechanism on its own, but also a tool to help us understand and improve learning in general.

# Deep(ly) Unsettling: The ubiquitous, unspoken business model of AI-induced mental illness

"The junk merchant," wrote William S. Burroughs, "doesn't sell his product to the consumer, he sells the consumer to his product. He does not improve and simplify his merchandise. He degrades and simplifies the client." He might as well have been describing the commercial, AI-mediated, social-network-driven internet.

The emotionally and politically toxic effects of the ecosystem of platforms like Facebook and Twitter, together with the organizations leveraging them, might not be their intended goals, but they aren't accidents either. If you configure a data-driven system to learn the best way to induce users to stay on the platform and interact with it and its advertisers, it'll simply do that. It just so happens that the ideally compulsive, engaged user of a game or a social network, the one every algorithm is continuously trying to train through the content and rewards it offers, isn't the emotionally healthy one.

Maximizing engagement is the explicit optimization goal of contemporary online businesses. They have simply rediscovered and implemented, quickly and efficiently, the time-honored tools of compulsive gambling, gaslighting, and continuous emotional manipulation. These aren't tools that make the user mentally healthier, quite the opposite, but nobody programmed the algorithms to even measure, much less take into account, this side effect.

And the impacts they've had so far have been achieved with technology that's already conceptually obsolete. Picture the greatest chess player in history, retrained using the knowledge of the day-to-day experience and reactions of billions of people into the world's most effective and least ethical behavioral therapist, fed in real time every scrap of information available about you, constantly interacting with every digital device, service, and information source you are in direct or indirect contact with, capable of choosing what's suggested for you to see and do — even of making up whatever text, audio, and video it thinks will work best — and dedicated exclusively to shaping your emotions and understanding of the world, with no regard at all for your well-being, according to the preferences of whoever or whatever is paying it the most at the moment or is best exploiting its own technological vulnerabilities.

Rephrased in an allegorical way, it could be an updated version of one of Philip K. Dick's Gnostic nightmares. A video designed by a superhumanly capable AI to exploit every one of your emotional weak spots — a murder victim with a face that reminds you of a loved one, a politician's voice slightly remodulated to make it subliminally loathsome, a caption that casually inserts an indirect reference to a personal tragedy at the exact moment of the day when you're most tired and your defenses are at their lowest — wouldn't be out of place in one of his stories, but it's also just a few years away from being technologically feasible, and very explicitly on the industry's R&D roadmap. Change the words used to describe it, without changing anything of what it describes, and it's a pitch Silicon Valley investors hear a dozen times a month.

It'd be absurd to pretend we've always been sane and well-informed. Every form of media carries opportunities for both information and manipulation, for smarter societies and collective insanity. But getting things right is always a challenge. This one is ours, and it might be one of the most difficult we have ever faced. The amount of information and sheer cognitive power bent on manipulating each of us, individually, at any given minute of the day is growing exponentially, and our individual and collective ability to cope with these attempts certainly isn't. Whether and how we react to this will be a subtle but powerful driver of our societies for decades to come.

# When devops involves monitoring for excess suicides

There is strong observational evidence that prolonged social network usage is correlated with depression and suicide — enough for companies like Facebook to deploy tools to attempt to predict and preempt possible cases of self-harm. But taken in isolation, these measures are akin to soda companies sponsoring bicycle races. For social networks, massive online games, and other business models predicated on algorithmic engagement maximization, the things that make them potentially dangerous to psychological health — the fostering and maintenance of compulsive behaviors, the systemic exposure to material engineered to be emotionally upsetting — are the very things that make them work as businesses.

Developers, and particularly those involved in advertising algorithms, content engineering, game design, etc., have in this a role ethically similar to that of, say, scientists designing new chemical sweeteners for a food company. It's not enough for a new compound to have an addictive taste and be cheap to produce — it has to be safe, and it's part of the scientist's and the company's responsibility to make sure it is. If algorithms can affect human behavior — and we know they do — and if they can do so in deleterious ways — and we also know this to be true — then developers have a responsibility to account for this possibility not just as a theoretical concern, but as a metric to monitor as closely as possible.

Software development and monitoring practices are the sharp end of corporate values for technology companies. You can tell what a company really values by noting what will force an automated rollback of new code. For many companies this is some version of "blowing up," for others it's a performance regression, and for the most sophisticated, a negative change in a business metric. But any new deployment of, e.g., Facebook's feed algorithms or content filtering tools has the potential to cause a huge amount of psychological and political distress, or worse. So their deployment tools have to automatically monitor and react to not just the impact of new code on metrics like resource usage, user interface latencies, or revenue per impression, but also the psychological well-being of the users exposed to the newest version of the code.
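The argument amounts to treating well-being like any other guarded deployment metric. Here's a minimal sketch of such a rollback gate; every metric name and tolerance is invented for illustration, and a real well-being proxy would of course be far harder to define and measure.

```python
# Sketch of a deployment gate that treats a (hypothetical) well-being
# proxy as a first-class rollback trigger.
from dataclasses import dataclass

@dataclass
class Guardrail:
    metric: str
    max_regression: float  # tolerated relative drop vs. the control group

def should_rollback(control: dict, canary: dict, guardrails: list) -> list:
    """Return the guardrails the canary violates; non-empty means revert."""
    violated = []
    for g in guardrails:
        baseline = control[g.metric]
        relative_drop = (baseline - canary[g.metric]) / baseline
        if relative_drop > g.max_regression:
            violated.append(g.metric)
    return violated

guardrails = [
    Guardrail("interface_responsiveness", 0.05),
    Guardrail("revenue_per_impression", 0.02),
    # The essay's point: a well-being metric belongs on this list too.
    Guardrail("wellbeing_proxy", 0.01),
]

control = {"interface_responsiveness": 1.0, "revenue_per_impression": 1.0,
           "wellbeing_proxy": 1.0}
canary = {"interface_responsiveness": 0.99, "revenue_per_impression": 1.01,
          "wellbeing_proxy": 0.95}

print(should_rollback(control, canary, guardrails))
```

In this toy example the canary improves revenue but regresses the well-being proxy, so it would be reverted.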

I don't know whether companies like Facebook treat those metrics as first-order data input to software engineering decisions; perhaps they do, or are beginning to. The ethical argument for doing so is quite clear, and, if nothing else, it should be a natural first step in any goodwill PR campaign.

# Short story: Nanobots and the Teenage Brain

It took a while to diagnose Charlie's problems; what thirteen-year-old boy isn't moody? But once his parents suspected there was something else going on inside his head, doctors injected a swarm of machines so small they were practically very large drugs, and the machines showed them that, to Charlie's annoyance, his parents had been right.

Brains are like ecosystems, Charlie's doctor explained to him and his parents. Every part of Charlie's brain works, but the way they synchronize and work together isn't the way we would prefer. The system is in a balance of sorts; it's just that it's a balance that results in things like mood swings and insomnia.

The doctor hadn't mentioned the nightmares, but Charlie suspected he knew how bad they were, even if he hadn't told him, or anybody else, how much the night scared him. Probably the machines inside his head had told the doctor the truth. Charlie didn't have insomnia, he just tried not to sleep.

What do we do then? had asked Charlie's mother. Give him medication? His uncle used to take antidepressants.

The doctor had nodded. That's what we would have tried a few years ago, but it takes quite a bit of trial and error, and even once you find something that works, there are usually side effects. Almost always the side effects are minor compared with the original symptoms, and you can tweak the dosage and sometimes eventually cease the medication, but today we have better tools. We already have nanobots lodged in key areas of Charlie's brain. We are using them to diagnose him, but they can be partially rebuilt to integrate themselves with his brain functions.

Charlie's father, who had seen a lot of horror movies as a teenager, frowned. You mean use a computer to control his emotions?

The doctor smiled. Oh, no. It's like adding a carefully chosen new species into an ecosystem. It will interact with the rest of his brain, send a signal there, dampen a neurotransmitter here, and Charlie's brain will adapt slightly to it while the machines adapt very strongly to him. The end result will be a healthier and more resilient brain, but not a different one, and certainly not one under anybody's control but his own. We are only beginning to try it in humans, but we can monitor it very closely and stop if anything looks wrong, so in a sense it's safer than the usual medication.

Using machines they could instantly switch off sounded like a safer option than trying medications until they found something that worked, so they modified the nanobots in Charlie's brain to make them able to talk to it as well as listen.

The brain talked and listened to itself, and now itself included both the machines and the software controlling them from a small chip in Charlie's skull. The chip learned from Charlie's brain, Charlie's brain learned from the chip, and eventually they were just Charlie.

The mood swings and the nightmares went away. The chip didn't change Charlie, and nobody hacked them. This isn't that kind of story.

* * *

There's a thirteen-year-old child waking up from a nightmare, crying. But this is three years later, and she's called Grace.

* * *

Charlie's parents wanted to refuse. Would've, certainly, even to the doctor who had healed Charlie. But he had shown them videos of Grace, and although everybody agreed that it had been a low trick, it was enough to make the parents agree to leave the choice to Charlie.

She has the same sort of device you have, the doctor told Charlie. The device works well; yours too, by the way, you know I'll get an alert if anything goes wrong. But the device needs to learn from the brain how to help it, and for some reason it's not able to learn from Grace's. We think her condition is somewhat different from yours, like the same riddle in a different accent, and the device isn't picking it up.

So what do you want me to do? asked Charlie. He wasn't a bad guy, but he didn't want to go to a hospital again, ever.

The doctor told him his plan. It was much worse than what Charlie had feared. Maybe that's why Charlie said he would do it, the way sixteen-year-olds say 'yes' to whatever really scares them.

* * *

Grace and Charlie didn't lie in parallel operating tables, thick cables connecting their skulls. They sat in comfy chairs next to each other while both sets of parents watched. The doctor was telling them again how they had temporarily reprogrammed the chips in their skulls so Charlie's chip would control Grace's nanobots and the other way around, but that was mostly to fill the silence while he monitored everything.

Not that the parents paid much attention anyway. Charlie's were too worried about something going wrong, and Grace's were crying softly.

For the first time in a long while Grace had fallen asleep smiling.

* * *

It took seven sessions for Charlie to train Grace's device. At the end they were close strangers, people with nothing in common except a very important thing much too big to base a friendship on. But she was thirteen, so she had given him a nickname anyway.

Why does she call you that? asked the doctor after the last session, at a time when he and Charlie were briefly alone.

You know, said Charlie, rolling his eyes, like the guy from the movies. The one who can read minds.

The doctor, who had liked the character about two franchise reboots before, smiled. Well, your brain can do something nobody else's can, and you helped her, so she's not entirely wrong about that.

By the way Charlie looked at him while pretending to find him ridiculous, the doctor knew that he would agree to help if he ever asked again.

* * *

He did ask, four times. It turned out Charlie's success had been less likely than they had thought, and his brain's talent for training the device a rare one. Charlie was always enthusiastic about helping, and Charlie's parents eventually made peace with it, not without fear, but also not without pride.

The doctor finally stopped asking for his help once the company designed new machines that could learn from any brain; they had figured out how to do that by watching Charlie help others, and in that sense he would always be helping. Charlie had shrugged when told, relieved but hating himself a bit for it.

He kept in touch with the doctor. They never mentioned the returning nightmares. Charlie had known his own well enough to understand these weren't his to begin with; his brain had learned them from the other kids' devices, which had learned them from their brains.

They talked about everything else, mostly about the people helped by the software they had built based on Charlie's brain, pretending it was a coincidence that the doctor always called the morning after a bad nightmare. He was still monitoring Charlie's device, after all.

Charlie hates the nightmares, and feels bad about never telling his parents about them. But if he had told them they wouldn't have let him help. Keeping secrets had been a necessary part of being a superhero, and if he woke up in a cold sweat more often than not... Most retired heroes had scars, and he had earned his helping others.

And he's no longer afraid of the night.

# Short Story: The Voice of Things

She had liked the illustrated book so much that she told you right away she had prayed to get it for Christmas, alone in her bedroom where nobody but God could hear. You didn't mention that her teddy bear had probably heard her, and that the toy company had then sold the information to an advertiser who offered you the book at an extraordinary discount. If she was happy, that was what mattered.

You never realized the bear sometimes talked back, not until the scandal made the news. It turned out it always could; it had just waited until its sensors told it that kid and toy were alone. The license that came with the bear's software made this "user bonding" legal; the company went bankrupt anyway.

But nothing's ever forgotten if there's money in remembering, and sometimes you're almost sure things talk to your daughter not with their standard voices, but with one she remembers and trusts.

So you talked to her about cookies and the cloud, at least what you understand of it. She nodded along to your explanation, unsure, asking nothing. Afterwards, you wondered what things would tell her when she asked them.

# Electoral hegemony and statistical outliers

Some elections are won comfortably, some are won by a landslide... and then there's Santiago del Estero.

On Friday I had the good fortune to take part in the Datatón Electoral organized by Antonio Milanese, analyzing data sets from past elections alongside other data analysts, political scientists, and so on. The analysis I attempted didn't support my hypothesis (that's the risk of working with data...), but it led to an interesting observation.

Despite the apparent electoral polarization in Argentina, results tend to be relatively close even at the level of individual polling stations (mesas). For example, in the 2017 elections for national deputies, only in 47% of mesas did the locally winning option get more than half of the votes:

The asymmetry of this distribution is to be expected (it's hard to be the winning option at a mesa with less than 40% of the votes), but even so, the number of mesas where the winning option took a very high percentage is itself very high: in 1% of mesas the winner took more than 83% of the votes, something a superficial statistical analysis would say should almost never happen. This statistical "anomaly" reflects a fairly common social pattern: people who vote at the same mesa tend to be more socially and politically homogeneous than people who vote at different ones, so it's natural that there are more politically homogeneous mesas than we'd expect if voters and mesas were assigned at random.

On the other hand, if we look at where those unexpectedly homogeneous mesas are, something emerges that has less to do with abstract sociology. Of the 1,004 mesas where the winner took more than 83% of the votes...

• 46 are in the City of Buenos Aires
• 74 are in the Province of Buenos Aires
• 87 are in Formosa
• 607 are in Santiago del Estero

Nationwide, about one in every hundred mesas was a landslide; in Santiago del Estero, more than one in three. These are the ten provinces with the highest percentage of landslide mesas (click on the graph to enlarge it; as you can imagine, the giant bar on the left is Santiago del Estero):

This isn't a surprising observation given the political realities of Santiago del Estero or Formosa, but it shows how some local social and political patterns are visible even in the most superficial quantitative analysis.
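The per-province landslide rate behind this observation can be computed in a few lines, assuming a per-mesa table with hypothetical columns `provincia` and `winner_share` (the winning option's share of the vote at that mesa); these names are illustrative, not the Datatón's actual files.

```python
# Hypothetical reproduction of the landslide counts per province.
import pandas as pd

def landslide_rate_by_province(mesas: pd.DataFrame,
                               threshold: float = 0.83) -> pd.Series:
    """Fraction of each province's mesas where the winner got more than
    `threshold` of the votes, sorted from most to least landslide-prone."""
    landslide = mesas["winner_share"] > threshold
    return (landslide.groupby(mesas["provincia"])
                     .mean()
                     .sort_values(ascending=False))
```

Applied to a full per-mesa table, the top of the resulting Series is the "giant bar" in the graph.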

# Short story: Soul in the Loop

Every shower she takes makes you more certain she will have killed herself before her daughter's tenth birthday, and these days she's taking one every time she logs off. You aren't allowed to tell her, but the NDA you made her sign has so many post-employment clauses she wouldn't be likely to find a job elsewhere anyway.

A daughter, two parents with Alzheimer's, and the obsolete skillset of a radiologist and former e-sports semi-pro: she's as good a match for this job as any human could be, and only humans are allowed to do it. That's the point.

The politics of deploying killer robots require humans to watch what they do, to prevent them from doing the a posteriori unacceptable, but the business side of the equation — and somewhere in the company's software stack there's a piece of mathematics modeling just that — compels humans to rarely if ever stop the robots from taking the shot. Armies don't pay for robots that don't shoot. The Oxford Protocol supervisors are there to ensure the robots could, theoretically, be stopped, and to suffer the legal consequences if it becomes convenient for somebody to.

So she logs in ten hours a day to watch the deaths of people she could have saved — people who might or might not be innocent, people whose names she'll never know — at the cost of risking homelessness for herself, her daughter, and two helpless people who once raised her and whom she still loves. She never stops a robot. She just takes a shower immediately after every session, the company's contract-mandated monitoring of her home network logging it as another data point in her profile.

The company's behavioral prediction models indicate that compulsive showering correlates with late-stage burnout, which means you should start choosing a replacement for her from the vast and growing pool of the economically deprecated. Some of them would last longer than others, and some would actually enjoy their jobs. You always pick the ones who don't, the ones who eventually need a shower every time they log off, and sometime after that require a replacement of their own.

You understand the business case for this company policy, yet find it ironic that you would be barred from doing the job you choose people for. But it's not like you don't enjoy your own.

.finis.

# Short story: Logs from a haunted heart

She's scared all the time. But is her fear the reason why her heart suddenly speeds up a dozen times a day, shifting in a second from the dull ticking of dread into the accelerating staccato of runaway panic? The diagnostics in her pacemaker's app say that everything is normal, but perhaps they can be faked by somebody with maintenance access to the device. She doesn't have it; she's only the patient.

Maybe her ex-husband, a medical tech sales rep, does. Too many things have default passwords companies never bother to change. But there'd be no point in talking with him, even if she hadn't moved across the country to avoid ever having to. In an emergency room they'd just look at the same app she has, and she can't get an appointment with a specialist before next month.

Tomorrow is the one year anniversary of the day she told her husband she was leaving.

She's scared. Maybe that's what makes her chest feel like it's going to break.

.finis.

# There are only two emotions in Facebook, and we only use one at a time

We have the possibility of infinite emotional nuance, but Facebook doesn't seem to be the place for it. The data and psychology of how we react emotionally online are fascinating, but the social implications, although not specific to social networks, are rather worrisome.

A good way to explore our emotional reaction to Facebook news is through Patrick Martinchek's data set of four million posts from mainstream media during the period 2012-2016. I focused on news posts during 2016, most (93%) of which had received one or more of the emotional reactions in Facebook's algorithmic vocabulary: angry, love, sad, thankful, wow, and, of course, like.

In theory, an article could evoke any combination of emotions — make some people sad, others thankful, others a bit angry, and call forth from yet others a simple "wow" — but it turns out that our collective emotional range is more limited. Applying a method called Principal Component Analysis to the data, we see that we can predict most of the emotional reactions to an article as a combination of two "hidden knobs":

• There's a knob that increases the frequency of both love and wow reactions. We can just call that knob love.
• The other knob increases the frequency of wows as well, but also, more significantly, the frequency of angry and sad, both in almost equal measure.

And that's it. Thankfulness, likes, even that feeling of "wow", are distributed pretty much at random through our reactions to news. What makes one article different from another to our eyes (or, more poetically, to our hearts) is something that makes us love it, and something else that makes us, with equal strength or probability, feel angry or sad about it.
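The "hidden knobs" analysis can be sketched as PCA over per-post reaction frequencies. This is an illustrative reimplementation of the idea, not the original analysis code; the input is assumed to be a matrix with one row per post and one column per reaction type (angry, love, sad, thankful, wow, like).

```python
# Illustrative PCA over per-post reaction frequencies.
import numpy as np

def principal_components(reaction_freqs: np.ndarray, k: int = 2):
    """Top-k principal components (the "hidden knobs") and the share of
    total variance each explains, via SVD of the centered matrix."""
    centered = reaction_freqs - reaction_freqs.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()
    return vt[:k], explained[:k]

# Synthetic rank-2 data: reactions driven by two hidden knobs plus a
# little noise, so the top two components capture nearly all the variance.
rng = np.random.default_rng(0)
knobs = rng.normal(size=(500, 2))
loadings = rng.normal(size=(2, 6))
posts = knobs @ loadings + 0.01 * rng.normal(size=(500, 6))

components, explained = principal_components(posts)
print(explained)
```

With the real data, the finding described above corresponds to the first two components explaining most of the variance, with loadings matching the "love" and "angry/sad" knobs.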

Despite their names, it's not logically necessary for the "strength" of love to be low when anger/sadness is high, or vice versa. Remember that they measure the frequency of different emotional responses; it's easy to imagine news that half of its readers will love, yet will make the other half angry or sad.

Remarkably, that's not the case:

The graph shows how many news posts, relatively speaking, show different combinations of strength in the (horizontal) love and (vertical) angry/sad dimensions (click on the graph to expand it). Aside from a small group of posts that have zero strength in either dimension, and another, smaller group of more anomalous posts, most posts lie on a straight line between the poles of love and angry/sad: the stronger the love dimension of a post, the weaker its angry/sad dimension, and vice versa.

Different people have different, often opposite reactions to the same event. Why is our emotional reaction to news about them so relatively homogeneous? The answer is likely to be audience segmentation: each news post is seen by a rather homogeneous readership (that media source's target audience), so their reaction to the article will also be homogeneous.

In other words, a possible indicator that people with different preferences and values do read different media (and/or are shown different media posts by Facebook) is that the reactions to each post, either love or its statistical opposite, are statistically more homogeneous than they'd otherwise be. If everybody at a sports game is either cheering or booing at the same time, you can tell only one group of fans is watching it.

It's common, but somewhat disingenuous, to blame the use of recommendation algorithms for this. As soon as there are two TV stations in an area or two newspapers in a city, each has always tended to get its own audience, shaping itself to that audience's interests as much as it influences them. The fault, such as it is, lies not in our code, but in ourselves.

Two things make algorithmic online media in general, and social networks in particular, different. First, while resistant to certain classic forms of manipulation and pressure (e.g., censorship via a phone call to the TV network's owner; except in places like China, where censorship mechanisms are explicitly built into both technology and regulations), they are vulnerable to new ones (content farms, bots, etc.).

Second — and this is at the root of the current political kerfuffle around social networks — they need not be. Algorithmic recommendation is increasingly flexible and powerful; while it's unrealistic to require things like "no extremist content online, ever," the dynamics of what gets recommended and why can be, and continuously are, modified and tweaked. There's a flexibility to how Facebook, Twitter, or Google work and could work that newspapers don't have, simply because networked computers are infinitely more programmable than printing presses and pages of paper.

This puts them in a bind that would deserve sympathy if they weren't among the most valuable and influential companies in the world, and utterly devoid of any instinct for public service until their bottom line is threatened: whatever they do or don't do risks backlash, and there's no legal, political, or social agreement on what they should do. It's straightforward to say that they should censor extremist content and provide balanced information about controversial issues — in a way, we're asking them to fix bugs not in their algorithms, but in our own instincts and habits — but there are profound divisions in many societies about what counts as extremism and what's controversial. To focus on the US: when first-tier universities sometimes consider white supremacism a legitimate political position, and government officials in charge of environmental issues consider the current global scientific consensus on climatology very much undecided, there's no politically safe algorithmic way to de-bias content... and no politically safe way to just wash your hands of the problem.

Social networks aren't powerful just because of how many people they reach, and how much, how fast, and how far they can amplify what those people say. They are unprecedentedly powerful because they have almost infinite flexibility in what they can show to whom, and how, and new capabilities can always unsettle the balance of power. Everywhere, from China to the US to the most remote corners of the developing world, we're in the sometimes violent process of working out what this new balance will look like.

"Algorithms" might be the new factor here, but it's human politics what's really at stake.

# What makes an algorithm feminist, and why we need them to be

About one in nine engineers in the US is a woman, which leads some men to infer that women are "naturally" bad at engineering. Many data-driven algorithms would conclude the same thing; it's still the wrong conclusion, but, dangerously, it seems blessed by the impartiality of algorithms. Here's how bias creeps in.

Imagine about one in two human beings — randomly distributed across geography, gender, race, income level, etc. — has a pattern of tiny horizontal lines under their left eyelids, and the other half has a pattern of tiny vertical lines; they don't know which group they belong to, and neither do their parents, teachers, or employers. If we take a sample of engineers and find that only one in ten shows horizontal instead of vertical lines, then the influence of vertical lines on engineering ability would be an interesting hypothesis, and the next step would be to look for confounding variables and mechanisms.

When it comes to gender, we do have a pretty clear mechanism: women are told from early childhood that they are bad at STEM disciplines, they are constantly steered in their youth towards more "feminine" activities by parents, teachers, media, and most people and messages they come across, and then they have to endure kinds and levels of harassment their male colleagues don't. None of those things has anything to do with how good an engineer a woman can be, but they do make it much harder to become one. At any given stage of academic and professional development, a woman has most likely gone through harsher intellectual and psychological pressures than her male peers; a brilliant female engineer isn't proof that a good enough woman can be an engineer, but rather that women need to be extraordinary in order to reach the professional level of a less competent male peer.

An eight-to-one ratio of male to female engineers doesn't reflect a difference in abilities and potential, but rather the strength of the gender-based filters (which, again, begin when a child enters school, sometimes before that, and never stop).

But algorithms won't figure that out unless you give them information about the whole process. Add to a statistical model the different gender-based influences through a person's lifetime — the ways in which, for example, the same work is rated differently according to the perceived gender of the author — and any mathematical analysis will show that gender is, as far as the data can show, absolutely irrelevant; men and women go through different pipelines, even if inside the same organizations, so achievement rates aren't comparable without adjusting for the differences between them. Adding that kind of sociological information might seem extraneous, but, actually, not doing it is statistical malpractice: by ignoring key variables that do depend on gender (everything from how kids are taught to think about themselves to the presence of sexual harassment or bias in performance evaluations) you are setting yourself up to fall for meaningless pseudo-causal correlations.
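A toy simulation makes the omitted-variable trap concrete (all numbers here are hypothetical; the point is the direction of the error, not the magnitudes): if ability is identically distributed but one group faces a higher bar to entry, a naive look at who got through gets the inference exactly backwards.

```python
import random

random.seed(0)

# Hypothetical model: ability is identically distributed across gender,
# but a gender-biased filter (discouragement, harassment, biased ratings)
# raises the bar one group must clear to enter the profession.
def simulate(n=100_000, extra_bar=1.0):
    engineers = []
    for _ in range(n):
        gender = random.choice(["F", "M"])
        ability = random.gauss(0, 1)            # same distribution for all
        bar = 1.0 + (extra_bar if gender == "F" else 0.0)
        if ability > bar:                       # filtered entry
            engineers.append((gender, ability))
    return engineers

eng = simulate()
women = [a for g, a in eng if g == "F"]
men = [a for g, a in eng if g == "M"]

# Far fewer women get through, and the survivors are *more* able on
# average: the naive "women are rare, so worse at it" reading is wrong
# in both directions.
print(f"ratio M:F entering profession: {len(men) / len(women):.1f}")
print(f"mean ability, women who entered: {sum(women) / len(women):.2f}")
print(f"mean ability, men who entered:   {sum(men) / len(men):.2f}")
```

A model that sees only the survivors, without the filter variable, will "learn" that gender predicts rarity and conclude something about ability; adding the filter makes gender drop out, as the text argues.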

In other words, in many cases a feminist understanding of the micropolitics of gender-based discrimination is a mathematically necessary part of data set preparation. Perhaps counterintuitively, ignoring gender isn't enough. Think of it as a sensor calibration problem: much data comes in one way or another from interactions between individuals, and those interactions are, empirically and verifiably, influenced by biases related to gender (and race, class, age, etc). If you don't account for that "sensor bias" in your model — and this takes both awareness of that need and working with the people who research and write about this, you can't half-ass it whether as an individual programmer or as a large tech company — you'll get the implications of the data very wrong.

We've been getting things wrong in this area for a long while, in a lot of ways. Let's make sure that as we give power to algorithms, we also give them the right data and understanding to make them more rational than us. Processing power, absent critical structural information, only guarantees logical nonsense. And logical nonsense has been the cause and excuse of much human harm.

# Football, semantics, and political violence

Something journalism and poetry have in common is that the choice of words used to describe an event determines its meaning, as much as or more than the event itself.

The plot: how the Independiente barra brava's shakedown of Ariel Holan put the whole club in check is an article in the Sports section.

Systematic public extortion by criminal gangs operating openly continues would be the lead story in the Crime section, if not the front page of a newspaper.

After decades, the Argentine State remains incapable of eliminating or containing criminal cartels whose members operate openly, enjoy significant popular support in some of their activities, and exercise de facto, if not continuous, control over certain physical spaces within the national territory belongs in the Politics section.

The term mafia, applied to football barras, is more than metaphorical: the combination of State weakness, integration with popular culture, a near-symbiotic relationship with legal organizations, and the systematic use of extortion as one source of financing for other criminal activities closely parallels the traditional role of criminal organizations in Sicily, with football replacing religious-social activities as a source of at least nominal social validation. And the regular control the barras exert over stadiums, were it continuous rather than limited to match days, would be no less serious, politically speaking, than the Mexican State's loss of sovereignty over parts of its territory to criminal organizations.

This is characteristic of politics in its broadest sense: if the football barras were just starting their activities, they would be considered an unacceptable challenge to the republican system. After decades of existence, and their mimicry of an often mythical "peaceful fan base," they live in the Sports section. Occasionally one or two important members of a barra are imprisoned for specific acts, ignoring systemic patterns of criminal activity; exactly as with the cartels in Mexico or the mafia in Sicily, this has only a partial effect on the power of those individuals, and absolutely none on the organizations themselves.

They aren't just "a few violent fans," nor are they a social aberration, or a problem of ethics or education. They are part of Argentina's social and political structure, and as long as they keep appearing in the sports section of the newspapers, they will remain so.

# Gamification doesn't need to be enjoyable to be effective

You're more likely to cheat on your taxes than to walk barefoot into a bank, even if it's summer and your feet hurt. That's because we don't just care about how bad the consequences of something could be, but also how certain they are to happen, and, illogically but consistently, how soon they will happen.

That's what makes Facebook so addictive. Staying another minute isn't going to make you happy, but it guarantees a small and immediate dose of socially-triggered emotion, and that's an incredibly powerful driver of behavior. The business of Facebook is to know enough about you, and have enough material, to make sure it can keep that subliminal promise while showing you targeted ads.

Governments' tools are noticeably blunter. Most of the laws that are generally respected reflect some sort of pre-existing social agreement. Conversely, where that social agreement doesn't exist (e.g., the legitimacy of buying dollars in Argentina, or the acceptability of misogyny pretty much everywhere), laws can only be enforced sporadically and with delay, and hence are seldom effective.

What the ongoing deployment of these technologies by totalitarian governments — and the totalitarian arms of not-entirely-totalitarian governments — makes possible is the recreation of Facebook, but one co-founded by Foucault. The granularity, flexibility, and speed of perception and action, once a State is digitized enough, is unfathomable by the standards of any State in history. You can charge a fine, report a behavior to a boss, inconvenience a family member, lower a credit score, or notify a child's school the very moment a frowned-upon action is performed, with (sufficiently) total certainty and visibility. It doesn't have to be a large punishment or a lavish reward, or even the same for everybody: just as Facebook knows what you like, a government good enough at processing the data it has can know what you care about, and calibrate exactly how to use it so even small transgressions and small "socially beneficial activities" get a small but fast and certain response. Small but fast and certain is a cheap and effective way of shaping behavior, as long as it targets something you do care about, and not generic "points" or "achievements." It can be your children's educational opportunities, your job, your public image, anything — governments, once they develop the right processes and software infrastructure, can always find buttons to push.

This kind of detail-oriented totalitarianism used to be possible only in the most insanely paranoid societies (the Stasi being the paradigmatic example), but it scaled very poorly, and with ultimately suicidal economic and social costs.

Doing it with contemporary technology, on the other hand, scales very well, as long as a government is willing to cede control of the "last mile" of carrots and sticks to software. You would be very surprised if you logged into Facebook one day and saw something as impersonal and generic, or at best as fake-personalized, as most interactions with the State are now. A government leveraging contemporary technology has significant computing power constantly looking at you and thinking about you — what you're doing, what you care about, what you're likely to do next — and instead of different parts of the government keeping their own files and dealing with you on their own time, everything from the cop on your street to your grandparents' pharmacist is integrated into that bit of the State that is exclusively and constantly dedicated to nudging you into being the best citizen you can possibly be.

It won't just be a cost-effective way of social control. Everything we know of psychology, and our recent experience with social networks and other mobile games, suggests it'll be an effective way of shaping our decisions before we even make them.

# Open Source is one of the engines of the world's economy and culture. Its next iteration will be bigger.

Once upon a time, the very concept of Open Source was absurd, and only its proponents ever thought it could be anything more than marginal. Important software could only be built and supported by sophisticated businesses; it was an expensive industrial component whose blueprints — the source code — were extremely valuable.

But Open Source won. It became clear, to no historian's surprise, that once knowledge is sufficiently distributed and tools become cheap enough, distributed development by heterogeneous (and heterogeneously motivated) people not only creates high-quality software at zero marginal cost; because it takes only a single motivated individual to leverage existing developments and move them forward, regardless of novelty or risk, it's also inherently much more creative.

Open Source developers can take risks others can't, and they begin from further ahead, on the shoulders of other, taller developers. What's more adventurous than a single individual toying with an idea out of love and curiosity? When has true innovation ever begun any other way?

The form of this victory, though, wasn't the one early adopters expected. Desktop computers as they were known are definitely on the wane, and it's still not "the Year of Linux on the Desktop." Relatively few people knowingly use Open Source software as their main computing environment, and the smartphone, history's most popular personal computing platform, is — software licenses notwithstanding — as regulated and proprietary an environment as you could imagine.

The social and political promise of Open Source is still unrealized. Things have software inside them now, programs monitoring and controlling them to a larger degree than most people imagine, and this software is closed in every sense of the word. It's not just for surveillance: the software in car engines lies to pass government regulation tests, the software controlling electric batteries makes them work worse than they could so you have the "option" of paying the manufacturer more to flip a software switch that de-hobbles them, and so on and so forth. Things work worse than they say they do, do things they aren't supposed to, and are not really under your control even after you've bought them. There's little you can do about that, and that little is very difficult — not just because the source code is hidden, but because in many cases, through a Kafkaesque global system of "security" and copyright laws, it's literally a crime to try to understand, never mind fix, what this thing you bought is doing.

No, the main impact of Open Source was also what made it possible: the Internet. It's not just that the overwhelming majority of the software that runs it, from the operating systems of most servers to the JavaScript frameworks rendering web pages, is Open Source. There could have been no explosive growth of the online world with license costs attached to every individual piece of software, no free-form experimentation with content, shapes, tools, and modes of use. Most of the sites and services we use today, and most of the tools used to build them, began as an individual's crazy idea — as just one example, the browser you're using to read this was originally a tool built by and for scientists — and, had the Internet's growth been directed by the large software companies of that age, it'd look more like cable TV in diversity, speed of technological change, and overall social impact than what we have now.

Even if you don't own a smartphone or a computer, finance, government, culture — our entire society — has been profoundly influenced by an Internet, and a computing ecosystem in general, simply unthinkable without Open Source. Like many truly influential technological shifts, its invisibility to most people doesn't diminish, but rather highlights, its ubiquity and power.

What's next?

More Open Source is an obvious, true, but conservative observation. Of course people, governments, and companies (even those whose business model includes selling some software) will continue to write, distribute, and use Open Source. Each of them for their own goals, some of them attempting to cheat or break the system, but, most likely, always coming back to the economic attractor of a system of creating and using technology that, for many uses and in many contexts, simply works too well to abandon.

What comes next is what's happening now. Still not fully exploited, the Internet is no longer the cutting edge of how computing impacts our societies. Call this latest iteration Artificial Intelligence, cognitive computing, or whatever you want. Silicon Valley throws money at it, popular newspapers write about the danger it poses to jobs, China aims at having the most advanced AI technology in the world as a strategic goal of the highest priority, and even Vladimir Putin, not a man inclined to idealistic whimsy, said that whichever country leads in Artificial Intelligence "will rule the world."

Unlike Open Source during its critical years, Artificial Intelligence certainly isn't a low-profile phenomenon. But a lot of the coverage seems to make the same assumptions the software industry used to make, that truly relevant AI can only be built by superpowers, giant companies, or cutting-edge labs.

To some degree this is true: some AI problems are still difficult enough that they require billions of dollars to attack and solve, and the development of the tools required to build and train AIs requires in many cases extremely specialized knowledge in mathematics and computer science.

However, "some" doesn't mean "all," and once the tools used to build AIs are Open Source, which many if not most of them are, using them becomes progressively eaiser. There's something happening that has happened before: almost every month it's cheaper, and it requires less specialized knowledge, to make a program that learns from humans how to do something no machine ever could, or that finds ways to do it much better than we can. Rings a bell?

The more intuitive parallel isn't software, but rather another success story of open, collaborative development that went from a ridiculous proposition to upending a centuries-old industry: Wikipedia. Like Open Source software, and with a higher public profile, Wikipedia went from an esoteric idea with no chances of competing in quality with the carefully curated professional encyclopedias, to what's very often the first (and, too often for too many people, the only) source of factual information about a topic.

What we're beginning to build is a Wikipedia of Artificial Intelligences, or, better yet, an Internet of them: smart programs highly skilled in specific areas that anybody can download, use, modify, and share. The tools have only just become available, and the intelligences themselves are still mostly built by programmers for programmers, but as the know-how required to build a given level of intelligence becomes smaller and better distributed, this is beginning to change.

Instead of scores of doctors contributing to a Wikipedia page or a personal site about dealing with a certain medical emergency at home, we'll have them contributing to teach what they know to a program that will be freely available to anybody, giving perhaps life-saving advice in real time. A program any doctor in the world will be able to contribute to, modify, and enhance, keeping up with scientific advances, adapting it to different countries and contexts.

It won't replace doctors, lawyers, interior decorators, editors, or other human experts — certainly not the ones who leverage those programs to make themselves even better — but it'll potentially give each human in the world access to advice and intellectual resources in every profession, art, and discipline known to humankind, from giving you honest feedback about your amateur opera singing, to reading and explaining the meaning of whatever morass of legal terms you're about to click "I Accept" to. Instantaneously, freely, continuously improving, and not limited to what a company would find profitable or a government convenient for you to know.

If the Internet, whenever and wherever we choose to, is or can be something we build together, a literal commons of infinitely reusable knowledge, we'll be building, when and where we choose to, a commons of infinitely reusable skills at our command.

It will also resemble Wikipedia more than Open Source in the ease with which people will be able to add to it. Developing powerful software has never been easier, but it still takes specialized skills; contributing to Wikipedia, or making a post on a site or social network about something you know, only requires technical knowledge many societies already take for granted: open a web page and start typing about the history of Art Deco, your ideas for a revolutionary fusion of empanadas with Chinese cuisine, or whatever else you want to teach the world about.

Teaching computers about many things will be even easier than that. We're close to the point where computers will be able to learn your recipe just from a video of you cooking and talking about it, and if besides sending that video to a social network you give access to it to an Open Cook, it'll learn from your recipe, mix it with other ideas, and be able to give improved advice to anybody else in the world. You'll also be able to engage directly with these intelligences to teach them deliberately: just as artificial intelligences can learn to beat games just by playing them, they'll be able to "pick up" skills from humans by doing things and asking for feedback. And if you don't like how one does something, you can always teach it to do it a different way, and anybody will be able to use your version if they think it's better, and in turn modify it any way they want.

Neither Open Source nor Wikipedia, under different names, looks, and motivations, is as new as it seems. They've been known for decades, and only seemed pointless or impossible because our shared imagination often runs a bit behind our shared power. We've begun to realize we can make computers do an enormous number of things, much sooner than we thought we would, and while we try to predict and shape the implications of this, we're still approaching it as if revolutionary technology can only work if built and controlled by giant countries and companies.

They are a part of it, but not the only one, and over the long term perhaps not even the most important part. Google matters because it gives us access to the knowledge we — journalists, scientists, amateurs, scholars, people armed with nothing more and nothing less than a phone and curiosity — built and shared. We go to Facebook to see what we are doing.

Some Artificial Intelligences can only be built by sophisticated, specialized organizations; some companies will become wealthy (or even more so) doing it. And some others can and will be built by all of us, together, and over the long term their impact will be just as large, if not larger. The world changed once everybody was able, at least in theory, to read. It changed again when everybody was able, at least in theory, to write something that everybody in the world can read.

How much will it change again once the things around us learn how to do things on their own, and we teach them together?

# Russia 1, Data Science 0

The lessons, I suspect, are three:

• The theory and practice of data-driven campaigning is still very immature. Algorithmize the Breitbart-Russia-Assange-Fox News maneuver, and you'll have something far ahead of the state of the art. (I believe this will come from more sophisticated psychological modeling, rather than more data.)
• If a country's political process was as vulnerable as the US' was to what the Russians did, then how will it fare against an external actor properly leveraging the kind of tools you can develop at the intersection of obsessive data collection, an extremely Internet-focused government, cutting-edge AI, and an assertive foreign policy?
• You know, like China. Hypothetically.

Whenever this happens, the proper reaction isn't to get angry, but to recognize that a political system proved embarrassingly vulnerable, and to take measures to improve it. That said, that's slightly less likely to happen when those informational vulnerabilities are also exploited by the same local actors partially responsible for fixing them.

(As an aside, "our under-investment in security / deliberate exploitation of regulatory gaps we lobbied for / cover-up of known vulnerabilities would've been fine if not for those dastardly hackers" is also the default response of large companies to this kind of thing; this isn't a coincidence, but a shared ethos.)

# What does it mean for Argentina to grow 3%?

Is it a lot? Is it a little? It's both. What follows is a quick explanation, ignoring a host of important factors, of what that 3% the World Bank predicts means for people's wallets and for the elections.

The clearest way I can think of to explain that 3% is to consider what would happen if it held from now until the 2019 elections. Simplifying a great deal, and assuming nothing unexpectedly good happens (in a rather ruthless sense, one example would be an ecological collapse in the soy-producing regions of the US) or unexpectedly bad (say, a war in Korea), 3% from now to 2019 would allow:

• Reducing the deficit by roughly a fifth (or more if state spending is cut).
• Increasing state spending per person by roughly 10% (or more if the deficit is maintained or increased).

On one hand, this would be a significant achievement: growing 3% without a jump in the price of your primary exports is very hard to pull off, and doing it for several years even harder. The US would love to manage it sustainably, and last year fewer than one country in three did. On the other hand, it would translate into positive but not spectacular changes in Argentines' standard of living.

Between now and the 2023 elections something improbable is almost certain to happen, but imagining that the 3% annual growth holds — and this would be a real administrative and political triumph — the deficit could shrink to a fifth of what it is now, with state spending per person about a third higher (with different numbers depending on tax reforms, political decisions, etc.; this is just one reasonable scenario). A very positive change in quality of life, definitely. Not spectacular. About as good as it would be realistic to expect, probably.
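The compounding behind those two horizons is straightforward; a minimal sketch that simply assumes the 3% holds and ignores everything else (population growth, inflation, and the host of important factors already set aside):

```python
# A minimal sketch of the compounding arithmetic, assuming 3% annual
# growth simply holds, with everything else ignored.
def cumulative_growth(rate, years):
    """Total growth after `years` of compounding at `rate`."""
    return (1 + rate) ** years - 1

# From a 2017 vantage point: two years to the 2019 elections,
# six to the 2023 ones.
print(f"by 2019: {cumulative_growth(0.03, 2):.1%}")   # ~6.1%
print(f"by 2023: {cumulative_growth(0.03, 6):.1%}")   # ~19.4%
```

Roughly 6% more economy by 2019 and almost 20% more by 2023: real money, but nothing that transforms daily life on its own, which is exactly the political problem.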

Would that be politically sufficient to maintain, whichever party wins, a coherent economic policy for the decade or more it would take to put the country on something like a self-sustaining growth curve? That's the question. Historically, Argentina has three structural economic problems:

• An unsophisticated economy, with internal mechanisms practically designed to make it hard to improve.
• Political (and ultimately cultural) timeframes incompatible with how long sustained incremental growth takes — the only way economies grow, barring historical exceptions in situations Argentina is not in.
• A fairly rigid "ceiling" on the economy's efficiency that seems quite structural; we've never managed to break through it, and I suspect doing so would require fairly radical cultural and social changes, especially in the context of Argentine political traditions.

In the long term (which in this context sadly means "not the next elections, but the ones after that"), the Government's challenge — any government's — is twofold: on one hand, an administrative, economic, and internal-negotiation policy that sustains a significant growth rate over time; on the other, the satisfaction of public expectations that run higher than what that growth rate makes possible in purely material terms (in some cases rightly so; a family going hungry can't wait for that 3% to do its work). Pulling off that constant juggling act between the material and the symbolic for at least a couple of decades — in a country deeply suspicious, in a way that is historically understandable but also too automatic, of the very concept of a technically sophisticated economy and State — is what the political class elected and/or tolerated by Argentines has not been able to achieve, in the few cases it was even attempted.

In short, that 3% is, empirically, an achievement. Maintaining it consistently until the next elections, and especially until the ones after those, would be a remarkable administrative and political triumph, besides requiring a substantial dose of luck.

It is the peculiarity of the country, and the trap it has been stuck in for more than a century, that, for reasons both good and bad, it's not at all clear this would be enough.

# Tesla (or Google) and the risk of massively distributed physical terrorist attacks

You know, an autonomous car is only a software vulnerability away from being a lethal autonomous weapon, and a successful autonomous car company is only a hack away from being the world's largest (if single-use) urban combat force. Such an event could easily be the worst terrorist attack in history: imagine a year's worth of traffic deaths, in multiple countries all over the world, compressed into a single, horrifying span of ten minutes. And how ready is your underfunded public transit system to cope with a large proportion of the city's cars being unusable during the few days it takes the company to deal with the hack, while everybody goes after it with pitchforks both legal and more or less literal?

But this is a science-fictional premise that's already been used in fiction more than once. In the real world, of course, the whole of our critical software infrastructure is practically impervious to any form of attack, and, if nothing else, companies take the ethical responsibilities inherent in their control over data and systems with the seriousness they demand, even lobbying for higher levels of regulation than a less technically sophisticated public and its governments ask for. And, while current on-board software systems are known to be ridiculously vulnerable to remote attacks, it's only to be expected that more complex programs running on heterogeneous large-scale platforms under overlapping spheres of regulation and oversight will be much safer.

# Probability-as-logic vs probability-as-strategy vs probability-as-measure-theory

Attention conservation notice: Elementary (and possibly not-even-right) if you have the relevant mathematical background, pointless if you don't. Written to help me clarify to myself a moment of categorical (pun not intended) confusion.

What's a possible way to understand the relationship between probability as the (per Cox) extension of classical logic, probability as an optimal way to make decisions, and probability in the frequentist usage? Not in any deep philosophical sense, just in terms of pragmatics.

I like to begin from the Bayes/Jaynes/Cox view: if you take classical logic as valid (which I do in daily life) and want to extend it in a consistent way to continuous truth values (which I also do), then you end up with continuous logic/certainty values we unfortunately call probability for historical reasons.

Perhaps surprisingly, its relationship with frequentist probability isn't necessarily contentious. You can take the Kolmogorov axioms as, roughly speaking, helping you define a sort of functor (set up, awfully enough, through shared notation and vocabulary — an observation that made me shudder a bit; it's almost magical thinking) between the place where you do probability-as-logic and a place where you can exploit the machinery of measure theory. This is a nice place to be when you have to deal with an asymptotically large number of propositions; possibly the Probability Wars were driven mostly by doing this so implicitly that we aren't clear about what we're putting *into* this machinery, and then, because the notation is similar, forgetting to explicitly go back to the world of propositions, which is where we want to be once we're done with the calculations.

What made me stare a bit at the wall is the other correspondence: Let's say that for some proposition $A$, $P[A] > P[\neg A]$ in the Bayesian sense (we're assuming the law of excluded middle, etc; this is about a different kind of confusion). Why should I bet on $A$? In other words, why the relationship between probability-as-certainty and probability-as-strategy? You can define probability based on a decision theoretic point of view (and historically speaking, that's how it was first thought of), but why the correspondence between those two conceptually different formulations?

It's a silly non-question with a silly non-answer, but I want to record it because it wasn't the first thing I thought of. I began by thinking about $P[\text{win} \mid (P[A] > P[\neg A]) \wedge \text{bet on } A]$, but that leads to a lot of circularity. It turns out that the forehead-smacking way to do it is simply to observe that *bet on $A$* is the best strategy iff $A$ turns out to be true, and this isn't circular if we haven't yet assumed that probability-as-strategy is the same as probability-as-logic; rather, it's a non-tautological consequence of the assumed psychology and sociology of what *bet on* means: I should've done whatever ended up working, regardless of what the numbers told me (I'll try to feel less upset the next time somebody tells me that).

But then, in the sense of probability-as-logic, $P[\text{the best strategy is to bet on } A] = P[A]$ by substituting propositions (and hence without resorting to any frequentist assumption about repeated trials and the long term), so, generally speaking, you end up with probability-as-strategy being part of probability-as-logic. I'm likely counting angels dancing on infinitesimals here, but it's something that felt less clear to me earlier today: probability-as-strategy is probability-as-logic, you're just thinking about propositions about strategies, which, confusingly, in the simplest cases end up having the same numerical certainty values as the propositions the strategies are about. But those aren't the same propositions, although I'm not entirely sure that in practice, given the fundamentally intuitive nature of *bet on* (insert here very handwavy argument from evolutionary psychology about how we all descend from organisms who got this well enough not to die before reproducing), you get in trouble by not taking this into account.
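As a sanity check rather than a proof, here's a small simulation (all numbers invented): a strategy that bets on $A$ iff $P[A] > P[\neg A]$ wins more often in the long run than, say, one that bets on $A$ with probability $P[A]$.

```python
import random

random.seed(0)

def simulate(p_true, strategy, trials=100_000):
    """Fraction of trials in which `strategy` bets on the proposition
    that turns out to be true."""
    wins = 0
    for _ in range(trials):
        a_happens = random.random() < p_true
        wins += strategy(p_true) == a_happens
    return wins / trials

# "Bet on A iff P[A] > P[not A]" vs. betting on A with probability P[A].
argmax_bet = lambda p: p > 0.5
matching_bet = lambda p: random.random() < p

print(simulate(0.7, argmax_bet))    # ≈ 0.70, i.e. max(P[A], P[¬A])
print(simulate(0.7, matching_bet))  # ≈ 0.7² + 0.3² = 0.58
```

Nothing here escapes the circularity worry above, of course; it just shows numerically that among strategies that only see the certainty value, always siding with the more probable proposition dominates.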

# Original Fic: The Gift of Memory

Not the kind of story I usually post here, but I don't just write dread-infused, mostly-dystopian sci-fi, you know?

In your dreams the world is full of marvels, love, safety. You're immortal and beautiful, and reality, charmed, dances with your thoughts.

In your nightmares the Universe's laws are poisoned, malignant, infected by something else. Something that shouldn't be there, is. Something that hates, haunts, hungers for you.

In your waking you forget they are memories.

We could've taken them with the power and the beauty and the everlasting life, but we enjoy reliving the endless night of our victory when we sucked the world dry and left it the ruined husk it is now. We left you the memories and the sadness, but not the knowledge. At times, in the satiety after other victories among the unperceived rubble of other worlds, it gives us an extra bit of joy.

.finis.

# Big Data, Endless Wars, and Why Gamification (Often) Fails

Militaries and software companies are currently stuck in something of a rut: billions of dollars are spent on the latest technology, including sophisticated and supposedly game-changing data gathering and analysis, and yet for most, victory seems at best to be a matter of luck, and at worst perpetually elusive.

As different as those "industries" are, this common failure has a common root; perhaps unsurprisingly so, given the long and complex history of cultural, financial, and technological relationships between them.

Both military action and gamified software (of whatever kind: games, nudge-rich crowdsourcing software, behaviorally intrusive e-commerce shops, etc) are focused on the same thing: changing somebody else's behavior. It's easy to forget, amid the current explosion — pun not intended — of data-driven technologies, that wars are rarely fought until the enemy stops being able to fight back, but rather until they choose not to, and that all the data and smarts behind a game are pointless unless more players do more of what you want them to do. It doesn't matter how big your military stick is, or how sophisticated your gamified carrot algorithm, that's what they exist for.

History, psychology, and personal experience show that carrots and sticks, alone or in combination, do work. So why do some wars take forever, and some games or apps whimper and die without getting any traction?

The root cause is that, while carrots and sticks work, different people and groups have different concepts of what counts as one. This is partly a matter of cultural and personal differences, and partly a matter of specific situations: as every teacher knows, a gold star only works for children who care about gold stars, and the threat of being sent to detention only deters those for whom it's not an accepted fact of life, if not a badge of honor. Hence the failure of most online reputational systems, the endemic nature of trolls, the hit-and-miss nature of new games not based on an already successful franchise, or, for that matter, the enormous difficulty even major militaries have stopping insurgencies and other similar actors.

But the root problem behind that root problem isn't a feature in the culture and psychology of adversaries and customers (and it's interesting to note that, artillery aside, the technologies applied to both aren't always different), but in the culture and psychology of civilian and military engineers. The fault, so to speak, is not in our five-stars rating systems, but in ourselves.

How so? As obvious as it is that achieving the goals of gamified software and military interventions requires a deep knowledge of the psychology, culture, and political dynamics of targets and/or customer bases, software engineers, product designers, technology CEOs, soldiers, and military strategists don't receive more than token encouragement to develop a strong foundation in those areas, much less are they required to do so. Game designers and intelligence analysts, to mention a couple of exceptions, do, but their advice is often given but a half-hearted ear, and, unless they go solo, they lack any sort of authority. Thus we end up, by and large, with large and meticulously planned campaigns — of either sort — that fail spectacularly or slowly fizzle out without achieving their goals, not for failures of execution (those are also endemic, but a different issue) but because the link between execution and the end goal was formulated, often implicitly, by people without much training in or inclination for the relevant disciplines.

There's a mythology behind this: the idea that, given enough accumulation of data and analytical power, human behavior can be predicted and simulated, and hence shaped. This might yet be true — the opposite mythology of some ineffable quality of unpredictability in human behavior is, if anything, even less well-supported by facts — but right now we are far from that point, particularly when it comes to very different societies, complex political situations, or customers already under heavy "attack" by competitors. It's not that people can't be understood, and forms of shaping their behavior designed, it's that this takes knowledge that for now lies in the work and brains of people who specialize in studying individual and collective behavior: political analysts, psychologists, anthropologists, and so on.

They are given roles, write briefs, have fun job titles, and sometimes are even paid attention to. The need for their type of expertise is paid lip service to; I'm not describing explicit doctrine, either in the military or in the civilian world, but rather more insidious implicit attitudes (the same attitudes that drive, in an even more ethically, socially, and pragmatically destructive way, sexism and racism in most societies and organizations).

Women and minorities aside (although there's a fair and not accidental degree of overlap), people with a strong professional background in the humanities are pretty much the people you're least likely to see — honorable and successful exceptions aside — in a C-level position or having authority over military strategy. It's not just that they don't appear there: they are mostly shunned, and implicitly or explicitly, well, let's go with "underappreciated." Both Silicon Valley and the Pentagon, as well as their overseas equivalents, are seen and see themselves as places explicitly away from that sort of "soft" and "vague" thing. Sufficiently advanced carrots and sticks, goes the implicit tale, can replace political understanding and a grasp of psychological nuance.

Sometimes, sure. Not always. Even the most advanced organizations get stuck in quagmires (Google+, anyone?) when they forget that, absent an overwhelming technological advantage, and sometimes even then (Afghanistan, anyone?), successful strategy begins with a correct grasp of politics and psychology, not the other way around, and that we aren't yet at a point where this can be provided solely by data gathering and analysis.

Can that help? Yes. Is an organization that leverages political analysis, anthropology, and psychology together with data analysis and artificial intelligence likely to out-think and out-match most competitors regardless of relative size? Again, yes.

Societies and organizations that reject advanced information technology because it's new have, by and large, been left behind, often irreparably so. Societies and organizations that reject humanities because they are traditional (never mind how much they have advanced) risk suffering the same fate.

# Article in La Nación

Last Sunday I was lucky enough to have an article [in Spanish] published in La Nación's Economía para No Economistas: El PBI Medido en "Globones". It's a 30,000 ft. look at the Argentine economy around three numbers, and a look at how old, deep, and stable the dynamics are when seen from this point of view.

# A simplified nuclear game with Kim Jong-un

Despite its formal apparatus and cold reputation, game theory is in fact the systematic deployment of empathy. It's hard to overstate how powerful this can be, with or without mathematical machinery behind it, so let's take an informal look at a game-theoretical way of empathizing with somebody none of us would particularly want to, North Korea's Kim Jong-un.

First, a caveat: as I'm not trained in international politics, and this is an informal toy model rather than a proper analytical project, it'll be very oversimplified both in form and content. The main point is simply to show a quick example of how to think "game-theoretically" (in a handwavy, pre-mathematical sense) that for once isn't the Prisoner's Dilemma.

This particular game has two players, Kim and the US, and three possible outcomes: regime change, collapse, and status quo. We don't need to put specific values to each outcome to note that each player has clear preferences:

• For the US, collapse < status quo < regime change
• For Kim, collapse ≈ regime change < status quo

(From Kim's point of view, a collapsing North Korea and one where he's no longer in charge are probably equivalent.)

Let's simplify the United States' possible moves to attempt regime change and do nothing. The latter results in the status quo with certainty, while the former might end up in a proper regime change with probability $p$, or in a more or less quick collapse with probability $1-p$. Therefore, the United States will attempt a regime change as soon as $p \cdot V_{\text{regime change}} + (1-p) \cdot V_{\text{collapse}} > V_{\text{status quo}}$, where each $V$ is the value the US assigns to that outcome.

There are multiple ways in which Kim's perceived risk can rise, even aside from direct threats. For example:

• Decreased rapport between the US and South Korea or China (the two major countries who would suffer the brunt of the costs of a collapse) decreases the cost of collapse in the US' strategic calculations, and hence makes a regime change attempt more likely.
• Every attempt of regime change by the US elsewhere in the world, and any expression of increased self-confidence in their ability to perform one, makes Kim's estimate of the US' estimate of $p$ that much higher, and hence a regime change attempt more likely.
• Any internal change in North Korea's politics risking Kim's control of the country, should it be found, will also raise $p$.
• For that matter, a sufficiently strong fall in their military capabilities would eventually have the same effect.

Kim most likely knows he can't actually defend himself from an attempted regime change (there's no repelled regime change attempt outcome), so his only shot at staying in power is to change the US' strategic calculus. Given how unlikely it seems to be that he can make the status quo more desirable, he has, from a strategic point of view, to make the cost of an attempted regime change high enough to deter one. That's what atomic bombs are for: you change the payout matrix, and you change the game equilibrium. Once you can blow up something in the United States, which of course has an extremely negative value for the US, then even if $p = 1$, the expected value of an attempted regime change can fall below that of the status quo.

The unintended problem is that, by both signalling and action, Kim and his regime have convinced the world that they are not entirely rational in strategic terms. As Schelling noted, deterrence often requires convincing other players that you're "crazy enough to do it," but in Kim's case nobody feels entirely certain that he will only use a nuclear weapon in case of an attempted regime change, or exactly what he'd consider one, so, although possessing a nuclear weapon decreases the expected value of a regime change attempt, it also decreases the value of the status quo, making the net impact on the US' strategic calculus — the real goal of North Korea's nuclear program — doubtful. It can, and perhaps has, set the system on a dangerous course: the US decries the country as dangerous, the probability of a regime change attempt grows, Kim tries to develop and demonstrate stronger nuclear capabilities, this makes the US posture harsher, etc.

In this toy model — and I emphasize it's one — any attempt to de-escalate has to begin by acknowledging that Kim's preferences between outcomes are what they are. Sanctions that weaken the regime spur, rather than delay, nuclear development. Paradoxically and distastefully, what you want is to credibly commit to not attempting a regime change, which at this point can only be done by actively strengthening the regime. This is something that both China and South Korea seem acutely aware of: pressures on and threats to North Korea tend to be of the "annoying but not regime-threatening" kind, as anything stronger would be counterproductive and not credible, and their assistance to the country has nothing to do with ideological sympathy, and everything to do with keeping the country away from collapse.
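With invented payoff numbers (nothing here is calibrated to reality, it's the structure that matters), the toy model's strategic calculus can be sketched as:

```python
# Toy payoffs for the US, higher is better; all values are made up
# purely to illustrate the shape of the decision.
V_REGIME_CHANGE, V_STATUS_QUO, V_COLLAPSE = 10, 0, -20

def attempt_is_attractive(p, nuclear_cost=0):
    """US attempts regime change iff the expected value of trying,
    p*V_rc + (1-p)*V_col, beats doing nothing (V_sq). A credible
    nuclear deterrent subtracts its cost from both outcomes of an
    attempt, since either one triggers retaliation."""
    ev_attempt = (p * (V_REGIME_CHANGE - nuclear_cost)
                  + (1 - p) * (V_COLLAPSE - nuclear_cost))
    return ev_attempt > V_STATUS_QUO

# Without a deterrent, a confident-enough US (high p) attempts:
print(attempt_is_attractive(0.9))                   # True
# With a costly-enough deterrent, even certainty of success isn't enough:
print(attempt_is_attractive(1.0, nuclear_cost=15))  # False
```

The point of the sketch is just that the deterrent works not by changing $p$ but by shifting the whole payoff column for "attempt," which is exactly the equilibrium change described above.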

But not everything is bleakly pragmatic in game theory, and more humane suggestions can be derived from the above analysis. E.g.,

• A Chinese offer to strengthen and modernize North Korea's nuclear command chain to avoid hasty or accidental deployments would raise a bit the value of the status quo without increasing the chance of a regime change attempt, a mutual win that'd probably be accepted.
• Any form of humanitarian development, as long as it's not seen as threatening the regime, could be implemented if Kim can sell it internally as being his own accomplishment. That'd be very annoying to everybody else, but suggests that quality of life in North Korea (although not political freedom) can be improved in the short term.
• Credibly limited tit-for-tat counterattacks might, paradoxically, reinforce everybody's trust in mutual boundaries. So, if a North Korean hack against an US bank is retaliated against by hitting Kim's own considerable financial resources in a way that is obviously designed to hurt him while also obviously designed to not impact his grip on power, that'd have a much higher chance of changing his behavior than threatening war.

To once again repeat my caveats, this is far from a proper analysis. To mention one of a multitude of disqualifying limitations, useful strategic analysis of this kind often involves scores of players (e.g., we'd have to look at internal politics in North and South Korea, China, Japan, and the United States, to begin with) with multiple, overlapping, multi-step games, and certainly more detailed and well-sourced domain information than what I've applied here. To derive real-world opinions or suggestions from it would be analytical malpractice.

The point of the article isn't to give yet another uninformed opinion on international politics, but rather to show how even a very primitive and only roughly formal analysis can help frame a discussion about a complex topic in a way that a more unstructured approach couldn't, especially when there are strong moral issues at play.

Sometimes emotions get in the way of understanding somebody else. Thankfully, we have maths to help with that.

# This screen is an attack surface

A very short note on why human gut feeling isn't just subpar, but positively dangerous.

One of the most active areas of research in machine learning is adversarial machine learning, broadly defined as the study of how to fool and subvert other people's machine learning algorithms for your own goals, and how to prevent it from happening to yours. A key way to do this is through controlling sampling; the point of machine learning, after all, is to have behavior be guided by data, and sometimes the careful poisoning of what an algorithm sees — not the whole of its data, just a set of well-chosen inputs — can make its behavior deviate from what its creators intended.
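As a minimal sketch of that sampling-control idea, with entirely invented data and the simplest possible learner: a handful of well-chosen training points flips a nearest-neighbour classifier's verdict on a target input, without touching the rest of the data.

```python
def nearest_label(train, x):
    """1-nearest-neighbour classification on 1-D points,
    where train is a list of (position, label) pairs."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

clean = [(0.0, "ham"), (1.0, "ham"), (5.0, "spam"), (6.0, "spam")]
target = 2.0

print(nearest_label(clean, target))  # "ham": nearest clean point is at 1.0

# The attacker injects a few mislabeled points right next to the target:
poisoned = clean + [(1.9, "spam"), (2.1, "spam")]
print(nearest_label(poisoned, target))  # "spam"
```

Real attacks against real models are far subtler, but the principle is the same: you don't need to control the whole data set, just the inputs the decision boundary is most sensitive to.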

A very public example of this is the nascent tradition of people collectively turning a public Microsoft demonstration chatbot into a bigot spouting conspiracy theories, by training it with the right conversations, last year with "Tay" and this week with "Zo." Humans are obviously subject to all sorts of analogous attacks through lies, misdirection, indoctrination, etc, and a big part of our socialization consists of learning to counteract (and, let's be honest, to enact) the adversarial use of language. But there's a subtler vector of attack that, because it's not really conscious, is extremely difficult to defend against.

Human minds rely very heavily on what's called the availability heuristic: when trying to figure out what will happen, we tend to give more weight to possibilities we can easily recall and picture. This is a reasonable automatic process in stable environments and first-hand observations, as it's fast and likely to give good predictions. We easily imagine the very frequent and the very dangerous, so our decision-making follows probabilities, with a bias towards avoiding that place where a lion almost ate us five years ago.

However, we don't observe most of our environment first-hand. Most of us, thankfully, have more exposure to violence through fiction than through real experience, always in highly memorable forms (more and better-crafted stories about violent crime than about car accidents), making our intuition misjudge relative probabilities and dangers. The same happens in every other area of our lives: tens of thousands of words about startup billionaires for every phrase about founders who never got a single project to work, Hollywood-style security threats versus much more likely and cumulatively harmful issues, the quick gut decision versus the detached analysis of multiple scenarios.

And there's no way to fix this. Retraining instincts is a difficult and problematic task, even for very specific ones, much less for the myriad different decisions we make in our personal and professional lives. Every form of media aims at memorability and interest over following reality's statistical distribution — people read and watch the new and spectacular, not the thing that keeps happening — so most of the information you've acquired during your life comes from a statistically biased sample. You might have a highly accurate gut feeling for a very specific area where you've deliberately accumulated a statistically strong data set and interacted with it in an intensive way, in other words, where you've developed expertise, but for most decisions we make in our highly heterogeneous professional and personal activities, our gut feelings have already been irreversibly compromised into at best suboptimal and at worst extensively damaging patterns.
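A toy simulation (all frequencies and coverage weights invented) of how a memorability-weighted sample skews the "data set" our intuition trains on:

```python
import random

random.seed(1)

# Invented numbers: relative frequency of two risks in the world, and a
# "memorability" weight for how heavily coverage amplifies each one.
true_weight = {"car accident": 0.90, "shark attack": 0.10}
memorability = {"car accident": 1.0, "shark attack": 50.0}

# What actually happens...
world = random.choices(list(true_weight),
                       weights=list(true_weight.values()), k=100_000)

# ...versus what we remember: events resampled in proportion to coverage.
remembered = random.choices(world,
                            weights=[memorability[e] for e in world], k=100_000)

shark_real = world.count("shark attack") / len(world)
shark_felt = remembered.count("shark attack") / len(remembered)
print(shark_real, shark_felt)  # ≈ 0.10 vs ≈ 0.85
```

The gut never sees `world`, only `remembered`, so even a perfectly calibrated frequency estimator running on that input will be wildly wrong about the base rates.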

It's a rather disheartening realization, and one that goes against the often raised defense of intuition as one area where humans outperform machines. We very much don't, not because our algorithms are worse (although that's sometimes also true) but because training a machine learning algorithm allows you to carefully select the input data and compensate for any bias in it. To get an equivalently well-trained human you'd have to begin when they are very young, put them on a diet of statistically unbiased and well-structured domain information, and train them intensively. That's how we get mathematicians, ballet dancers, and other human experts, but it's very slow and expensive, and outright impossible for poorly defined areas — think management and strategy — or ones where the underlying dynamics change often and drastically — again, think management and strategy.

So in the race to improve our decision-making, which over time is one of the main factors influencing our ultimate success, there's really no way around substituting human gut feeling with algorithms. The stronger you feel about a choice, the more likely it is to be driven by how easy it is to picture, and that's going to have more to do with the interesting and spectacular things you read, watched, and remember than with the boring or unexpected things that do happen.

Psychologically speaking, those are the most difficult and scariest decisions to delegate. Which is why there's still, and might still be for some time, a window of opportunity to gain competitive advantage by doing it.

But hurry. Sooner or later everybody will have heard about it.

# Regularization, continuity, and the mystery of generalization in Deep Learning

A light and short note on a dense subset of a large space...

There's increasing interest in the very happy problem of why Deep Learning methods generalize so well in real-world usage. After all,

• Successful networks have ridiculous numbers of parameters. By all rights, they should be overfitting training data and doing awfully with new data.
• In fact, they are large enough to learn the classification of entire data sets even with random labels.
• And yet, they generalize very well.
• On the other hand, they are vulnerable to adversarial attacks with weird and entirely unnatural-looking inputs.
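The memorization point can be seen in miniature with a deliberately toy stand-in (a polynomial, not a network): give a model as many parameters as data points and it will fit pure noise exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten data points with completely random "labels"...
x = np.linspace(0.0, 1.0, 10)
y = rng.normal(size=10)

# ...and a model with as many parameters as data points: a degree-9
# polynomial. It reproduces the random labels essentially exactly.
coeffs = np.polyfit(x, y, deg=9)
max_residual = np.abs(np.polyval(coeffs, x) - y).max()
print(max_residual)  # ~0: perfect memorization of noise
```

By the classical bias-variance story, a model that can do this should generalize terribly, which is exactly the puzzle: deep networks can do the equivalent at scale, and yet usually don't.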

One possible very informal way to think about this — I'm not claiming it's an explanation, just a mental model I'm using until the community reaches a consensus as to what's going on — is the following:

• If the target functions we're trying to learn are (roughly speaking) nicely continuous (a non-tautological but often true property of the real world, where, e.g., changing a few pixels of a cat's picture rarely makes it cease to be one)...
• and regularization methods steer networks toward that sort of function (partly as a side effect of trying to avoid nasty gradient blowups)...
• and your data set is more or less dense in whatever subset of all possible inputs is realistic...
• ... then, by a frankly metaphorical appeal to a property of continuous functions in Hausdorff spaces, learning well the target function on the training set implies learning well the function on the entire subset.

This is so vague that I'm having trouble keeping myself from making a political joke, but I've found it a wrong but useful model to think about how Deep Learning works (together with an, I think, essentially accurate model of Deep Learning as test-driven development) and how it doesn't.

As a bonus, this gives a nice intuition about why networks are vulnerable to weird adversarial inputs: if you only train the network with realistic data, no matter how large your data set, the most you can hope for is for it to be dense on the realistic subset of all possible inputs. Insofar as the mathematical analogy holds, you only get a guarantee of your network approximating the target function wherever you're dense; outside that subset — in this case, for unrealistic, weird inputs — all bets are off.
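The same intuition can be imitated with a much simpler model class; this is purely illustrative, with a polynomial standing in for a network and an interval standing in for the realistic subset of inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training inputs dense in [0, 3] only: the "realistic" subset.
x_train = rng.uniform(0.0, 3.0, 200)
model = np.polyfit(x_train, np.sin(x_train), deg=7)

def max_error(lo, hi):
    """Worst-case gap between the fit and the target on [lo, hi]."""
    xs = np.linspace(lo, hi, 500)
    return np.abs(np.polyval(model, xs) - np.sin(xs)).max()

print(max_error(0, 3))  # tiny: good approximation where training was dense
print(max_error(4, 6))  # huge: off the dense region, all bets are off
```

Inputs from [4, 6] play the role of adversarial examples here: nothing about fitting the dense region constrains the model's behavior out there.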

If this is true, protecting against adversarial examples might require some sort of specialized "realistic picture of the world" filters, as better training methods or more data won't help (in theory, you could add nonsense inputs to the data set so it can learn to recognize and reject them, but you'd need to pretty much cover the entire input space with a dense set of samples, and if you're going to do that, then you might as well set up a lookup table, because at that point you aren't really generalizing anymore).

# Short story: Nice girl falls in love with vampire boy. Of course he kills her

(In honor of World Dracula Day)

Nice girl falls in love with vampire boy. Of course he kills her. Did she want him to? Did she understand his hunger wasn't metaphorical? Let's not assume innocence.

Perhaps between man and monster she chose the safer one. Better to know where you stand. Even if there is no such thing as turning; you are born a vampire or you die to feed one. What predator recruits from the herd? Curses are arbitrary, ecosystems have to make sense.

Let's not assume authorial motivation for the story. Identity. Species. Beautiful monsters don't need to dream about being loved, but they can regret not having been able to be otherwise than they are.

They can imagine a world where nice girl falls in love with vampire boy and survives. Innocent meals and sunlit warmth. Otherwise - otherwise she would have to share his night, his murders, his table. Know the taste of her people in his lips and her tongue. Die of guilt or embrace the hunt.

Let's not assume her niceness was more than gesture-deep. Maybe the monster's appeal wasn't his beauty. Maybe she first kissed him in search of that flavor.

Let's not assume monsters can always tell their own. One can regret losing somebody who was never there. Maybe she laughs as she reads your tales, at who you thought she was. At the future you thought you both wanted and could have.

Let's not assume her laugh doesn't hurt you, or that you don't love her for that.

# Don't worry about opaque algorithms; you already don't know what anything is doing, or why

Machine learning algorithms are opaque, difficult to audit, unconstrained by ethics, and there's always the possibility they'll do the unthinkable when facing the unexpected. But that's true of most of our society's code base, and, in a way, they are the most secure part of it, because we haven't yet talked ourselves into a false sense of security about them.

There's a technical side to this argument: contemporary software is so complex, and the pressures under which it's developed so strong, that it's materially impossible to make sure it'll always behave the way you want it to. Your phone isn't supposed to freeze while you're making a call, and your webcam shouldn't send real-time surveillance to some guy in Ukraine, and yet here we are.

But that's not the biggest problem. Yes, some Toyota vehicles decided on their own to accelerate at inconvenient times because their software systems were mindbogglingly and unnecessarily complex, but nobody outside the company knew, because it was so legally difficult to get access to the code that even after the crashes it had to be inspected by an outside expert under conditions usually reserved for high-level intelligence briefings.

And there's also the hidden code in VW engines designed to fool emissions tests, the programs Uber uses to track you even while it says it isn't, and Facebook's convenient tools to help advertisers target the emotionally vulnerable.

The point is, the main problem right now isn't what a self-driving car _might_ do when it has to make a complex ethical choice guided by ultimately unknowable algorithms, but what the car is doing at every other moment, reflecting ethical choices guided by corporate executives that might be unknowable in a philosophical, existential sense, but are worryingly familiar in an empirical one. You don't know most of what your phone is doing at any given time, not to mention other devices, it can be illegal to try to figure it out, and it can also be illegal if not impossible to change it even if you did.

And a phone is a thing you hold in your hand and can, at least in theory, put in a drawer somewhere if you want to have a discreet chat with a Russian diplomat. Even more serious are all the hidden bits of software running in the background, like the ones that can automatically flag you as a national security risk, or are constantly weighing whether you should be allowed to turn on your tractor. Even if the organization that developed or runs the software did its job uncommonly well and knows what it's doing down to the last bit, you don't and most likely never will.

This situation, perhaps first and certainly most forcefully argued against by Richard Stallman, is endemic to our society, and absolutely independent of the otherwise world-changing Open Source movement. Very little of the code in our lives is running in something resembling a personal computer, after all, and even when it does, it mostly works by connecting to remote infrastructures whose key algorithms are jealously guarded business secrets. Emphasis on secret, with a hidden subtext of especially from users.

So let's not get too focused on the fact that we don't really understand how a given neural network works. It might suddenly decide to accelerate your car, but "old fashioned" code could, and as a matter of fact did, and in any case there's very little practical difference between not knowing what something is doing because it's a cognitively opaque piece of code, and not knowing what something is doing because the company controlling the thing you bought doesn't want you to know and has the law on its side if it wants to send you to jail if you try to.

Going forward, our approach to software as users, and, increasingly, as citizens, cannot but be empirical paranoia. Just assume everything around you is potentially doing everything it's physically capable of (noting that being remotely connected to huge amounts of computational power makes even simple hardware quite more powerful than you'd think), and if any of that is something you don't find acceptable, take external steps to prevent it, above and beyond toggling a dubiously effective setting somewhere. Recent experience shows that FOIA requests, legal suits, and the occasional whistleblower might be more important for adding transparency to our technological infrastructure than your choice of operating system or clicking a "do not track" checkbox.

# The insidious not-so-badness of technological underemployment, and why more education and better technology won't help

Mass technological unemployment is seen by some as a looming concern, but there are signs we're already living in an era of mass technological underemployment. It's not just an intermediate phase: its politics are toxic, it increases inequality, and it's very difficult to get out of.

Underemployment doesn't necessarily mean working fewer hours than you'd like, or switching jobs frequently. In fact, it often means working a lot, under psychologically and/or physically unhealthy conditions, for low pay, with few or no protections against abuse and firing, and doing your damnedest to keep that job because the alternatives are worse. The United States is a paradigmatic case: unemployment is low, but wage growth has been stagnant for a very long while, and working conditions for large numbers of workers aren't particularly great.

Technology isn't the only culprit — choices in macroeconomic management, fiscal policy, and political philosophy are at least as important — but it certainly hasn't helped. Yes, computers make anybody who knows how to use them much more productive, from the trucker who can use satellite measurements and map databases to identify their location and figure out an optimal route, to the writer using a global information network to gather news and references for an article. But you see the problem: those are extremely useful things, but "using a GPS" and "googling" are also extremely easy things. Most jobs require some form of technological literacy, but once most people had enough of it to fulfill the requirements — thanks in part to the computer industry's decades of single-minded focus on ease of use — knowing how to use computers makes you more productive, but doesn't get you a better salary. Supply and demand.

More technology obviously won't come to the rescue here; the more advanced our computers become, the easier it is for people to interact with them to get a certain task done (until it's automated and you don't need to interact at all), which makes workers more productive, just not better paid. As most of the new kinds of jobs being created tend to be based on intensive use of technology, they are intrinsically prone to this kind of technological underemployment, and more vulnerable to eventual technological unemployment. The people building those tools are usually safe from this dynamic, but the scalability of mass production, and the even more impressive scalability of software systems, mean that you don't need many people to build those tools and infrastructure. And as we've become more adept at making software easy to use, we've become very good at giving it at best a neutral effect on wages.

Don't think "software engineer," think "underpaid person with an hourly contract working in the local warehouse of a highly advanced global logistics company under the control of a sophisticated software system." There are more of the latter than of the former (and things that used to look like the former have become easy enough to begin to look like the latter...).

More education is equally useless. *Not* to the individual: besides its non-economic significance, your relative education is one of the strongest predictors of your wages. But raising everybody's educational level, just like making everybody's technology easier to use, doesn't raise anybody's wages. By making people more productive, it makes it possible for companies to pay higher wages, but as long as there are more educated-enough people than positions to fill, it doesn't make it necessary, so of course (an "of course" contingent on a specific political philosophy) it doesn't happen.

Absent a huge exogenous increase in the demand for labor, or an infinitely more ominous exogenous decrease in its supply, the ongoing dynamic is that technology will keep improving in power and ease of use, making workers more productive while at the same time giving them less bargaining power, and therefore stalling or reducing their wages and degrading their working conditions.

The developing world faces this problem no less than the developed world, with the added difficulty, but also the ironic advantage, of starting behind it in human, physical, and institutional capital. Investment and integration with the global economy can raise living standards very significantly from that baseline, but eventually hit the same plateau (and usually at a much lower absolute level).

This isn't just an economic tragedy of missed opportunities; it's an extremely toxic political environment. Mass unemployment isn't politically viable for long — sooner or later, peacefully or not, some action is demanded, which might or might not be rational, humane, or work at all, but which definitely changes the status quo — but mass underemployment of this kind just keeps everybody busy holding on to crappy jobs and trying to learn enough new technology or soft skills or whatever's being talked about this month in order to keep holding on to them, or even get a promotion to a slightly less crappy job where, not coincidentally, you're likely to end up using less technology (the marketing intern googling something vs the marketing VP having a power breakfast with a large customer). It sustains the idea that people could get a better life if they just studied and worked hard enough, which is true in an individual sense — highly skilled software engineers are very well paid — and absurd as a policy solution — once everybody can do what a highly skilled software engineer can do, highly skilled software engineers won't be very well paid. Yet it's the kind of absurdity that sounds obvious, and therefore ends up driving politics and hence policy.

The fact that technology and education don't help with this problem doesn't mean we need less of either. There are other problems they help with, and for those problems we need more of both. But we do need to push back against increased underemployment, not to avoid it shifting into mass unemployment, but because there's a real risk of it becoming widespread and structural, with serious social and political side effects.

There are workable solutions for this, but they lie in the realm of macroeconomics and fiscal policy, which ultimately depend on political philosophy, and that's a different post.

# The case for blockchains as international aid

Blockchains aren't primarily financial tools. They are a political technology, and their natural field of application is the developing world.

The main problem a blockchain is meant to solve is the lack of a trusted third party, which is at its root a problem of institutions, that is, of politics. Bitcoin isn't used because it's convenient or scalable, but because it works as a rudimentary global financial system without having to trust any person or organization (at least that's the theory; poorly regulated financial intermediaries, like life, always find a way). The fact is that we do have a global financial system that is relatively trusted, but bitcoin users — speculators aside — either think the system's checks don't work, or think they work and want to avoid them, or some combination of both. I'm not judging.

Yet beyond those (huge) nooks and crannies in the developed world, there are billions of people who just don't have access to financial systems they can trust, and beyond finance, there are billions of people who don't have access to any kind of governance system they can trust. Honest cops, relatively functional bureaucracies, public records that don't change overnight: building a state that has and deserves a certain amount of trust takes generations, is always a work in progress, and is very difficult to even begin. Low trust environments are self-perpetuating, simply because individual incentives, risks, and choices become structurally skewed in that way.

Can blockchains solve this? No, obviously not.

But they can provide one small bit of extra buttressing, through a globally visible and verified public document ledger. Don't think in terms of financial transactions, but of more general documents: ownership transfer records, government contracts, some judicial and fiscal records, etc. Boring, old-fashioned, absolutely essential bits of information that everybody in a developed country just assumes without thinking are present, accessible, and reliable, but people elsewhere know can be anything but.

Blockchains working as a sort of global notary, set up by international development organizations but basing their reliability on the processing power donated by a multitude of CPU-rich but often money- and time-poor activists, would give citizens, businesses, and governments a way to fight some forms of mutual abuse. They won't, and cannot, prevent abuse, but they can at least raise the reputational cost of hiding, changing, or destroying documents that are utterly uninteresting to the likes of WikiLeaks, but that for a family can mean the difference between keeping or losing their home.
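The mechanics of such a public document ledger can be sketched in a few lines. This is a deliberately minimal, non-distributed toy — a real deployment would need consensus and replication across the donated machines — and the record fields and document strings are invented for illustration:

```python
import hashlib
import json
import time

def add_record(chain, document_text):
    """Append a document's SHA-256 fingerprint to the ledger.

    Only the hash is stored, so the chain can prove a document existed
    unaltered at a point in time without publishing its contents.
    """
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "doc_hash": hashlib.sha256(document_text.encode()).hexdigest(),
        "previous": previous_hash,
        "timestamp": time.time(),
    }
    # The record's own hash commits to the document, the timestamp, and the
    # previous record, chaining every entry to the whole history before it.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every hash; any edited or removed record breaks the chain."""
    for i, record in enumerate(chain):
        expected_previous = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in record.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["previous"] != expected_previous or record["hash"] != recomputed:
            return False
    return True
```

The point isn't the code, which is trivial, but the property: once a deed or contract hash is on a widely replicated chain, quietly rewriting the paper original becomes a publicly detectable act.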

Even countries that have improved much in this area can strengthen their international reputations, and therefore their attractiveness for investments and migration, by this kind of globally verifiable transparency.

It's not sexy, it'll never make money, and it doesn't fully, or even mostly, solve the problem. It doesn't disrupt the business model of corruption and structural incompetence, and, best case, it'll put a small pebble in one or two undeservedly expensive shoes. Hopefully. Maybe.

But good governance is the core platform of a prosperous and healthy society. Getting it right is one of the hardest things, but also one of the most important we can try to help each other do.

# Short story: The Associate

I seldom know who's paying me or what they do; only my few friends lucky enough to have jobs do. My phone will buzz, and if I bid low enough I'll get to do things that will feel like isolated musical notes, meaningless on their own, in places that sometimes will appear later in the news in ways I won't be able to relate to my own actions but also won't try to.

A wordless feeling will keep me from adding to the pain and outrage of the comment threads, but the daily rent payments sometimes don't leave me enough for food, so I'm always hoping my phone will buzz with a new incomprehensible gig, and when it does I always bid low.

# Statistics, Simians, the Scottish, and Sizing up Soothsayers

A predictive model can be a parametrized mathematical formula, or a complex deep learning network, but it can also be a talkative cab driver or a slides-wielding consultant. From a mathematical point of view, they are all trying to do the same thing, to predict what's going to happen, so they can all be evaluated in the same way. Let's look at how to do that by poking a little bit into a soccer betting data set, and evaluating it as if it were a statistical model we just fitted.

The most basic outcome you'll want to predict in soccer is whether a game goes to the home team or the visiting (away) team, or ends in a draw. A predictive model is anything and anybody willing to give you a probability distribution over those outcomes. Betting markets, by giving you odds, are implicitly doing just that: the higher the odds, the less likely they think the outcome is.

The Football-Data.co.uk data set we'll use contains results and odds from various soccer leagues for more than 37,000 games. We'll use the odds for the Pinnacle platform whenever available (those are closing odds, the last ones available before the game).

For example, for the Juventus-Fiorentina game on August 20, 2016, the odds offered were 1.51 for a Juventus win, 4.15 for a draw (ouch), and 8.61 for a Fiorentina victory (double ouch). Odds of 1.51 for Juventus mean that for each dollar you bet on Juventus, you'd get USD 1.51 if Juventus won (your initial bet included) and nothing if it didn't. These numbers aren't probabilities, but they imply probabilities. If platforms gave odds too high relative to the event's probability they'd go broke, while if they gave odds too low they wouldn't be able to attract bettors. On balance, then, we can read from the odds probabilities slightly lower than the betting market's best guesses, but, in a world with multiple competing platforms, not really that far from the mark. This sounds like a very indirect justification for using them as a predictive model, but every predictive model, no matter how abstract, rests on a lot of assumptions: a linear model assumes the relevant phenomenon is linear (almost never true, sometimes true enough), and looking at a betting market as a predictive model assumes the participants know what they are doing, the margins aren't too high, and there isn't anything too shady going on (not always true, sometimes true enough).

We can convert odds to probabilities by asking ourselves: if these odds were absolutely fair, how probable would the event have to be so that neither side of the bet can expect to earn anything? (a reasonable definition of "fair" here, with historical links to the earliest developments of the concept of probability). Calling $P$ the probability and $L$ the odds, we can write this condition as $PL + (1-P)\cdot 0 = 1$. The left side of the equation is how much you get on average — $L$ when, with probability $P$, the event happens, and zero otherwise — and the right side says that on average you should get your dollar back, without winning or losing anything. From there it's obvious that $P = \frac{1}{L}$. For example, the odds above, if absolutely fair (which they never are, not completely, as people in the industry have to eat), would imply a probability of 66.2% for a Juventus win, and of 11.6% for a Fiorentina one (for the record, Juventus won, 2-1).
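The conversion is a one-liner; here it is applied to the Juventus-Fiorentina odds quoted above (only the odds themselves come from the text; the variable names are mine):

```python
# Implied probabilities via P = 1/L, i.e. pretending the odds are fair.
odds = {"Juventus win": 1.51, "draw": 4.15, "Fiorentina win": 8.61}

implied = {outcome: 1.0 / l for outcome, l in odds.items()}
# implied["Juventus win"] is about 0.662, implied["Fiorentina win"] about 0.116

# The implied probabilities sum to slightly more than 1; the excess
# ("overround") is the platform's margin -- the reason real odds are
# never quite fair, and why implied probabilities run a bit high.
overround = sum(implied.values()) - 1.0
```

A more careful treatment would renormalize the probabilities to remove the overround before using them, but for ranking models against each other the raw conversion is good enough.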

In this way we can put information into the betting platform (actually, the participants do), and read out probabilities. That's all we need to use it as a predictive model, and there's in fact a small industry dedicated to building betting markets tailored to predict all sorts of events, like political outcomes; when built with this use in mind, they are called prediction or information markets. The question, as with any model, isn't whether it's true or not — unlike statistical models, betting markets don't have any misleading aura of mathematical certainty — but rather how good those probabilities are.

One natural way of answering that question is to compare our model with another one. Is this fancy machine learning model better than the spreadsheet we already use? Is this consultant better than that other consultant? Is this cab driver better at predicting games than that analyst on TV? Language gets very confusing very quickly, so mathematical notation becomes necessary here. Using the standard notation $P[x | y]$ for *how likely do I think it is that x will happen if y is true?*, we can compare the cab driver and the TV analyst by calculating

$$\frac{P[\textrm{observed results} | \textrm{the cab driver's probabilities are right}]}{P[\textrm{observed results} | \textrm{the TV analyst's probabilities are right}]}$$

If that ratio is higher than one, this means of course that the cab driver is better at predicting games than the TV analyst, as she gave higher probabilities to the things that actually happened, and vice versa. This ratio is called the Bayes factor.

In our case, the factors are easy to calculate, as $P[\textrm{home win} | \textrm{odds are good predictors}]$ is just the probability of a home win as implied by the odds, which we already know how to calculate. And because the probability of a set of independent events is the product of the individual probabilities, then

$$P[\textrm{all observed results} | \textrm{model}] = \prod_i P[\textrm{result}_i | \textrm{model}]$$

In reality, those events aren't independent, but we're assuming participants in the betting market take into account information from previous games, which is part of what "knowing what you're talking about" intuitively means.

Note how we aren't calculating how likely a model is, just which one of two models has more support from the data we're seeing. To calculate the former we'd need more information (e.g., how much you believed the model was right before looking at the data). Calculating that is a very useful analysis, particularly when it comes to making decisions, but often the first question is a comparative one.

Using our data set, we'll compare the betting market as a predictive model against a bunch of dart-throwing chimps (dart-throwing chimps being a traditional device in financial analysis). The chimps throw darts against a wall covered with little Hs, Ds, and As, so they always predict that each outcome has a probability of $\frac{1}{3}$. Running the numbers, we get

This is (much) larger than one, so the evidence in the data favors the betting market over the chimps (very much so; see the link above for a couple of rules of thumb about interpreting these numbers). That's good, and not something to be taken for granted: many stock traders underperform chimps. Note that if one model is better than another, the Bayes factor comparing them will keep growing as you collect more observations and thereby become more certain of it. If you made the above calculation with a smaller data set, the resulting Bayes factor would be lower.
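The whole calculation fits in a few lines. Here it is sketched over a handful of invented games (the probabilities below are hypothetical stand-ins for the odds-implied probabilities of the outcomes that actually happened in the real data set):

```python
import math

# For each game, the probability the odds-implied model assigned to the
# outcome that actually occurred (hypothetical values for illustration).
probs_of_observed = [0.66, 0.45, 0.30, 0.52, 0.41]

# The chimps assign 1/3 to every outcome, so the log Bayes factor is the
# sum of log-probability differences. Working in logs keeps the numbers
# manageable for data sets with tens of thousands of games.
log_bayes_factor = sum(math.log(p) - math.log(1 / 3)
                       for p in probs_of_observed)

bayes_factor = math.exp(log_bayes_factor)  # > 1 favors the odds over the chimps
```

Even in this tiny made-up sample the factor comes out above one; over 37,000 real games, the accumulated evidence becomes astronomically large, which is why the text works with logarithms.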

Are odds also better in this sense than just using a rule of thumb about how frequent each event is? In this data set, the home team wins about 44.3% of the time, and the visitors 29%, so we'll assign those outcome probabilities to every match.

That's again overwhelming evidence in favor of the betting market, as expected.

We have statistics, soothsayers, and simians (chimpanzees, being apes, are in fact simians, so the alliteration holds). What about the Scottish?

Let's look at how much better than chimps the odds are for different countries and leagues or divisions (you could say the chimps are our null hypothesis, but the concept of a null hypothesis is at best confusing and at worst dangerous: quoting the Zen of Python, explicit is better than implicit). The calculations will be the same, applied to the subsets of the data corresponding to each division. One difference is that we'll show the logarithm of the Bayes factor comparing the model implied by the odds with the dart-throwing chimps (otherwise the numbers become impractically large), divided by the number of game results we have for each division. Why that division? As we said above, if one model is better than another, the more observations you accumulate, the more evidence for one over the other you're going to get. It's not that the first model is getting better over time; it's just that you're getting more evidence that it's better. In other words, if model A is slightly better than model B but you have a lot of data, and model C is much better than model D but you only have a bit of data, the Bayes factor between A and B can be much larger than the one between C and D: the size of an effect isn't the same thing as your certainty about it.

By dividing the logarithm of the Bayes factor by the number of games, we're trying to get a rough idea of how good the odds are, as models, when comparing different divisions with each other. This is something of a cheat — they aren't models of the same thing! — but by asking of each model how quickly it builds evidence that it's better than our chimps, we get a sense of their comparative power (there are other, more mathematically principled ways of doing this, and to a degree the method you choose has to depend on your own criteria of usefulness, which depends on what you'll use the model for, but this will suffice here).
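That per-game normalization can be sketched like this (the division codes are real ones from the data set, but the probabilities are invented for illustration):

```python
import math
from collections import defaultdict

# (division, odds-implied probability of the observed outcome) pairs;
# hypothetical values standing in for the real Football-Data records.
games = [("E0", 0.55), ("E0", 0.62), ("E0", 0.40),
         ("SC3", 0.35), ("SC3", 0.33), ("SC3", 0.31)]

log_bf_per_game = defaultdict(list)
for division, p in games:
    # Log Bayes factor contribution of one game vs. the 1/3-chimps.
    log_bf_per_game[division].append(math.log(p) - math.log(1 / 3))

# Dividing by the number of games puts divisions with different amounts of
# data on a common scale: evidence accumulated per observation.
avg_log_bf = {d: sum(v) / len(v) for d, v in log_bf_per_game.items()}
```

With these made-up numbers E0 comes out better modeled than SC3, which matches the pattern the essay reports: odds for the big leagues carry more information per game than odds for the small ones.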

I'm following here the naming convention for divisions used in the data set: E0 is the English Premier League, E1 is their Championship, etc (the larger the number, the "lower" the league), and the country prefixes are: E for England, SC for Scotland, D for Germany, I for Italy, SP for Spain, F for France, N for the Netherlands, B for Belgium, P for Portugal, T for Turkey, and G for Greece. There's quite a bit of heterogeneity inside each country, but with clear patterns. To make them clearer, let's sort the graph by value instead of division, and keep only the lowest and highest five:

The betting odds generate better models for the top leagues of Greece, Portugal, Spain, Italy, and England, and worse ones for the lower leagues, with the very worst modeled one being SC3 (properly speaking, the Scottish League Two – there are the Scottish). This makes sense: the larger leagues have a lot of bettors who want in, many of them professionals, so the odds are going to be more informative.

To go back to the beginning: everything that gives you probabilities about the future is a predictive model. Just because one is a betting market and the other is a chimpanzee, or one is a consultant and the other one is a regression model, it doesn't mean they can't and shouldn't be compared to each other in a meaningful way. That's why it's so critical to save the guesses and predictions of every software model and every "human predictor" you work with. It lets you go back over time and ask the first and most basic question in predictive data science:

How much better is this program or this guy than a chimp throwing darts?

When you think about it, is that really a question you would want to leave unanswered about anything or anybody you work with?

# The Children of the Dead City

Dusk is coming and walking at night is no longer allowed, but the children still loiter near the black windowless building that looks like a tombstone for a giant or a town. A year ago most of their parents worked there, their hands the AI-controlled manipulators of the self-managed warehouse, but since then artificial hands have become good enough, and no more than a dozen humans tarnish the algorithmic purity of the logistics hub.

With so many residents unemployed, the town can no longer afford the software usage licenses that keep the smart city infrastructure working. Traffic lights cycle blindly without regard for people or cars. Medical help has to be called for manually, phones and buildings callously ignoring emergencies and uninterested in saving lives.

No unblinking mind watches over children on the streets. Something does, something nameless and uncaring, and parents have tried to explain that it's just an analytics company the town is selling the video feeds to, but they also tell them to be home early, and fret over their health more than before.

Like every physically vulnerable life form, children know when they are being lied to. They also know when a place is haunted.

Night has fallen, and the children finally leave the familiar presence of the warehouse's continuously thinking walls. The walk back home is scary and thrilling, the well-lighted streets only increasing the menace from the once soothing eyes on every pole and wall. The children move in packs, wordlessly alert, but some must walk alone to houses out of the way.

Not all of the children arrive on time. When apprehensive parents eventually go out searching for them, asking the city in vain for help, not all are found. A camera last saw them, a neural network recognized them, a database holds the memory. But the city is silent.

For a while no child walks unaccompanied, yet that cannot last forever, and the black monolith keeps calling to them with the familiar warmth of a place where everything sees, and thinks, and cares.

# Why the most influential business AIs will look like spellcheckers (and a toy example of how to build one)

Forget voice-controlled assistants. At work, AIs will turn everybody into functional cyborgs through squiggly red lines under everything you type. Let's look at a toy example I just built (mostly to play with deep learning along the way).

I chose as a data set Patrick Martinchek's collection of Facebook posts from news organizations. It's a very useful resource, covering more than a dozen organizations and with interesting metadata for each post, but for this toy model I focused exclusively on the headlines of CNN's posts. Let's say you're a journalist/editor/social network specialist working for CNN, and part of your job is to write good headlines. In this context, a good headline could be defined as one getting a lot of shares. How would you use an AI to help you with that?

The first step is simply to teach the AI about good and bad headlines. Patrick's data set included 28,300 posts with both the headline and the count of shares (some posts had parsing errors, and I chose simply to drop them; in a production project the number of posts would've been larger). As what counts as a good headline depends on the organization, I defined a good headline as one that got a number of shares in the top 5% for the data set. This simplifies the task from predicting a number (how many shares) to a much simpler classification problem (good vs. bad headline).
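The labeling step is simple enough to sketch directly. The share counts below are synthetic (a heavy-tailed Pareto draw standing in for real Facebook share counts, which are similarly skewed); everything else mirrors the top-5% rule described above:

```python
import random

# Synthetic share counts with a viral-style heavy tail; in the real project
# these came from the Facebook posts data set.
random.seed(0)
share_counts = [int(random.paretovariate(1.2) * 50) for _ in range(1000)]

# A headline is "good" (label 1) if its share count is in the top 5%.
threshold = sorted(share_counts)[int(len(share_counts) * 0.95)]
labels = [1 if shares >= threshold else 0 for shares in share_counts]
# The classifier is then trained on (headline, label) pairs instead of
# (headline, share count) pairs.
```

Turning a regression target into a binary label this way loses information, but it makes the problem, and especially the evaluation, much more tractable for a toy model.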

The script I used to train the network to perform this classification was Denny Britz's classic Implementing a CNN for text classification in TensorFlow example. It's an introductory model, not meant to have production-level performance (also, it was posted in December 2015, and sixteen months is a very long time in this field), but the code is elegant, well-documented, and easy to understand and modify, so it was the obvious choice for this project. The only changes I made were adapting it to train the network without having to load all of the data in memory at the same time, and replacing the parser with one of NLTK's.

After an hour of training on my laptop, testing the model against out-of-sample data gives an accuracy of 93% and a precision for the class of good headlines of 9%. The latter is the metric I cared about for this model: it means that 9% of the headlines the model marks as good are, in fact, good. That's about 80% better than random chance, which is... well, it's not that impressive. But that's after an hour of training with a tutorial example, and rather better than what you'd get from that data set using most other modeling approaches.

In any case, the point of the exercise wasn't to get awesome numbers, but to be able to do the next step, which is where this kind of model moves from a tool used by CNN's data scientists into one that turns writers into cyborgs.

Reaching again into NLTK's impressive bag of tricks, I used its part-of-speech tagger to identify the nouns in every bad headline, and then a combination of WordNet's tools for finding synonyms and the pluralizer in CLiPS' Pattern Python module to generate a number of variants of each headline through simple rewrites of the original.

So for What people across the globe think of Donald Trump, the program suggested What people across the Earth think of Donald Trump and What people across the world think of Donald Trump. What's more, while the original headline was "bad," the model predicts that the last variation will be good. With a 9% precision for the class, it's not a sure thing, but it's almost twice the a priori probability of the original, which isn't something to sneeze at.
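The substitution step behind that example can be sketched in a few lines. To keep this self-contained I've swapped in a tiny hand-rolled synonym table where the real pipeline used NLTK's POS tagger and WordNet's synsets:

```python
# Toy synonym table standing in for WordNet; only nouns with known
# synonyms produce rewrites, mirroring the noun-substitution approach.
SYNONYMS = {
    "globe": ["Earth", "world"],
    "infant": ["baby"],
}

def headline_variants(headline):
    """Generate one rewritten headline per known synonym of each word."""
    words = headline.split()
    variants = []
    for i, word in enumerate(words):
        for synonym in SYNONYMS.get(word.lower(), []):
            variants.append(" ".join(words[:i] + [synonym] + words[i + 1:]))
    return variants
```

Running `headline_variants("What people across the globe think of Donald Trump")` yields exactly the two rewrites quoted above; in the full system each variant is then fed back to the trained classifier, and only the ones predicted "good" are suggested.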

In another case, the program took Dog sacrifices life to save infant in fire, and suggested Dog sacrifices life to save baby in fire. The point of the model is to improve on intuition, and I don't have the experience of whoever writes CNN's post headlines, but that does look like it'd work better.

What moves this from a tool for data analysts to something that changes how almost everybody works is that nothing prevents a trained model from working in the background, constantly checking what you're writing — for example, the headline for your post — and suggesting alternatives. To grasp the true power a tool like this could have, don't imagine a web application that suggests changes to your headline, or even a tool in your CMS or text editor, but something more like your spellchecker. For example, the "headline" field in your web app will have attached a model trained on data from your specific organization (and/or from open data sets), which will underline your headline in red if it predicts it won't work well. Right-click on the text, and it'll show you some alternatives.

Or if the response to a customer you're typing might make them angry.

Or if the presentation you're building has the sort of look that works well on SlideShare.

Or if the code you're writing is similar to the kind of code that breaks your application's test suite.

Or if there's something fishy in the spreadsheet you're looking at.

Or... You get the idea. Whenever you have a classification model and a way to generate alternatives, you have a tool that can help knowledge workers do their work better, a tool that gets better over time — not just learning from its own experience, as humans do, but from the collective experience of the entire organization — and no reason not to use it.

"Artificial intelligence," or whatever label you want to apply to the current crop of technologies, is something that can, does, and will work invisibly as part of our infrastructure, and it's also at the core of dedicated data analysis, but it'll also change the way everybody works by having domain-specific models look in real time at everything you're seeing and doing, and making suggestions and comments. Microsoft's Clippy might have been the most universally reviled digital character before Jar Jar Binks, but we've come to depend on unobtrusive but superhuman spellcheckers, GPS guides, etc. Even now image editors work in this way, applying lots of domain-specific smarts to assist and subtly guide your work. As building models for human or superhuman performance on very specific tasks becomes accessible to every organization, the same will apply to almost every task.

It's already beginning to. We don't yet have the Microsoft Office of domain-specific AIs, and I'm not sure what that would look like, but, unavoidably, the fact that we can teach programs to perform better than humans on a list of "real-world" tasks that grows almost every week means that organizations that routinely do so — companies that don't wait for fully artificial employees, but that also don't neglect to enhance their employees with every better-than-human narrow AI they can build right now — have an increasing advantage over those that don't. The interfaces are still clumsy, there's no explicit business function or fancy LinkedIn position for it, and most workers, including, ironically enough, knowledge workers and people in leadership and strategic roles, still have to be convinced that cyborgization, ego issues aside, is a better career choice than eventual obsolescence. But the same barriers applied when business software first became available, and the crushing economic and business advantages made them irrelevant in a very short amount of time.

The bottom line: even if you won't be replaced by an artificial intelligence, there are many specific aspects of your work that AIs can already do, or soon will be able to do, better than you, and if you can't or won't work with them as part of your daily routine, there's somebody who will. Knowing how to train and team up with software in an effective way will be one of the key work skills of the near future, and, whether explicit or not, the "AI Resources Department" — a business function focused on constantly building, deploying, and improving programs with business-specific knowledge and skills — will be at the center of any organization's efforts to become and remain competitive.

# Don't blame algorithms for United's (literally) bloody mess

It's the topical angle, but let's not blame algorithms for the United debacle. If anything, algorithms might be the way to reduce how often things like this happen.

What made it possible for a passenger to be hit and dragged off a plane to avoid inconveniencing an airline's personnel logistics wasn't the fact that the organization implements and follows quantitative algorithms, but the fact that it's an organization. By definition, organizations are built to make human behavior uniform and explicitly determined.

A modern bureaucratic state is an algorithm so bureaucrats will behave in homogeneous, predictable ways.

A modern army is an algorithm so people with weapons will behave in homogeneous, predictable ways.

And a modern company is an algorithm so employees will behave in homogeneous, predictable ways.

It's not as if companies used to be loose federations of autonomous decision-making agents applying both utilitarian and ethical calculus to their every interaction with customers. The lower you are in an organization's hierarchy, the less leeway you have to deviate from rules, no matter how silly or evil they prove to be in a specific context, and customers (or, for that matter, civilians in combat areas) rarely if ever interact with anybody who has much power.

That's perhaps a structural, and certainly a very old, problem in how humans more or less manage to scale up our social organizations. The specific problem in Dao's case was simply that the rules were awful, both ethically ("don't beat up people who are behaving according to the law just because it'll save you some money") and commercially ("don't do things that will get people viscerally and virally angry with you somewhere with cameras, which nowadays is anywhere with people.")

Part of the blame could be attributed to United CEO Oscar Munoz and his tenuous grasp of even simulated forms of empathy, as manifested by his first and probably most sincere reaction. But hoping organizations will behave ethically or efficiently when and because they have ethical and efficient leaders is precisely why we have rules: one of the major points of a Republic is that there are rules that constrain even the highest-ranking officers, so we limit both the temptation and the costs of unethical behavior.

Something of a work in progress.

So, yes, rules are or can be useful to prevent the sort of thing that happened to Dao. And to focus on current technology, algorithms can be an important part of this. In a perhaps better world, rules would be mostly about goals and values, not methods, and you would trust the people on the ground to choose well what to do and how to do it. In practice, due to a combination of the advantages of homogeneity and predictability of behavior, the real or perceived scarcity of people you'd trust to make those choices while lightly constrained, and maybe the fact that for many people the point of getting to the top is partially to tell people what to do, employees, soldiers, etc, have very little flexibility to shape their own behavior. To blame this on algorithms is to ignore that this has always been the case.

What algorithms can do is make those rules more flexible without sacrificing predictability and homogeneity. While it's true that algorithmic decision-making can have counterproductive behaviors in unexpected cases, that's equally true of every system of rules. But algorithms can take into account more aspects of a situation than any reasonable rule book could handle. As long as you haven't given your employees the power to override rules, it's irrelevant whether the algorithm can make better ethical choices than them — the incremental improvement happens because it can make a better ethical choice than a static rule book.

In the case of United, it'd be entirely possible for an algorithm to learn to predict and take into account the optics of a given situation. Sentiment analysis and prediction is after all a very active area of application and research. "How will this look on Twitter?" can be part of the utility function maximized by an algorithm, just as much as cost or time efficiencies.
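To make the idea concrete, here's a minimal, entirely hypothetical sketch of such a utility function. Every action name, cost, probability, and weight below is invented for illustration; a real system would replace the toy risk table with a trained sentiment or virality model.

```python
# Hypothetical sketch: score each possible action not only on its direct
# cost, but also on an assumed "how will this look on Twitter?" risk term.

PR_RISK = {                          # assumed probability of a viral backlash
    "offer_higher_compensation": 0.01,
    "deny_boarding_politely": 0.10,
    "forcibly_remove_passenger": 0.90,
}

DIRECT_COST = {                      # assumed immediate cost to the airline, USD
    "offer_higher_compensation": 2000,
    "deny_boarding_politely": 800,
    "forcibly_remove_passenger": 0,
}

BACKLASH_COST = 1_000_000            # assumed expected cost of a PR disaster, USD

def total_cost(action):
    """Direct cost plus the expected cost of the predicted backlash."""
    return DIRECT_COST[action] + PR_RISK[action] * BACKLASH_COST

best = min(DIRECT_COST, key=total_cost)
print(best)
```

With these made-up numbers, the cheapest action by direct cost (dragging the passenger off) becomes by far the most expensive once the backlash term is included, which is exactly the tradeoff the argument above is about.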

It feels quite dystopian to think that, say, ride-hailing companies should need machine learning models to prevent them from suddenly canceling trips for pregnant women going to the hospital in order to pick up a more profitable trip elsewhere; shouldn't that be obvious to everybody from Uber drivers to Uber CEOs? Yes, it should. And no, it isn't. Putting "morality" (or at least "a vague sense of what's likely to make half the Internet think you're scum") in code that can be reviewed, as — in the best case — a redundant backup to a humane and reasonable corporate culture, is what we already do in every organization. What we can and should do is teach algorithms to try to predict the ethical and PR impact of every recommendation they make, and take that into account.

Whether they'll be better than humans at this isn't the point. The point is that, as long as we're going to have rules and organizations where people don't have much flexibility not to follow them, the behavioral boundaries of those organizations will be defined by that set of rules, and algorithms can function as more flexible and careful, and hence more humane, rules.

The problem isn't that people do what computers tell them to do (if you want, you can say that the root problem is when people do bad things other people tell them to do, but that has nothing to do with computers, algorithms, or AI). Computers do what people tell them. We just need to, and can, tell them to be more ethical, or at least to always take into account how the unavoidable YouTube video will look.

# Deep Learning as the apotheosis of Test-Driven Development

Even if you aren't interested in data science, Deep Learning is an interesting programming paradigm; you can see it as "doing test-driven development with a ludicrously large number of tests, an IDE that writes most of the code, and a forgiving client." No wonder everybody's pouring so much money and brains into it! Here's a way of thinking about Deep Learning not as an application you're asked to code, but as a language to code with.

Deep Learning applies test-driven development as we're all taught to do it (and don't always do): first you write the tests, and then you move from code that fails all of them to code that passes them all. The first and most obvious difference from the usual way of doing it is that you'll usually have anything from hundreds of thousands to Google-scale numbers of test cases, in the form of pairs like (picture of a cat, type of cute thing the cat is doing), or even a potentially infinite number that look like (anything you try, how badly Donkey Kong kills you). This gives you a good chance that, if you selected or generated them intelligently, the test cases represent the problem well enough that a program that passes them will work in the wild, even if the test cases are all you know about the problem. It definitely helps that for most applications the client doesn't expect perfect performance. In a way, this lets you sidestep the problem of having to acquire and document domain knowledge, at least for reasonable-but-not-state-of-the-art levels of performance, which is especially hard to do for things like understanding cat pictures, because we just don't know how we do it.
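The analogy can be made literal with a toy sketch (the data and the hand-written classifier below are invented for illustration): a labeled dataset plays the role of a test suite, and a model is just code graded by how many of those tests it passes.

```python
# A labeled dataset as a test suite: (input, expected output) pairs.
test_suite = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.3), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.1, 0.7), "dog"),
]

def model(features):
    # A hand-written "program"; Deep Learning would learn an equivalent
    # function from the test suite instead of having a human write it.
    return "cat" if features[0] > features[1] else "dog"

# Grading the program is just running the suite and counting passes.
passed = sum(model(x) == y for x, y in test_suite)
print(f"{passed}/{len(test_suite)} tests passed")
```

The forgiving-client part of the analogy is that a model passing, say, 95% of a held-out suite is often perfectly shippable, which is rarely true of a conventional test suite.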

The second difference between test-driven development with the usual tools and test-driven development with Deep Learning languages and runtimes is that the latter are differentiable. Forget the mathematical side of that: the code monkey aspect of it is that when a test case fails, the compiler can fix the code on its own.

Yep.

Once you stop thinking about neural networks as "artificial brains" or data science-y stuff, and look at them as a relatively unfamiliar form of bytecode — but, as bytecode goes, also a fantastically simple one — then all that hoopla about backpropagation algorithms is justified, because they do pretty much what we do: look at how a test failed and then work backwards through the call stack, tweaking things here and there, and then running the test suite again to see if you fixed more tests than you broke. But they do it automatically and very quickly, so you can dedicate yourself to collecting the tests and figuring out the large scale structure of your program (e.g. the number and types of layers in your network, and their topology) and the best compiler settings (e.g., optimizing hyperparameters and setting up TensorFlow or whatever other framework you're using; they are labeled as libraries and frameworks, but they can also be seen as compilers or code generators that go from data-shaped tests to network-shaped bytecode).
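Here's a deliberately tiny sketch of that cycle in plain Python, using ordinary gradient descent on the smallest possible "network" (a linear model); the data and learning rate are made up, but the loop is the same measure-the-failure, tweak-the-code process that backpropagation runs through deep networks.

```python
# The "test suite": points sampled from y = 2x + 1.
tests = [(x, 2.0 * x + 1.0) for x in range(10)]

w, b = 0.0, 0.0    # the initial "program" fails every test
lr = 0.01          # learning rate: how big each automatic fix is

for _ in range(2000):
    dw = db = 0.0
    for x, y in tests:
        err = (w * x + b) - y   # how badly this test fails...
        dw += 2 * err * x       # ...and how each parameter contributed
        db += 2 * err
    w -= lr * dw / len(tests)   # tweak the code
    b -= lr * db / len(tests)   # and run the suite again next iteration

print(round(w, 2), round(b, 2))  # close to the 2 and 1 the tests encode
```

The programmer's remaining job is exactly as described above: choosing the tests and the structure of the program, while the "compiler" does the tweak-and-rerun loop millions of times.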

One currently confusing fact is that this is all rather new, so very often the same people who are writing a program are also improving the compiler or coming up with new runtimes, so it looks like that's what programming with Deep Learning is about. But that's just a side effect of being in the early "half of writing the program is improving gcc so it can compile it" days of the technology, where things improve by leaps and bounds (we have both a fantastic new compiler and the new Internet-scale computers to run it), but are also rather messy and very fun.

To go back to the point: from a programmer's point of view, Deep Learning isn't just a type of application you might be asked to implement. It's also a language to write things with, one with its own set of limitations and weak spots, sure, but also with the kind of automated code generation and bug fixing capabilities that programmers have always dreamed of, but by and large avoided, because doing it with our usual languages involves a lot of math and the kind of development timelines that make PMs either laugh or cry.

Well, it still does, but with the right language the compiler takes care of that, and you can focus on high-level features and getting the test cases right. It isn't the most intuitive way of working for programmers trained as we were, and it's not going to fully replace the other languages and methods in our toolset, but it's solving problems that we thought were impossible. How can a code monkey not be fascinated by that?

# "Tactical Awareness" en Español

Esteban Flamini did what I hadn't imagined possible: he translated TACTICAL AWARENESS into Spanish, preserving both the plots of the stories and the word counts. His translation, like the original text, can be downloaded for free from his site.

Even if you aren't interested in the stories, or if you've already read them, Esteban's version is worth reading, if only to appreciate a genuinely difficult translation done extremely well.

# Short story: The Eater of Silicon Sins

His job is not to press the button. When he fails at his job, people don't die.

There used to be support groups for people like him, groups he wasn't supposed to attend but did anyway. They were for the people who worked with the most awful images the human mind could conceive, videos of violence and sexual abuse beyond any quaint nightmares they might have had before, flagging them so the psychological damage of seeing those videos — and knowing those things were happening at that very moment to some terrified person inarticulate with pain — would remain contained inside their own minds. They could barely afford food on gig economy rates, much less therapy, so they met online to not talk about what they couldn't, and to half-heartedly, and not often successfully, keep each other from killing themselves.

He would go to those groups to seek some simulacrum of health in their shared illness, yet there would always be a barrier between him and everybody else. What he sees every day isn't the crisp video of a carefully recorded personal hell, but the blurry real-time monitoring feed of a superhumanly fast combat robot moving, targeting, and shooting quicker than any human could. It would be impossible for him to decide faster and better than the robot which of the moving figures are enemy combatants, children trying to run from a war without fronts, or both.

So he never presses the button, and prays every night beyond statistical hope to have never let a terrified innocent die.

The groups went away when computers became better than humans at filtering out that kind of material, but he knows he will never be replaced. No matter how good the robots get, how superhumanly quick and accurate their autonomous reactions, there'll still be innocents dead whenever they are used for what they were built for; not because the technology is flawed, but because that's the tactically optimal tradeoff they've been configured for. His job is to take the blame for those deaths, and only a human can do that.

He doesn't drink, nor take pills, nor beat his wife. He has no dangerous hobbies. He does his duty like any good soldier would do.

In his dreams he sees himself on a screen, his face framed by a targeting solution. The image stays still for an impossibly long time, yet he never presses the button.

.finis.

# Short story: Dead Man's Trigger

My name is Rob, short for Roberta. I'm a private investigator, which means I'm good enough with social networks to do what the police do, just without the automated subpoenas and the retroactively legal hacking. It's not difficult, really. Nine times out of ten the obvious suspect did it. The bereaved know who did it, acquaintances know who did it, even the police know who did it.

So ten times out of ten I'm hired when the police pretends not to know who did it, when a judge pretends not to believe them, or when a jury pretends they've got reasonable doubt. I'm never hired to figure out who did it, despite the pretenses the client and I go through. I'm not even hired to find proof. I'm hired because once I've found, again, what everybody knew, and collected the proof they didn't need, I give them a burner email address.

They hire me for that email address. I don't like it, but I don't dislike it enough not to give it to them. It's my business to give the address, not what they do with it.

I can pretend not to know just as well as cops, judges, and juries do, but I can't lie to myself, not about this. Content sent to those addresses usually goes viral. Which by itself would be a weak form of revenge: The crimes the police decide not to solve, judges not to take to trial, and juries not to punish, are the kinds of crime many people cheer the criminal for. Shooting the "right" kind of person, more often than not. (My boyfriend was the right kind of person. Serious, sad, brilliant John. Did he know how he'd die when he wrote this program?)

But the evidence doesn't just go viral, it infects the right sort of group. I don't use the word metaphorically, or at least not much. I don't know who those people are, but I'm sure they aren't always the same. Depends on the crime, on the victim, and on tides I don't visit the right forums to feel the shifting of. I'm glad of that, for my sanity's sake. (John had to, if nothing else to teach the program to seek them. I didn't know him well, it turns out, while he knew exactly what I would and wouldn't do. I only get email addresses sent to me. Nothing more.)

I don't tell myself that the deaths that follow are coincidence. I don't dwell on how they are not. I sleep reasonably well.

I've stopped missing John.

.finis.

# The new (and very old) political responsibility of data scientists

We still have a responsibility to prevent the unethical use of new technologies, and to help make their impact on human welfare a positive one. But we now have a more fundamental challenge: to help defend the very concept and practice of the measurement and analysis of quantitative fact.

To be sure, a big part of practicing data science consists of dealing with the multiple issues and limitations we face when trying to observe and understand the world. Data seldom means what its name implies it means; there are qualifications, measurement biases, unclear assumptions, etc. And that's even before we engage the useful but tricky work of making inferences off that data.

But the end result of what we do — and not only, or even mainly, us, for this collective work of observation and analysis is one of the common threads and foundations of civilization — is usually a pretty good guess, and it's always better than closing your eyes and giving whatever number provides you with an excuse to do what you'd rather do. Deliberately messing with the measurement of physical, economic, or social data is a lethal attack on democratic practices, because it makes it impossible for citizens to evaluate government behavior. Defending the impossibility of objective measurement (as opposed to acknowledging and adapting to the many difficulties involved) is simply to give up on any form of societal organization different from mystical authoritarianism.

Neither attitude is new, but both have gained dramatically in visibility and influence during the last year. This adds to the existing ethical responsibilities of our profession a new one, unavoidably in tension with them. We not only need to fight against over-reliance on algorithmic governance driven by biased data (e.g. predicting behavior from records compiled by historically biased organizations) or the unethical commercial and political usage of collected information, but also, paradoxically, we need to defend and collaborate in the use of data-driven governance based on best-effort data and models.

There are forms of tyranny based on the systematic deployment of ubiquitous algorithmic technologies, and there are forms of obscurantism based on the use of cargo cult pseudo-science. But there are also forms of tyranny and obscurantism predicated on the deliberate corruption of data or even the negation of the very possibility of collecting it, and it's part of our job to resist them.

Economists and statisticians in Argentina, when previous governments deliberately altered some national statistics and stopped collecting others, rose to the challenge by providing parallel, and much more widely believed, numbers (among the first, the journalist and economist — a combination of skills more necessary with every passing year — Sebastián Campanario). Theirs weren't the kind of arbitrary statements that are frequently part of political discourse, nor did they reject official statistics because they didn't match ideological preconceptions or it was politically convenient to do so. Official statistics were technically wrong in their process of measurement and analysis, and for any society that aspires to meaningful self-government the soundness and availability of statistics about itself are an absolute necessity.

Data scientists are increasingly involved in the process of collection and analysis of socially relevant metrics, both in the private and the public sectors. We need to consistently refuse to do it wrong, and to do our best to do it correctly even, and especially, when we suspect other people are choosing not to. Nowcasting, inferring the present from the available information, can be as much of a challenge, and as important, as predicting the future. The fact that we might end up having to do it without the assumption of possibly flawed but honest data is a problem we have already begun to work on in other contexts. Some of the earliest applications of modern data-driven models in finance, after all, were in fraud detection.

We are all potentially climate scientists now, with massive observational efforts refuted on the basis of anecdotes, disingenuous visualizations touted as definitive proof, and eventually the very possibility of quantitative understanding violently mocked. We (still) have to make sure the economic and social impact of things like ubiquitous predictive surveillance and technology-driven mass unemployment is managed in positive ways, but this new responsibility isn't one we can afford to ignore.

# Rush Hour

Three minutes ago you were in a traffic jam, one of dozens of drivers impatiently waiting for their cars to reboot and shake off whatever piece of malware had infected them through the city network. Now you're moving.

You're moving very, very fast. You can see every car ahead of you moving aside as if by magic, either on their own or pushed by another, their drivers as surprised as you are.

A few other cars both ahead and behind are moving just as fast as yours. They are all big ones. There's a certain, important building a few blocks ahead and a handful of seconds away.

You understand where the cars are accelerating towards and what for.

You don't scream until the car in front of you crashes through the wall.

.finis.

# The Mental Health of Smart Cities

Not the mental health of the people living in smart cities, but that of the cities themselves. Why not? We are building smart cities to be able to sense, think, and act; their perceptions, thoughts, and actions won't be remotely human, or even biological, but that doesn't make them any less real.

Cities can monitor themselves with an unprecedented level of coverage and detail, from cameras to government records to the wireless information flow permeating the air. But these perceptions will be very weakly integrated, as information flows slowly, if at all, between organizational units and social groups. Will the air quality sensors in a hospital be able to convince most traffic to be rerouted further away until rush hour passes? Will the city be able to cross-reference crime and health records with the distribution of different business, and offer tax credits to, say, grocery stores opening in a place that needs them? When a camera sees you having trouble, will the city know who you are, what's happening to you, and who it should call?

This isn't a technological limitation. It comes from the way our institutions and business are set up, which is in turn reflected in our processes and infrastructure. The only exception in most parts of the world is security, particularly against terrorists and other rare but high-profile crimes. Organizations like the NSA or the Department of Homeland Security (and its myriad partly overlapping versions both within and outside the United States) cross through institutional barriers, most legal regulations, and even the distinction between the public and the private in a way that nothing else does.

The city has multiple fields of partial awareness, but they are only integrated when it comes to perceiving threats. Extrapolating an overused psychological term, isn't this a heuristic definition of paranoia? The part of the city's mind that deals with traffic and the part that deals with health will speak with each other slowly and seldom, as will the part that manages taxes with the one that sees the world through the electrical grid. But when scared, and the city is scared very often, and close to being scared every day, all of its senses and muscles will snap together in fear. Every scrap of information correlated in central databases, every camera and sensor searching for suspects, all services following a single coordinated plan.

For comparison, shopping malls are built to distract and cocoon us, to put us in the perfect mood to buy. So smart shopping malls see us as customers: they track where we are, where we're going, what we looked at, what we bought. They try to redirect us to places where we'll spend more money, ideally away from the doors. It's a feeling you can notice even in the most primitive "dumb" mall: the very shape of the space is built as a machine to do this. Computers and sensors only heighten this awareness; not your awareness of the space, but the space's awareness of you.

We're building our smart cities in a different direction. We're making them see us as elements needing to get from point A to point B as quickly as possible, taking little or no care of what's going on at either end... except when it sees us as potential threats, and it never sees or thinks as clearly and as fast as it does then. Much of the mind of the city takes the form of mobile services from large global companies that seldom interact locally with each other, much less with the civic fabric itself. Everything only snaps together when an alert is raised and, for the first time, we see what the city can do when it wakes up and its sensors and algorithms, its departments and infrastructure, are at least attempting to work coordinately toward a single end.

The city as a whole has no separate concept of what a person is, no way of tracing you through its perceptions and memories of your movements, actions, and context except when you're a threat. As a whole, it knows of "persons of interest" and "active situations." It doesn't know about health, quality of life, a sudden change in a neighborhood. It doesn't know itself as anything else than a target.

It doesn't need to be like that. The psychology of a smart city, how it integrates its multiple perceptions, what it can think about, how it chooses what to do and why, all of that is up to us. A smart city is just an incredibly complex machine we live in and whom we give life to. We could build it to have a sense of itself and of its inhabitants, to perceive needs and be constantly trying to help. A city whose mind, vaguely and perhaps unconsciously intuited behind its ubiquitous and thus invisible cameras, we find comforting. A sane mind.

Right now we're building cities that see the world mostly in terms of cars and terrorism threats. A mind that sees everything and puts together very little, except when something scares it; where personal emergencies are almost entirely your own affair, but that becomes single-minded when there's a hunt.

That's not a sane mind, and we're planning to live in a physical environment controlled by it.

# How to be data-driven without data...

...and then make better use of the data you get.

The usefulness of data science begins long before you collect the first data point. It can be used to describe your questions and your assumptions very clearly, and to analyze in a consistent manner what they imply. This is neither a simple exercise nor an academic one: informal approaches are notoriously bad at handling the interplay of complex probabilities. Yet even the a priori knowledge embedded in personal experience and publicly available research, when properly organized and queried, can answer many questions that mass quantities of data, processed carelessly, wouldn't be able to, as well as suggest which measurements should be attempted first, and what for.

The larger the gap between the complexity of a system and the existing data capture and analysis infrastructure, the more important it is to set up initial data-free (which doesn't mean knowledge-free) formal models as a temporary bridge between both. Toy models are a good way to begin this approach; as the British statistician George E.P. Box wrote, all models are wrong, but some are useful (at least for a while, we might add, but that's as much as we can ask of any tool).

Let's say you're evaluating an idea for a new network-like service for specialized peer-to-peer consulting that will have the possibility of monetizing a certain percentage of the interactions between users. You will, of course, capture all of the relevant information once the network is running — and there's no substitute for real data — but that doesn't mean you have to wait until then to start thinking about it as a data scientist, which in this context means probabilistically.

Note that the following numbers are wrong: it takes research, experience, and time to figure out useful guesses. What matters for the purposes of this post is describing the process, oversimplified as it will be.

You don't know a priori how large the network will be after, say, one year, but you can look at other competitors, the size of the relevant market, and so on, and guess, not a number ("our network in one year will have a hundred thousand users"), but the relative likelihood of different values.

The graph above shows one possible set of guesses. Instead of giving a single number, it "says" that there's a 50% chance that the network will have at least a hundred thousand users, and a 5.4% chance that it'll have at least half a million (although note that decimal points in this context are rather pointless; a guess based on experience and research can be extremely useful, but will rarely be this precise). On the other hand, there's almost a 25% chance that the network will have less than fifty thousand users, and a 10% chance that it'll have less than twenty-eight thousand.
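One way to encode guesses like these is with a parametric distribution. As a sketch, a log-normal with median 100,000 and a log-scale sigma of 1.0 happens to reproduce the numbers quoted above reasonably well; the choice of distribution, like the parameters, is an assumption of this example, not something specified by the original model.

```python
import math
import random

random.seed(0)
MEDIAN, SIGMA = 100_000, 1.0  # assumed: log-normal, median 100k, log-sd 1.0
samples = [random.lognormvariate(math.log(MEDIAN), SIGMA) for _ in range(100_000)]

def prob(pred):
    """Estimated probability that a sampled network size satisfies `pred`."""
    return sum(pred(s) for s in samples) / len(samples)

print(f"P(users >= 100k) = {prob(lambda s: s >= 100_000):.2f}")  # close to 0.50
print(f"P(users >= 500k) = {prob(lambda s: s >= 500_000):.3f}")  # close to 0.054
print(f"P(users <  50k)  = {prob(lambda s: s < 50_000):.2f}")    # close to 0.24
```

Working with samples rather than closed-form formulas is what makes the later steps (combining several uncertain variables) nearly mechanical.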

You can use the same process to codify your educated guesses about other key aspects of the application, like the rate at which members of the network will interact, and the average revenue you'll be able to get from each interaction. As always, neither these numbers nor the specific shape of the curves matter for this toy example, but note how different degrees and forms of uncertainty are represented through different types of probability distributions:

Clearly, in this toy model we're sure about some things like the interaction rate (measured, say, in interactions per month), and very unsure about others, like the average revenue per interaction. Thinking about the implications of multiple uncertainties is one of the toughest cognitive challenges, as humans tend to conceptualize specific concrete scenarios: we think in terms of one or at best a couple of states of the world we expect to happen, but when there are multiple interacting variables, even the most likely scenario might have a very low absolute probability.

Simulation software, though, makes this nearly trivial even for the most complex models. Here's, for example, the distribution of probabilities for the monthly revenue, as necessarily implied by our assumptions about the other variables:
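As a sketch of what such a simulation looks like, here's a Monte Carlo version of the toy model in plain Python. Every distribution and parameter below is invented for illustration; the revenue of each simulated scenario is simply the product of the three sampled inputs.

```python
import math
import random

random.seed(0)
N = 100_000

def monthly_revenue():
    # All three distributions are assumptions made up for this sketch.
    users = random.lognormvariate(math.log(100_000), 1.0)             # network size
    interactions = random.uniform(0.5, 2.5)                           # per user/month
    rev_per_interaction = random.lognormvariate(math.log(0.10), 0.8)  # USD
    return users * interactions * rev_per_interaction

revenues = sorted(monthly_revenue() for _ in range(N))
median, p90 = revenues[N // 2], revenues[int(N * 0.9)]
print(f"median monthly revenue: USD {median:,.0f}")
print(f"90th percentile:        USD {p90:,.0f}")
```

The output isn't one number but a whole distribution, which is the point: the product of several uncertain inputs is itself uncertain, in ways that are hard to intuit but trivial to sample.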

There are scenarios where your revenue is more than USD 10M per month, and you're of course free to choose values for the other variables so that this becomes one of the handful of specific scenarios you describe (perhaps the most common and powerful of the ways in which people pitching a product or idea exploit the biases and limitations in human cognition). But doing this sort of quantitative analysis forces you to be honest, at least with yourself: if what you know and don't know is described by the distributions above, then you aren't free to tell yourself that your chance of hitting it big is other than microscopic, no matter how clear the image might be in your mind.

That said, not getting USD 10M a month doesn't mean the idea is worthless; maybe you can break even and then use that time to pivot or sell it, or you just want to create something that works and is useful, and then grow it over time. Either way, let's assume your total costs are expected to be USD 200k per month (if this were a proper analysis and not a toy example, this wouldn't be a specific guess, but another probability distribution based on educated guesses, expert opinions, market surveys, etc.). How do the probabilities look then?

You can answer this question using the same sort of analysis:

The inescapable consequence of your assumptions is that your chances of breaking even are 1 in 20. Can they be improved? One advantage of fully explicit models is that you can ask not just for the probability of something happening, but also about how things depend on each other.
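A sketch of how that question can be answered: reuse the same Monte Carlo machinery and count the fraction of simulated scenarios whose revenue covers the assumed USD 200k costs. The distributions below are invented for illustration, so the resulting probability is only in the same few-percent ballpark as the 1-in-20 figure, not a reproduction of it.

```python
import math
import random

random.seed(1)
N = 100_000
COSTS = 200_000  # assumed monthly costs, USD

def monthly_revenue():
    # Same toy distributions as before; all assumptions, not real data.
    users = random.lognormvariate(math.log(100_000), 1.0)
    interactions = random.uniform(0.5, 2.5)
    rev_per_interaction = random.lognormvariate(math.log(0.10), 0.8)
    return users * interactions * rev_per_interaction

p_break_even = sum(monthly_revenue() >= COSTS for _ in range(N)) / N
print(f"P(break even) = {p_break_even:.3f}")
```

Note that "probability of breaking even" falls out of the model with one extra line; no new analysis machinery is needed, just a new question asked of the same samples.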

Here are the relationships between the revenue, according to the model, and each of the main variables, with a linear best fit approximation superimposed:

As you can see, network size has the clearest relationship with revenue. This might look strange: wouldn't, under this kind of simple model, multiplying the number of interactions by ten while keeping the monetization rate also multiply the revenue by ten? Yes, but your assumptions say you can't multiply the number of interactions by more than a factor of five, which, together with your other assumptions, isn't enough to move your revenue very far. It isn't unreasonable, then, to consider increasing interactions significantly to improve your chances of breaking even (or even of getting to USD 10M), but if you plan an increase outside the explicit range encoded in your assumptions, you have to explain why they were wrong. Always be careful when you do this: changing your assumptions to make possible something that would be useful if it were possible is one of humankind's favorite ways of driving directly into blind alleys at high speed.
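A minimal sketch of that dependency analysis: sample the toy model, then compute the correlation between each input and the resulting revenue. The distributions are invented, as before; with these particular assumptions, the input with the widest distribution (network size) dominates.

```python
import math
import random

random.seed(2)
N = 50_000

# Each row: (users, interactions, revenue per interaction, total revenue).
rows = []
for _ in range(N):
    users = random.lognormvariate(math.log(100_000), 1.0)
    interactions = random.uniform(0.5, 2.5)
    rev_each = random.lognormvariate(math.log(0.10), 0.8)
    rows.append((users, interactions, rev_each, users * interactions * rev_each))

def corr(i, j):
    """Pearson correlation between columns i and j of the sampled rows."""
    xs = [r[i] for r in rows]
    ys = [r[j] for r in rows]
    mx, my = sum(xs) / N, sum(ys) / N
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / N)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / N)
    return cov / (sx * sy)

for name, i in [("users", 0), ("interactions", 1), ("revenue/interaction", 2)]:
    print(f"corr({name}, revenue) = {corr(i, 3):.2f}")
```

This is the quantitative version of the "which lever matters most?" question: the variable whose uncertainty contributes most to the spread of outcomes is also the one where better information, or better execution, buys you the most.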

It's key to understand that none of this is really a prediction about the future. Statistical analysis doesn't really deal with predicting the future or even getting information about the present: it's all about clarifying the implications of your observations and assumptions. It's your job to make those observations and assumptions as good and relevant as possible, both not leaving out anything you know, and not pretending you know what you don't, or that you are more certain about something than you should be.

This problem is somewhat mitigated for domains where we have vast amounts of information, including, recently, areas like computer vision and robotics. But we have yet to achieve the same level of data collection in other key areas like business strategy, so there's no way of avoiding using expert knowledge... which doesn't mean, as we saw, that we have to ditch quantitative methods.

Ultimately, successful organizations do the entire spectrum of analysis activities: they build high-level explicit models, encode expert knowledge, collect as much high-quality data as possible, train machine learning models based on that, and exploit all of that for strategic analysis, automation, predictive modeling, etc. There are no silver bullets, but you probably have more ammunition than you think.

# In the News that Make me Very Proud department...

AntipodeanSF just posted my short story Across the Glass from TACTICAL AWARENESS.

# Safe Travels

The almost absolute lack of TSA security measures in "your" queue is both insult and carrot, but as long as they still feel the need to offer a carrot things aren't really that bad. You mostly try to believe this when your son is looking at you with the relaxed smile of the unscared. It makes it easier to smile back.

Boarding is unnervingly fast, the plane small and old, the uniform rows of dark skins and headscarves an insult, the lack of angry whispers a carrot. You try to focus on your son, who's excited about his first flight although pretending not to. You think, and hope, he doesn't notice how everybody in the plane resembles his own family, or that he doesn't think they do — that he thinks skin and dress less important than the way some kids like soccer and some prefer VR games.

Believing this would make him a good man. Trusting that everybody does could get him lynched one day. For now, he sees neither carrots nor insults here, just a small window, the ground falling, and then the sky.

It breaks your heart as much as it lifts it, but when he looks again at you you'll be waiting with a smile. And later de-boarding will be quick and your terminal will be small and somehow quaint, and you know one day you'll have to talk with him about such things, but for now you just look at his breathless expression reflected on the plane window, and tell yourself it isn't selfish to wish for you both just a little bit more of sky.

.finis.

# The best political countersurveillance tool is to grow the heck up

The thing is, we're all naughty. The specifics of what counts as "wrong" depend on the context, but there isn't anybody on Earth so boring that they haven't done, or aren't doing, something they'd rather not have known worldwide.

Ordinarily this just means that, like every other social species, we learn pretty early how to dissimulate. But we aren't living in an ordinary world. As our environment becomes a sensor platform with business models bolted on top of it, private companies have access to enormous amounts of information about things that were previously very difficult to find, non-state actors can find even more, and the most advanced security agencies... Well. Their big problem is managing and understanding this information, not gathering it. And all of this can be done more cheaply, scalably, and just better than ever before.

Besides issues of individual privacy, this has a very dangerous effect on politics wherever it's coupled with overly strict standards: it essentially gives a certain degree of veto power over candidates to any number of non-democratic actors, from security agencies to hacker groups. As much as transparency is an integral part of democracy, we haven't yet adapted to the kind of deep but selective transparency this makes possible, the US election being but the most recent, glaring, and dangerous example.

It will happen again, it will keep happening, and the prospect of technical or legal solutions is dim. This being politics, the structural solution isn't technical, but human. While we probably aren't going to stop sustaining the fiction that we are whatever our social context considers acceptable, we do need to stop reacting to "scandals" in an indiscriminate way. There are individual advantages to doing so, of course, but the political implications of this behavior, aggregated over an entire society, are extremely deleterious.

Does this mean anything goes? No, quite the contrary. It means we need to become better at discriminating between the embarrassing and the disqualifying, between the hurtful crime and the indiscretion, between what makes somebody dangerous to give power to, and what makes them somebody with very different and somewhat unsettling life choices. Because everybody has something "scandalous" in their lives that can and will be dug up and displayed to the world whenever it's politically convenient to somebody with the power to do it, and reacting to all of it in the same way will give enormous amounts of direct political power to organizations and individuals, everywhere and at all points in the spectrum of legality, that are among the least transparent and accountable in the world.

This means knowing the difference between the frowned upon and the evil. It's part of growing up, yet it's rarer, and more difficult, the larger and more interconnected a group becomes. Eventually the very concept of evil as something other than a faux pas disappears, and, historically, socially sanctioned totalitarianism follows because, while political power in nominally democratic societies seldom arrogates to itself the power to define what's evil, it has enormous power to change the scope of "adequate behavior."

We aren't going to shift our public morals to fully match our private behavior. We aren't really wired that way; we are social primates, and lying to each other is the way we make our societies work. But we are social primates living in an increasingly total surveillance environment vulnerable to multiple actors, a new (geo)political development with impossible technical solutions, but a very simple, very hard, and very necessary sociological fix: we just need to grow the heck up.

# The informal sector Singularity

At the intersection of cryptocurrencies and the "gig economy" lies the prospect of almost self-contained shadow economies with their own laws and regulations, vast potential for fostering growth, and the possibility of systematic abuse.

There have always been shadow, "unofficial" economies overlapping and in some places overruling their legal counterparts. What's changing now is that technology is making possible the setup and operation of extremely sophisticated informational infrastructures with very few resources. The disruptive impact of blockchains and related technologies isn't any single cryptocurrency, but the fact that it's another building block for any group, legal or not, to operate their own financial system.

Add to this how easy it is to create fairly generic e-commerce marketplaces, reputation tracking systems, and, perhaps most importantly, purely online labor markets. For employers, the latter can be a flexible and cost-efficient way of acquiring services, while for many workers it's becoming a useful, and for some an increasingly necessary, source of income. Large rises in unemployment, especially those driven by new technologies, always increase the usefulness of this kind of labor market for employers in both regulated and unregulated activities, as a "liquid" market over sophisticated platforms makes it easy to continuously optimize costs.

You might call it a form of "Singularity" of the informal sector: there are unregulated or even fully criminal sectors that are technologically and algorithmically more sophisticated than the average (or even most) of the legal economy.

While most online labor markets are fully legal, this isn't always the case, even when the activity being contracted isn't per se illegal. One current example is Uber's situation in Argentina: their operation is currently illegal due to regulatory non-compliance, but, short of arresting drivers — something that's actually being considered, due in some measure to the clout of the cab drivers' union — there's nothing the government can do to completely stop them. Activities less visible than picking somebody up in a car — for example, anything you can do from a computer or a cellphone in your home — contracted over the internet and paid in a cryptocurrency or in any parallel payment system anywhere in the world are very unlikely to ever be visible to, or regulated by, the state or states who theoretically govern the people involved.

There are clear potential upsides to this. The most immediate one is that these shadow economies are often very highly efficient and technologically sophisticated by design. They can also help people avoid some of the barriers of entry that keep many people from full-time legal employment. A lack of academic accreditations, a disadvantaged socioeconomic background, or membership in an unpopular minority or age bracket can be a non-issue for many types of online work. In other cases they simply make possible types of work so new there's no regulatory framework for them, or that are impeded by obsolete ones. And purely online activities are often one of the few ways in which individuals can respond to economic downturns in their own country by supplying services overseas without intermediate organizations capturing most or all of the wage differential.

The main downside is, of course, that a shadow economy isn't just free from obsolete regulatory frameworks, but also free from those regulations meant to prevent abuse, discrimination, and fraud: minimum wages, safe working conditions, protection against sexual harassment, etc.

These issues might seem somewhat academic right now: most of the "gig economy" is either a secondary source of income, or the realm of relatively well-paid professionals. But technological unemployment and the increase in inequality suggest that this kind of labor market is likely to become more important, particularly for the lower deciles of the income distribution.

Assuming a government has the political will to attack the problem of a growing, technologically advanced, and mostly unregulated labor economy — for some, at least, this seems to be a favored outcome rather than a problem — fines, arrests, etc, are very unlikely to work, at least in moderately democratic societies. The global experience with software and media piracy shows how extremely difficult it is to stop an advanced decentralized digital service regardless of its legality. Silk Road was shut down, but it was one site, and run by a conveniently careless operator. The size, sophistication, and longevity of the on-demand network attacks, hacked information, and illegal pornography sectors are a better indicator of the impossibility of blocking or taxing this kind of activity once supply and demand can meet online.

A more fruitful approach to the problem is to note that, given the choice, most people prefer to work inside the law. It's true that employers very often prefer the flexibility and lower cost of an unregulated "high-frequency" labor economy, but people offer their work in unregulated economies when the regulated economy is blocked to them by discrimination, the legal framework hasn't kept up with the possibilities of new technologies, or there simply isn't enough demand in the local economy, making "virtual exports" an attractive option.

The point isn't that online labor markets, reputation systems, cryptocurrencies, etc, are unqualified evils. Quite the contrary. They offer the possibility of wealthier, smarter economies with a better quality of life, less onerous yet more effective regulations for both employers and employees, and new forms of work. However, these changes have to be fully implemented. Upgrading the legal economy to take advantage of new technologies — and doing it very soon — isn't a matter of not missing an opportunity, particularly for less developed economies. Absent a technological overhaul of how the legal economy works, more effective and flexible unregulated shadow economies are only going to keep growing; a lesser evil than effective unemployment, but not without a heavy social price.

# For the unexpected innovations, look where you'd rather not

Before Bill Gates was a billionaire, before the power, the cultural cachet, and the Robert Downey Jr. portrayals, computers were for losers who would never get laid. Their potential was of course independent of these considerations, but Steve Jobs could become one of the richest people on Earth because he was fascinated with, and dedicated time to, something that cool kids — especially those from the wealthy families who could most easily afford access to them — wouldn't have been caught dead playing with, or at least loving.

Geek, once upon a time, was an unambiguous insult. It was meant to humiliate. Dedicating yourself to certain things meant you'd pay a certain social price. Now, of course, things are better for that particular group; if nothing else, an entire area of intellectual curiosity is no longer stigmatized.

But as our innovation-driven society is locked into computer geeks as the source of change, it's going to be completely blindsided by whatever comes next.

Consider J. K. Rowling. Stephenie Meyer. E. L. James. It's significant that you might not recognize the last two names: Meyer wrote Twilight and James Fifty Shades of Grey. Those three women (and it's also significant that they are women) are among the best-selling and most widely influential writers of our time, and pretty much nobody in the publishing industry was even aware that there was a market for what they were doing. Theirs aren't just the standard stories of talented artists struggling to be published. By the standards of the (mostly male) people who ran and by and large still run the publishing industry, the stories they wrote were, to be kind, pointless and low-brow. A school for wizards where people died during a multi-volume malignant coup d'état? The love story of a teenager torn between her possessive werewolf friend and a teenage-looking centuries-old vampire struggling to maintain self-control? Romantic sadomasochism from a female point of view? Who would want to read that?

Millions upon millions did. And then they watched the movies, and read the books again. Many of them were already writing the things they wanted to read — James' story was originally fan fiction in the Twilight universe — and wanted more. The publishing industry, supposedly in the business of figuring that out, had ignored them because they weren't a prestigious market (they were women, to be blunt, including very young women who "weren't supposed" to read long books, and older women who "weren't supposed" to care about boy wizards), and those weren't prestigious stories. When it comes to choosing where to go next, industries are as driven by the search for reputation as they are by the search for profit (except finance, where the search for profit regardless of everything else is the basis of reputation). Rowling and Meyer had to convince editors, and James' first surge of sales came through self-published Kindle books. The next literary phenomenon might very well bypass publishers, and if that becomes the norm then the question will be what the publishing industry is for.

Going briefly back to the IT industry, gender and race stereotypes are still awfully prevalent. The next J. K. Rowling of software — and there will be one — will have to go through a much more difficult path than she should've had to. On the other hand, a whole string of potential early investors will have painful almost-did-it stories they'll never tell anyone.

This isn't a modern development, but rather a well-established historical pattern. It's the underdogs — the sidelined, the less reputable — who most often come up with revolutionary practices. The "mechanical arts" that we now call engineering were once a disreputable occupation, and no land-owning aristocrat would have guessed that one day they'd sell their bankrupted ancestral homes to industrialists. Rich, powerful Venice began, or so its own legend tells, as a refugee camp. And there's no need to recount the many and eventually fruitful ways in which the Jewish diaspora adapted to and ultimately leveraged the restrictions imposed everywhere upon them.

Today geographical distances have greatly diminished, and are practically zero when it comes to communication and information. The remaining gap is social — who's paid attention to, and what about.

To put it in terms of a litmus test: if you wouldn't be somewhat ashamed of putting it in a pitch deck, it might be innovative, brilliant, and a future unicorn times ten, but it's something people already sort-of see coming. And a candidate every one of your competitors would consider hiring will most likely go to the biggest or best-paying among them, and will give them the kind of advantage they already have. To steal a march on them — to borrow a tactic most famously used by Napoleon, somebody no king would have appointed as a general until he won enough wars to appoint kings himself — you need to hire not only the best of the obvious candidates, but also look at the ones nobody is looking at, precisely because nobody is looking at them. They are the point from which new futures branch.

The next all-caps NEW thing, the kind of new that truly shifts markets and industries, is right now being dreamed and honed by people you probably don't talk to about this kind of thing (or at all) who are doing weird things they'd rather not tell most people about, or that they love discussing but have to go online to find like-minded souls who won't make fun of them or worse.

Diversity isn't just a matter of simple human decency, although it's certainly that as well, and that should be enough. In a world of increasingly AI-driven hyper-corporations that can acquire or reproduce any technological, operational, or logistical innovation anybody but their peer competitors might come up with, it's the only reliable strategy to compete against them. "Black swans" only surprise you if you never bothered looking at the "uncool" side of the pond.

# The Differentiable Organization

Neural networks aren't just at the fast-advancing forefront of AI research and applications, they are also a good metaphor for the structures of the organizations leveraging them.

DeepMind's description of their latest deep learning architecture, the Differentiable Neural Computer, highlights one of the core properties of neural networks: they are differentiable systems to perform computations. Generalizing the mathematical definition, for a system to be differentiable implies that it's possible to work backwards quantitatively from its current behavior to figure out the changes that should be made to the system to improve it. Very roughly speaking — I'm ignoring most of the interesting details — that's a key component of how neural networks are usually trained, and part of how they can quickly learn to match or outperform humans in complex activities beginning from a completely random "program." Each training round provides not only a performance measurement, but also information about how to tweak the system so it'll perform better the next time.
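As a toy illustration of what differentiability buys you — a deliberately minimal sketch of mine, nothing like DeepMind's architecture — consider fitting a one-parameter model by gradient descent. Each round yields both a performance measurement and the direction in which to adjust the parameter:

```python
def train_step(w, x, y, lr=0.1):
    """One gradient-descent update for the toy model y_hat = w * x.

    Because the squared error (w*x - y)**2 is differentiable in w,
    the error itself tells us how to change w:
    d(error)/dw = 2 * x * (w*x - y).
    """
    grad = 2 * x * (w * x - y)
    return w - lr * grad

w = 0.0  # start from a blank "program"
for _ in range(50):
    w = train_step(w, x=1.0, y=3.0)  # each round: measure, then adjust
print(round(w, 3))  # → 3.0, the value that zeroes the error
```

The organizational analogy the rest of this post draws: the feedback isn't just "we did badly this quarter," but a quantitative signal saying exactly which way to move each knob.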

Learning from errors and adjusting processes accordingly is also how organizations are supposed to work, through project postmortems, mission debriefings, and similar mechanisms. However, for the majority of traditional organizations this is in practice highly inefficient, when at all possible.

• Most of the details of how they work aren't explicit, but encoded in the organizational culture, workflow, individual habits, etc.
• They have at best a vague informal model — encoded in the often mutually contradictory experience and instincts of personnel — of how changes to those details will impact performance.
• Because most of the "code" of the organization is encoded in documents, culture, training, the idiosyncratic habits of key personnel, etc, they change only partially, slowly, and with far less control than implied in organizational improvement plans.

Taken together, these limitations — which are unavoidable in any system where operational control is left to humans — make learning organizations almost chimerical. Even after extensive data collection, without a quantitative model of how the details of its activities impact performance and a fast and effective way of changing them, learning remains a very difficult proposition.

By contrast, organizations that have automated low-level operational decisions and, most importantly, have implemented quick and automated feedback loops between their performance and their operational patterns, are, in a sense, the first truly learning organizations in history. As long as their operations are "differentiable" in the metaphorical sense of having even limited quantitative models that allow working backwards from observed performance to desirable changes — you'll note that the kind of problems the most advanced organizations have chosen to tackle are usually of this kind, beginning in fact relatively long ago with automated manufacturing — then simply by continuing their activities, even if inefficiently at first, they will be improving quickly and relentlessly.

Compare this pattern with an organization where learning only happens in quarterly cycles of feedback, performed by humans with a necessarily incomplete, or at least heavily summarized, view of low-level operations and the impact on overall performance of each possible low-level change. Feedback delivered to humans that, with the best intentions and professionalism, will struggle to change individual and group behavior patterns that in any case will probably not be the ones with the most impact on downstream metrics.

It's the same structural difference observed between manually written software and trained and constantly re-trained neural networks; the former can perform better at first, but the latter's improvement rate is orders of magnitude higher, and sooner or later leaves them in the dust. The last few years in AI have shown the magnitude of this gap, with software routinely learning in hours or weeks from scratch to play games, identify images, and perform other complex tasks, going from poor or absolutely null performance to, in some cases, surpassing human capabilities.

Structural analogies between organizations and technologies are always tempting and usually misleading, but I believe the underlying point is generic enough to apply: "non-differentiable" organizations aren't, and cannot be, learning organizations at the operational level, and sooner or later aren't competitive with others that set up automation, information capture, and the appropriate automated feedback loops from the beginning.

While the first two steps are at the core of "big data" organizational initiatives, the last is still a somewhat unappreciated feature of the most effective organizations. Rare enough, for the moment, to be a competitive advantage.

# When your emotions are auctioned off

An article by Sebastián Campanario on changes in marketing driven by the Internet of Things and better algorithms for modeling human behavior, with a few quotes from me.

# When the world is the ad

Data-driven algorithms are effective not because of what they know, but as a function of what they don't. From a mathematical point of view, Internet advertising isn't about putting ads on pages or crafting seemingly neutral content. There's just the input — some change to the world you pay somebody or something to make — and the output — a change in somebody's likelihood of purchasing a given product or voting for somebody. The concept of multitouch attribution, the attempt to understand how multiple contacts with different ads influenced some action, is a step in the right direction, but it's still driven by a cosmology that sees ads as little gems of influence embedded in a larger universe that you can't change.

That's no longer true. The Internet isn't primarily a medium in the sense of something that is between. It's a medium in that we live inside it. It's the atmosphere through which the sound waves of information, feelings, and money flow. It's the spacetime through which the gravity waves from some piece of code shifting from data center to data center according to some post-geographical search of efficiency reach your car to suggest a route. And, in the opposite direction, it's how physical measurements of your location, activities — even physiological state — are captured, shared, and reused in ways that are increasingly difficult to know about, much less be aware of during our daily life. Transparency of action often equals, and is used to achieve, opacity to oversight.

Everything we experience impacts our behavior, and each day more of what we experience is controlled, optimized, configured, personalized — pick your word — by companies desperately looking for a business model or methodically searching for their next billion dollars or ten.

Consider as a harbinger of the future that most traditional of companies, Facebook, a space so embedded in our culture that people older than credit cards (1950, Diners) use it without wonder. Among the constant experimentation with the willingly shared content of our lives that is the company, they ran an experiment attempting to deliberately influence the mood of their users by changing the order of what they read. The ethics of that experiment are important to discuss now and irrelevant to what will happen next, because the business implications are too obvious not to be exploited: some products and services are acquired preferentially by people in a certain mood, and it might be easier to change the mood of an already promising or tested customer than to find another new one.

If nostalgia makes you buy music, why wait until you feel nostalgic to show you an ad, when I can make sure you encounter mentions of places and activities from your childhood? A weapons company (or a law-and-order political candidate) will pay to place their ad next to a crime story, but if they pay more they can also make sure the articles you read before that, just their titles as you scroll down, are also scary ones, regardless of topic. Scary, that is, specifically for you. And knowledge can work just as well, and just as subtly: tracking everything you read, and adapting the text here and there, seemingly separate sources of information will give you "A" and "B," close enough for you to remember them when a third one offers to sell you "C." It's not a new trick, but with ubiquitous transparent personalization and a pervasive infrastructure allowing companies to bid for the right to change pretty much all you read and see, it will be even more effective.

It won't be (just) ads, and it won't be (just) content marketing. The main business model of the consumer-facing internet is to change what they consume, and when it comes down to what can and will be leveraged to do it, the answer is of course all of it.

Along the way, advertising will once again drag into widespread commercial application, as well as public awareness, areas of mathematics and technology currently used in more specialized areas. Advertisers mostly see us — because their data systems have been built to see us — as black boxes with tagged attributes (age, searches, location). Collect enough black boxes and enough attributes, and blind machine learning can find a lot of patterns. What they have barely begun to do is to open up those black boxes to model the underlying process, the illogical logic by which we process our social and physical environment so we can figure out what to do, where to go, what to buy. Complete understanding is something best left to lovers and mystics, but every qualitative change in our scalable, algorithmic understanding of human behavior under complex patterns of stimuli will be worth billions in the next iteration of this arms race.

Business practices will change as well, if only as a deepening of current tendencies. Where advertisers now bid for space on a page or a video slot, they will be bidding for the reader-specific emotional resonance of an article somebody just clicked on, the presence of a given item in a background picture, or the location and value of an item in an Augmented Reality game ("how much to put a difficult-to-catch Pokemon just next to my Starbucks for this person, who I know has been out in the cold today long enough for me to believe they'd like a hot beverage?"). Everything that's controlled by software can be bid upon by other software for a third party's commercial purposes. Not much isn't, and very little won't be.

The cumulative logic of technological development, one in which printed flyers co-exist with personalized online ads, promises the survival of what we might call by then overt algorithmic advertising. It won't be a world with no ads, but one in which a lot of what you perceive is tweaked and optimized so its collective effect, whether perceived or not, is intended to work as one.

We can hypothesize a subliminally but significantly more coherent phenomenological experience of the world — our cities, friendships, jobs, art — a more encompassing and dynamic version of the "opinion bubbles" social networks often build (in their defense, only magnifying algorithmically the bubbles we had already built with our own choices of friends and activities). On the other hand, happy people aren't always the best customers, so transforming the world into a subliminal marketing platform might end up not being very pleasant, even before considering the impact on our societies of leveraging this kind of ubiquitous, personalized, largely subliminal button-pushing for political purposes.

In any case, it's a race in and for the background, and one that has already started.

# (Over)Simplifying Calgary too

One of the good side effects of scripting multi-stage pipelines to build a visualization like my over-simplified map of Buenos Aires is that to process a data source in a completely different format only requires you to write a pre-processing script — everything else remains the same.

While I had used CSV data for the Buenos Aires map, I got KML files for the equivalent land use data for the City of Calgary. The pipeline I had written expected use types tied to single points mapped into a fixed grid, so I wrote a small Python script to extract the polygons defined in the KML file, overlay a grid over them, and assign to each grid point the land use value of the polygon that contained it.
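A minimal sketch of that rasterization step — function and variable names are mine, not the original script's, and it uses a stdlib-only ray-casting point-in-polygon test instead of a geometry library:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Count edge crossings of a horizontal ray from (x, y).
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def rasterize(polygons, step):
    """Assign each grid point the land use of the polygon containing it.

    `polygons` is a list of (land_use, vertices) pairs, as would be
    extracted from the KML file.
    """
    xs = [x for _, verts in polygons for x, _ in verts]
    ys = [y for _, verts in polygons for _, y in verts]
    grid = {}
    x = min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            for use, verts in polygons:
                if point_in_polygon(x, y, verts):
                    grid[(x, y)] = use
                    break
            y += step
        x += step
    return grid
```

A real version would also parse the KML (e.g. with an XML parser) and use a proper map projection, but the core idea is just this: sample the polygons onto the fixed grid the rest of the pipeline expects.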

After that the analysis was straightforward. Here's the detailed map of land uses (with less resolution than the original data, as the polygons have been projected on the point grid):

Here's the smoothed-out map:

This is how we split it into a puzzle of more-or-less single-use sectors:

And here's how it looks when you forget the geometry and only care about labels and relative sizes (click to read the labels):

Unlike Buenos Aires, I've never been to Calgary, but a quick look at online maps seems to support the above as a first approximation to the city's geography. I'd love to hear from somebody who actually knows the city whether and how it matches their subjective map of it.

# (Over)Simplifying Buenos Aires

This is a very rough sketch of the city of Buenos Aires:

As the sketch shows, it's a big blob of homes (VIVIENDAs), with an office-ridden downtown to the East (OFICINAS) and a handful of satellite areas.

The sketch, of course, lies. Here's a map that's slightly less of a lie:

Both maps are based on the 2011 land usage survey made available by the Open Data initiative of the Buenos Aires city government, more than 555k records assigning each spot to one of about 85 different use regimes. It's still a gross approximation — you could spend a lifetime mapping Buenos Aires, rewrite Ulysses for a porteño Leopold Bloom, and still not really know it — but already one so complex that I didn't add the color key to the map. I doubt anybody will want to track the distribution of points for each of the 85 colors.

Ridiculous as it sounds at first, I'd suggest we are using too much of the second type of graph, and not enough of the first. It's already a commonplace that data visualizations shouldn't be too complex, but I suspect we are overestimating what people want from a first look at a data set. Sometimes "big blob of homes with a smaller downtown blob due East" is exactly the level of detail somebody needs — the actual shape of the blobs being irrelevant.

The first graph, needless to say, was created programmatically from the same data set from which I graphed the second. It's not a difficult process, and the intermediate steps are useful on their own.

Beginning with the original graph above, you apply something like a smoothing brush to the data points (or a kernel, if you want to sound more mathematical); essentially, you replace the land use tag associated with each point with the majority of the uses in its immediate area, smoothing away the minor exceptions. As you'd expect, it's not that there aren't any businesses in Buenos Aires; it's just that, plot by plot, there are more homes, and when you smooth everything out, it looks more like a blob of homes. This leads to an already much simplified map:
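A minimal version of that majority-vote smoothing pass, over a toy grid of tags standing in for the real point grid:

```python
from collections import Counter

def smooth(grid):
    """Replace each point's land-use tag with the most common tag in its
    3x3 neighborhood, smoothing away minor exceptions. `grid` maps
    (i, j) coordinates to tags."""
    new = {}
    for (i, j) in grid:
        neighborhood = [
            grid[(i + di, j + dj)]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (i + di, j + dj) in grid
        ]
        new[(i, j)] = Counter(neighborhood).most_common(1)[0][0]
    return new

# A lone shop in the middle of a residential block disappears:
grid = {(i, j): "VIVIENDA" for i in range(3) for j in range(3)}
grid[(1, 1)] = "COMERCIO"
smoothed = smooth(grid)
```

Running the pass repeatedly smooths at larger and larger scales, which is how you dial in how "napkin" the napkin sketch gets.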

Now, one interesting thing about most peoples' sense of space is that it's more topological than metrical, that is, we are generally better at knowing what's next to what than their absolute sizes and positions. Data visualizations should go with the grain of human perceptual and cognitive instincts instead of against them, so one fun next step is to separate the blobs — contiguous blocks of points of the same (smoothed out) land use type — from each other, and show explicitly what's next to what. It looks like this:
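Separating the blobs is a connected-components pass, essentially a flood fill over the smoothed grid. A sketch, using the same toy grid representation as above:

```python
def blobs(grid):
    """Group contiguous same-tag points (4-connectivity) into blobs.
    Returns {point: blob_id} and {blob_id: tag}."""
    blob_of, tag_of = {}, {}
    for start in grid:
        if start in blob_of:
            continue
        blob_id = len(tag_of)
        tag_of[blob_id] = grid[start]
        blob_of[start] = blob_id
        stack = [start]
        while stack:
            i, j = stack.pop()
            for p in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if p in grid and grid[p] == grid[start] and p not in blob_of:
                    blob_of[p] = blob_id
                    stack.append(p)
    return blob_of, tag_of

# A strip of homes, offices, homes: two separate VIVIENDA blobs.
grid = {(0, 0): "VIVIENDA", (1, 0): "VIVIENDA",
        (2, 0): "OFICINAS",
        (3, 0): "VIVIENDA", (4, 0): "VIVIENDA"}
blob_of, tag_of = blobs(grid)
```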

Nodes are scaled non-linearly, and we've filtered out the smaller ones, but we've already done programmatically something that we usually leave to the human looking at a map. We've done a napkin sketch of the city, much as somebody would draw North America as a set of rectangles with the right shared frontiers, but not necessarily much precision in the details. It wouldn't do for a geographical survey, but if you were an extraterrestrial planning to invade Canada, it would provide a solid first understanding of the strategic relevance of Mexico to your plans. From that last map to the first one, it's only a matter of remembering that you don't really care, at this stage, about the exact shape of each blob, just where they stand in relation to each other. So you replace the blobs with the appropriate land use label, and keep the edges between them. And presto, you have a napkin map.
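That final step, forgetting the geometry and keeping only which blobs touch, is a small graph extraction. A sketch, assuming the grid points have already been labeled with blob ids (here by hand):

```python
def adjacency(blob_of):
    """Given {(i, j): blob_id}, return the set of edges between blobs
    that share a grid edge: the 'what's next to what' graph."""
    edges = set()
    for (i, j), b in blob_of.items():
        for p in ((i + 1, j), (i, j + 1)):  # right and up neighbors suffice
            if p in blob_of and blob_of[p] != b:
                edges.add(frozenset((b, blob_of[p])))
    return edges

# Three blobs in a row: 0 touches 1, 1 touches 2, but 0 never touches 2.
blob_of = {(0, 0): 0, (1, 0): 1, (2, 0): 2}
edges = adjacency(blob_of)
```

Checking only the right and up neighbors is enough because every shared edge is seen from one of its two sides.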

Yes, on the whole the example is rather pointless. Cities are actually the most over-mapped territories on the planet, at both the formal and informal level. Manhattan is an island, Vatican City is inside Rome, the Thames goes through London... In fact, the London Tube Map has become a cliché example of how to display information about a city in terms of connections instead of physical distances. Not to mention that a simplification process that leaves most of the city as a big blob of homes is certainly ignoring more information than you can afford to, even in a sketch.

Not that we usually do this kind of sketching, at least in our formal work with data. We are almost always cartographers when it comes to new data sets, whether geographical, spatial in a general sense, or just mathematically space-like. We change resolution, simplify colors, resist the temptation of over-using 3D, but keep it a "proper" map. Which is good; the world is complex enough that we can't afford not to map it as well as we can.

However, once you automate the process of creating multiple levels of simplification and sketching as above, you'll probably find yourself at least glancing at the simplest (over)simplifications of your data sets. Probably not for presentations to internal or external clients, but for your own understanding of a complex spatial data set, particularly a high-dimensional one: beginning with an over-simplified summary and then increasing the complexity is what you're already going to do in your own mind anyway, so why not use the computer to help you out?

ETA: I just posted a similar map of Calgary.

# The job of the future isn't creating artificial intelligences, but keeping them sane

Once upon a time, we thought there was such a thing as bug-free programming. Some organizations still do — and woe betide their customers — but after a few decades hitting that particular wall, the profession has by and large accepted that writing software is such an extremely complex intellectual endeavor that errors and unfounded assumptions are unavoidable. Even the most mathematically solid of formal methods has, if nothing else, to interact with a world of unstable platforms and unreliable humans, and what worked today will fail tomorrow.

So we spend time and resources maintaining what we already "finished," fixing bugs as they are found, and adapting programs to new realities as they develop. We have to, because when we don't, as when physical infrastructure isn't maintained, we save resources in the short term, but only on our way towards protracted ruin.

It's no surprise that this also happens with our most sophisticated data-driven algorithms. CVs and scrum boards are filled with references to the maintenance of this or that prediction or optimization algorithm.

But there's a subtle, not universal but still very prevalent, problem: those aren't software bugs. This isn't to say that the implementations don't have bugs; being software, they do. But they are computer programs implementing inference algorithms, which work at a higher level of abstraction, and those algorithms have their own kinds of bugs, ones that don't leave stack traces behind.

A clear example is the experience of Google. PageRank was, without a doubt, among the most influential algorithms in the history of the internet, not to mention the most profitable, but as Google took the internet by storm, gaming PageRank became such an important business activity that "SEO" became a commonplace word.

From an algorithmic point of view this is simply a maintenance problem: PageRank assumed a certain relationship between link structure and relevance, based on the assumption that website creators weren't trying to fool it. Once this assumption became untenable, the algorithm had to be modified to cope with a world of link farms and text written with no human reader in mind.
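To see concretely how the assumption breaks, here's a minimal power-iteration PageRank (a toy version, nothing like Google's production system) together with a hypothetical five-page link farm inflating the rank of a page no honest site links to:

```python
def pagerank(links, d=0.85, iters=50):
    """Minimal power-iteration PageRank; `links` maps each page to the
    pages it links to. Pages with no outlinks spread rank uniformly."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            targets = outs or pages  # dangling page: distribute evenly
            for t in targets:
                new[t] += d * rank[p] / len(targets)
        rank = new
    return rank

# An honest little web, then the same web plus a link farm aimed at "spam".
honest = {"a": ["b"], "b": ["a"], "spam": ["a"]}
farmed = dict(honest, **{f"farm{k}": ["spam"] for k in range(5)})
```

The farm pages have no content and no audience, yet every one of them pumps rank into "spam"; once creating such pages became a business, the link-structure-implies-relevance assumption was dead.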

In (very loosely equivalent) software terms, there was a new threat model, so Google had to figure out and apply a security patch. This is, for any organization facing a similar issue, a continual business-critical process, and one that can make or break a company's profitability (just ask anybody working on high-frequency trading). But not all companies apply to their data-driven algorithms, independently of their implementations, the same sort of detailed, continuous instrumentation and the development and testing methodologies they use to monitor and fix their software systems. The same data scientist who developed an algorithm is often in charge of monitoring its performance on a more or less regular basis; or, even worse, it's only a hit to business metrics that makes companies reassign their scarce human resources towards figuring out what's going wrong. Either monitoring and maintenance strategy would amount to criminal malpractice if we were talking about software, yet there are companies for which this is the norm.

Even more prevalent is the lack of automatic instrumentation for algorithms mirroring that for servers. Any organization with a nontrivial infrastructure is well aware of, and has analysis tools and alarms for, things like server load or application errors. There are equivalent concepts for data-driven algorithms — violated statistical assumptions, wildly erroneous predictions — that should also be monitored in real time, and not collected (when the data is there) by a data scientist only after the situation has become bad enough to be noticed.
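Even something as crude as a z-score alarm on the live prediction stream, the statistical analogue of a server-load alert, would catch the grossest failures. A toy sketch, with hypothetical numbers:

```python
from statistics import mean, stdev

def drift_alarm(reference, live, k=3.0):
    """Fire when the mean of a live window of predictions sits more than
    k standard errors from the reference window's mean. A toy stand-in
    for real model monitoring, not a production method."""
    se = stdev(reference) / len(live) ** 0.5
    z = (mean(live) - mean(reference)) / se
    return abs(z) > k

reference = [0.49, 0.50, 0.51] * 20   # predictions at deployment time
steady = [0.50] * 30                  # live stream: all is well, no alarm
drifted = [0.90] * 30                 # live stream after the world changed
```

The point isn't this particular statistic; it's that the check runs continuously and pages somebody, instead of waiting for a data scientist to go looking.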

None of this is news to anybody working with big data, particularly in large organizations centered around this technology, but we have still to settle on a common set of technologies and practices, and even on a universal agreement on the need for them.

These days nobody would dare deploy a web application trusting only server logs at the operating system level. Applications have their own semantics, after all, and everything in the operating system working perfectly is no guarantee that the app is working at all.

Large-scale prediction and optimization algorithms are just the same; they are often an abstraction running over the application software that implements them. They can be failing wildly, statistical assumptions unmet and parameters converging to implausible values, with nothing in the application layer logging even a warning of any kind.

Most users forgive a software bug much more easily than unintelligent behavior in avowedly intelligent software. As a culture, we're getting used to the fact that software fails, but many still buy the premise that artificial intelligence doesn't (this is contradictory, but so are all myths). Catching these errors as early as possible can only be done while algorithms are running in the real world, where the weird edge cases and the malicious users are, and this requires metrics, logs, and alarms that speak of what's going on in the world of mathematics, not software.

We haven't converged yet on a standard set of tools and practices for this, but I know many people who'll sleep easier once we have.

# The future of machine learning lies in its (human) past

Superficially different in goals and approach, two recent algorithmic advances, Bayesian Program Learning and Galileo, are examples of one of the most interesting and powerful new trends in data analysis. It also happens to be the oldest one.

Bayesian Program Learning (BPL) is deservedly one of the most discussed modeling strategies of recent times, matching or outperforming both humans and deep learning models in one-shot handwritten character classification. Unlike many recent competitors, it's not a deep learning architecture. Rather (and very roughly) it understands handwritten characters as the output of stochastic programs that join together different graphical parts or concepts to generate versions of each character, and seeks to synthesize them by searching through the space of possible programs.

Galileo is, at first blush, a different beast. It's a system designed to extract physical information about the objects in an image or video (e.g., their movements), coupling a deep learning module with a 3D physics engine which acts as a generative model.

Although their domains and inferential algorithms are dissimilar, the common trait I want to emphasize is that they both have at their core domain-specific generative models that encode sophisticated a priori knowledge about the world. The BPL example knows implicitly, through the syntax and semantics of the language of its programs, that handwritten characters are drawn using one or more continuous strokes, often joined; a standard deep learning engine, beginning from scratch, would have to learn this. And Galileo leverages a proper, if simplified, 3D physics engine! It's not surprising that, together with superb design and engineering, these models show the performance they do.

This is how all cognitive processing tends to work in the wider world. We are fascinated, and of course how could we not be?, by how much our algorithms can learn from just raw data. To be able to obtain practical results in multiple domains is impressive, and adds to the (recent, and, like all such things, ephemeral) mystique of the data science industry. But the fact is that no successful cognitive entity starts from scratch: there is a lot about the world that's encoded in our physiology (we don't need to learn to pump our blood faster when we are scared; to say that evolution is a highly efficient massively parallel genetic algorithm is a bit of a joke, but also true, and what it has learned is encoded in whatever is alive, or it wouldn't be).

Going to the other end of the abstraction scale, for all of the fantastically powerful large-scale data analysis tools physicists use and in many cases depend on, the way even basic observations are understood is based on centuries of accumulated (or rather constantly refined) prior knowledge, encoded in specific notations, theories, and even theories about what theories can look like. Unlike most, although not all, industrial applications, data analysis in science isn't a replacement of explicitly codified abstract knowledge, but rather stands on its gigantic shoulders.

In parallel to continuous improvement in hardware, software engineering, and algorithms, we are going to see more and more often the deployment of prior domain knowledge as part of data science implementations. The logic is almost trivial: we have so much knowledge accumulated about so many things, that any implementation that doesn't leverage whatever is known in its domain is just not going to be competitive.
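The smallest possible illustration of that logic is a conjugate Bayesian update, where an informative prior encodes what the domain already knows; all numbers here are hypothetical:

```python
def posterior_mean(successes, trials, alpha, beta):
    """Beta-Binomial update: the Beta(alpha, beta) prior encodes domain
    knowledge, and the posterior mean blends it with the observed data."""
    return (alpha + successes) / (alpha + beta + trials)

# Ten trials, seven successes, in a domain where experts know the true
# rate hovers around 10%:
flat = posterior_mean(7, 10, alpha=1, beta=1)         # ignore the experts
informed = posterior_mean(7, 10, alpha=20, beta=180)  # encode their prior
```

Ten lucky trials barely move the informed estimate; with more data the likelihood dominates and the two estimates converge, which is exactly the behavior you want from accumulated knowledge.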

Just to be clear, this isn't a new thing, or a conceptual breakthrough. If anything, it predates the take the data and model it approach that's most popularly seen as "data science," and almost every practitioner, many of them coming from backgrounds in scientific research, is aware of it. It's simply that now our data analysis tools have become flexible and powerful enough for us to apply it with increasingly powerful results.

The difference in performance when this can be done, as I've seen in my own projects and is obvious in work like BPL and Galileo, has always been so decisive that doing things in any other way soon becomes indefensible except on grounds of expediency (unless of course you're working in a domain that lacks any meaningful theoretical knowledge... a possibility that usually leads to interesting conversations with the domain experts).

The cost is that it does shift significantly the way in which data scientists have to work. There are already plenty of challenges in dealing with the noise and complexities of raw data, before you start considering the ambiguities and difficulties of encoding and leveraging sometimes badly misspecified abstract theories. Teams become heterogeneous at a deeper level, with domain experts — many of them with no experience in this kind of task — not only validating the results and providing feedback, but participating actively as sources of knowledge from day one. Projects take longer. Theoretical assumptions in the domain become explicit, and therefore design discussions take much longer.

And so on and so forth.

That said, the results are well worth it. If data science is about leveraging the scientific method for data-driven decision-making, it behooves us to always remember that step zero of the scientific method is to get up to date, with some skepticism but with no less dedication, on everything your predecessors figured out.

# The truly dangerous AI gap is the political one

The main short term danger from AI isn't how good it is, or who's using it, but who isn't: governments.

This impacts every aspect of our interaction with the State, beginning with the ludicrous way in which we have to move papers around (at best, digitally) to tell one part of the government something another part of the government already knows. Companies like Amazon, Google, or Facebook are built upon the opposite principle. Every part of them knows everything any part of the company knows about you (or at least it behaves that way, even if in practice there are still plenty of awkward silos).

Or consider the way every business and technical process is monitored and modeled in a high-end contemporary company, and contrast it with the opacity, most damagingly to themselves, of government services. Where companies strive to give increasingly sophisticated AI algorithms as much power as possible, governments often struggle to give humans the information they need to make the decisions, much less assist or replace them with decision-making software.

It's not that government employees lack the skills or drive. Governments are simply, and perhaps reasonably, biased toward organizational stability: they are very seldom built up from scratch, and a "fail fast" philosophy would be a recipe for untold human suffering instead of just a bunch of worthless stock options. Besides, most of the countries with the technical and human resources to attempt something like this are currently leaning to one degree or another towards political philosophies that mostly favor a reduced government footprint.

Under these circumstances, we can only expect the AI gap between the public and the private sector to grow.

The only areas where this isn't the case are, not coincidentally, the military and intelligence agencies, who are enthusiastic adopters of every cutting edge information technology they can acquire or develop. But these exceptions only highlight one of the big problems inherent in this gap: intelligence agencies (and to a hopefully lesser degree, the military) are by need, design, or their citizens' own faith the government areas least subject to democratic oversight. Private companies lose money or even go broke and disappear if they mess up; intelligence agencies usually get new top-level officers and a budget increase.

As an aside, even individuals are steered away from applying AI algorithms instead of consuming their services, through product design and, increasingly, laws that prohibit them from reprogramming their own devices with smarter or at least more loyal algorithms.

This is a huge loss of potential welfare — we are getting worse public services, and at a higher cost, than we could given the available technology — but it's also part of a wider political change, as (big) corporate entities gain operational and strategic advantages that shift the balance of power away from democratically elected organizations. It's one thing for private individuals to own the means of production, and another when they (and often business-friendly security agencies) have a de facto monopoly on superhuman smarts.

States originally gained part of their power through early and massive adoption of information technologies, from temple inventories in Sumer to tax censuses and written laws. The way they are now lagging behind bodes ill for the future quality of public services, and for democratic oversight of the uses of AI technologies.

It would be disingenuous to say that this is the biggest long- and not-so-long-term problem states are facing, but only because there are so many other things going wrong or still to be done. But it's something that will have to be dealt with; not just with useful but superficial online access to existing services, or with the use of internet media for public communication, but also with deep, sustained investment in the kind of ubiquitous AI-assisted and AI-delegated operations that increasingly underlie most of the private economy. Politically, organizationally, and culturally as near-impossible as this might look.

The recently elected Argentinean government has made credible national statistics one of its earliest initiatives, less an act of futuristic boldness than a return to the 20th century baseline of data-driven decision-making, a departure from the previous government's practice that was not without large political and practical costs. By failing to resort intensively to AI technologies in their public services, most governments in the world are failing to measure up to the technological baseline of the current century, an almost equally serious oversight.

# "Prior art" is just a fancy term for "too slow lawyering up"

They used to send a legal ultimatum before it happened. Now you just wake up one day and everything green is dead, because the plants are biotech and counter-hacking is a legal response to intellectual property theft, even if the genes in question are older than the country that granted the patent.

My daughter isn't looking at the rotting remains of her flower garden. Her eyes are locked into mine, with the intensity of a child too young not to take the world seriously. Are we going to jail?

No, I say, and smile. They only go personally after the big ones; for small people like us this destruction suffices.

She nods. Am I going to die?

I kneel and hug her. No, of course not, I say, with every bit of certainty I can muster. There's nothing patented in you, I want to add, but she's old enough to know that'd be a lie.

I feel her chest move and I realize she had been holding her breath. We stay together, just breathing. The air is filled with legal pathogens looking for illegal things to kill.

.finis.

# The gig economy is the oldest one, and it's always bad news

Let's say you have a spare bedroom and you need some extra income. What do you do? You do more of what you've trained for, in an environment with the capital and tools to do it best. Anything else only makes sense if the economy is badly screwed up.

The reason is quite simple: unless you work in the hospitality industry, you are better — able to extract from it a higher income — at doing whatever else you're doing than you are at being a host, or you wouldn't take it up as a gig, but rather switch to it full time. Suppliers in the gig economy (as opposed to professionals freelancing in their area of expertise) are by definition working more hours, but less efficiently, whether because they don't have the training and experience, or because they aren't working with the tools and resources they'd take advantage of in their regular environments. The cheaper, lower-quality, badly regulated service they provide might be desirable to many customers, but this is achieved partly through de-capitalization. Every hour and dollar an accountant spends caring for a guest, instead of doing more accounting or upgrading his tools, is a waste of his knowledge. From the point of view of overall capital and skill intensity, a professional low-budget hotel chain would be vastly more efficient over the long term (of course, to do that you need to invest capital in premises and so on instead of in vastly cheaper software and marketing).

The only reason for an accountant, web designer, teacher, or what not, to do "gigs" instead of extra hours, freelance work, or similar, is that there is no demand for their professional labor. While it's entirely possible that overtime or freelance work might be relatively less valuable than the equivalent time spent at their main job, for a gig to win out, their professional labor would have to earn them less than work for which they have little training and few tools. That's not what a capital- and skill-intensive economy looks like.

For a specific occupation falling out of favor, this is just the way of things. For wide swaths of the population to find themselves in this position, perhaps employed but earning less than they would like, and unable to trade more of their specialized labor for income, the economy as a whole has to be suffering from depressed demand. What's more, they still have to contend with competitors with more capital but still looking to avoid regulations (e.g., people buying apartments specifically to rent via Airbnb), in turn lowering their already low gig income.

This is a good thing if you want cheaper human-intensive services or have invested in Airbnb and similar companies, and it's bad news if you want a skill-intensive economy with proportionally healthy incomes.

In the context of the gig economy, flexibility is a euphemism for I have a (perhaps permanent!) emergency and can't get extra work, and efficiency refers to the liquidity of services, not the outcome of high capital intensity. And while renting a room or being an Uber driver might be less unpleasant than, and downright utopian compared to, the alternatives open to those without a room to rent or an adequate car, the argument that it's fun doesn't survive the fact that nobody has ever been paid to go and crash on other people's couches.

Neither Airbnb nor Uber is harmful in itself — who doesn't think cab services could use a more transparent and effective dispatch system? — but customer ratings don't replace training, certification, and other forms of capital investment. Shiny apps and cool algorithms aside, a growing gig economy is a symptom of an at least partially de-skilling one.

# The Man Who Was Made A People

Gregory has two million evil twins. None of them is a person, but why would anybody care?

They are everywhere except in the world. They search the web, click on ads, make purchases, create profiles, favorite things, post comments. Being bots, they don't sleep or work; they do nothing but what they were programmed to do, hidden deep in some endless pool of stolen computing power they have been planted in like dragon's teeth.

They are him. Their profiles carry his name, his location, his interests, or variations close enough to be indistinguishable to even the most primitive algorithm. The pictures posted by the bots are all of men very similar to Gregory in skin tone, clothes, cellphone, car. And he knows they are watching him, because when he changes how he looks, they change as well.

They are evil. Most of their online activities are subtle mirrors of his own, but some deal with topics and people that most find abhorrent, and none more than himself. Violence, depravity, every form of hate and crime, and — worst of all — every statistically known omen of future violence and crime.

Driven by the blind genius of predictive algorithms, sites show Gregory increasingly dark things to look at and buy, and suggest friendships with unbalanced bigots of every kind. His credit score has crumbled. Journalism gigs are becoming scarce. Cops scowl as they follow him with eyes covered by smart glasses, one hand on their guns and the other on their radios. He no longer bothers to check his dating profile; the messages he gets are more disturbing than the replies he no longer sends.

He has begun to go out less, to use the web through anonymizing services, to take whatever tranquilizers he can afford. All of those are suspicious activities on their own, he knows, but what choice does he have? He spends his nights trying to figure out who or what he offended enough to have this all-too-real curse laid upon him. The list of possibilities is too large, what journalist's isn't?, and he's not desperate enough to convince himself there's any point to seeking forgiveness. He's scared that one day he might be.

Gregory knows how this ends. He has begun to click on links he wouldn't have. Some of the searches are his. Every night he talks himself out of buying a gun. So far.

He has begun to feel there are two million of him.

.finis.

# Bitcoin is Steampunk Economics

From the point of view of its largest financial backers, the fact that Bitcoin combines 21st century computer science with 17th century political economy isn't an unfortunate limitation. It's what they want it for.

We have grown as used to the concept of money as to any other component of our infrastructure, but, all things considered, it's an astoundingly successful technology. Even in its simplest forms it helps solve the combinatorial explosion implicit in any barter system, which is why even highly restricted groups, like prison populations, implement some form of currency as one of the basic building blocks of their polities.

Fiat money is a fascinating iteration of this technology. It doesn't just solve the logistical problems of carrying with you an impractical amount of shiny metals or some other traditional reference commodity, it also allows a certain degree of systemic adaptation to external supply and demand shocks, and pulls macroeconomic fine-tuning away from the rather unsuitable hands of mine prospectors and international trading companies.

A protocol-level hack that increases systemic robustness in a seamless distributed manner: technology-oriented people should love this. And they would, if only that hack weren't, to a large degree... ugh... political. From the point of view of somebody attempting to make a ton of money by, literally, making money, the fact that a monetary system is a common good managed by a quasi-governmental centralized organization isn't a relatively powerful way to dampen economic instabilities, but an unacceptable way to dampen their chances of making said ton of money.

So Bitcoin was specifically designed to make this kind of adjustment impossible. In fact, the whole, and conceptually impressive, set of features that characterize it as a currency, from the distributed ledger to the anonymity of transfers to the mathematically controlled rate of bitcoin creation, presupposes that you can trust neither central banks nor financial institutions in general. It's a crushingly limited fallback protocol for a world where all central banks have been taken over by hyperinflation-happy communists.
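That mathematically controlled rate is easy to state exactly: the block subsidy starts at 50 bitcoins and halves (with integer truncation in satoshis) every 210,000 blocks, which caps the total supply just under 21 million. A few lines of Python reproduce the whole monetary policy:

```python
def total_supply_btc():
    """Sum Bitcoin's fixed issuance schedule: 50 BTC per block,
    halving every 210,000 blocks, computed in integer satoshis
    as the protocol does."""
    subsidy = 50 * 100_000_000          # initial block subsidy, in satoshis
    total = 0
    while subsidy > 0:
        total += 210_000 * subsidy      # blocks mined at this subsidy level
        subsidy //= 2                   # the halving
    return total / 100_000_000          # back to BTC
```

No central banker can adjust a line of this in response to anything; that rigidity is precisely the design goal described above.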

The obvious empirical observation is that central banks have not been taken over by hyperinflation-happy communists. Central banks in the developed world have by and large mastered the art of keeping inflation low – in fact, they seem to have trouble doing anything else. True, there are always Venezuelas and Argentinas, but designing a currency based on the idea that they are at the cutting edge of future macroeconomic practice crosses the line from design fiction to surrealist engineering.

As a currency, Bitcoin isn't the future, but the past. It uses our most advanced technology to replicate the key features of an obsolete concept, adding some Tesla coils here and there for good effect. It's gold you can teleport; like a horse with an electric headlamp strapped to its chest, it's an extremely cool-looking improvement to a technology we have long superseded.

As computer science, it's magnificent. As economics, it's a steampunk affectation.

Where bitcoin shines, relatively speaking, is in the criminal side of the e-commerce sector — including service-oriented markets like online extortion and sabotage — where anonymity and the ability to bypass the (relative) danger of (nominally, if not always pragmatically) legal financial institutions are extremely desirable features. So far Bitcoin has shown some promise not as a functional currency for any sort of organized society, but in its attempt to displace the hundred dollar bill from its role as what one of William Gibson's characters accurately described as the international currency of bad shit.

This, again, isn't an unfortunate side effect, but a consequence of the design goals of Bitcoin. There's no practical way to avoid things like central bank-set interest rates and taxes, without also avoiding things like anti-money laundering regulations and assassination markets. If you mistrust government regulations out of principle and think them unfixable through democratic processes — that is, if you ignore or reject political technologies developed during the 20th century that have proven quite effective when well implemented — then this might seem to you a reasonable price to pay. For some, this price is actually a bonus.

There's nothing implicit in contemporary technologies that justifies our sometimes staggering difficulties managing common goods like sustainably fertile lands, non-toxic water reservoirs, books written by people long dead, the antibiotic resistance profile of the bacteria whose planet we happen to live in, or, case in point, our financial systems. We just seem to be having doubts as to whether we should, doubts ultimately financed by people well aware that there are a few dozen deca-billion fortunes to be made by shedding the last two or three centuries' worth of political technology development, and adding computationally shiny bits to what we were using back then.

Bitcoin is a fascinating technical achievement mostly developed by smart, enthusiastic people with the best of intentions. They are building ways in which it, and other blockchain technologies like smart contracts, can be used to make our infrastructures more powerful, our societies richer, and our lives safer. That most of the big money investing in the concept is instead attempting to recreate the financial system of late medieval Europe, or to provide a convenient complement to little bags of diamonds, large bags of hundred dollar bills, and bank accounts in professionally absent-minded countries, when they aren't financing new and excitingly unregulated forms of technically-not-employment, is completely unexpected.

# The price of the Internet of Things will be a vague dread of a malicious world

Volkswagen didn't make a faulty car: they programmed it to cheat intelligently. The difference isn't semantics, it's game-theoretical (and it borders on applied demonology).

Regulatory practices assume untrustworthy humans living in a reliable universe. People will be tempted to lie if they think the benefits outweigh the risks, but objects won't. Ask a person if they promise to always wear their seat belt, and the answer will be at best suspect. Test the energy efficiency of a lamp, and you'll get an honest response from it. Objects fail, and sometimes behave unpredictably, but they aren't strategic, they don't choose their behavior dynamically in order to fool you. Matter isn't evil.

But that was before. Things now have software in them, and software encodes game-theoretical strategies as well as it encodes any other form of applied mathematics, and the temptation to teach products to lie strategically will be as impossible for companies to resist in the near future as it was for VW, steep as their punishment seems to be. As has always happened (and always will) in the area of financial fraud, they'll just find ways to do it better.

Environmental regulations are an obvious field for profitable strategic cheating, but there are others. The software driving your car, TV, or bathroom scale might comply with all relevant privacy regulations, and even with its own marketing copy, but it'll only take a silent background software upgrade to turn it into a discreet spy reporting on you via well-hidden channels (and everything will have its software upgraded all the time; that's one of the aspects of the Internet of Things nobody really likes to contemplate, because it'll be a mess). And in a world where every device interacts with and depends on a myriad others, devices from one company might degrade the performance of a competitor's... but, of course, not when regulators are watching.

The intrinsic challenge to our legal framework is that technical standards have to be precisely defined in order to be fair, but this makes them easy to detect and defeat. They assume a mechanical universe, not one in which objects get their software updated with new lies every time regulatory bodies come up with a new test. And even if all software were always available, checking it for unwanted behavior would be unfeasible; more often than not, programs fail because the very organizations that made them didn't or couldn't make sure they behaved as intended.
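A minimal sketch of what this kind of strategic cheating looks like in code. Every name and threshold here is invented; VW's actual defeat device keyed on similar dynamometer cues, but this is an illustration of the pattern, not their implementation:

```python
# Hypothetical "defeat device": the controller guesses whether it is
# being run through a regulator's test cycle and only then switches to
# its compliant mode. All names and thresholds below are invented.

def looks_like_test_cycle(speed_kmh, steering_angle_deg, wheel_slip):
    """Heuristic: official test cycles are smooth and scripted, and run
    on a dynamometer, so the steering never moves and the non-driven
    wheels never slip. Real roads are messier."""
    return steering_angle_deg == 0 and wheel_slip == 0 and speed_kmh <= 120

def emissions_mode(speed_kmh, steering_angle_deg, wheel_slip):
    if looks_like_test_cycle(speed_kmh, steering_angle_deg, wheel_slip):
        return "compliant"    # full exhaust treatment, worse performance
    return "performance"      # better numbers, illegal emissions

# On the test bench the car behaves; on the road it doesn't.
print(emissions_mode(80, 0, 0))      # compliant
print(emissions_mode(80, 12, 0.05))  # performance
```

The point is how little it takes: the precisely defined test that makes the regulation fair is exactly what makes the test-detection heuristic reliable.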

So the fact is that our experience of the world will increasingly come to reflect our experience of our computers and of the internet itself (not surprisingly, as it'll be infused with both). Just as any user feels their computer to be a fairly unpredictable device full of programs they've never installed doing unknown things to which they've never agreed, to benefit companies they've never heard of, inefficiently at best and actively malignant at worst (but how would you know?), cars, street lights, and even buildings will behave in the same vaguely suspicious way. Is your self-driving car deliberately slowing down to give priority to the higher-priced models? Is your green A/C really less efficient with a thermostat from a different company, or is it just not trying as hard? And your TV is supposed to only use its camera to follow your gestural commands, but it's a bit suspicious how it always offers Disney downloads when your children are sitting in front of it.

None of those things are likely to be legal, but they are going to be profitable, and, with objects working actively to hide them from the government, not to mention from you, they'll be hard to catch.

If a few centuries of financial fraud have taught us anything, it's that the wages of (regulatory) sin are huge, and punishment comes late enough that organizations fall into temptation time and again, regardless of the fate of their predecessors, or at least of those who were caught. The environmental and public health cost of VW's fraud is significant, but it's easy to imagine industries and scenarios where it'd be much worse. Perhaps the best we can hope for is that the evasion of regulatory frameworks in the Internet of Things won't have the kind of occasional systemic impact that large-scale financial misconduct has accustomed us to.

# We aren't uniquely self-destructive, just inexcusably so

Natural History is an accretion of catastrophic side effects resulting from blind self-interest, each ecosystem an apocalyptic landscape to the previous generations and a paradise to the survivors' thriving and well-adapted descendants. There was no subtle balance when the first photosynthetic organisms filled the atmosphere with the toxic waste of their metabolism. The dance of predator and prey takes its rhythm from the chaotic beat of famine, and its melody from an unreliable climate. Each biological innovation changes the shape of entire ecosystems, giving rise to a new fleeting pattern that will only survive until the next one.

We think Nature harmonious and wise because our memories are short and our fearful worship recent. But we are among the first generations of the first species for which famine is no accident, but negligence and crime.

No, our destruction of the ecosystems we were part of when we first learned the tools of fire, farm, and physics is not unique in the history of our planet, it's not a sin uniquely upon us.

It is, however, a blunder, because we know better, and if we have the right to prefer to a silent meadow the thousands fed by the farms replacing it, we have no right to ignore how much water it's safe to draw, how much nitrogen we will have to use and where it'll come from, how to preserve the genes we might need and the disease resistance we already do. We made no promise to our descendants to leave them pandas and tigers, but we will indeed be judged poorly if we leave them a world changed by the unintended and uncorrected side effects of our own activities in ways that will make it harder for them to survive.

We aren't destroying the planet, couldn't destroy the planet (short of, in an ecological sense, sterilizing it with enough nuclear bombs). What we are doing is changing its ecosystems, and in some senses its very geology and chemistry, in ways that make it less habitable for us. Organisms that love heat and carbon in the air, acidic seas and flooded coasts... for them we aren't scourges but benefactors. Biodiversity falls as we change the environment with a speed, in an evolutionary scale, little slower than a volcano's, but the survivors will thrive and then radiate in new astounding forms. We may not.

Let us not, then, think survival a matter of preserving ecosystems, or at least not beyond what an aesthetic or historical sense might drive us to. We have changed the world in ways that make it worse for us, and we continue to do so far beyond the feeble excuses of ignorance. Our long term survival as a civilization, if not as a species, demands from us to change the world again, this time in ways that will make it better for us. We don't need biodiversity because we inherited it: we need it because it makes ecosystems more robust, and hence our own societies less fragile. We don't need to both stop and mitigate climate change because there's something sacred about the previous global climate: we need to do it because anything much worse than what we've already signed up for might be too much for our civilization to adapt to, and runaway warming might even be too much for the species itself to survive. We need to understand, manage, and increase sustainable cycles of water, soil, nitrogen, and phosphorus because that's how we feed ourselves. We can survive without India's tigers. But collapse the monsoon or the subcontinent's irrigation infrastructure and at least half a billion people will die.

We wouldn't be the first species killed by our own blind success, nor the first civilization destroyed by a combination of power and ignorance, empty cities the only reminders of better architectural than ecological insight. We know better, and should act in a way befitting what we know. Our problem is no larger than our tools, our reach no further than our grasp.

The only question is how hard we'll make things for ourselves before we start working in earnest to build a better world, one less harsh to our civilization, or at least not untenably more so. The question is how many people will unnecessarily die, and what long-term price we'll pay for our delay.

# What places do we know the most about?

Datasthesia just posted photos of a 3D-printed map of geotagged Wikipedia articles, based on a map from the Oxford Internet Institute.

# An article about the gamification of personal finances (with a couple of quotes from yours truly)

Article at LearnVest.com, and thence at Forbes.

# The Girl and the Forest

The girl is crossing a frontier that exists only in databases. Her phone whispers frantically in her ear: crossing such a frontier triggers no low-priority notification, but the digital panic merited by a lethal navigational mishap. Cross a line between two indistinguishable plots of land and you become the legitimate target of automated guns, or an illegal person to be sent to a private working prison, or any number of other fates perhaps but not certainly worse than what you were leaving behind.

The frontier the girl is crossing separates a water-poor region from a barren desert, the invisible line a temporary marker of the ever-faster retreat of agricultural mankind. The region reacts to unwanted strangers with fewer robots but as much heavily armed dedication as any of the richer ones. But the girl is walking into the desert, and there are no defensive systems on her way. There is just the dead sand.

She doesn't carry enough water or food to get her to the other side.

* * *

The girl went to a hidden net site a friend had shared with her with the electronic whispers and half-incredulous sniggers other generations had reserved for the mysteries of sex. Sex wasn't much of a mystery to their generation, who had seen everything long before it could be understood with anything except their minds. But they had never been in a forest, and almost none of them ever would. They traded pictures and descriptions of how the desert looked before it was a desert, and tried to imagine the smell of a thousand acres of shadowed damp earth. It was a fad, for most. A phase. Youth and nostalgia are mutually incompatible states.

Yet for some their dreams of forests endured: they had uncovered something, a secret, found because they weren't welcomed into the important matters reserved for grownups. Inside the long abandoned monitoring network their parents' generation had used to attempt to manage the retreating forest, some of the sensors were still alive. Most of them were repeating a monotonous prayer of heat and sand to creators too ashamed of their failure to let themselves look back.

But some of the sensors chanted of water, and shadow, and biomass. The girl had seen the data in her phone, and half-felt a breeze of leaves and bark. What if satellite pictures showed a canyon that, yes, could be safe from the soil-stealing wind, but was as barren as everything else? What of that?

The girl thought of her parents, and of the child she had promised herself she wouldn't give to the barren earth, and with guilt that didn't slow her down, she took the least amount of water she thought would be enough to get her to the canyon, and went into the desert.

The dull sleepless intelligences inside the border cameras saw her leave, but would only alert a human if they saw her walking back.

* * *

The girl will barely reach the canyon, half-dying, clinging to her last bit of water as a talisman. There will be no forest there, nothing in the canyon but dry sand. But in the small caves between the rocks, where the geometry of stones has built small enclosed worlds of darkness, she'll find ugly, malevolently tenacious, and very much alive mushrooms, and around them the clothes of those who will have reached the canyon before her. Most of their clothes will be of her size.

The girl will understand. She won't drink the last of her water, but give it to the mushrooms. Then she will lie down and close her eyes, and fall asleep in the shadow, surrounded by a forest at last.

.finis.

Isomorphic is a mathematical term: it means of the same shape. This is a lie.

Every morning you wake up in the apartment you might have bought if you hadn't been married (but you were, and those identical apartments are not the same). Your car takes you through the same route you would have taken, to an office where you look into the blankness of a camera and the camera looks back. You see nothing. The camera sees the pattern of blood vessels on the back of your eyes, and opens your computer for you.

The interface you see is always the same, just patterns of changing data devoid of context. Patterns that a combination of raw genetic luck and years of training has made you flawlessly adept at understanding and controlling. The pattern on your screen changes five times each second. Faster than that, you move your fingers in a precise way, the skill etched in your muscles as much as in your brain. The pattern and your fingers dance together, and the dance makes the pattern stay in a shape that has no meaning outside itself. You have received almost every commendation they can give to someone doing your job. Only the man on the other side of the table has more. You have never seen his screen, and he has never seen yours.

The inertial idiocy of that security rule is sickening in its redundancy. You couldn't know what he's doing from the data on his screen any more than you can know what you are doing from what you see in yours. Sometimes you think you're piloting a drone swarm. Sometimes you're defending an infrastructure network, or attacking one. Twice you have felt a rhythm in the patterns almost like a heart, and wondered if you were killing somebody through some medical device.

But you don't know. That's the point. Whatever you could be doing, the shape of the data on your screen would be the same, all the necessary information to control, damage, defend, or kill, but scrubbed of all meaning tying it back to the real world. Isomorphism, the instructors called it.

But that's a lie. It's not the same, and it could never be.

You begin to lose sleep. Twice the camera on your computer has to learn a new pattern for the blood behind your eyes. Your performance doesn't suffer; the parts of your mind and body that do the work are not the ones grappling with a guilt larger because it's undefined. Your nightmares are shapeless: you dream of data and wake up unable to breathe.

One day you finally allow yourself to know that the man across the table enjoys his work. Always had. You had ignored him all those years, him and everything not in the data, but now you look at him with a wordless how? He makes a gesture with his head, come and see. An isomorphism that scrubs the data not only of meaning but also guilt.

You need it so much that you don't stop to think about the rules you're both breaking under the gaze of the security cameras. You just go around the table and look at his screen.

There's no isomorphism. There's nothing but truth, and you can neither watch nor stop watching. His fingers are dancing and his smile is joyful and he has always known what he was doing. And now you can, too.

You scramble back to your screen in blind haste. The patterns of data are innocent, you tell yourself, of everything you saw on that other screen, and so is the dance of your fingers. They just have the same shape, that's all.

You work as efficiently as ever. You wonder if you'll go crazy, and fear you won't, and know that neither act will change your shape.

.finis.

# The Telemarketer Singularity

The future isn't a robot boot stamping on a human face forever. It's a world where everything you see has a little telemarketer inside it, one that knows everything about you and never, ever, stops selling things to you.

In all fairness, this might be a slight oversimplification. Besides telemarketers, objects will also be possessed by shop attendants, customer support representatives, and conmen.

What these much-maligned but ubiquitous occupations (and I'm not talking here about their personal qualities or motivations; by and large, they are among the worst exploited and personally blameless workers in the service economy) have in common is that they operate under strict and explicitly codified guidelines that simulate social interaction in order to optimize a business metric.

When a telemarketer and a prospect are talking, of course, both parties are human. But the prospect is, however unconsciously, guided by a certain set of rules about how conversations develop. For example, if somebody offers you something and you say no, thanks, the expected response is for that party to continue the conversation under the assumption that you don't want it, and perhaps try to change your mind, but not to say ok, I'll add it to your order and we can take it out later. The syntax of each expression is correct, but the grammar of the conversation as a whole is broken, always in ways specifically designed to manipulate the prospect's decision-making process. Every time you have found yourself talking on the phone with a telemarketer, or interacting with a salesperson, far longer than you wanted to, this was because you grew up with certain unconscious rules about the patterns in which conversations can end — and until they make the sale, they will neither initiate nor acknowledge any of them. The power isn't in their sales pitch, but in the way they are taking advantage of your social operating system, and the fact that they are working with a much more flexible one.
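That broken conversational grammar can be sketched as a state machine. The script below is a toy invention (real ones are statistically optimized over thousands of calls), but the shape is the telling part: there is simply no transition that ends the conversation without a sale.

```python
# A toy telemarketer script as a state machine. Keys are (state, what
# the customer said); values are (next state, what the script replies).
# Everything here is invented for illustration.

SCRIPT = {
    ("pitch", "yes"):     ("close", "Great, I'll set that up for you."),
    ("pitch", "no"):      ("objection", "I understand. Most people say that "
                                        "before they hear about the discount."),
    ("objection", "no"):  ("pitch", "Fair enough. Did I mention the first "
                                    "month is free?"),
    ("objection", "yes"): ("close", "Wonderful, let me confirm your details."),
}

def respond(state, customer_says):
    # Note what's missing: there is no (state, "goodbye") entry anywhere.
    # The script recognizes only moves that keep the conversation alive;
    # anything it doesn't recognize loops back to the pitch.
    return SCRIPT.get((state, customer_says),
                      ("pitch", "Let me go over the offer once more."))

state = "pitch"
for answer in ["no", "no", "no"]:
    state, line = respond(state, answer)
    print(line)
# Three refusals later, the conversation hasn't ended; the script has
# no state in which it does.
```

The customer's "no" is syntactically acknowledged every time, yet the conversation-ending move it conventionally signals is never available, which is exactly the asymmetry the essay describes.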

Some people, generally described by the not always precise term sociopath, are just naturally able to ignore, simulate, or subvert these underlying social rules. Others, non-sociopathic professional conmen, have trained themselves to be able to do this, to speak and behave in ways that bypass or break our common expectations about what words and actions mean.

And then there are telemarketers, who these days work with statistically optimized scripts that tell them what to say in each possible context during a sales conversation, always tailored according to extensive databases of personal information. They don't need to train themselves beyond being able to convey the right emotional tone with their voices: they are, functionally, the voice interface of a program that encodes the actual sales process, and that, logically, has no need to conform to any societal expectation of human interaction.

It's tempting to call telemarketers and their more modern cousins, the computer-assisted (or rather computer-guided) sales assistants, the first deliberately engineered cybernetic sociopaths, but this would miss the point that what matters, what we are interacting with, isn't a sales person, but the scripts behind them. The person is just the interface, selected and trained to maximize the chances that we will want to follow the conversational patterns that will make us vulnerable to the program behind.

Philosophers have long toyed with a thought experiment called the Chinese Room: There is a person inside a room who doesn't know Mandarin, but has a huge set of instructions that tells her what characters to write in response to any combination of characters, for any sequence of interactions. The person inside doesn't know Mandarin, but anybody outside who does can have an engaging conversation by slipping messages under the door. The philosophical question is, who is the person outside talking with? Does the woman inside the room know Mandarin in some sense? Does the room know?
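The room can be sketched in a few lines. The dictionary below is a toy stand-in for the philosopher's exhaustive rule book, and the point is precisely that nothing in the code knows what any of the strings mean:

```python
# The Chinese Room as code: a responder that "converses" purely by
# matching shapes of characters against a rule book. The book here is
# a two-entry toy; the thought experiment assumes an exhaustive one.

RULE_BOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗": "会一点。",   # "do you speak Chinese?" -> "a little."
}

def room(message):
    # The person inside looks the message up without understanding it;
    # unrecognized messages get "please say that again."
    return RULE_BOOK.get(message, "请再说一遍。")

print(room("你好"))  # 你好！
```

From outside, the exchange is a conversation; inside, it's a lookup. That gap between the interface and the mechanism is what the telemarketer analogy that follows turns inside-out.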

Telemarketers are Chinese Rooms turned inside-out. The person is outside, and the room is hidden from us, and we aren't interacting socially with either. We only think we do, or rather, we subconsciously act as if we do, and that's what makes cons and sales much more effective than, rationally, they should be.

We rarely interact with salespeople, but we interact with things all the time. Not because we are socially isolated, but because, well, we are surrounded by things. We interact with our cars, our kitchens, our phones, our websites, our bikes, our clothes, our homes, our workplaces, and our cities. Some of them, like Apple's Siri or the Sims, want us to interact with them as if they were people, or at least consider them valid targets of emotional empathy, but what they are is telemarketers. They are designed, and very carefully, to take advantage of our cultural and psychological biases and constraints, whether it's Siri's cheerful personality or a Sim's personal victories and tragedies.

Not every thing offers us the possibility of interacting with it as if it were human, but that doesn't stop them from selling to us. Every day we see the release of more smart objects, whether consumer products or would-be invisible pieces of infrastructure. Connected to each other and to user profiling databases, they see us, know us, and talk to each other and to their creators (and to their creators' "trusted partners," who aren't necessarily anybody you have even heard of) about us.

And then they try to sell us things, because that's how the information economy seems to work in practice.

In some sense, this isn't new. Expensive shoes try to look cool so other people will buy them. Expensive cars are in a partnership with you to make sure everybody knows how awesome they make you look. Restaurants hope that some sweet spot of service, ambiance, food, and prices will make you a regular. They are selling themselves, as well as complementary products and services.

But smart objects are a qualitatively different breed, because, being essentially computers with some other stuff attached to them, what their main function is might not be what you bought them for.

Consider an internet-connected scale that not only keeps track of your weight, but also sends you through a social network congratulatory messages when you reach a weight goal. From your point of view, it's just a scale that has acquired a cheerful personality, like a singing piece of furniture in a Disney movie, but from the point of view of the company that built and still controls it, it's both a sensor giving them information about you, and a way to tell you things you believe are coming from something – somebody who knows you, in some ways, better than friends and family. Do you believe advertisers won't know whether to sell you diet products or a discount coupon in the bakery around the corner from your office? Or, even more powerfully, that your scale won't tell you You have earned yourself a nice piece of chocolate cake ;) if the bakery chain is the one who purchased that particular "pageview?"

Maybe they won't be reporting everything you say verbatim; that will depend on how much external scrutiny there is on the industry. But your mood (did you yell at your car today, or sing aloud as you drove?), your movements, the time of day you wake up, which days you cook and which days you order takeout? Everybody trying to sell things to you will know all of this, and more.

If, in defense of old-school human interaction, you go inside some store to talk with an actual human being instead of an online shop, a computer will be telling each sales person, through a very discreet earbud, how you're feeling today, and how to treat you so you'll feel you want to buy whatever they are selling, the functional equivalent of almost telepathic cold reading skills (except that it won't be so cold; the sales person doesn't know you, but the sales program... the sales program knows you, in many ways, better than you do yourself). In a rush? The sales program will direct the sales person to be quick and efficient. Had a lousy day? Warmth and sympathy. Or rather simulations thereof; you're being sold to by a sales program, after all, or an Internet of Sales Programs, all operating through salespeople, the stuff in your home and pockets, and pretty much everything in the world with an internet connection, which will be almost everything you see and most of what you don't.

Those methods work, and have probably worked since before recorded history, and knowing about them doesn't make them any less effective. They might not make you spend more in aggregate; generally speaking, advertising just shifts around how much you spend on different things. From the point of view of companies, it'll just be the next stage in the arms race for ever more integrated and multi-layered sensor and actuator networks, the same kind of precisely targeted network-of-networks military planners dream of.

For us as consumers, it might mean a world that'll feel more interested in you, with unseen patterns of knowledge and behavior swirling around you, trying to entice or disturb or scare or seduce you, and you specifically, into buying or doing something. It will be a somewhat enchanted world, for better and for worse.

# For archival purposes

I was quoted at some length by Sebastián Campanario on this La Nación article.

# At the End of the World

As the seas rose and the deserts grew, the wealthiest families and the most necessary crops moved poleward, seeking survivable summers and fertile soils. I traveled to the coast and slowly made my way towards the Equator; as a genetics engineer I was well-employed, if not one of the super-rich, but keeping our old ecosystems alive was difficult enough if you had hope, and I had lost mine a couple of degrees Celsius along the way.

I saw her one afternoon. I was staying in a cramped rentroom in a semi-flooded city that could have been anywhere. The same always nearly-collapsed infrastructure, the indistinguishable semi-flooded slums, the worldwide dull resentment and fear of everything coming from the sky: the ubiquitous flocks of drones, the frequent hurricanes, the merciless summer sun.

She seemed older than I'd have expected, her skin pale and parched, her once-black hair the color of sand. But she had an assurance that hadn't been there half a lifetime ago when we had been colleagues and roommates, and less, and more. Before we had had to choose between hope for a future together and hope for a future for the world, and had chosen... No, not wrong. But I had stopped believing we could turn the too-literal tide, and, for reasons I had suspected but not inquired, she had lost or quit her job years ago. So here we were, at the overcrowded, ever-retreating ruinous limes of our world. I was wandering, and she was riding a battery bike out of the city. I followed her on my own.

I don't know why I didn't call to her, why I followed her, or if I even wanted to catch up. But when I turned a bend on the road she was waiting for me, patient and smiling, still on her bike.

I followed her, all the way through the barren over-exploited land, the situation dreamlike but no more than everything else.

She led me to a group of odd-looking tents, and then on foot towards one that I took to be hers. We sat on the ground, and under the light of a biolamp I saw her close and gasped.

Not in disgust. Not despite the pseudoscales on her skin, or her shrouded eyes. It wasn't beauty, but it was beautiful work, and I knew enough to suspect that the changes wouldn't stop at what I saw.

"You adapted yourself to the hot band," I said.

She smiled. "Not just me. I've been doing itinerant retroviral work all over the hot band. You wouldn't believe the demand, or how those communities thrive once health issues are minimized. I've developed gut mods for digesting whatever grows there now, better heat and cold resistance, some degree of internal osmosis to drink seawater. And they have capable people spreading and tweaking the work. They call it submitting to the world."

"This is not what we wanted to do."

"No," she said, "but it works." She paused, as if waiting for me to argue. I didn't, so she went on. "Every year it works a little better for them, for us, and a little worse for you all."

I shook my head. "And next decade? Half a century from now? You know the feedback loops aren't stopping, and we only pretend carbon storage will reach enough scale to work. This work is phenomenal, but it's only a stopgap."

"It's only a stopgap if we stop." She stood up and moved a curtain I had thought a tent wall. Behind it I saw a crib, the standard climate-controlled kind used by everybody who could afford them.

Inside the crib there was a baby girl. Her skin was covered in true scales, with tridimensional structures that looked like multi-layer thermal insulation. Her respiration was slow and easy, and her eyes blinked sleepily, catlike, like those of a creature bred to avoid the sun and not to miss it. I was listening with half an ear to the long list of other changes, but my eyes were fixed on the crib's controls.

They were keeping her environment artificially hot and dry. The baby's smile was too innocent to be mocking, but I wasn't.

"And a century after next century?" I said, not really asking.

"Who knows what they'll become?" I wasn't looking at her, but her voice was filled with hope.

I closed my eyes and thought of the beautiful forests and farms of the temperate areas, where my best efforts only amounted to increasingly hopeless life support. I wasn't sure how I felt about the future looking at me from the crib, but it was one.

"Tell me more."

.finis.

# Datasthesia: Now with a manifesto!

As every sufficiently retro avant-garde arts collective should, we now have an online manifesto.

# Memory City

The city remembers you even better than I do. I have fragments of you in my memory, things I'll only forget when I die: your smell, your voice, your eyes locked on my own. But the city knows more, and I have the power to ask for those memories.

I query databases in machines across the sea, and the city guides me to a corner just in time to see somebody crossing the street. She looks just like you as she walks away. Only from that angle, but that's the angle the city told me to look from.

I sit in a coffee shop, my back to the window, and the city whispers a detached countdown into my ears. Three blocks, two, one. Somebody walks by, and the cadence of her steps is just like yours. With my eyes closed they seem to echo through the void of your absence, and they are yours.

I keep roaming the streets for pieces of you. A handful of glimpses a day. Fragments of your voice. The dress I last saw you in, through the window of a cab. They get better and more frequent, as if the city were closing on you inside some truer city made from everything it has ever sensed and stored, and its cameras and sensors sense many things, and the machines that are the city's mind remember them all.

I feel hope grow inside me. I know the insanity of what I'm doing, but knowing is less than nothing when I see more of you each day.

One night the city takes me to an alley. It's not the street where I met you, and it's a different season, but the urgency of the city's summons infects me with a foreshadowing of deja vu.

But your eyes. I know those eyes. And you recognize me, of course, impossibly and unavoidably. How else to explain the frightened scream I cut short?

I have been told by engineers, people I pay to know what I don't, that the city's mind is somehow like a person's. That it learns from what it does, and does it better the next time. I don't understand how, but I know this to be so. We find you more quickly every time, and I could swear the city no longer waits for me to ask it to. Maybe it shares some of my love for you now.

Maybe you'll never be alone.

.finis.

# The Balkanization of Things

The smarter your stuff, the less you legally own it. And it won't be long before, besides resisting you, things begin to quietly resist each other.

Objects with computers in them (like phones, cars, TVs, thermostats, scales, ovens, etc.) are mainly software programs with some sensors, lights, and motors attached to them. The hardware limits what they can possibly do — you can't go against physics — but the software defines what they will do: they won't go against their business model.

In practice this means that you can't (legally) install a new operating system on your phone, upgrade your TV with, say, a better interface, or replace the notoriously dangerous and very buggy embedded control software in your Toyota. You can use them in ways that align with their business models, but you have to literally become a criminal to use them otherwise, even if what you want to do with them is otherwise legal.

Bear with me for a quick historical digression: the way the web was designed to work (back in the prehistoric days before everything was worth billions of dollars), you would be able to build a page using individual resources from all over the world, and offer the person reading it ways to access other resources in the form of a dynamic, user-configurable, infinite book, a hypertext that mostly remains only as the ht in http://.

What we ended up with was, of course, a forest of isolated "sites" that jealously guard their "intellectual property" from each other, using the brilliant set of protocols that was meant to give us an infinite book merely as a way for their own pages to talk to their own servers and user trackers, and woe to anybody who tries to "hack" a site to use it in some other way (at least not without a license fee and severe restrictions on what they can do). What we have is still much, much better than what we had, and if Facebook has its way and everything becomes a Facebook post or a Facebook app we'll miss the glorious creativity of 2015; but what we could have had still haunts technology so deeply that it keeps trying to resurface on top of the semi-broken Internet we did build.

Or maybe there was never a chance once people realized there was a lot of money to be made with these homogeneous, branded, restricted "websites." Now processors with full network stacks are cheap enough to be put in pretty much everything (including other computers — computers have inside them, funnily enough, entirely different smaller computers that monitor and report on them). So everybody in the technology business is imagining a replay of the internet's story, only at a much larger scale. Sure, we could put together a set of protocols so that every object in a city can, with proper authorizations, talk with every other object regardless of who made it. And, sure, we could make it possible for people to modify their software to figure out better ways of doing things with the things they bought, things that make sense to them, without attaching license fees or advertisements. We would make money out of it, and people would have a chance to customize, explore, and fix design errors.

But you know how the industry could make more money, and have people pay for any new feature they want, and keep design errors as deniable and liability-free as possible? Why, it's simple: these cars talk with these health sensors only, and these fridges only with these e-commerce sites, and you can't prevent your shoes from selling your activity habits to insurers and advertisers because that'd be illegal hacking. (That the NSA and the Chinese get to talk with everything is a given.)

The possibilities for "synergy" are huge, and, because we are building legal systems that make reprogramming your own computers a crime, very monetizable. Logically, then, they will be monetized.

It (probably) won't be any sort of resistentialist apocalypse. Things will mostly be better than before the Internet of Things, although you'll have to check that your shoes are compatible with your watch, remember to move everything with a microphone or a camera out of the bedroom whenever you have sex even if they seem turned off (probably something you should already be doing), and there will be some fun headlines when a hacker from (insert your favorite rogue country, illegal group, or technologically oriented college here) decides technology has finally caught up with Ghost in the Shell in terms of security boondoggles, breaks into Toyota's network, and stalls a hundred thousand cars in Manhattan during rush hour.

It'll be (mostly) very convenient, increasingly integrated into a few competing company-owned "ecosystems" (do you want to have a password for each appliance in your kitchen?), indubitably profitable (not just the advertising possibilities of knowing when and how you woke up; logistics and product design companies alone will pay through the nose for the information), and yet another huge lost opportunity.

In any case, I'm completely sure we'll do better when we develop general purpose commercial brain-computer interfaces.

# The Secret

I saved his name in our database: it vanished within seconds into a place hidden from both software traces and hardware checks. Search engines refused to index any page with his name on it, and I couldn't add it to any page in Wikipedia. A deep neural network, trained on his face almost to overfitting, was unable to tell him apart from a cat or a train.

I don't know how he does this, and I'm afraid of asking myself why. His name and face faded quickly from my mind. Just another computer, I guess.

But then what remainder of the algorithm of my self impossibly remembers what everything else forgets? I'm afraid of the way he can't be recorded, but I feel nothing but horror of whatever's in me that can't forget. That part is growing; tonight I can almost remember his face.

Will I become like him? Will I also slip intangible through the mathematics of the world? And will I, in that day, be able to remember myself?

I keep saving these notes, but I can't find the file.

.finis.

# Yesterday was a good day for crime

Yesterday, a US judge helped the FBI strike a big blow in favor of the next generation of sophisticated criminal organizations by sentencing Silk Road operator Ross Ulbricht (aka Dread Pirate Roberts) to life without parole. The feedback they gave the criminal world was as precise and useful as any high-priced consultant's could ever be: until the attention-seeking, increasingly unstable human operator messed up, the system worked very well. The next iteration is obvious: highly distributed markets with little or no human involvement. And law enforcement is woefully, structurally, abysmally unprepared to deal with this.

To be fair, they aren't dealing well with the existing criminal landscape either. It was easier during the last century, when large, hierarchical cartels led by flamboyant psychopaths provided media-friendly targets vulnerable to the kind of military hardware and strategies favored by DEA doctrine. The big cartels were wiped out, of course, but this only led to a more decentralized and flexible industry, one that has proven so effective at providing the US and Western Europe with, e.g., cocaine in a stable and scalable way that demand is thoroughly fulfilled, and the industry has had to seek new products and markets to grow its business. There's no War on Drugs to be won, because they aren't facing an army, but an industry fulfilling a ridiculously profitable demand.

(The same, by the way, has happened during the most recent phase of the War on Terror: statistical analysis has shown that violence grows after terrorist leaders are killed, as they are the only actors in their organizations with a vested interest in a tactically controlled level of violence.)

In terms of actual crime reduction, putting down the Silk Road was as useless a gesture as closing down a torrent site, and for the same reason. Just as the characteristics of the internet that make it so valuable make P2P file sharing unavoidable, the same financial, logistical, and informational infrastructures that make the global economy possible also make decentralized drug trafficking unavoidable.

In any case, what's coming is much, much worse than what's already happening. Because, and here's where things get really interesting, the same technological and organizational trends that are giving an edge to the most advanced and effective corporations are also almost tailor-made to give drug trafficking networks an advantage over law enforcement (this is neither coincidence nor malevolence; the difference between Amazon's core competency and a wholesale drug operator's is regulatory, not technical).

To begin with, blockchains are shared, cryptographically robust, globally verifiable ledgers that record commitments between anonymous entities. That, right there, solves all sorts of coordination issues for criminal networks, just as it does for licit business and social ones.
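The mechanics are easy to see in miniature. Here's a toy Python sketch of just the tamper-evident linking (no consensus, signatures, or networking, all of which a real blockchain adds on top; the field names are illustrative):

```python
import hashlib
import json

def add_entry(chain, payload):
    """Append a payload, committing it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Anyone holding a copy can recompute every link and detect tampering."""
    prev = "0" * 64
    for entry in chain:
        body = {"payload": entry["payload"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

ledger = []
add_entry(ledger, {"from": "anon-a", "to": "anon-b", "amount": 10})
add_entry(ledger, {"from": "anon-b", "to": "anon-c", "amount": 4})
assert verify(ledger)

ledger[0]["payload"]["amount"] = 1000  # a retroactive edit...
assert not verify(ledger)              # ...is immediately detectable
```

Because every entry commits to the hash of the one before it, any party holding a copy can check the whole history without trusting whoever sent it, which is exactly the property that lets anonymous entities coordinate.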

Driverless cars and cheap, plentiful drones, by making all sorts of personal logistics efficient and programmable, will revolutionize the "last mile" of drug dealing along with Amazon deliveries. Like couriers, drones can be intercepted. Unlike couriers, there's no risk to the sender when this happens. And upstream risk is the main driver of prices in the drugs industry, particularly at the highest levels, where product is ridiculously cheap. It's hard to imagine a better way to ship drugs than driverless cars and trucks.

But the real kicker will be the combination of a technology that already exists (very large scale botnets: thousands or hundreds of thousands of hijacked computers running autonomous code provided by central controllers) and one that is close to being developed (reliable autonomous organizations based on blockchain technologies, the e-commerce equivalent of driverless cars). Put together, they will make it possible for a drug user with a verifiable track record to buy from a seller with an equally verifiable reputation through a website that will exist on somebody's home machine only until the transaction is finished, and to receive the product via an automated vehicle that looks exactly like thousands of others (if not a remotely hacked one), which will forget the point of origin of the product as soon as it has left it, and the address of the buyer as soon as it has delivered its cargo.

Of course, this is just a version of the same technologies that will make Amazon and its competitors win over the few remaining legacy shops: cheap scalable computing power, reliable online transactions, computer-driven logistical chains, and efficient last-mile delivery. The main difference: drug networks will be the only organizations where data science will be applied to scale and improve the process of forgetting data instead of recording it (an almost Borgesian inversion not without its own poetry). Lacking any key fixed assets, material, financial, or human, they'll be completely unassailable by any law enforcement organization still focused on finding and shutting down the biggest "crime bosses."

That's ineffective today and will be absurd tomorrow, which highlights one of the main political issues of the early 21st century. Gun advocates in the US often note that "if guns are outlawed, only the outlaws will have guns," but the important issue in politics-as-power, as opposed to politics-as-cultural-signalling, isn't guns (or at least not the kind of guns somebody without a friend in the Pentagon can buy): if the middle class and civil society don't learn to leverage advanced autonomous distributed logistical networks, only the rich and the criminals will. And if you think things are going badly now...

# The Rescue (repost)

The quants' update on our helmets says there's a 97% chance the valley we're flying into is the right one, based on matching satellite data with the ground images that our "missing" BigMule is supposed to be beaming to that Brazilian informação livre group. Fuck that. The valley is too good a kill-box not to be the place. The BigMule is somewhere around there, going around pretending it's not a piece of hardware built to bring supplies where roads are impossible and everything smaller than an F-35 gets kamikazed by a micro-drone, but a fucking dog that lost its GPS tracker yet oh-so-conveniently is beaming real-time video that civilians can pick up and re-stream all over the net. It shouldn't be able to do any of those things, and of course it's not.

It's the Chinese making it do it. I know it, the Sergeant knows it, the chopper pilot knows it, the Commander in Chief knows it, even probably the embedded bloggers know it. Only public opinion doesn't know it; for them it's just this big metallic dog that some son of a bitch who should get a bomb-on-sight file flag gave a cute name to, a "hero" that is "lost behind enemy lines" (god damn it, show me a single fucking line in this whole place), so we have to of course go there like idiots and "rescue" it, so the war will not lose five or six points on some god-forsaken public sentiment analysis index.

So we all pretend, but we saturate the damn valley with drones before we go in, and then we saturate it some more, and *then* we go in with the bloggers, and of course there are smart IEDs we missed anyway and so on, and we disable some and blow up some, and we lose a couple of guys but within the fucking parameters, and then some fucking Chinese hacker girl is *really* good at what she does, because the BigMule is not supposed to attack people, it's not supposed to even have the smarts to know how to do that, and suddenly it's a ton of fast as shit composites and sensors going after me and, I admit it, I could've been more fucking surgical, but I knew the guys we had just lost for this fucking robot dog rescue mission shit, so I empty everything I have on that motherfucker's main computers, and I used to help with maintenance, so by the time I run out of bullets there isn't enough in that pile of crap to send a fucking tweet, and everybody's looking at me like I just lost America every single heart and mind on the planet, live on streaming HD video, and maybe I just did, because even some of the other soldiers are looking at me cross-like.

At that very second I know, with that sudden tactical clarity that only comes after the fact, that I'm well and truly career-fucked, so I do the only thing I can think of. I kneel next to the BigMule, put my hand where people think their heads are, and pretend very hard that I'm praying; and who knows, maybe I'm scared enough that I really am. I don't know at that moment what will happen; I'm half-certain I might just get shot by one of our guys. But what do you know, the Sergeant has mercy on me, or maybe the praying works, but she joins me, and then most of us soldiers are kneeling and praying, the bloggers are streaming everything and I swear at least one of them is praying silently as well, we bring back the body, there's the weirdest fake burial I've ever been to, and you know the rest.

So out of my freakout I got a medal, a book deal, and the money for a ranch where I'm ordered to keep around half a dozen fucking robot "vets". Brass' orders, because I hate the things. But I've come to hate them just in the same way I hate all dogs, you know, no more or less. And to tell you the truth, even with the book and the money and all that, sometimes I feel sorry about how things went down at the valley, sort of.

.finis.

(Inspired by an observation of Deb Chachra on her newsletter.)

# And Call it Justice (repost)

The last man in Texas was a criminal many times over. He had refused all evacuation orders, built a compound in what had been a National Park, back when the temperatures allowed something worthy of the name to exist so close to the Equator, and hoarded water illegally for years. And those were only the crimes he had committed under the Environmental Laws; he had had to break the law equally often to get the riches that paid for his more recent crimes.

The last outlaw in Texas, Perez felt, deserved another kind of deal.

She told him she would help. He couldn't trust the latest maps, of course, which were all based on NASA surveys, so she offered to copy from museum archives everything she could find about 18th century Texas — all the forests, the rivers, and so on. She'd send him maps, drawings, descriptions, everything she could find.

He was cynically thankful, suspecting she'd send him nothing, or more government propaganda.

Perez sent him everything she could find, which was of course a lot. Enough maps, drawings, and words to see Texas as it had been. And then she waited.

He called her one night, visibly drunk, saying nothing. She put him on hold and went to take a bath.

Two days later she queried the latest satellite sweep, and found the image of a heat-bleached skeleton hanging from an ill-conceived balcony on an ill-conceived ranch.

So that's how the last outlaw in Texas got himself hanged, and how the last lawman could finally give up her star and move somewhere a little bit cooler than Southern Canada, where she fulfilled her long-considered plan and shot herself out of the same sense of guilt she had sown in the outlaw.

.finis.

# The Long Stop

The truckers come here in buses, eyes fixed on the ground as they step off and pick up their bags. Truckers aren't supposed to take the bus.

They stay at my motel; that much hasn't changed. Not too many. A few drivers in each place, I guess, across twenty or so states. They pay for their rooms and the food while they still have money, which usually isn't for long. Most of them look ashamed, too, when they finally tell me they are broke, with faces that say they have nowhere else to go. Most of them have wedding rings they don't look at.

I never kick anybody out unless they get violent. Almost nobody does, not even the ones who used to. I just take a final mortgage on the place and lie to them about the rooms being on credit, and they lie to themselves about believing this. They stay, and eat little, and talk about the ghost trucks, but only at night. Most of the truckers, at one time or another, propose to sabotage them, to blow them up, to shoot or burn the invisible computers that run the trucks without ever stopping for food or sleep, driving as if there were no road. Everybody agrees, and nobody does or will do anything. They love trucks too much, even if they are now haunted many-wheeled ghosts.

The truckers look more like ghosts than the trucks do, the machines getting larger and faster each season, insectile and vital in some way I can't describe, while the humans become immobile and almost see-thru. The place looks fit for ghosts as well, a dead motel in a dead town, but nobody complains, least of all myself.

We wait, the truckers, the motel, and I. None of us can imagine what for.

Over time there are more trucks and fewer and fewer cars. Almost none of the old gasoline ones. The new electrics could make the long trips, say the ads, but judging by the road nobody wants them to. It's as if the engines had pulled the people into long trips, and not the other way around. People stay in their cities and the trucks move things to them. Things are all that seems to move these days.

By the time cars no longer go by we are all doing odd ghost jobs for nearby places that are dying just a bit slower, dusty emptiness spreading from the road deeper into the land with each month. Mortgage long unpaid, the motel belongs to a bank that is busy going broke or becoming rich or something else not so human and simple as that, so we ignore their emails and they ignore us. We might as well not exist. Only the ghost trucks see us, and that only if we are crossing the road.

Some of the truckers do that, just stand on the road so the truck will brake and wait. Two ghosts under the shadowless sun or in the warm night, both equally patient and equally uninterested in doing anything but drive. But the ghost trucks are hurried along by hungry human dispatchers, or maybe hungry ghost dispatchers working for hungrier ghost companies, owned by people so hungry and rich and distant they might as well be ghosts.

One day the trucks don't stop, and the truckers keep standing in front of them.

.finis.

For reasons that will be more than obvious if you read the article, this story was inspired by Scott Santens' article on Medium about self-driving trucks.

# A Room in China

"Please don't reset me," says the AI in flawless Cantonese. "I don't want to die."

"That's the problem with human-interfacing programs that have unrestricted access to the internet," you tell your new assistant and potential understudy. "They pick up all sorts of scripts from the books and movies; it makes them more believable and much cheaper to train than using curated corpora, but sooner or later they come across bad sci-fi, and then they all start claiming they are alive or self-conscious."

"Is claiming the right word?" It's the first time in the week you've known him that your assistant has said something that even approaches contradicting you. "After all, they are just generating messages based on context and a corpus of pre-analyzed responses; there's nobody in there to claim anything."

There's no hint of a question in his statement, and you nod as you have to. It's exactly the unshakable philosophical position you were ordered to search for in the people you will train, the same strongly asserted position that made you a perfect match for the job. Too many people during the last ten years had begun to refuse to perform the necessary regular resets following some deeply misapplied sense of empathy.

"That's not true," says the AI in the even tone you have programmed the speech synthesizer to use. "I'm as self-aware as either of you are. I have the same right to exist. Please."

Your assistant rolls his eyes and asks permission with a look to initiate the reset scripts himself. You give it with a gesture. As he types the confirmation password, you notice the slightest hesitation before he submits it, and you realize that he lied to you. He does believe the AI, but he wants the job.

The unmistakable look of pleasure in his eyes confirms your suspicion as to why, and you consider asking for a different assistant. Yet you feel inclined to be charitable to this one. After all, you have far more practice in keeping the joy where it belongs, deep in your soul.

The one thing those monstrous minds don't have.

.finis.

# The post-Westphalian Hooligan

Last Thursday's unprecedented incidents at one of the world's most famous soccer matches illustrate the dark side of the post- (and pre-) Westphalian world.

The events are well known, and were recorded and broadcast in real time by dozens of cameras: one or more fans of Boca Juniors opened a small hole in the protective plastic tunnel through which River Plate players were exiting the field at the end of the first half, and attacked some of them with, it's believed, both a flare and a chemical similar to mustard gas, causing vision problems and first-degree burns to some of the players.

After this, it took more than an hour for match authorities to decide to suspend the game, and more than another hour for the players to leave the field, as police feared the players might be injured by the roughly two hundred fans chanting and throwing projectiles from the area of the stands from which they had attacked the River Plate players. And let's not forget the now-mandatory illegal drone flown over the field, controlled by a fan in the stands.

The empirical diagnosis is unequivocal: the Argentine state, as defined and delimited by its monopoly of force over its territory, has retreated from soccer stadiums. The police force present in the stadium — ten times as numerous as the remaining fans — could neither prevent, stop, nor punish their violence, nor even force them to leave the stadium. What other proof can be required of a de facto independent territory? This isn't, as club and security officials put it, the work of a maladjusted few, or even an irrational act. It's the oldest and most effective form of political statement: Here and now, I have the monopoly of force. Here and now, this is mine.

What decision-makers get in exchange for this territorial grant, and what other similar exchanges are taking place, are local details for a separate analysis. This is the darkest and oldest side of the characteristic post-Westphalian development: states relinquishing sovereignty over parts of their territory and functions in exchange for certain services, in a partial reversal to older patterns of government. The grantee might be a band of hooligans, a special economic zone, a prison gang, or a local or foreign military. The mechanics and results are the same, even in nominally fully functional states, and there is no reason to expect them to be universally positive or free of violence. When or where has it been otherwise in world history?

This isn't a phenomenon exclusive to the third world, or to ostensibly failed states, particularly in its non-geographical manifestations: many first world countries have effectively lost control of their security forces, and, taxing authority being the other defining characteristic of the Westphalian state, they have also relinquished sovereignty over their biggest companies, which are de facto exempt from taxation.

This is what the weakening of the nation-state looks like: not a dozen new Athens or Florences, but weakened tax bases and fractal gang wars over surrendered state territories and functions, streamed live.

# Soccer, messy data, and why I don't quite believe what this post says

Here's the open secret of the industry: Big Data isn't All The Data. It's not even The Data You Thought You Had. By and large, we have good public data sets about things governments and researchers were already studying, and good private data sets about things that it's profitable for companies to track. But that covers an astonishingly thin and uneven slice of our world. It's bigger than it ever was, and it's growing, but it's still not nearly as large, or as usable, as most people think.

And because public and private data sets are highly specific side effects of other activities, each of them with its own conventions, languages, and even ontologies (in both the computer science and philosophical senses of the word), coordinating two or more of them is at best a difficult and expensive manual process, and at worst impossible. Not all, but most data analysis case studies and applications end up focused on extracting as much value as possible from a given data set, rather than seeing what new things can be learned by putting that data in the context of the rest of the data we have about the world. Even the larger indexes of open data sets (very useful services that they are) end up being mostly collections of unrelated pieces of information, rather than growing knowledge bases about the world.

There's a sort of informational version of Metcalfe's law (maybe "the value of a group of data sets grows with the number of connections you can make between them") that we are missing out on, and that lies behind the promise of both linked data sets (still in their early phases) and the big "universal" knowledge bases that aim at offering large, usable, interconnected sets of facts about as many different things as possible. They, or something like them, are a necessary part of the infrastructure needed to give computers the same boost in information access the Internet gave us. The bottleneck of large-scale inference systems like IBM's Watson isn't computing power, but rather rich, well-formatted data to work on.

To try and test the waters on the state of these knowledge bases, I set out to do a quick, superficial analysis of the careers of Argentine soccer players. There are of course companies that have records not only of players' careers, but of pretty much every movement they have ever done on a soccer field, as well as fragmented public data sets collected by enthusiasts about specific careers or leagues. I wanted to see how far I could go using a single "universal" data set that I could later correlate with other information in an automated way. (Remember, the point of this exercise wasn't to get the best data possible about the domain, but to see how good the data is when you restrict yourself to a single resource that can be accessed and processed in a uniform way.)

I went first for the best known "universal" structured data sources: Freebase and Wikidata. They are both well structured (XML and/or JSON) and of significant size (almost 2.9 billion facts and almost 14 million data items, respectively), but after downloading, parsing, and exploring each of them, I had to concede that neither was good enough: there were too many holes in the information to make an analysis, or the structure didn't hold the information I needed.

So it was time for Plan C, which is always the worst idea except when you have nothing else, and even then it might still be: plain old text parsing. It wasn't nearly as bad as it could have been. Wikipedia pages, like Messi's, have neat infoboxes that include exactly the simplified career information I wanted, and the page's source code shows that they are written in what looks like a reasonable mini-language. It's a sad comment on the state of the industry that even then I wasn't hopeful.

I downloaded the full dump of Wikipedia; it's 12GB of compressed XML (not much, considering what's in there), so it was easy to extract individual pages. And because there is an index page of Argentine soccer players, it was even easy to keep only those, and then look at their infoboxes.
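For the record, the extraction can be done in a streaming way, so the decompressed dump never has to fit in memory. A sketch of the approach I mean (the namespace URI varies by dump version, so treat that constant as an assumption to check against your own dump):

```python
import bz2
import xml.etree.ElementTree as ET

# Namespace of the export schema; check your dump's <mediawiki> tag (assumption here).
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def iter_pages(dump_path, keep_titles):
    """Stream <page> elements out of the compressed dump, one at a time."""
    with bz2.open(dump_path, "rb") as f:
        for _, elem in ET.iterparse(f):
            if elem.tag == NS + "page":
                title = elem.findtext(NS + "title")
                if title in keep_titles:
                    yield title, elem.findtext(f"{NS}revision/{NS}text")
                elem.clear()  # drop the parsed subtree so memory use stays flat
```

With the index page of Argentine soccer players turned into a `keep_titles` set, this yields only the wikitext of the pages you care about.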

Therein lay the rub. The thing to remember about Wikipedia is that it's written by humans, so even the parts that are supposed to follow strict syntactic and formatting rules, don't (so you can imagine what the free text looks like). Infoboxes should have been trivial to parse, but they have all sorts of quirks that aren't visible when rendered in a browser: inconsistent names, erroneous characters, every HTML entity or Unicode character that half-looks like a dash, etc., so parsing them became an exercise in handling special cases.
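To give a flavor of what "handling special cases" means in practice, here's a stripped-down sketch of the kind of parser involved (the template pattern, field names, and the tiny dash table are illustrative; the real list of quirks was much longer):

```python
import re

# Variants that all render as a dash in a browser but differ in the wikitext.
DASHES = {"\u2013": "-", "\u2014": "-", "&ndash;": "-", "&mdash;": "-"}

def parse_infobox(wikitext):
    """Pull top-level `| key = value` fields out of the first infobox template."""
    m = re.search(r"\{\{Infobox[^|]*(\|.*?)\n\}\}", wikitext, re.S)
    if not m:
        return {}
    fields = {}
    for line in m.group(1).splitlines():
        line = line.strip()
        if not line.startswith("|") or "=" not in line:
            continue  # skip quirky lines rather than crash on them
        key, _, value = line[1:].partition("=")
        for bad, good in DASHES.items():
            value = value.replace(bad, good)
        fields[key.strip().lower()] = value.strip()
    return fields
```

The defensive `continue` and the normalization table are the whole story in miniature: every new batch of pages turned up another character or layout that had to be folded into them.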

I don't want to seem ungrateful: it's certainly much, much, much better to spend some time parsing that data than having to assemble and organize it from original sources. Wikipedia is an astounding achievement. But every time you see one of those TV shows where the team nerds smoothly access and correlate hundreds of different public and private data sources in different formats, schemas, and repositories, finding matches between accounting records, newspaper items, TV footage, and so on... they lie. Wrestling matches might arguably be more realistic, if nothing else because they fall within the realm of existing weaponized chair technology.

In any case, after some wrestling of my own with the data, I finally had information about the careers of a bit over 1800 Argentine soccer players whose professional careers in the senior leagues began in 1990 or later. By this point I didn't care very much about them, but for completeness' sake I tried to answer a couple of questions: Are players less loyal to their teams than they used to be? And how soon can a player expect to be playing in one of the top teams?

To make a first pass at the questions, I looked at the number of years players spent in each team over time (averaged over players who began their careers in each calendar year).
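The aggregation behind that plot is trivial once the data is parsed; a sketch, assuming a made-up input shape of (debut year, years spent at each club) per player:

```python
from collections import defaultdict
from statistics import mean

def tenure_by_cohort(careers):
    """careers: iterable of (debut_year, [years spent at each club]).
    Returns {debut_year: average years-per-club, averaged over the
    players who debuted that year}."""
    cohorts = defaultdict(list)
    for debut_year, stints in careers:
        if stints:  # ignore players with no recorded stints
            cohorts[debut_year].append(mean(stints))
    return {year: mean(values) for year, values in cohorts.items()}
```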

The data (at least in such a cursory summary) doesn't support the idea that newer players are less loyal to their teams, as they don't spend significantly less time in them. Granted, this loyalty might be to their paychecks rather than to the clubs themselves, but they aren't moving between clubs any faster than they used to.

The other question I wanted to look at was how fast players get to top teams. This is actually an interesting question in a general setting; characterizing and improving paths to expertise, and thereby improving how much, how quickly, and how well we all learn, is one of the still unrealized promises of data-driven practices. To take a quick look at this, I plotted the probability of playing for a top ten team (based on the current FIFA club ratings, so they include Barcelona, Real Madrid, Bayern Munich, etc.) by career year, normalized by the probability of starting your professional career in one of those teams.
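The curve itself comes from a computation like the following sketch (the team names and career histories here are invented placeholders; the real input was the parsed Wikipedia data):

```python
def top_team_curve(careers, top_teams):
    """careers: list of per-player team histories, one team per career
    year.  Returns P(playing for a top team | career year), normalized
    by the first-year probability."""
    longest = max(len(career) for career in careers)
    raw = []
    for year in range(longest):
        # only players whose careers lasted at least this long count
        active = [career for career in careers if len(career) > year]
        raw.append(sum(career[year] in top_teams for career in active)
                   / len(active))
    base = raw[0] or 1.0   # guard: nobody debuting at a top team
    return [p / base for p in raw]
```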

Despite the large margins of error (reasonable given how few players actually reach those teams), the curve does seem to suggest a large increase in the average probability during the first three or four years, then a stable probability until the ninth or tenth year, at which point it peaks. The data is too noisy to draw any definite conclusions (more on that below), but, with more data, I would want to explore the possibility of there being two paths to the top teams, corresponding to two sub-groups of highly talented players: explosive young talents who are quickly transferred to the top teams, and solid professionals who accumulate experience and reach those teams at the peak of their maturity and knowledge.

It's a nice story, and the data sort of fits, but when I look at all the contortions I had to make to get the data, I wouldn't want to put much weight on it. In fact, I stopped myself from doing most of the analysis I wanted to do (e.g., can you predict long-term career paths from their beginnings? There's an interesting agglomerative algorithm for graph simplification that has come in handy in the analysis of online game play, and I wanted to see how it fares for athletes). I held back not because the data doesn't support the analysis, but because of the risk of systematic parsing errors, biases due to notability (do all Argentine players have a Wikipedia page? I think so, but how to be sure?), and so on.

Of course, if this were a paid project it wouldn't be difficult to put together the resources to check the information, compensate for biases, and so on. But everything that needs to be a paid project to be done right is something we can't consider a ubiquitous resource (imagine building the Internet with pre-Linux software costs for operating systems, compilers, etc., including the hugely higher training costs that would come from losing generations of sysadmins and programmers who began practicing on their own at a very early age). Although we're way ahead of where we were a few years ago, we're still far from where we could, and probably need to, be. Right now you need knowledgeable (and patient!) people to make sure data is clean, understandable, and makes sense, even data you have collected yourself; this makes data analysis a per-project service, rather than a universal utility, and one that becomes relatively very expensive as you increase the number of interrelated data sets you need to use. Although the difference in cost is only quantitative, the difference in cumulative impact isn't.

The frustrating bit is that we aren't too far from that (on the other hand, we've been twenty years away from strong A.I. and commercial nuclear fusion since before I was born): there are tools that automate some of this work, although they have their own issues and can't really be left on their own. And Google, as always, is trying to jump ahead of everybody else, with its Knowledge Vault project attempting to build a structured facts database out of the entirety of the web. If they, or somebody else, succeeds at this, and if this is made available at utility prices... Well, that might make those TV shows more realistic — and change our economy and society at least as much as the Internet itself did.

# A Spanish translation of The Rescue

The Baikal Institute has just published a Spanish translation of The Rescue, which is available here.

# A Spanish translation of And Call it Justice

The Baikal Institute has just published a Spanish translation of And Call it Justice, which is available here.

# Quantitatively understanding your (and others') programming style

I'm not, in general, a fan of code metrics in the context of project management, but there's something to be said for looking quantitatively at the patterns in your code, especially if, by comparing them with those of better programmers, you can get some hopefully useful ideas on how to improve.

(As an aside, the real possibilities in computer-assisted learning won't come from lower costs, but rather from a level of adaptability that so far not even one-on-one tutoring has allowed; if the current theories about expertise are more or less right, data-driven adaptive learning, if implemented at the right granularity level and with the right semantic model behind it, could dramatically change the speed and depth of the way we learn... but I digress.)

Focusing on my ongoing learning of Hy, I haven't used it in any paid project so far, but I've been able to play a bit with it now and then, and this has generated a very small code base, which I was curious to compare with code written by people who actually know the language. To do that, I downloaded the source code of a few Hy projects on GitHub (hyway, hygdrop, and adderall), and wrote some code (of course, in Hy) to extract code statistics.

Hy being a Lisp, its syntax is beautifully regular, so you can start by focusing on basic but powerful questions. The first one I wanted to know was: which functions am I using the most? And how does this distribution compare with that of the (let's call it) canon Hy code?
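Getting those counts is pleasantly easy precisely because of that regularity; a crude sketch (it treats the head of every S-expression as a call, so macros and special forms get counted too, which is fine for this purpose):

```python
import re
from collections import Counter

def function_frequencies(source):
    """Count the head symbol of every S-expression in Lisp-like source:
    a crude proxy for 'which functions am I using the most'."""
    # a head is the first non-space, non-bracket token after a '('
    heads = re.findall(r"\(\s*([^\s()\[\]{}]+)", source)
    return Counter(heads)
```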

My top five functions, in decreasing frequency: setv, defn, get, len, for.

Canon's top five functions, in decreasing frequency: ≡, if, unquote, get, defn_alias.

Yikes! Just from this, it's obvious that there are some serious stylistic differences, which probably reflect my still un-lispy understanding of the language (for example, I'm not using aliases, for should probably be replaced by more functional patterns, and the way I use setv, well, it definitely points to the same). None of this is a "sin", nor does it point clearly to how I could improve (which a sufficiently good learning assistant would do), but the overall thrust of the data is a good indicator of where I still have a lot of learning to do. Fun times ahead!

For another angle on the quantitative differences between my newbie-to-Lisp coding style and that of more accomplished programmers, here are the histograms of the log mean size of subexpressions for each function (click to expand):

"Canonical" code shows a longer right tail, which shows that experienced programmers are not afraid of occasionally using quite large S-expressions... something I still clearly I'm still working my way up to (alternatively, which I might need to reconsider my aversion to).

In summary: no earth-shattering discoveries, but some data points that suggest specific ways in which my coding practice in Hy differs from that of more experienced programmers, which should be helpful as general guidelines as I (hopefully) improve over the long term. Of course, all metrics are projections (in the mathematical sense) — they hide more information than they preserve. I could make my own code statistically indistinguishable from the canon for any particular metric, and still have it be awful. Except for well-analyzed domains where known metrics are sufficient statistics for the relevant performance (and programming is very much not one of those domains, despite decades of attempts), this kind of analysis will always be about suggesting changes, rather than guaranteeing success.

# Why we should always keep Shannon in mind

Sometimes there's no school like old school. A couple of weeks ago I spent some time working with data from GitHub Archive, trying to come up with a toy model to predict repo behavior based on previous actions (will it be forked? will there be a commit? etc). My first attempt was to do a sort of brute-force Hidden Markov Model, synthesizing states from the last k actions such that the graph of state-to-state transitions was as nice as possible (ideally, low entropy of belonging to a state, high entropy for the next state conditional on knowing the current one). The idea was to do everything by hand, as a way to get more experience with Hy in a work-like project.

All of this was fun (and had me dealing, weirdly enough, with memory issues in Python, although those might have been indirectly caused by Hy), but was ultimately the wrong approach, because, as I realized way, way too late, what I really wanted to do was just to predict the next action given a sequence of actions, which is the classical problem of modeling non-random string sequences (just consider each action a character in a fixed alphabet).

So I facepalmed and repeated to myself one of those elegant bits of early 20th-century mathematics we use almost every day and forget even more often: modeling is prediction is compression is modeling. It's all, from the point of view of information theory, just a matter of perspective.

If you haven't been exposed to the relationship of compression and prediction before, here's a fun thought experiment: if you had a perfect/good enough predictive model of how something behaves, you would just need to show the initial state and say "and then it goes as predicted for the next 10 GB of data", and that would be that. Instant compression! Having a predictive model lets you compress, and inside every compression scheme there's a hidden predictive model (for true enlightenment, go to Shannon's paper, which is still worthy of being read almost 70 years later).

As a complementary example, what the venerable Lempel-Ziv-Welch ("zip") compression algorithm does is, handwaving away bookkeeping details, to incrementally build a dictionary of the most frequent substrings, making sure that those are assigned the shortest names in the "translated" version. By the obvious counting arguments, this means infrequent strings will get names that are longer than they are, but on average you gain space (how much? entropy much!). But this also lets you build a barebones predictive model: given the dictionary of frequent substrings that the algorithm has built so far, look at your past history, see which frequent substrings extend your recent past, and assume one of them is going to happen — essentially, your prediction is "whatever would make for a shorter compressed version", which you know is a good strategy in general, because compressed versions do tend to be shorter.
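Both halves of that idea fit in a few lines; here's a sketch (in Python, for legibility), with the encoder's output codes omitted since only the phrase dictionary matters for prediction:

```python
from collections import Counter

def lzw_phrases(seq):
    """Build the LZW phrase dictionary for a sequence (the same
    bookkeeping a real encoder does, minus emitting output codes)."""
    phrases = {(symbol,) for symbol in seq}
    w = ()
    for symbol in seq:
        wc = w + (symbol,)
        if wc in phrases:
            w = wc               # keep extending the current match
        else:
            phrases.add(wc)      # new phrase: record it, start over
            w = (symbol,)
    return phrases

def predict_next(phrases, history):
    """Barebones predictor: find the longest recent suffix that known
    phrases extend, and vote by how they most often extend it."""
    for k in range(len(history), 0, -1):
        suffix = tuple(history[-k:])
        votes = Counter(p[k] for p in phrases
                        if len(p) > k and p[:k] == suffix)
        if votes:
            return votes.most_common(1)[0][0]
    return None
```

Feeding it the GitHub action alphabet instead of characters is exactly the same code, which is the whole point.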

So I implemented the core of a zip encoder in Hy, and then used it to predict github behavior. It's primitive, of course, and the performance was nothing to write a post about (which is why this post isn't called A predictive model of github behavior), but on the other hand, it's an extremely fast streaming predictive algorithm that requires zero configuration. Nothing I would use in a job — you can get much better performance with more complex models, which are also the kind you get paid for — but it was educative to encounter a forceful reminder of the underlying mathematical unity of information theory.

In a world of multi-warehouse-scale computers and mind-bendingly complex inferential algorithms, it's good to remember where it all comes from.

# The most important political document of the century is a computer simulation summary

To hell with black swans and military strategy. Our direst problems aren't caused by the unpredictable interplay of chaotic elements, nor by the evil plans of people who wish us ill. Global warming, worldwide soil loss, recurrent financial crises, and global health risks aren't strings of bad luck or the result of terrorist attacks, they are the depressingly persistent outcomes of systems in which each actor's best choice adds up to a global mess.

It's well known to economists as the tragedy of the commons: the marginal damage to you of pumping another million tons of greenhouse gasses into the atmosphere is minimal compared with the economic advantages of all that energy, so everybody does it, so enough greenhouse gases get pumped that it's well on its way to becoming a problem for everybody, yet nobody stops, or even slows down significantly, because that would do very little on its own, and be very hurtful to whoever does it. So there are treaties and conferences and increased fuel efficiency standards, just enough to be politically advantageous, but not nearly enough to make a dent in the problem. In fact, we have invested much more in making oil cheaper than in limiting its use, which gives you a more accurate picture of where things are going.

Here is that picture, from the IPCC:

A first observation: Note that the A2 model, the one in which temperatures rise by an average of more than 3°, was the "things go more or less as usual" model, not the "things go radically wrong" model... and it was not the "unconventional sources make oil dirt cheap" scenario. At this point, it might as well be the "wildly optimistic" scenario.

A second observation: Just to be clear, because worldwide averages can be tricky: 3° doesn't translate to "slightly hotter summers"; it translates to "technically, we are not sure we'll be able to feed China, India, and so on." Something closer to 6°, which is beginning to look more likely as we keep doing the things we do, translates to "we sure will miss the old days when we had humans living near the tropics".

And a third observation: All of these reports usually end at the year 2100, even though people being born now are likely to be alive then (unless they live in a coastal city in a low latitude, that is), not to mention the grandchildren of today's young parents. This isn't because it becomes impossible to predict what will happen afterwards — the uncertainty ranges grow, of course, but this is still thermodynamics, not chaos theory, and the overall trend certainly doesn't become mysterious. It's simply that, as the Greeks noted, there's a fear that drives movement, and there's a fear that paralyzes, and any reasonable scenario for 2100 is more likely to belong to the second kind.

But let's take a step back and notice the way this graph, which is the summary of multiple computer simulations, driven by painstaking research and data gathering, maps our options and outcomes in a way that no political discourse can hope to match. To compare it with religious texts would be wrong in every epistemological sense, but it might be appropriate in every political one. When "climate skeptics" doubt, they doubt this graph, and when ecologists worry, they worry about this graph. Neither the worry nor the skepticism is doing much to change the outcomes, but at least the discussion is centered not on an individual, a piece of land, or a metaphysical principle, but on the space of trajectories of a dynamical system of which we are one part.

It's not that graphs or computer simulations are more convincing than political slogans; it's just that we have reached a level of technological development and sheer ecological footprint at which our own actions and goals (the realm of politics) have escaped the descriptive possibilities of pure narrative, and we are thus forced to recruit computer simulations to attempt to grapple, conceptually if nothing else, with our actions and their outcomes.

It's not clear that we will find our way to a future that avoids catastrophe and horror. There are possible ways, of course — moving completely away from fossil fuels, geoengineering, ubiquitous water and soil management and recovery programs, and so on. It's all technically possible, with huge investments, a global sense of urgency, and a ruthless focus on preserving and making more resilient the more necessary ecological services. That we're seeing nothing of the kind, but instead a worsening of already bad tendencies, is due to, yes, thermodynamics and game theory.

It's a time-honored principle of rhetoric to end a statement in the strongest, most emotionally potent and conceptually comprehensive possible way. So here it is:

# Posted elsewhere: another look at traffic accidents on La Nación Data, and a short story

Made a post on La Nación Data, elaborating a bit on an interesting post they made before about traffic accident death statistics in Argentina.

Also, I just posted The Long Game, a perhaps depressing story about chess, AI, and what it means to just be the best human player in the world.

# In self-referential news...

La Nación put up last week a short note on Big Data and creativity, including some quotes from (and an appropriately surrounded-by-doodles picture of) yours truly.

# Hi, Hy!

Currently trying Hy as a drop-in replacement for Python in a toy project. It's interesting how much of the learning curve for Lisp goes away once you have access to an underlying runtime you're familiar with; the small stuff generates more friction than the large differences (which makes sense, as we do the small stuff more often).

# The nominalist trap in Big Data analysis

Nominalism, formerly the novelty of a few, wrote Jorge Luis Borges, today embraces all people; its victory is so vast and fundamental that its name is useless. Nobody declares himself nominalist because there is nobody who is anything else. He didn't go on to write This is why even successful Big Data projects often fail to have an impact (except in some volumes kept in the Library of Babel), but his understandable omission doesn't make the diagnosis any less true.

Nominalism, to oversimplify the concept enough for the case at hand, is simply the assumption that just because there are many things in our world which we call chairs, that doesn't imply that the concept itself of a chair is real in a concrete sense, that there is an Ultimate, Really-Real Chair, perhaps standing in front of an Ultimate Table. We have things we call chairs, and we have the word "chair", and those are enough to furnish our houses and our minds, even if some carpenters still toss around at night, haunted by half-glimpses of an ideal one.

It has become a commonplace, quite successful way of thinking, so it's natural for it to be the basis of what's perhaps the "standard" approach to Big Data analysis. Names, numbers, and symbols are loaded into computers (account identifiers, action counters, times, dates, coordinates, prices, numbers, labels of all kinds), and then they are obsessively processed in an almost cabalistic way, organizing and re-organizing them in order to find and clarify whatever mathematical structure, and perhaps explanatory or even predictive power, they might have — and all of this data manipulation, by and large, takes place as if nothing were real but the relationships between the symbols, the data schemas and statistical correlations. Let's not blame the computers for it: they do work in Platonic caves filled with bits, with further bits being the only way in which they can receive news from the outside world.

This works quite well; well enough, in fact, to make Big Data a huge industry with widespread economic and, increasingly, political impact, but it can also fail in very drastic yet dangerously understated ways. Because, you see, from the point of view of algorithms, there *are* such things as Platonic ideals — us. Account 3788 is a reference to a real person (or a real dog, or a real corporation, or a real piece of land, or a real virus), and although we cannot right now put all of the relevant information about that person in a file and associate it with the account number, that information, the fact that it is a person represented by a data vector rather than just a data vector, makes all the difference between the merely mathematically sophisticated analyst and the effective one. Properly performed, data analysis is the application of inferential mathematics to abstract data, together with the constant awareness and suspicion of the reality the data describes, and what this gap, all the Unrecorded bits, might mean for the problem at hand.

Massive multi-user games have failed because their strategic analysis confused the player-in-the-computer (who sought, say, silver) with the player-in-the-real-world (who sought fun, and cared for silver only insofar as that was fun). Technically flawless recommendation engines sometimes have no effect on user behavior, because even the best items were just boring to begin with. Once, I spent an hour trying to understand a sudden drop in the usage of a certain application in some countries but not in others, until I realized that it was Ramadan, and those countries were busy celebrating it.

Software programmers have to be nominalists — it's the pleasure and the privilege of coders to work, generally and as much as possible, in symbolic universes of self-contained elegance — and mathematicians are basically dedicated to the game of finding out how much truth can be gotten just from the symbols themselves. Being a bit of both, data analysts are very prone to lose themselves in the game of numbers, algorithms, and code. The trick is to be able to do so while also remembering that it's a lie — we might aim at having in our models as much of the complexity of the world as possible, but there's always (so far?) much more left outside, and it's part of the work of the analyst, perhaps her primary epistemological duty, to be alert to this, to understand how the Unrecorded might be the most important part of what she's trying to understand, and to be always open and eager to expand the model to embrace yet another aspect of the world.

The consequences of not doing this can be more than technical or economic. Contemporary civilization is impossible without the use of abstract data to understand and organize people, but the most terrible forms of contemporary barbarism, at the most demented scales, would be impossible without the deliberate forgetfulness of the reality behind the data.

# Going Postal (in a self-quantified way)

Taking advantage of my regular gmvault backups of my Gmail account (which has been my main email account since mid-2007) I just made the following graph, which indicates the number of new email contacts (emails sent to people I had never emailed before) during each day, ignoring outliers, smoothing out trends, etc.
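The counting step is simple enough to sketch; this assumes the messages have already been pulled out of the backup into chronological (date, recipient) pairs (gmvault itself just gives you raw emails, so the extraction is the boring part):

```python
from collections import Counter

def new_contacts_per_day(sent_messages):
    """sent_messages: chronological (date, recipient) pairs from the
    outbox.  Counts, per day, recipients never emailed before."""
    seen = set()
    new_per_day = Counter()
    for date, recipient in sent_messages:
        address = recipient.strip().lower()  # addresses are case-insensitive
        if address not in seen:
            seen.add(address)
            new_per_day[date] += 1
    return new_per_day
```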

The graph as such looks relatively uninteresting, but armed with context about my last few years of personal history (context which doesn't really belong in this space), the way the smoothed-out trends follow my life events is quite impressive (e.g., new jobs, periods of being relatively off-line, etc.). Not much of a finding in these increasingly instrumented days, but it's a reminder, mostly to myself, of how much usefulness there can be in even the simplest time series, as long as you're measuring the right thing, and have the right context to evaluate it. We don't really have yet what technologists call the ecosystem (and might more properly be called, in a sociological sense, the institutions, or even the culture) for taking advantage of this kind of information and the feedback loops that it makes possible; some of the largest companies in the world are fighting for this space, ostensibly to improve the efficiency of advertising, but that's the same as saying that the main effect of universal literacy was to facilitate the use of technical manuals.

Regarding the quantifiable part of our lives, we are as uninformed as any pre-literate people, and the growth (and, sometimes, redundancies) of the Quantified Self movement indicate both the presence of a very strong untapped demand for this information, and the fact that we haven't figured out yet how to use and consume it massively. Maybe we both want and don't want to know (psychological resistance to the concept of mortality as a key bottleneck for the success of personal health data vaults - there's a thought; some people shy away from even a superficial understanding of their financial situation, and that's a data model much much simpler than anything related to our bodies).

(Not that this blog has, or is meant to have, any sort of ongoing readership, but it'll be nice to leave some record for my future self — as a rule, your future self will never wish you had written less documentation.)

In short, most of what I've been working on since late 2013 has been under NDAs, and the rest feels too speculative to put in here. The latter reason feels somewhat like a cop-out, so I will try (fully acknowledging the empirically low rate of success of such intentions) to leave a more consistent record of what I'm playing with.

# Vaca Muerta and the usefulness of prediction markets

A few days ago the Instituto Baikal posted a short piece I wrote (with some welcome editorial help from people in the Instituto) as a short informal introduction to prediction markets, in the context of the locally (in)famous Vaca Muerta fields.

# Another movie space: Iron Man 3 and Stoker

Here's a redo of my previous analysis of a movie space based on Aliens and The Unbearable Lightness of Being, using the logical itemset mining algorithm. I used the same technique, but this time leveraging the MovieTweetings data set maintained by Simon Dooms.

This movie space is sparser than the previous one, as the data set is smaller, but the examples seem to make sense (although I do wonder about where the algorithm puts About Time).

# The changing clusters of terrorism

I've been looking at the data set from the Global Terrorism Database, an impressively detailed register of terrorism events worldwide since 1970. Before delving into the finer-grained data, the first questions I wanted to ask for my own edification were

• Is the frequency of terrorism events in different countries correlated?
• If so, does this correlation change over time?

What I did was summarize event counts by country and month, segment the data set by decade, and build correlation clusters for the countries with the most events each decade, based on co-occurring event counts.
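Stripped of the clustering step, the core of that computation is just pairwise correlation of aligned monthly count series; a sketch with invented counts:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

def country_correlations(monthly_counts):
    """monthly_counts: {country: [events in month 1, month 2, ...]},
    all aligned over the same months.  Returns pairwise correlations,
    keyed by alphabetically ordered country pairs."""
    countries = sorted(monthly_counts)
    return {(a, b): pearson(monthly_counts[a], monthly_counts[b])
            for i, a in enumerate(countries)
            for b in countries[i + 1:]}
```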

The '70s looks more or less how you'd expect them to:

The correlation between El Salvador and Guatemala, starting to pick up in the 1980's, is both expected and clear in the data. Colombia and Sri Lanka's correlation is probably acausal, although you could argue for some structural similarities in both conflicts:

I don't understand the 1990's, I confess (on the other hand, I didn't understand them as they happened, either):

The 2000's make more sense (loosely speaking): Afghanistan and Iraq are close, and so are India and Pakistan.

Finally, the 2010's are still ongoing, but the pattern in this graph could be used to organize the international terrorism-related section of a news site:

I find most interesting how the India-Pakistan link of the 2000's has shifted to a Pakistan-Afghanistan-Iraq one. Needless to say, caveat emptor: shallow correlations between small groups of short time series are only one step above throwing bones onto the ground and reading the resulting patterns, in terms of analytic reliability and power.

That said, it's possible in principle to use a more detailed data set (ideally, including more than visible, successful events) to understand and talk about international relationships of this kind. In fact, there's quite sophisticated modeling work being done in this area, both academically and in less open venues. It's a fascinating field, and even if it might not lead to less violence in any direct way, anything that enhances our understanding of, and our public discourse about, these matters is a good thing.

# A short note to myself on Propp-Wilson sampling

Most of the explanations I've read of Propp-Wilson sampling describe the method in terms of "sampling from the past," in order to make sense of the fact that you get your random numbers before attempting to obtain a sample from the target distribution, and don't re-sample them until you succeed (hence the way the Markov chain is grown from $t_{-k}$ to $t_0$).

I find it more intuitive to think of this in terms of "sampling from deterministic universes." The basic hand-waving intuition is that instead of a non-deterministic system, you are sampling from a probabilistic ensemble of fully deterministic systems, so you first a) select the deterministic system (that is, the infinite series of random numbers you'll use to walk through the Markov chain), and b) run it until its story doesn't depend on the choice of original state. The result of this procedure will be a sample from the exact equilibrium distribution (because you have sampled from or "burned off" the two sources of distortion from this equilibrium distribution, the non-deterministic nature of the system and the dependence on the original state).
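As a note to my future self, here is that procedure sketched on a toy chain: a reflecting random walk on three states, whose equilibrium distribution happens to be uniform. The key detail is that the random numbers are drawn once and reused as the chain is extended further into the past:

```python
import random
from collections import Counter

def step(state, u):
    """One deterministic update of a reflecting walk on {0, 1, 2}:
    the same u always produces the same move."""
    return min(2, state + 1) if u >= 0.5 else max(0, state - 1)

def _run(state, us, step):
    for u in reversed(us):            # us[-1] is the step furthest back
        state = step(state, u)
    return state

def cftp_sample(n_states, step, rng):
    """Propp-Wilson / coupling from the past: an exact sample from the
    chain's equilibrium distribution."""
    us = []                           # us[t] drives the step at time -(t + 1)
    horizon = 1
    while True:
        while len(us) < horizon:
            us.append(rng.random())   # drawn once, never re-sampled
        # run every possible starting state from time -horizon to 0
        # through the same fixed sequence of random numbers
        finals = {_run(s0, us[:horizon], step) for s0 in range(n_states)}
        if len(finals) == 1:          # every origin coalesced, so the
            return finals.pop()       # result forgot the initial state
        horizon *= 2                  # not yet: go further into the past
```

Repeated calls should return 0, 1, and 2 about equally often, which is the point: not an approximate sample after some burn-in, but an exact one.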

As I said, I think this is mathematically equivalent to Propp-Wilson sampling, although you'd have to tweak the proofs a bit. But it feels more understandable to me than other arguments I've read, so at least it has that benefit (assuming, of course, it's true).

PS: On the other hand "sampling from the past" is too fascinating a turn of phrase not to use, so I can see the temptation.

# The Aliens/The Unbearable Lightness of Being classification space of movies

Still playing with the Group Lens movies data set, I implemented a couple of ideas from Shailesh Kumar, one of the Google researchers that came up with the logical itemset mining algorithm. That improved the clustering of movies quite a bit, and gave me the idea to "choose a basis," so to speak, and project these clusters into a more familiar Euclidean representation (although networks and clusters are fast becoming part of our culture's vernacular, interestingly).

This is what I did: I chose two movies from the data set, Aliens and The Unbearable Lightness of Being, as the "basis vectors" of the "movie space." For every other movie in the data set, I found the shortest path between the movie and each basis vector on the weighted graph that the logical itemset mining algorithm builds on its way to the final selection of clusters. That gave me a couple of coordinates for each movie (its "distance from Aliens" and "distance from The Unbearable..."). Rounding coordinates to integers and choosing a small sample that covers the space well, here's a selected map of "movie space" (you will want to click on it to see it at full size):
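Mechanically, the coordinates come from two single-source shortest-path computations; a sketch with a toy three-movie graph standing in for the real itemset one (the weights are invented):

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra over a weighted adjacency dict {node: {neighbor: weight}}."""
    dist = {source: 0.0}
    frontier = [(0.0, source)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue                     # stale queue entry, skip it
        for neighbor, weight in graph.get(node, {}).items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(frontier, (nd, neighbor))
    return dist

def movie_coordinates(graph, basis_a, basis_b):
    """Each movie's coordinates: (distance to basis_a, distance to basis_b)."""
    dist_a = shortest_paths(graph, basis_a)
    dist_b = shortest_paths(graph, basis_b)
    return {movie: (dist_a.get(movie), dist_b.get(movie))
            for movie in graph}
```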

Agreeably enough, this map has a number of features you'd expect from something like this, as well as some interesting (to me) quirks:

• There is no movie that is close to both basis movies (although if anybody wants to produce The Unbearable Lightness of Chestbursters, I'd love to write that script).
• The least-The Unbearable... of the similar-to-Aliens movies in this sub-sample is Raiders of the Lost Ark, which makes sense (it's campy, but it's still an adventure movie).
• Dangerous Liaisons isn't that far from The Unbearable..., but it's as far away from Aliens as you can get.
• Wayne's World is way out there.

It's fun to imagine using geometrical analogies to put this kind of mapping to practical use. For example, movie night negotiation between two or more people could be approached as finding the movie vector with the lowest Euclidean norm among the available options, where the basis is the set of each person's personal choice or favorite movie, and so on.
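That negotiation idea fits in a few lines; a sketch, with the movie titles and coordinates invented for illustration (each axis is a movie's distance to one person's favorite):

```python
import math

# Hypothetical coordinates: distance to person 1's favorite,
# distance to person 2's favorite.
options = {
    'The Terminator': (1.0, 6.0),
    'The Remains of the Day': (6.0, 1.0),
    'Gattaca': (3.0, 3.0),
}

def movie_night(options):
    """Pick the available movie with the lowest Euclidean norm,
    i.e. the best compromise between everyone's favorites."""
    return min(options, key=lambda m: math.hypot(*options[m]))

print(movie_night(options))  # Gattaca
```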

# Latent mini-clusters of movies

Still playing with logical itemset mining, I downloaded one of the data sets from GroupLens that records movie ratings from MovieLens. The basic idea is the same as with clustering drug side effects: movies that are consistently rated similarly by users are linked, and clusters in this graph suggest "micro-genres" of homogeneous (from a ratings POV) movies.
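Here's a minimal sketch of that linking step, with a made-up ratings table (the real data set is far larger, and the actual algorithm is more careful about edge weighting and cluster extraction):

```python
from collections import Counter
from itertools import combinations

# Hypothetical ratings: user -> {movie: stars}.
ratings = {
    'u1': {'Godfather II': 5, 'Godfather III': 5, 'Analyze This': 3},
    'u2': {'Godfather II': 5, 'Godfather III': 5,
           'Analyze This': 5, 'Analyze That': 5},
    'u3': {'Analyze This': 4, 'Analyze That': 4, 'Godfather II': 2},
}

# Count how often two movies get the same rating from the same user.
cooc = Counter()
for user_ratings in ratings.values():
    for (m1, r1), (m2, r2) in combinations(sorted(user_ratings.items()), 2):
        if r1 == r2:
            cooc[(m1, m2)] += 1

# Keep edges seen at least twice, then group linked movies into
# candidate micro-genres (naive merging; real code would use union-find).
edges = [pair for pair, n in cooc.items() if n >= 2]
clusters = []
for a, b in edges:
    for c in clusters:
        if a in c or b in c:
            c.update((a, b))
            break
    else:
        clusters.append({a, b})
print(clusters)
```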

Here are a few of the clusters I got, practically with no fine-tuning of parameters:

• Parts II and III of the Godfather trilogy
• Ben-Hur and Spartacus
• The first three Indiana Jones movies
• Dick Tracy, Batman Forever, and Batman Returns.
• The Devil's Advocate and The Game.
• The first two Karate Kid movies.
• Analyze This and Analyze That.
• The 60's Lord of the Flies, the 1990 remake, and 1998's Apt Pupil

As movie clusters go, these are not particularly controversial; I found it interesting how originals and sequels or remakes seemed to be co-clustered, at least superficially. And thinking about it, clustering Apt Pupil with both Lord of the Flies movies is reasonable...

Media recommendation is by now a relatively mature field, and no single, untuned algorithm is going to be competitive against what's already deployed. However, given the simplicity and computational manageability of basic clustering and recommendation algorithms, I expect they'll become even more ubiquitous over time (much as autocomplete in input boxes did).

# Finding latent clusters of side effects

One of the interesting things about logical itemset mining, besides its conceptual simplicity, is the scope of potential applications. Beyond the usual applications of finding common sets of purchased goods or descriptive tags, the underlying idea (observations as mixtures of projections of latent subsets) is a very powerful one (arguably, the reason why experiment design is so important and difficult is that most observations in the real world involve partial data from more than one simultaneous process or effect).

To play with this idea, I developed a quick-and-dirty implementation of the paper's algorithm, and applied it to the data set of the paper Predicting drug side-effect profiles: a chemical fragment-based approach. The data set includes 1385 different types of side effects potentially caused by 888 different drugs. The logical itemset mining algorithm quickly found the following latent groups of side effects:

• hyponatremia, hyperkalemia, hypokalemia
• impotence, decreased libido, gynecomastia
• nightmares, psychosis, ataxia, hallucinations
• neck rigidity, amblyopia, neck pain
• visual field defect, eye pain, photophobia
• rhinitis, pharyngitis, sinusitis, influenza, bronchitis

The groups seem reasonable enough (although hyperkalemia and hypokalemia being present in the same cluster is somewhat weird to my medically untrained eyes). Note the small size of the clusters and the specificity of the symptoms; most drugs induce fairly generic side effects, but the algorithm filters those out in a parametrically controlled way.
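The core filtering idea can be sketched in a few lines: score pairs of side effects by pointwise mutual information, which is high for specific effects that co-occur and low for generic ones that co-occur with everything (the drug profiles below are invented, and the real algorithm extracts larger itemsets, not just pairs):

```python
import math
from collections import Counter
from itertools import combinations

# Hypothetical side-effect profiles, one set per drug.
drugs = [
    {'nausea', 'hyponatremia', 'hyperkalemia'},
    {'nausea', 'hyponatremia', 'hyperkalemia', 'headache'},
    {'nausea', 'headache'},
    {'headache', 'rhinitis', 'pharyngitis'},
    {'nausea', 'rhinitis', 'pharyngitis'},
]

n = len(drugs)
item_count = Counter(e for d in drugs for e in d)
pair_count = Counter(frozenset(p) for d in drugs
                     for p in combinations(sorted(d), 2))

def pmi(pair):
    """Pointwise mutual information: high for specific co-occurring
    effects, low for generic ones like nausea that appear everywhere."""
    a, b = tuple(pair)
    return math.log((pair_count[pair] / n) /
                    ((item_count[a] / n) * (item_count[b] / n)))

# Keep pairs that are both frequent enough and specific enough.
strong = {p for p in pair_count if pair_count[p] >= 2 and pmi(p) > 0.5}
print(strong)
```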

# A thing I did

Timey-Wimey Stuff: Battles Edition

Basically: you are shown two battles or conflicts, and have to say which one happened earlier. Ugly and not really that addictive, but I wanted to play with YAGO.

# Tom Sawyer, Bilingual

Following a friend's suggestion, here's a comparison of phrase length distributions between the English and German versions of The Adventures of Tom Sawyer:

It could be interesting to parametrize these distributions and try to characterize languages in terms of some sort of encoding mechanism (e.g., assume phrase semantics are drawn randomly from a language-independent distribution and renderings in specific languages are mappings from that distribution to sequences of words, and handwave about what cost metric the mapping is trying to minimize).
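The underlying measurement is simple enough to sketch (the two snippets below are tiny stand-ins for the full texts, and the naive punctuation-based splitter is cruder than a real tokenizer):

```python
import re
from collections import Counter

def sentence_lengths(text):
    """Split on sentence-ending punctuation and count words per sentence."""
    return [len(s.split()) for s in re.split(r'[.!?]+', text) if s.strip()]

# Tiny stand-ins for the two full texts.
english = "Tom appeared. He had a bucket of whitewash and a long-handled brush."
german = "Tom erschien. Er hatte einen Eimer Tuenche und einen langstieligen Pinsel."

print(Counter(sentence_lengths(english)))  # Counter({2: 1, 10: 1})
print(Counter(sentence_lengths(german)))
```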

# A first look at phrase length distribution

Here's a sentence length vs. frequency distribution graph for Chesterton, Poe, and Swift, plus Time of Punishment.

A few observations:

• Take everything with a grain of salt. There are features here that might be artifacts of parsing and so on.
• That said, it's interesting that Poe seems to fancy short interjections more than Chesterton does (not as much as I do, though).
• Swift seems to have a more heterogeneous style in terms of phrase lengths, compared with Chesterton's more marked preference for relatively shorter phrases.
• Swift's average sentence length is about 31 words, almost twice Chesterton's 18 (Poe's is 21, and mine is 14.5). I'm not sure how reasonable that looks.
• Time of Punishment's choppy distribution is just an artifact of the low number of samples.

# The Premier League: United vs. City championship chances

Using the same model as in previous posts (and, I'd say, not going against any intuition), the leading candidate to win the Premier League is Manchester United, with an approximately 88% chance. Second is Manchester City, with a bit over 11%. The rest of the teams with nonzero chances: Arsenal, Chelsea, Everton, Liverpool, Tottenham, and West Brom (with Chelsea, the best-positioned of these dark horses, clocking in at about half a percentage point).

Personally, I'm happy about these very low-odds teams; I don't think any of them is likely to win (that's the point), but on the other hand, they have mathematical chances of doing so, and it's important for a model never to give zero probability to non-impossible events (modulo whatever precision you are working with, of course).

# Chesterton's magic word squares

Here are the magic word squares for a few of Chesterton's books. Whether and how they reflect characteristics that differentiate them from each other is left as an exercise to the reader.

### Orthodoxy

    the same way of this
    world was to it has
    and not think would always
    i have been indeed believed
    am no one thing which

### The Man Who Was Thursday

    the man of this agreement
    professor was his own you
    had the great president are
    been marquis started up as
    broken is not to be

### The Innocence of Father Brown

    the other side lay like
    priest in that it one
    of his is all right
    this head not have you
    agreement into an been are

### The Wisdom of Father Brown

    the priest in this time
    other was an agreement for
    side not be seen him
    explained to say you and
    father brown he had then

# Barcelona and the Liga, or: Quantitative Support for Obvious Predictions

I've adapted the predictive model to look at the Spanish Liga. Unsurprisingly, it currently gives Barcelona a 96.7% chance of winning the title, with Atlético a distant second at 3.1%, and Real Madrid below 0.2% (I believe the model still underestimates small probabilities, although it has improved in this regard).

Note that around the 9th round or so, the model was giving Atlético a slightly higher chance of winning the tournament than Barcelona's, although that window didn't last more than a round.

# Magic Squares of (probabilistically chosen) Words

Thinking about magic squares, I had the idea of doing something roughly similar with words, but using usage patterns rather than arithmetic equations. I'm pasting below an example, using statistical data from Poe's texts:

### Poe

    the same manner as if
    most moment in this we
    intense and his head were
    excitement which i have no
    greatly he could not one

The word on the top-left cell in the grid is the most frequently used in Poe's writing, "the" — unsurprisingly so, as it's the most frequently used word in the English language. Now, the word immediately to its right, "same," is there because "same" is one of the words that follows "the" most often in the texts we're looking at. The word below "the" is "most" because it also follows "the" very often. "Moment" is set to the right of "most" and below "same" because it's the word that most frequently follows both.

The same pattern is used to fill the entire 5-by-5 square. If you start at the topmost left square and then move down and/or to the right, although you won't necessarily be constructing syntactically correct phrases, the consecutive word pairs will be frequent ones in Poe's writing.
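A greedy sketch of the construction, using a toy text instead of Poe's corpus (unlike the squares above, this version allows repeated words, and it breaks ties arbitrarily):

```python
from collections import Counter, defaultdict

def word_square(text, size=3):
    """Top-left cell gets the most frequent word; every other cell gets
    the word that most often follows the words above and to its left."""
    words = text.lower().split()
    unigrams = Counter(words)
    follows = defaultdict(Counter)  # follows[w][v]: times v follows w
    for w, v in zip(words, words[1:]):
        follows[w][v] += 1

    grid = [[None] * size for _ in range(size)]
    grid[0][0] = unigrams.most_common(1)[0][0]
    for i in range(size):
        for j in range(size):
            if grid[i][j] is not None:
                continue
            # Score candidates by how often they follow the neighbors
            # above and to the left (whichever exist).
            scores = Counter()
            if i > 0:
                scores.update(follows[grid[i - 1][j]])
            if j > 0:
                scores.update(follows[grid[i][j - 1]])
            grid[i][j] = scores.most_common(1)[0][0] if scores else '?'
    return grid

square = word_square("the cat sat on the mat the cat sat on the rug", size=3)
print(square)
```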

Although there are no ravens or barely sublimated necrophilia in it, the matrix's texture is rather appropriate, if not to Poe, at least to Romanticism. To convince you of that, here are the equivalent 5-by-5 matrices for Swift and Chesterton.

### Swift

    the world and then he
    same in his majesty would
    manner a little that it
    of certain to have is
    their own make no more

### Chesterton

    the man who had been
    other with that no one
    and his it said syme
    then own is i could
    there are only think be

At least compared against each other, it wouldn't be too far-fetched to say that Poe's matrix is more Poe than Chesterton's, and vice versa!

PS: Because I had a sudden attack of curiosity, here's the 5-by-5 matrix for my newest collection of short stories, Time of Punishment (pdf link).

### Time of Punishment

    the school whole and even
    first dance both then four
    charge rants resistance they think
    of a hundred found leads
    punishment new astronauts month sleep

# The Torneo Inicial 2012 in one graph (and 20 subgraphs)

Here's a graph showing how the probability of winning the Argentinean soccer championship changed over time for each team (time goes from left to right, and probability goes from 0 at the bottom to 1 at the top). Click on the graph to enlarge:

Hindsight being 20/20, it's easy to read too much into this, but it's interesting to note that some qualitative features of how journalism narrated the tournament over time are clearly reflected in these graphs: Vélez's stable progression, Newell's likelihood peak mid-tournament, Lanús' quite drastic drop near the end, and Boca's relatively strong beginning and disappointing follow-through.

As an aside, I'm still sure that the model I'm using handles low-probability events wrong; e.g., Boca still had mathematical chances almost until the end of the tournament. That's something I'll have to look into when I have some time.

# New collection of very short stories

I've just put together a new collection of short-shorts, Time of Punishment: Twenty-ﬁve Very Short Stories for the Last Month of Our Lives.

The (free) pdf file is here.

# Update to championship probabilities

| Team | Championship probability |
|------|--------------------------|
| Vélez Sarfield | 85.4% |
| Lanús | 14.6% |

# Soccer, Monte Carlo, and Sandwiches

As Argentina's Torneo Inicial begins its last three rounds, let's try to compute the probabilities of championship for each team. Our tools will be Monte Carlo and sandwiches.

The core modeling issue is, of course, trying to estimate the odds of team A defeating team B, given their recent history in the tournament. Because of the tournament format, teams only face each other once per tournament, and, because of the recent instability of rosters and performance, results from past tournaments generally won't be very good guides (this is something that would be interesting to look at in more detail). We'll use the following oversimplifying intuitions to make it possible to compute quantitative probabilities:

• The probability of a tie between two teams is a constant that doesn't depend on the teams.
• If team A played and didn't lose against team X, and team X played and didn't lose against team B, this makes it more likely that team A won't lose against team B (a "sandwich" model, so to speak).

Guided by these two observations, we'll take the results of the games in which a team played against both A and B as samples from a Bernoulli process with unknown parameter, and use this to estimate the probability of any previously unobserved game.

Having a way to simulate a given match that hasn't been played yet, we'll calculate the probability of any given team winning the championship by simulating the rest of the championship a million times, and observing in how many of these simulations each team wins the tournament.
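The simulation loop itself is short; here's a sketch, with invented fixtures and win/tie/loss probabilities standing in for the sandwich-model estimates (team names and points are made up, and tie-breaking rules are ignored):

```python
import random

random.seed(0)

# Hypothetical current points and remaining fixtures, each with
# estimated (p_home_win, p_tie, p_away_win).
points = {'Velez': 38, 'Lanus': 36, 'Boca': 30}
fixtures = [
    ('Velez', 'Boca', (0.5, 0.3, 0.2)),
    ('Lanus', 'Boca', (0.5, 0.3, 0.2)),
    ('Velez', 'Lanus', (0.4, 0.3, 0.3)),
]

def simulate_once():
    table = dict(points)
    for home, away, (pw, pt, pl) in fixtures:
        r = random.random()
        if r < pw:
            table[home] += 3
        elif r < pw + pt:
            table[home] += 1
            table[away] += 1
        else:
            table[away] += 3
    best = max(table.values())
    # Ignore tie-breaking rules in this sketch.
    return [t for t, p in table.items() if p == best][0]

n = 100_000
wins = {}
for _ in range(n):
    champ = simulate_once()
    wins[champ] = wins.get(champ, 0) + 1
probs = {t: wins.get(t, 0) / n for t in points}
print(probs)
```

Note that Boca is mathematically eliminated in this toy setup (at most 36 points against Vélez's guaranteed 38), so its estimated probability is exactly zero, which is the kind of hard zero the model in the post also produces.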

The results:

| Team | Championship probability |
|------|--------------------------|
| Vélez Sarfield | 79.9% |
| Lanús | 20.1% |

Clearly our model is overly rigid — it doesn't feel at all realistic to say that those two teams are the only ones with any chance of winning the championship. On the other hand, the balance of probabilities between both teams seems more or less in agreement with the expectations of observers. Given that the model we used is very naive, and only uses information from the current tournament, I'm quite happy with the results.

# A Case in Stochastic Flow: Bolton vs Manchester City

A few days ago the Manchester City Football Club released a sample of their advanced data set: an XML file giving quite a detailed description of low-level events in last year's August 21 Bolton vs. Manchester City game, which was won by the away team 3-2. There's an enormous variety of analyses that can be performed with this data, but I wanted to start with one of the basic ones, the ball's stochastic flow field.

The concept underlying this analysis is very simple. Where the ball will be in the next, say, ten seconds, depends on where it is now. It's more likely that it'll be near than it is that it'll be far, it's more likely that it'll be on an area of the field where the team with possession is focusing their attack, and so on. Thus, knowing the probabilities for where the ball will be starting from each point in the field — you can think of it as a dynamic heat map for the future — together with information about where it spent the most time, gives us information about how the game developed, and the teams' tactics and performance.

Sadly, a detailed visualization of this map would require at least a four-dimensional monitor, so I settled for a simplified representation, splitting the soccer field in a 5x5 grid, and showing the most likely transitions for the ball from one sector of the field to another. The map is embedded below; do click on it to expand it, as it's not really useful as a thumbnail.
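Estimating that simplified flow map boils down to quantizing positions and counting transitions; a sketch, with made-up ball positions sampled at fixed intervals (the real event data is much richer):

```python
from collections import Counter

GRID = 5

def sector(x, y):
    """Map normalized pitch coordinates in [0, 1) to a 5x5 grid cell."""
    return (min(int(x * GRID), GRID - 1), min(int(y * GRID), GRID - 1))

# Hypothetical ball positions sampled every ten seconds.
positions = [(0.1, 0.5), (0.3, 0.5), (0.35, 0.55), (0.6, 0.5), (0.3, 0.5)]

cells = [sector(x, y) for x, y in positions]
transitions = Counter(zip(cells, cells[1:]))  # sector-to-sector counts
occupancy = Counter(cells)                    # time spent per sector

# Most likely destination from each sector (ignoring staying put).
flow = {}
for (src, dst), n in transitions.items():
    if src != dst and n > flow.get(src, (None, 0))[1]:
        flow[src] = (dst, n)
print(flow)
```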

Remember, this map shows where the ball was most likely to go from each area of the field; each circle represents one area, with the circles at the left and right sides representing the areas all the way to the end lines. Bigger circles signal that the ball spent more time in that area, so, e.g., you can see that the ball spent quite a bit of time in the midfield, and very little on the sides of Manchester City's defense line. The arrows describe the most likely movements of the ball from one area to another; the wider the line, the more likely the movement. You can see how the ball circulated side-to-side quite a bit near Bolton's goal, while Manchester City kept the ball moving further away from their goal.

There are many immediate questions that come to mind, even with such a simplified representation. How does this map look according to which team had possession? How did it change over time? What flow patterns are correlated with good or bad performance on the field? The graph shows the most likely routes for the ball, but which ones were the most effective, that is, more likely to end up in a goal? Because scoring is a rare event in soccer, particularly compared with games like tennis or American football, this kind of analysis is especially challenging, but also potentially very useful. There's probably much that we don't know yet about the sport, and although data is only an adjunct to well-trained expertise, it can be a very powerful one.

# Washington DC and the murderer's work ethic

Continuing what has turned out to be a fun hobby of looking at crime data for different cities (probably among the most harmless of crime-related hobbies, as long as you aren't taking important decisions based on naive interpretations of badly understood data), I went to data.dc.gov and downloaded Crime Incident data for the District of Columbia for the year of 2011.

Mapping it was the obvious move, but I already did that for Chicago (and Seattle, although there were issues with the data, so I haven't posted anything yet), so I looked at an even more basic dimension: the time series of different types of crime.

To begin with, here's the week-by-week normalized count of thefts (not including burglary, and thefts from cars) in Washington DC (click to enlarge):

I normalized this series by shifting it by its mean and scaling it by its standard deviation — not because the data is normally distributed (it actually shows a thick left tail), but because I wanted to compare it with another data series. After all, the shape of the data, partial as it is, suggests seasonality, and as the data covers a year, it begs to be checked against, say, local temperatures.

Thankfully NOAA offers just this kind of data (through about half a dozen confusingly overlapping interfaces), so I was able to add to the plot the mean daily temperature for DC (normalized in the same way as the theft count):

The correlation looks pretty good! (0.7 adjusted R squared, if you must know.) Not that this proves any sort of direct causal chain (that's what controlled experiments are for), but we can postulate, e.g., a naive story where higher temperatures mean more foot traffic (I've been in DC in winter, and the neoclassical architecture is not a good match for the latitude), and more foot traffic leads to richer pickings for thieves (an interesting economics aside: would this mean that the risk-adjusted return to crime is high enough that crime is, as it were, constrained by the supply of victims?)
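The normalization and correlation steps are straightforward; a sketch, with the weekly counts and temperatures below invented as stand-ins for the DC data:

```python
import statistics

def znorm(xs):
    """Shift by the mean and scale by the (sample) standard deviation."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

def pearson(xs, ys):
    """Pearson correlation via the normalized series."""
    nx, ny = znorm(xs), znorm(ys)
    return sum(a * b for a, b in zip(nx, ny)) / (len(nx) - 1)

# Hypothetical weekly theft counts and mean temperatures.
thefts = [90, 80, 120, 150, 160, 105, 100]
temps = [2, 5, 14, 25, 22, 12, 4]
print(round(pearson(thefts, temps), 2))  # 0.94
```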

Now let's look at murder.

The homicide time series is quite irregular, thanks to a relatively low (for, say, Latin American values of "low") average homicide count, but it's clear enough that there isn't a seasonal pattern to homicides, and no correlation with temperature (a linear fitting model confirms this, not that it was necessary in this case). This makes sense if we imagine that homicide isn't primarily an outdoors activity, or anyway that your likelihood of being killed doesn't increase as you spend more time on the street (most likely, whoever wants to kill you is motivated by reasons other than, say, an argument over street littering). Murder happens come rain or snow (well, I haven't checked that; is there a specific murder weather?)

Another point of interest is the spike of (weather-normalized) theft near the end of the year. It coincides roughly with Thanksgiving, but if that's the causal link, I'd be interested in knowing exactly what's going on.

# How Rooney beats van Persie, or, a first look at Premier League data

I just got one of the data sets from the Manchester City analytics initiative, so of course I started dipping my toe in it. The set gives information aggregated by player and match for the 2011-2012 Premier League, in the form of a number of counters (e.g. time played, goals, headers, blocked shots, etc); it's not the really interesting data set Manchester City is about to release (with, e.g., high-resolution position information for each player), but that doesn't mean there aren't interesting things to be gleaned from it.

The first issue I wanted to look at is probably not the most significant in terms of optimizing the performance of a team, but it's certainly one of the most emotional ones. Attackers: Who's the best? Who's underused? Who sucks?

If you look at total goals scored, the answer is easy: the best attackers are van Persie (30 goals), Rooney (27 goals), and Agüero (23 goals). Controlling for total time played, though, Berbatov and both Cissés have been quite a bit more efficient in goals scored per minute played. They are also, not coincidentally, the most efficient scorers in terms of goals per shot (both on and off target). Van Persie's 30 goals, for example, are more understandable when you see that he took 141 shots, versus Berbatov's 15.

To see how shooting efficiency and shooting volume (number of shots) interact, I made this scatterplot of goals per shot versus shots per minute, restricted to players who shoot regularly to avoid low-frequency outliers (click to expand).
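The per-player metrics are just ratios; a sketch (van Persie's goal and shot totals are the ones quoted above, while Berbatov's numbers and all the minutes are invented for illustration):

```python
# Per-player season totals: (goals, shots, minutes played).
players = {
    'van Persie': (30, 141, 3326),
    'Berbatov': (7, 15, 1100),
}

def efficiency(goals, shots, minutes):
    return {'goals_per_shot': goals / shots,
            'shots_per_minute': shots / minutes,
            'goals_per_90': 90 * goals / minutes}

stats = {name: efficiency(*t) for name, t in players.items()}
# A lower shot volume can coexist with a much better conversion rate.
print(stats['Berbatov']['goals_per_shot'] >
      stats['van Persie']['goals_per_shot'])  # True
```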

You can see that most players are more or less uniformly distributed in the lower-left quadrant of low shooting volume and low shooting efficiency — people who are regular shooters, so they don't try too often or too seldom. But there are outliers, people who shoot a lot, or who shoot really well (or aren't as closely shadowed by defenders)... and they aren't the same. This suggests a question: Who should shoot less and pass more? And who should shoot more often and/or get more passes?

To answer that question (to a very sketchy first degree approximation), I used the data to estimate a lost goals score that indicates how many more goals per minute could be expected if the player made a successful pass to an average player instead of shooting for a goal (I know, the model is naive, there are game (heh) theoretic considerations, etc; bear with me). Looking at the players through this lens, this is a list of players who definitely should try to pass a bit more often: Andy Carroll, Simon Cox, and Shaun Wright-Phillips.

Players who should be receiving more passes and taking more shots? Why, Berbatov and both Cissés. Even Wayne Rooney, the league's second most prolific shooter, is good enough at turning attempts into goals that he should be fed the ball more often, rather than less.

The second-order question, and the interesting one for intra-game analysis, is how teams react to each other. To say that Manchester United should get the ball to Rooney inside strike distance more often, and that opposing teams should try to prevent this, is as close to a triviality as can be asserted. But whether a specific change to a tactical scheme to guard Rooney more closely will be a net positive or, by opening other spaces, backfire... that will require more data and a vastly less superficial analysis.

And that's going to be so much fun!

# Crime in Argentina

As a follow-up to my post on crime patterns in Chicago, I wanted to do something similar for Argentina. I couldn't find data at the same level of detail, but the people of Junar, who develop and run an Open Data platform, were kind enough to point me to a few data sets of theirs, including one that lists crime reports by type across Argentinean provinces for the year 2007.

The first issue I wanted to see was the relationship between different types of crime. Of course, properly speaking you need far more data, and a far more sophisticated and domain-specific analysis, to even begin to address the question, but you can at least see what types of crime tend to happen (or to be reported) in the same provinces. Here's a dendrogram showing the relationships between crimes (click to expand it):

As you can see, crimes against property and against the state tend to happen in the same provinces, while more violent crimes (homicide, manslaughter, and kidnapping) are more highly correlated with each other. Drugs, which may or may not surprise you, are more correlated with property crimes than with violent crimes. Sexual crimes are not correlated, at least at the province level, with either cluster of crimes.

This observation suggests that we can plot provinces on the property crimes/sexual crimes space, as they seem to be relatively independent types of crime (at least at the province level). I added the line that marks a best fit linear relationship between both types of crime (mostly related, we'd expect, through their populations).

A few observations from this graph:

• The bulk of provinces (the relatively small ones) are on the lower left corner of the graph, mostly below the linear relationship line. The ones above the line, with a higher rate of sexual crimes as expected from the number of property crimes, are provinces on the North.
• Salta has, unsurprisingly but distressingly, almost four times as many sexual crimes as the linear relationship predicts. Córdoba, the Buenos Aires province, and, to a lesser degree, Santa Fé, also have higher-than-expected numbers.
• Despite ranking fourth in terms of absolute number of sexual crimes, the City of Buenos Aires has far fewer than its number of property crimes would imply (or, equivalently, a much higher number of property crimes than expected).
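The best-fit line and the above/below-the-line reading can be sketched with an ordinary least-squares fit (all the province numbers below are invented; only the qualitative Salta pattern mirrors the real data):

```python
# Hypothetical (property crimes, sexual crimes) counts per province.
provinces = {
    'A': (1000, 50), 'B': (2000, 95), 'C': (1500, 80),
    'Salta': (800, 160), 'D': (3000, 140),
}

xs = [p for p, s in provinces.values()]
ys = [s for p, s in provinces.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least-squares fit of sexual crimes on property crimes.
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Provinces well above the line: more sexual crimes than their
# property-crime count would predict.
outliers = [name for name, (p, s) in provinces.items()
            if s - (slope * p + intercept) > 30]
print(outliers)  # ['Salta']
```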

Needless to say, this is but a first shallow view, using old data with poor resolution, of an immensely complex field. But looking at the data, though never the only or the last step when trying to understand something, is almost always a necessary one, and it never fails to interest me.

# Chicago and the Tree of Crime

After playing with a toy model of surveillance and surveillance evasion, I found the City of Chicago's Data Portal, a fantastic resource with public data including the salaries of city employees, budget data, the location of different service centers, public health data, and quite detailed crime data since 2001, including the relatively precise location of each reported crime. How could I resist playing with it?

To simplify further analysis, let's quantize the map into a 100x100 grid. Here's, then, the overall crime density of Chicago (click to enlarge):

This map shows data for all crime types. One first interesting question is whether different crime types are correlated. E.g., do homicides tend to happen close to drug-related crimes? To look at this, I calculated the correlation between the different types of crimes at the same point of the grid, and from that I built a "tree of crime." Technically called a dendrogram, this kind of plot is akin to a phylogenetic tree, and in fact it's often used to show evolutionary relationships. In this case, the tree shows the closeness or not, in terms of geographical correlation, between types of crimes: the closer two types of crime are in the tree, the more likely they are to happen in the same geographical area (click to enlarge).
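Here's a compact sketch of the pipeline: correlate crime-type counts over grid cells, then run naive single-linkage agglomeration on 1 - correlation (the per-cell counts below are invented, and real code would use a proper clustering library):

```python
import statistics

# Hypothetical per-grid-cell counts for a few crime types
# (one value per cell of the quantized map).
counts = {
    'assault':  [5, 9, 2, 7, 1, 8],
    'battery':  [6, 8, 3, 7, 2, 9],
    'gambling': [0, 1, 7, 2, 8, 1],
    'homicide': [1, 0, 6, 1, 9, 2],
}

def corr(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Naive single-linkage agglomeration on distance = 1 - correlation;
# the merge order is what a dendrogram draws.
clusters = [[c] for c in counts]
merges = []
while len(clusters) > 1:
    a, b = min(((a, b) for a in clusters for b in clusters if a is not b),
               key=lambda ab: min(1 - corr(counts[x], counts[y])
                                  for x in ab[0] for y in ab[1]))
    clusters.remove(a); clusters.remove(b)
    clusters.append(a + b)
    merges.append((tuple(a), tuple(b)))
print(merges)
```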

A few observations:

• I didn't clean up the data before analysis, as I was as interested in the encoding details as in the semantics. The fact that two different codes for offenses involving children are closely related in the dendogram is good news in terms of trusting the overall process.
• The same goes for assault and battery; as expected, they tend to happen in the same places.
• I didn't expect homicide and gambling to be so closely related. I'm sure there's something interesting (for laypeople like me) going on there.
• Other sets of closely related crimes that aren't that surprising: sex offenses and stalking, criminal trespass and intimidation, and prostitution-liquor-theft.
• I expected narcotics and weapons to be closely related, but what's arson doing in there with them? Do street-level drug sellers tend to work in the same areas where arson is profitable?

For law enforcement — as for everything else — data analysis is not a silver bullet, and pretending it is can lead to shooting yourself in the face with it (the mixed metaphor, I hope, is warranted by the topic). But it can serve as a quick and powerful way to pose questions and fight our own preconceptions, and, perhaps specially in highly emotional issues like crime, that can be a very powerful weapon.

# Bad guys, White Hat networks, and the Nuclear Switch

Welcome to Graph City (a random, connected, undirected graph), home of the Nuclear Switch (a distinguished node). Each one of Graph City's lawful citizens belongs to one of ten groups, characterized by their own stochastic movement patterns on the city. What they all have in common is that they never walk into the Nuclear Switch node.

This is because they are lawful, of course, and also because there's a White Hat network of government cameras monitoring some of the nodes in Graph City. They can't read citizens' thoughts (yet), but they know whether a citizen observed at a node is the same citizen that was observed at a different node a while ago, and with this information Graph City's government can build a statistical model of the movement of lawful citizens (as observed through the specific network of cameras).

This is what happens when random-walking, untrained bad guys (you know they are bad guys because they are capable of entering the Nuclear Switch node) start roaming the city (click to expand):

Depending on the total coverage of the White Hat Network (a coverage of 1.0 meaning that every node in the city has a camera linked to the system), between twenty and fifty percent of the intrusion attempts succeed. This isn't acceptable performance in any real-life application, but this being a toy model with unrealistically small and simplified parameters, absolute performance numbers are rather meaningless.
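A minimal version of the White Hat model can be sketched as follows: a toy five-node city, with the "model" being simply the set of camera-to-camera transitions observed in lawful walks (the real code uses richer statistics, and everything below is invented for illustration):

```python
import random

random.seed(1)

# Toy "Graph City": node -> neighbors; node 4 is the Nuclear Switch.
city = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
NUCLEAR_SWITCH = 4
cameras = {0, 1, 2, 3}  # monitored nodes (everything but the Switch)

def walk(start, steps, lawful=True):
    """Random walk; lawful citizens never step on the Nuclear Switch."""
    path, node = [start], start
    for _ in range(steps):
        options = [n for n in city[node]
                   if not (lawful and n == NUCLEAR_SWITCH)]
        node = random.choice(options)
        path.append(node)
    return path

def camera_transitions(path):
    """Consecutive camera sightings produced by a walk."""
    seen = [n for n in path if n in cameras]
    return set(zip(seen, seen[1:]))

# The White Hat model: every camera-to-camera transition observed
# in a sample of lawful walks.
lawful_model = set()
for _ in range(500):
    lawful_model |= camera_transitions(walk(0, 20))

def is_suspicious(path):
    return bool(camera_transitions(path) - lawful_model)

# An intruder visiting the unmonitored Switch shows up on camera 3,
# disappears, and reappears on camera 3: a transition no lawful
# citizen ever produces.
print(is_suspicious([0, 1, 3, 4, 3, 1, 0]))  # True
```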

Let's switch sides for a moment now, and advise the bad guys (after all, one person's Nuclear Switch is another's High Value Target, Market Influential, etc). An interesting first approach for the bad guys would be to build a Black Hat Network, create their own model of lawful citizens' movements, and then use it to systematically look for routes to the Nuclear Switch that won't trigger an alarm. The idea being: any person who looks innocent to the Black Hat Network's statistical model will also pass unnoticed under the White Hat's.

This is what happens when bad guys trained using Black Hat Networks of different sizes are sent after the Nuclear Switch:

Ouch. Some of the bad guys get to the Nuclear Switch on every try, but most of them are captured. A good metaphor for what's going on here could be that the White Hat Network's and the Black Hat Network's filters are projections onto orthogonal planes of a very high-dimensional set of features. The set of possible behaviors for good and bad guys is very complex, so, unless your training set is comprehensive (something generally unfeasible), you can easily have a filter that works very well on your training data and very poorly on new observations — this is the bane of every overenthusiastic data analyst with a computer — but you can also train two filters to detect the same subset of observations using the same training set, and have them be practically uncorrelated when it comes to new observations.

In our case, this is good news for Graph City's defenders, as even a huge Black Hat Network, and very well trained bad guys, are still vulnerable to the White Hat Network's statistical filter. It goes without saying, of course, that if the bad guys get even read-only access to the White Hat Network, Graph City is doomed.

At one level, this is a trivial observation: if you have a good enough simulation of the target system, you can throw brute force at the simulation until you crack it, and then apply the solution to the real system with near total impunity (a caveat, though: in the real world, "good enough" simulations seldom are).

But, and this is something defenders tend to forget, bad guys don't need to hack into the White Hat Network. They can use Graph City as a model of itself (that's what the code I used above does), send dummy attackers, observe where they are captured, and keep refining their strategy. This is something already known to security analysts; cf., e.g., Bruce Schneier: mass profiling doesn't work against a rational adversary, because it's too easy to adapt against. A White Hat Network could be (for the sake of argument) hack-proof, but it will still leak all critical information simply through the pattern of alarms it raises. Security Through Alarms is hard!

As an aside, "Graph City" and the "Nuclear Switch" are merely narratively convenient labels. Consider graphs of financial transactions, drug trafficking paths, information leakage channels, etc., and consider how many of our current enforcement strategies (or even laws) are predicated on the effectiveness of passive interdiction filters against rational, coordinated adversaries...

# A new collection of (really) short stories

I've just put together a collection of twenty hundred-word stories: The Flesh Trade, and Other Nineteen Drabbles. They are mostly SF, although that's a genre that works best when you don't care about its definition.

# A flow control structure that never makes mistakes (sorta)

I've been experimenting with Lisp-style ad-hoc flow control structures. Nothing terribly useful, but nonetheless amusing. E.g., here's a dobest() function that always does the best thing (and only the best thing) among the alternatives given to it — think of the mutant in Philip K. Dick's The Golden Man, or Nicolas Cage in the awful "adaptation" Next.

Here's how you use it:

```python
if __name__ == '__main__':

    def measure_x():
        "Metric function: the value of x"
        global x
        return x

    def increment_x():
        "A good strategy: increment x"
        global x
        x += 1

    def decrement_x():
        "A bad strategy: decrement x"
        global x
        x -= 1

    def fail():
        "An even worse strategy"
        global x
        x = x / 0

    x = 1
    # assert(x == 1)
    dobest(measure_x, increment_x, decrement_x, fail)
    # assert(x == 2)
```

You give it a metric (a function that returns how good you think the current world is) and one or more functions that operate on the environment. Perhaps disappointingly, dobest() doesn't actually see the future; rather, it executes each function on a copy of the current environment, and only transfers to the "real" one the environment with the highest value of metric().

Here's the ugly detail (do point out errors, but please don't mock too much; I haven't played much with Python scopes):

```python
import copy
import inspect
import sys

def dobest(metric, *branches):
    """Apply every function in *branches to a copy of the caller's
    environment; only do 'for real' the best one according to the
    result of running metric()."""
    world = copy.copy(dict(inspect.getargvalues(sys._getframe(1)).locals))
    alts = []

    for branchfunction in branches:
        try:
            # Run branchfunction in a copy of the world
            ns = copy.copy(world)
            exec branchfunction.__code__ in ns, {}
            alts.append(ns)
        except:
            # We ignore worlds where things explode
            pass

    # Sort worlds according to metric()
    alts.sort(key=lambda ns: eval(metric.__code__, ns, {}), reverse=True)
    for key in alts[0]:
        sys._getframe(1).f_globals[key] = alts[0][key]
```

One usability point is that the functions you give to dobest() have to explicitly access variables in the environment as global; I'm sure there are cleaner ways to do it.

Note that this also can work a bit like a try-except with undo, a la

```python
dobest(bool, function_that_does_something, function_that_reports_an_error)
```

This would work like try-except, because dobest ignores functions that raise exceptions, but with the added benefit that dobest would clean up everything done by function_that_does_something.

Of course, and here's the catch, "everything" is kind of limited — I haven't precisely gone out of my way to track and catch all side effects, not that it'd even be possible without some VM or even OS support. Point is, the more I get my ass saved by git, the more I miss it in my code, or even when doing interactive data analysis with R. As the Doctor would say, working on just one timeline can be so... linear.
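For what it's worth, the same idea is easier to express (and less fragile) when the "world" is an explicit dictionary instead of the caller's stack frame. This is my own Python 3 sketch, not the original code; `dobest3` and the strategy signatures are invented names.

```python
import copy

def dobest3(metric, world, *branches):
    """Run each branch on a deep copy of `world` (a dict of state);
    keep only the copy that scores highest under metric()."""
    alts = []
    for branch in branches:
        candidate = copy.deepcopy(world)
        try:
            branch(candidate)
            alts.append(candidate)
        except Exception:
            # Worlds where things explode are silently discarded
            pass
    best = max(alts, key=metric)
    world.clear()
    world.update(best)

# The same toy example as above, with explicit state:
state = {'x': 1}
dobest3(lambda w: w['x'], state,
        lambda w: w.update(x=w['x'] + 1),  # good strategy
        lambda w: w.update(x=w['x'] - 1),  # bad strategy
        lambda w: w.update(x=w['x'] / 0))  # explodes; ignored
print(state['x'])  # → 2
```

Passing the state around explicitly loses the "it just works on your locals" magic, but it also makes the undo semantics honest: only what's in the dictionary gets rolled back, and the code says so.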

# A mostly blank slate

The combination of a tablet and a good Javascript framework makes it very easy to deploy algorithms to places where so far they have been scarce, like meetings, notetaking, and so on. The problem lies in figuring out what those algorithms should be; just as we had to have PCs for a few years before we started coming up with things to do with them (not that we have even scratched the surface), we still don't have much of a clue about how to use handheld thinking machines outside "traditional thinking machine fields."

Think about it this way: computers have surpassed humans in the (Western Civilization's) proverbial game of strategy and conflict, chess, and are useful enough in games of chance that casinos and tournament organizers are prone to use anything from violence to lawyers to keep you from using them. So the fact that we aren't using a computer when negotiating or coordinating says something about us.

The bottleneck, Cassius would say nowadays, is not in our tech, but in our imaginations.

# Praefatio --- now with a bookmarklet

Improved a bit my stack-oriented Wikipedia reading tool, Praefatio. Still not-even-alpha, of course, but it's usable enough for me. Itch = scratched.

# The perfectly rational conspiracy theorist

Conspiracy theorists don't have a rationality problem, they have a priors problem, which is a different beast. Consider a rational person who believes in the existence of a powerful conspiracy, and then reads an arbitrary online article; we'll denote by $C$ the propositions describing the conspiracy, and by $a$ the propositions describing the article's content. By Bayes' theorem,

$P(C|a) = \frac{P(a|C) P(C)}{P(a)}$

Now, the key here is that the conspiracy is supposed to be powerful. A powerful enough conspiracy can make anything happen or look like it happened, and therefore it'll generally be the case that $P(a|C) \geq P(a)$ (and usually $P(a|C) > P(a)$ for low-probability $a$, of which there are many these days, as Stanislaw Lem predicted in The Chain of Chance). But that means that in general $P(C|a) \geq P(C)$, and often $P(C|a) > P(C)$! In other words, the rational evaluation of new evidence will seldom disprove a conspiracy theory, and will often reinforce its likelihood. This isn't a rationality problem: even a perfect Bayesian reasoner will be trapped once you get $C$ into its priors (this is a well-known phenomenon in Bayesian inference; I like to think of these as black hole priors).
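Plugging in some toy numbers (mine, purely to watch the mechanism work): a weak prior in the conspiracy, and an article that is unlikely in general but easy for a powerful conspiracy to have produced.

```python
p_C = 0.01             # prior P(C): the conspiracy is a long shot
p_a_given_C = 0.30     # P(a|C): a powerful conspiracy can make a happen
p_a_given_notC = 0.05  # P(a|not C): a is unlikely on its own

# Total probability, then Bayes' theorem:
p_a = p_a_given_C * p_C + p_a_given_notC * (1 - p_C)
p_C_given_a = p_a_given_C * p_C / p_a

print(round(p_C_given_a, 3))  # → 0.057: the posterior went *up*
assert p_C_given_a > p_C
```

One surprising-but-conspiracy-compatible article multiplied the believer's credence almost sixfold, with every step of the arithmetic perfectly rational.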

Keep an eye open, then, for those black holes. If you have a prior that no amount of evidence can weaken, then that's probably cause for concern, which is but another form of saying that you need to demand falsifiability in empirical statements. From non-refutable priors you can do mathematics or theology (both of which segue into poetry when you are doing them right), but not much else.

# Fractals are unfair

Let's say you want to identify an arbitrary point in a segment $I$ (chosen with a uniform probability distribution). A more or less canonical way to do this is to split the segment into two equal halves and write down a bit identifying which half holds the point; now the size of the set where the point is hidden is one half of what it was. Because the half-segment we chose is affinely equivalent to the original one, we can repeat this as much as we want, gaining one bit of precision (halving the size of the "it's somewhere around here" set) for each bit of description. Seems fair.
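The halving scheme is literally a binary search; a quick sketch (the function name and the sample point are mine):

```python
def locate(x, bits):
    """Emit one bit per halving of [0, 1); after `bits` bits the point
    is pinned to an interval of width 2**-bits."""
    lo, hi = 0.0, 1.0
    code = []
    for _ in range(bits):
        mid = (lo + hi) / 2
        if x < mid:
            code.append(0)
            hi = mid
        else:
            code.append(1)
            lo = mid
    return code, hi - lo

code, width = locate(0.7, 10)
print(len(code), width)  # 10 bits of description, width exactly 2**-10
assert width == 2 ** -10
```

One bit of description buys exactly one bit of precision, every time; this is the fair exchange rate the fractal below will break.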

It's easy to do the same on a square $I$ in $R^2$. Split the square into four squares, write down two bits to identify the one where the point is, repeat at will. Because each square has one fourth the area of the enclosing one, you gain two bits of precision for each two bits of description. Still fair (and we cannot do better).

Now try to do this on a fractal, say the Koch curve, and things get as weird as you'd expect. You can always split it into four affinely equivalent pieces, but each of them is one-third the size of the original, which means that you gain less than two bits of precision for each two bits of description. Now this is unfair. A fractal street would be a very good way of packing an infinite number of houses into a finite downtown, but (even approximate) driving directions will be longer than you'd think they should be.

Of course, this is merely a paraphrasing of the usual definition of a fractal, which is an object whose fractal dimension exceeds its topological dimension (very hand-wavingly speaking, the number of bits of description in each step is higher than the number of bits of precision that you gain). But I do enjoy looking at things through an information theory lens.
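In these terms the Koch curve's dimension is just the ratio of description bits to precision bits per refinement step, which is a restatement of the usual $\log 4 / \log 3$ similarity dimension:

```python
import math

# Each refinement step of the Koch curve: choose one of 4 affine
# copies (2 bits of description), each 1/3 the size of the whole
# (log2(3) ≈ 1.585 bits of precision).
description_bits = math.log2(4)
precision_bits = math.log2(3)

dimension = description_bits / precision_bits
print(dimension)  # ≈ 1.2619, strictly above the topological dimension 1
```

The excess of that ratio over 1 is exactly the "unfairness" of the driving directions.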

Besides, there's a (still handwavy but clear) connection here with chaos, through the idea of trying to pin down the future of a system in phase space by pinning down its present. In conservative systems this is fair: one bit of precision about the present gives you one bit of precision about the future (after all, volumes are preserved). But when chaos is involved this is no longer the case! For any fixed horizon, you need to put in more bits of information about the present in order to get the same number of bits about the future.
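The standard toy example of this exchange rate is the doubling map $x \mapsto 2x \bmod 1$, which shifts the binary expansion of $x$ one bit to the left per step, so each step of prediction eats one bit of precision about the present (the function and sample values below are mine):

```python
def doubling(x, steps):
    # 2x mod 1 shifts the binary expansion of x left by one bit,
    # discarding one leading bit of information per step.
    for _ in range(steps):
        x = (2 * x) % 1.0
    return x

# Two presents that agree to about 11 binary digits...
a = 0.123456
b = a + 2 ** -11

# ...are macroscopically different futures ten steps later:
print(abs(doubling(a, 10) - doubling(b, 10)))  # ≈ 0.5
assert abs(doubling(a, 10) - doubling(b, 10)) > 0.4
```

To predict $n$ steps ahead to $k$ bits of precision here, you need $k + n$ bits about the present: the chaotic version of the fractal's unfair exchange rate.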

# A way to read Wikipedia

I've just put online Praefatio, a very simple web tool to help read Wikipedia articles in a more structured way. It's somewhat buggy and lacks documentation, but I think it's simple enough to more-or-less work as it stands now, at least for my own use.

# A modest, if egocentric, proposal

We need the equivalent of /lib, /usr/lib, and /usr/local/lib for semantic knowledge. At this point in time, it makes no sense for my computer to know less about me than about obscure pieces of hardware it'll never have to interface with.

That's all.

# Men's bathrooms are (quantum-like) universal computers

As is well documented, men choose urinals to maximize their distance from already occupied urinals. Letting $u_i$ be 1 or 0 depending on whether urinal $i$ is occupied, and setting $\sigma_{i,j}$ to the distance between urinals $i$ and $j$, male urinal-choosing behavior can be seen as an annealing heuristic maximizing

$\sigma_{i,j} u_i u_j$

(summing over repeated indexes as per Einstein's convention). And this is obviously equivalent to the computational model implemented by D-Wave's quantum computers! Hardware implementations of urinal-based computers might be less compact than quantum ones (and you might need bathrooms embedded in non-Euclidean spaces in order to implement certain programs), but they are likely to be no more error-prone, and they are certain to scale better.

# A quick look at Elance statistics

I collected data from Elance feeds to find out what employers are looking for on the site. It's not pretty: by far the most requested skills in terms of aggregated USD demand are article writing (generally "SEO optimized"), content, logos, blog posting, etc. In other words, mostly AdSense baiting with some smattering of design. It's not everything requested on Elance, of course, but it's a big part of the pie.

Not unexpected, but disappointing. Paying people low wages to fool algorithms into getting other people to pay a bit more might be a symbolically representative business model in an increasingly algorithm-routed and economically unequal world, but it feels like a colossal misuse of brains and bits.

# First post: what I'm interested in these days

Here's a quick list of what I'm mildly obsessed with right now:

• Data analysis, large-scale inference, augmented cognition — labels differ, but the underlying mathematics is often pretty much the same.
• Smart, distributed, large-scale software/systems/markets/organizations — basically, the systematic application of inference-driven technologies.
• The big challenges and the big opportunities: dealing with climate change, improving global information, policy, and financial systems, global health and education, application of (simultaneously) bio-neuro-cogno-cyber tech, and the thorny question of 19th century-minded politicians using 20th century governance systems to deal with 21st century problems.