Category Archives: Politics

Don't blame algorithms for United's (literally) bloody mess

It's the topical angle, but let's not blame algorithms for the United debacle. If anything, algorithms might be the way to reduce how often things like this happen.

What made it possible for a passenger to be hit and dragged off a plane to avoid inconveniencing an airline's personnel logistics wasn't the fact that the organization implements and follows quantitative algorithms, but the fact that it's an organization. By definition, organizations are built to make human behavior uniform and explicitly determined.

A modern bureaucratic state is an algorithm so bureaucrats will behave in homogeneous, predictable ways.

A modern army is an algorithm so people with weapons will behave in homogeneous, predictable ways.

And a modern company is an algorithm so employees will behave in homogeneous, predictable ways.

It's not as if companies used to be loose federations of autonomous decision-making agents applying both utilitarian and ethical calculus to their every interaction with customers. The lower you are in an organization's hierarchy, the less leeway you have to deviate from rules, no matter how silly or evil they prove to be in a specific context, and customers (or, for that matter, civilians in combat areas) rarely if ever interact with anybody who has much power.

That's perhaps a structural, and certainly a very old, problem in how humans more or less manage to scale up our social organizations. The specific problem in Dao's case was simply that the rules were awful, both ethically ("don't beat up people who are behaving according to the law just because it'll save you some money") and commercially ("don't do things that will get people viscerally and virally angry with you somewhere with cameras, which nowadays is anywhere with people.")

Part of the blame could be attributed to United CEO Munoz and his tenuous grasp of at least simulated forms of empathy, as manifested by his first and probably most sincere reaction. But hoping organizations will behave ethically or efficiently when and because they have ethical and efficient leaders is precisely why we have rules: one of the major points of a Republic is that there are rules that constrain even the highest-ranking officers, so we limit both the temptation and the costs of unethical behavior.

Something of a work in progress.

So, yes, rules are or can be useful to prevent the sort of thing that happened to Dao. And to focus on current technology, algorithms can be an important part of this. In a perhaps better world, rules would be mostly about goals and values, not methods, and you would trust the people on the ground to choose well what to do and how to do it. In practice, due to a combination of the advantages of homogeneity and predictability of behavior, the real or perceived scarcity of people you'd trust to make those choices while lightly constrained, and maybe the fact that for many people the point of getting to the top is partially to tell people what to do, employees, soldiers, etc, have very little flexibility to shape their own behavior. To blame this on algorithms is to ignore that this has always been the case.

What algorithms can do is make those rules more flexible without sacrificing predictability and homogeneity. While it's true that algorithmic decision-making can have counterproductive behaviors in unexpected cases, that's equally true of every system of rules. But algorithms can take into account more aspects of a situation than any reasonable rule book could handle. As long as you haven't given your employees the power to override rules, it's irrelevant whether the algorithm can make better ethical choices than them — the incremental improvement happens because it can make a better ethical choice than a static rule book.

In the case of United, it'd be entirely possible for an algorithm to learn to predict and take into account the optics of a given situation. Sentiment analysis and prediction is after all a very active area of application and research. "How will this look on Twitter?" can be part of the utility function maximized by an algorithm, just as much as cost or time efficiencies.
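
As a minimal sketch of what that could look like (everything here is hypothetical: the names, the weights, and especially the predict_outrage model, which stands in for whatever sentiment-prediction system you trust):

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        expected_savings: float   # dollars saved if this action is taken

    def utility(action, predict_outrage, outrage_weight=100_000.0):
        # Savings minus a penalty for predicted PR damage (0.0 quiet .. 1.0 viral)
        return action.expected_savings - outrage_weight * predict_outrage(action)

    def choose(actions, predict_outrage):
        return max(actions, key=lambda a: utility(a, predict_outrage))

    # Toy stand-in for a real sentiment-prediction model:
    toy_outrage = lambda a: 0.95 if "forcibly" in a.name else 0.05

    options = [Action("raise the compensation offer", 0.0),
               Action("forcibly deplane a seated passenger", 800.0)]
    print(choose(options, toy_outrage).name)   # raise the compensation offer

Under a utility like this, forcing a seated passenger off the plane can win on savings and still lose badly overall, because even a mediocre outrage model prices the inevitable video correctly.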

It feels quite dystopic to think that, say, ride-hailing companies should need machine learning models to prevent them from suddenly canceling trips for pregnant women going to the hospital in order to pick up a more profitable trip elsewhere; shouldn't that be obvious to everybody from Uber drivers to Uber CEOs? Yes, it should. And no, it isn't. Putting "morality" (or at least "a vague sense of what's likely to make half the Internet think you're scum") in code that can be reviewed, as, in the best case, a redundant backup to a humane and reasonable corporate culture, is what we already do in every organization. What we can and should do is teach algorithms to try to predict the ethical and PR impact of every recommendation they make, and take that into account.

Whether they'll be better than humans at this isn't the point. The point is that, as long as we're going to have rules and organizations where people don't have much flexibility not to follow them, the behavioral boundaries of those organizations will be defined by that set of rules, and algorithms can function as more flexible and careful, and hence more humane, rules.

The problem isn't that people do what computers tell them to do (if you want, you can say that the root problem is when people do bad things other people tell them to do, but that has nothing to do with computers, algorithms, or AI). Computers do what people tell them. We just need to, and can, tell them to be more ethical, or at least to always take into account how the unavoidable YouTube video will look.

The new (and very old) political responsibility of data scientists

We still have a responsibility to prevent the ethical misuse of new technologies, and to help make their impact on human welfare a positive one. But we now have a more fundamental challenge: to help defend the very concept and practice of the measurement and analysis of quantitative fact.

To be sure, a big part of practicing data science consists of dealing with the multiple issues and limitations we face when trying to observe and understand the world. Data seldom means what its name implies it means; there are qualifications, measurement biases, unclear assumptions, etc. And that's even before we engage in the useful but tricky work of making inferences from that data.

But the end result of what we do — and not only, or even mainly, us, for this collective work of observation and analysis is one of the common threads and foundations of civilization — is usually a pretty good guess, and it's always better than closing your eyes and going with whatever number provides you with an excuse to do what you'd rather do. Deliberately messing with the measurement of physical, economic, or social data is a lethal attack on democratic practices, because it makes it impossible for citizens to evaluate government behavior. Defending the impossibility of objective measurement (as opposed to acknowledging and adapting to the many difficulties involved) is simply giving up on any form of societal organization different from mystical authoritarianism.

Neither attitude is new, but both have gained dramatically in visibility and influence during the last year. This adds to the existing ethical responsibilities of our profession a new one, unavoidably in tension with them. We not only need to fight against over-reliance on algorithmic governance driven by biased data (e.g. predicting behavior from records compiled by historically biased organizations) or the unethical commercial and political usage of collected information, but also, paradoxically, we need to defend and collaborate in the use of data-driven governance based on best-effort data and models.

There are forms of tyranny based on the systematic deployment of ubiquitous algorithmic technologies, and there are forms of obscurantism based on the use of cargo cult pseudo-science. But there are also forms of tyranny and obscurantism predicated on the deliberate corruption of data or even the negation of the very possibility of collecting it, and it's part of our job to resist them.

Economists and statisticians in Argentina, when previous governments deliberately altered some national statistics and stopped collecting others, rose to the challenge by providing parallel, and much more widely believed, numbers (among the first, the journalist and economist — a combination of skills more necessary with every passing year — Sebastián Campanario). Theirs weren't the kind of arbitrary statements that are frequently part of political discourse, nor did they reject official statistics because they didn't match ideological preconceptions or it was politically convenient to do so. Official statistics were technically wrong in their process of measurement and analysis, and for any society that aspires to meaningful self-government the soundness and availability of statistics about itself are an absolute necessity.

Data scientists are increasingly involved in the process of collection and analysis of socially relevant metrics, both in the private and the public sectors. We need to consistently refuse to do it wrong, and to do our best to do it correctly even, and especially, when we suspect other people are choosing not to. Nowcasting, inferring the present from the available information, can be as much of a challenge, and as important, as predicting the future. The fact that we might end up having to do it without even the assumption of possibly flawed but honest data will be a problem, but one we have already begun to work on in other contexts. Some of the earliest applications of modern data-driven models in finance, after all, were in fraud detection.

We are all potentially climate scientists now: massive observational efforts to be refuted with anecdotes, disingenuous visualizations to be touted as definitive proof, and eventually the very possibility of quantitative understanding to be violently mocked. We (still) have to make sure the economic and social impact of things like ubiquitous predictive surveillance and technology-driven mass unemployment are managed in positive ways, but this new responsibility isn't one we can afford to ignore.

The Mental Health of Smart Cities

Not the mental health of the people living in smart cities, but that of the cities themselves. Why not? We are building smart cities to be able to sense, think, and act; their perceptions, thoughts, and actions won't be remotely human, or even biological, but that doesn't make them any less real.

Cities can monitor themselves with an unprecedented level of coverage and detail, from cameras to government records to the wireless information flow permeating the air. But these perceptions will be very weakly integrated, as information flows slowly, if at all, between organizational units and social groups. Will the air quality sensors in a hospital be able to convince most traffic to be rerouted further away until rush hour passes? Will the city be able to cross-reference crime and health records with the distribution of different businesses, and offer tax credits to, say, grocery stores opening in a place that needs them? When a camera sees you having trouble, will the city know who you are, what's happening to you, and who it should call?

This isn't a technological limitation. It comes from the way our institutions and businesses are set up, which is in turn reflected in our processes and infrastructure. The only exception in most parts of the world is security, particularly against terrorists and other rare but high-profile crimes. Organizations like the NSA or the Department of Homeland Security (and its myriad partly overlapping versions both within and outside the United States) cross through institutional barriers, most legal regulations, and even the distinction between the public and the private in a way that nothing else does.

The city has multiple fields of partial awareness, but they are only integrated when it comes to perceiving threats. Extrapolating an overused psychological term, isn't this a heuristic definition of paranoia? The part of the city's mind that deals with traffic and the part that deals with health will speak with each other slowly and seldom, the part that manages taxes with the one that sees the world through the electrical grid. But when scared, and the city is scared very often, and close to being scared every day, all of its senses and muscles will snap together in fear. Every scrap of information correlated in central databases, every camera and sensor searching for suspects, all services following a single coordinated plan.

For comparison, shopping malls are built to distract and cocoon us, to put us in the perfect mood to buy. So smart shopping malls see us as customers: they track where we are, where we're going, what we looked at, what we bought. They try to redirect us to places where we'll spend more money, ideally away from the doors. It's a feeling you can notice even in the most primitive "dumb" mall: the very shape of the space is built as a machine to do this. Computers and sensors only heighten this awareness; not your awareness of the space, but the space's awareness of you.

We're building our smart cities in a different direction. We're making them see us as elements needing to get from point A to point B as quickly as possible, taking little or no notice of what's going on at either end... except when the city sees us as potential threats, and it never sees or thinks as clearly and as fast as it does then. Much of the mind of the city takes the form of mobile services from large global companies that seldom interact locally with each other, much less with the civic fabric itself. Everything only snaps together when an alert is raised and, for the first time, we see what the city can do when it wakes up and its sensors and algorithms, its departments and infrastructure, are at least attempting to work in coordination toward a single end.

The city as a whole has no separate concept of what a person is, no way of tracing you through its perceptions and memories of your movements, actions, and context except when you're a threat. As a whole, it knows of "persons of interest" and "active situations." It doesn't know about health, quality of life, a sudden change in a neighborhood. It doesn't know itself as anything other than a target.

It doesn't need to be like that. The psychology of a smart city, how it integrates its multiple perceptions, what it can think about, how it chooses what to do and why, all of that is up to us. A smart city is just an incredibly complex machine we live in and to which we give life. We could build it to have a sense of itself and of its inhabitants, to perceive needs and be constantly trying to help. A city whose mind, vaguely and perhaps unconsciously intuited behind its ubiquitous and thus invisible cameras, we find comforting. A sane mind.

Right now we're building cities that see the world mostly in terms of cars and terrorism threats. A mind that sees everything and puts together very little except when something scares it, where personal emergencies are almost entirely your own affair, but which becomes single-minded when there's a hunt.

That's not a sane mind, and we're planning to live in a physical environment controlled by it.

The best political countersurveillance tool is to grow the heck up

The thing is, we're all naughty. The specifics of what counts as "wrong" depend on the context, but there isn't anybody on Earth so boring that they haven't done or aren't doing something they'd rather not have known worldwide.

Ordinarily this just means that, like every other social species, we learn pretty early how to dissimulate. But we aren't living in an ordinary world. As our environment becomes a sensor platform with business models bolted on top of it, private companies have access to enormous amounts of information about things that were ordinarily very difficult to find, non-state actors can find even more, and the most advanced security agencies... Well. Their big problem is managing and understanding this information, not gathering it. And all of this can be done more cheaply, scalably, and just better than ever before.

Besides issues of individual privacy, this has a very dangerous effect on politics wherever it's coupled with overly strict standards: it essentially gives a certain degree of veto power over candidates to any number of non-democratic actors, from security agencies to hacker groups. As much as transparency is an integral part of democracy, we haven't yet adapted to the kind of deep but selective transparency this makes possible, the US election being but the most recent, glaring, and dangerous example.

It will happen again, it will keep happening, and the prospect of technical or legal solutions is dim. This being politics, the structural solution isn't technical, but human. While we probably aren't going to stop sustaining the fiction that we are whatever our social context considers acceptable, we do need to stop reacting to "scandals" in an indiscriminate way. There are individual advantages to doing so, of course, but the political implications of this behavior, aggregated over an entire society, are extremely deleterious.

Does this mean anything goes? No, quite the contrary. It means we need to become better at discriminating between the embarrassing and the disqualifying, between the hurtful crime and the indiscretion, between what makes somebody dangerous to give power to, and what makes them somebody with very different and somewhat unsettling life choices. Because everybody has something "scandalous" in their lives that can and will be dug up and displayed to the world whenever it's politically convenient to somebody with the power to do it, and reacting to all of it in the same way will give enormous amounts of direct political power to organizations and individuals, everywhere and at all points in the spectrum of legality, that are among the least transparent and accountable in the world.

This means knowing the difference between the frowned upon and the evil. It's part of growing up, yet it's rarer, and more difficult, the larger and more interconnected a group becomes. Eventually the very concept of evil as something other than a faux pas disappears, and, historically, socially sanctioned totalitarianism follows because, while political power in nominally democratic societies seldom arrogates to itself the power to define what's evil, it has enormous power to change the scope of "adequate behavior."

We aren't going to shift our public morals to fully match our private behavior. We aren't really wired that way; we are social primates, and lying to each other is the way we make our societies work. But we are social primates living in an increasingly total surveillance environment vulnerable to multiple actors, a new (geo)political development with no possible technical solution, but a very simple, very hard, and very necessary sociological fix: we just need to grow the heck up.

The informal sector Singularity

At the intersection of cryptocurrencies and the "gig economy" lies the prospect of almost self-contained shadow economies with their own laws and regulations, vast potential for fostering growth, and the possibility of systematic abuse.

There have always been shadow, "unofficial" economies overlapping and in some places overruling their legal counterparts. What's changing now is that technology is making possible the setup and operation of extremely sophisticated informational infrastructures with very few resources. The disruptive impact of blockchains and related technologies isn't any single cryptocurrency, but the fact that it's another building block for any group, legal or not, to operate their own financial system.

Add to this how easy it is to create fairly generic e-commerce marketplaces, reputation tracking systems, and, perhaps most importantly, purely online labor markets. For employers, the latter can be a flexible and cost-efficient way of acquiring services, while for many workers it's becoming a useful, and for some an increasingly necessary, source of income. Large rises in unemployment, especially those driven by new technologies, always increase the usefulness of this kind of labor market for employers in both regulated and unregulated activities, as a "liquid" market over sophisticated platforms makes it easy to continuously optimize costs.

You might call it a form of "Singularity" of the informal sector: there are unregulated or even fully criminal sectors that are technologically and algorithmically more sophisticated than the average (or even most) of the legal economy.

While most online labor markets are fully legal, this isn't always the case, even when the activity being contracted isn't per se illegal. One current example is Uber's situation in Argentina: their operation is currently illegal due to regulatory non-compliance, but, short of arresting drivers — something that's actually being considered, due in some measure to the clout of the cab driver's union — there's nothing the government can do to completely stop them. Activities less visible than picking somebody up in a car — for example, anything you can do from a computer or a cellphone in your home — contracted over the internet and paid in a cryptocurrency or in any parallel payment system anywhere in the world are very unlikely to be ever visible to, or regulated by, the state or states who theoretically govern the people involved.

There are clear potential upsides to this. The most immediate one is that these shadow economies are often very highly efficient and technologically sophisticated by design. They can also help people avoid some of the barriers to entry that keep many out of full-time legal employment. A lack of academic accreditations, a disadvantaged socioeconomic background, or membership in an unpopular minority or age bracket can be a non-issue for many types of online work. In other cases they simply make possible types of work so new there's no regulatory framework for them, or that are impeded by obsolete ones. And purely online activities are often one of the few ways in which individuals can respond to economic downturns in their own country by supplying services overseas without intermediate organizations capturing most or all of the wage differential.

The main downside is, of course, that a shadow economy isn't just free from obsolete regulatory frameworks, but also free from those regulations meant to prevent abuse, discrimination, and fraud: minimum wages, safe working conditions, protection against sexual harassment, etc.

These issues might seem somewhat academic right now: most of the "gig economy" is either a secondary source of income, or the realm of relatively well-paid professionals. But technological unemployment and the increase in inequality suggest that this kind of labor market is likely to become more important, particularly for the lower deciles of the income distribution.

Assuming a government has the political will to attack the problem of a growing, technologically advanced, and mostly unregulated labor economy — for some, at least, this seems to be a favoured outcome rather than a problem — fines, arrests, etc, are very unlikely to work, at least in moderately democratic societies. The global experience with software and media piracy shows how extremely difficult it is to stop an advanced decentralized digital service regardless of its legality. Silk Road was shut down, but it was one site, and run by a conveniently careless operator. The size, sophistication, and longevity of the on-demand network attacks, hacked information, and illegal pornography sectors are a better indicator of the impossibility of blocking or taxing this kind of activity once supply and demand can meet online.

A more fruitful approach to the problem is to note that, given the choice, most people prefer to work inside the law. It's true that employers very often prefer the flexibility and lower cost of an unregulated "high-frequency" labor economy, but people offer their work in unregulated economies when the regulated economy is blocked to them by discrimination, the legal framework hasn't kept up with the possibilities of new technologies, or there simply isn't enough demand in the local economy, making "virtual exports" an attractive option.

The point isn't that online labor markets, reputation systems, cryptocurrencies, etc., are unqualified evils. Quite the contrary. They offer the possibility of wealthier, smarter economies with a better quality of life, less onerous yet more effective regulations for both employers and employees, and new forms of work. However, these changes have to be fully implemented. Upgrading the legal economy to take advantage of new technologies — and doing it very soon — isn't just a matter of not missing an opportunity, particularly for less developed economies. Absent a technological overhaul of how the legal economy works, more effective and flexible unregulated shadow economies are only going to keep growing; a lesser evil than effective unemployment, but not one without a heavy social price.

For the unexpected innovations, look where you'd rather not

Before Bill Gates was a billionaire, before the power, the cultural cachet, and the Robert Downey Jr. portrayals, computers were for losers who would never get laid. Their potential was of course independent of these considerations, but Steve Jobs could become one of the richest people on Earth because he was fascinated with, and dedicated time to, something that cool kids — especially those from the wealthy families who could most easily afford access to them — wouldn't have been caught dead playing with, or at least loving.

Geek, once upon a time, was an unambiguous insult. It was meant to humiliate. Dedicating yourself to certain things meant you'd pay a certain social price. Now, of course, things are better for that particular group; if nothing else, an entire area of intellectual curiosity is no longer stigmatized.

But our innovation-driven society is so locked into computer geeks as the source of change that it's going to be completely blindsided by whatever comes next.

Consider J. K. Rowling. Stephenie Meyer. E. L. James. It's significant that you might not recognize the last two names: Meyer wrote Twilight and James Fifty Shades of Grey. Those three women (and it's also significant that they are women) are among the best-selling and most widely influential writers of our time, and pretty much nobody in the publishing industry was even aware that there was a market for what they were doing. Theirs aren't just the standard stories of talented artists struggling to be published. By the standards of the (mostly male) people who ran and by and large still run the publishing industry, the stories they wrote were, to put it kindly, pointless and low-brow. A school for wizards where people died during a multi-volume malignant coup d'état? The love story of a teenager torn between her possessive werewolf friend and a teenage-looking, centuries-old vampire struggling to maintain self-control? Romantic sadomasochism from a female point of view?

Who'd read that?

Millions upon millions did. And then they watched the movies, and read the books again. Many of them were already writing the things they wanted to read — James' story was originally fan fiction in the Twilight universe — and wanted more. The publishing industry, supposedly in the business of figuring that out, had ignored them because they weren't a prestigious market (they were women, to be blunt, including very young women who "weren't supposed" to read long books, and older women who "weren't supposed" to care about boy wizards), and those weren't prestigious stories. When it comes to choosing where to go next, industries are as driven by the search for reputation as they are by the search for profit (except finance, where the search for profit regardless of everything else is the basis of reputation). Rowling and Meyer had to convince editors, and James' first surge of sales came through self-published Kindle books. The next literary phenomenon might very well bypass publishers, and if that becomes the norm then the question will be what the publishing industry is for.

Going briefly back to the IT industry, gender and race stereotypes are still awfully prevalent. The next J. K. Rowling of software — and there will be one — will have to walk a much more difficult path than she should have to. On the other hand, a whole string of potential early investors will have painful almost-did-it stories they'll never tell anyone.

This isn't a modern development, but rather a well-established historical pattern. It's the underdogs — the sidelined, the less reputable — who most often come up with revolutionary practices. The "mechanical arts" that we now call engineering were once a disreputable occupation, and no land-owning aristocrat would have guessed that one day they'd sell their bankrupted ancestral homes to industrialists. Rich, powerful Venice began, or so its own legend tells, as a refugee camp. And there's no need to recount the many and ultimately fruitful ways in which the Jewish diaspora adapted to and ultimately leveraged the restrictions imposed everywhere upon them.

Today geographical distances have greatly diminished, and are practically zero when it comes to communication and information. The remaining gap is social — who's paid attention to, and what about.

To put it in terms of a litmus test, if you wouldn't be somewhat ashamed of putting it in a pitch deck, it might be innovative, brilliant, and a future unicorn times ten, but it's something people already sort-of see coming. And a candidate every one of your competitors would consider hiring is one that will most likely go to the biggest or best-paying one, and will give them the kind of advantage they already have. To steal a march on them — to borrow a tactic most famously used by Napoleon, somebody no king would have appointed as a general until he won enough wars to appoint kings himself — you need to hire not only the best of the obvious candidates, but also look at the ones nobody is looking at, precisely because nobody is looking at them. They are the point from which new futures branch.

The next all-caps NEW thing, the kind of new that truly shifts markets and industries, is right now being dreamed and honed by people you probably don't talk to about this kind of thing (or at all) who are doing weird things they'd rather not tell most people about, or that they love discussing but have to go online to find like-minded souls who won't make fun of them or worse.

Diversity isn't just a matter of simple human decency, although it's certainly that as well, and that should be enough. In a world of increasingly AI-driven hyper-corporations that can acquire or reproduce any technological, operational, or logistical innovation anybody but their peer competitors might come up with, it's the only reliable strategy to compete against them. "Black swans" only surprise you if you never bothered looking at the "uncool" side of the pond.

The Differentiable Organization

Neural networks aren't just at the fast-advancing forefront of AI research and applications, they are also a good metaphor for the structures of the organizations leveraging them.

DeepMind's description of their latest deep learning architecture, the Differentiable Neural Computer, highlights one of the core properties of neural networks: they are differentiable systems to perform computations. Generalizing the mathematical definition, for a system to be differentiable implies that it's possible to work backwards quantitatively from its current behavior to figure out the changes that should be made to the system to improve it. Very roughly speaking — I'm ignoring most of the interesting details — that's a key component of how neural networks are usually trained, and part of how they can quickly learn to match or outperform humans in complex activities beginning from a completely random "program." Each training round provides not only a performance measurement, but also information about how to tweak the system so it'll perform better the next time.
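
To make the metaphor concrete, here's a toy example of a differentiable system learning from its own errors: a single-parameter linear model trained by gradient descent on a squared loss (the data and learning rate are arbitrary):

    import numpy as np

    # Toy differentiable system: predict y from x with a single weight w.
    # Because the loss is differentiable in w, every error tells us not just
    # how wrong we are, but in which direction to adjust the "program".
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 7.8])       # roughly y = 2x

    w = 0.0                                   # start from a blank program
    learning_rate = 0.01
    for _ in range(200):
        error = w * x - y                     # performance measurement
        gradient = 2 * np.mean(error * x)     # d(loss)/dw: how to tweak the system
        w -= learning_rate * gradient         # the tweak itself
    print(round(w, 2))                        # ~1.99: learned purely from feedback

Each pass through the loop is the code version of a postmortem that actually changes behavior: measure, differentiate, adjust.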

Learning from errors and adjusting processes accordingly is also how organizations are supposed to work, through project postmortems, mission debriefings, and similar mechanisms. However, for the majority of traditional organizations this is in practice highly inefficient, when at all possible.

  • Most of the details of how they work aren't explicit, but encoded in the organizational culture, workflow, individual habits, etc.
  • They have at best a vague informal model — encoded in the often mutually contradictory experience and instincts of personnel — of how changes to those details will impact performance.
  • Because most of the "code" of the organization is encoded in documents, culture, training, the idiosyncratic habits of key personnel, etc., organizations change only partially, slowly, and with far less control than implied in organizational improvement plans.

Taken together, these limitations — which are unavoidable in any system where operational control is left to humans — make learning organizations almost chimerical. Even after extensive data collection, without a quantitative model of how the details of its activities impact performance and a fast and effective way of changing them, learning remains a very difficult proposition.

By contrast, organizations that have automated low-level operational decisions and, most importantly, have implemented quick and automated feedback loops between their performance and their operational patterns, are, in a sense, the first truly learning organizations in history. As long as their operations are "differentiable" in the metaphorical sense of having even limited quantitative models that allow working backwards from observed performance to desirable changes — you'll note that the kind of problems the most advanced organizations have chosen to tackle are usually of this kind, beginning in fact relatively long ago with automated manufacturing — then simply by continuing their activities, even if inefficiently at first, they will be improving quickly and relentlessly.

Compare this pattern with an organization where learning only happens in quarterly cycles of feedback, performed by humans with a necessarily incomplete, or at least heavily summarized, view of low-level operations and the impact on overall performance of each possible low-level change. Feedback delivered to humans that, with the best intentions and professionalism, will struggle to change individual and group behavior patterns that in any case will probably not be the ones with the most impact on downstream metrics.

It's the same structural difference observed between manually written software and trained and constantly re-trained neural networks; the former can perform better at first, but the latter's improvement rate is orders of magnitude higher, and sooner or later leaves it in the dust. The last few years in AI have shown the magnitude of this gap, with software routinely learning from scratch, in hours or weeks, to play games, identify images, and perform other complex tasks, going from poor or absolutely null performance to, in some cases, surpassing human capabilities.

Structural analogies between organizations and technologies are always tempting and usually misleading, but I believe the underlying point is generic enough to apply: "non-differentiable" organizations aren't, and cannot be, learning organizations at the operational level, and sooner or later they aren't competitive with others that set up from the beginning automation, information capture, and the appropriate automated feedback loops.

While the first two steps are at the core of "big data" organizational initiatives, the last is still a somewhat unappreciated feature of the most effective organizations. Rare enough, for the moment, to be a competitive advantage.

The truly dangerous AI gap is the political one

The main short term danger from AI isn't how good it is, or who's using it, but who isn't: governments.

This impacts every aspect of our interaction with the State, beginning with the ludicrous way in which we have to move papers around (at best, digitally) to tell one part of the government something another part of the government already knows. Companies like Amazon, Google, or Facebook are built upon the opposite principle. Every part of them knows everything any part of the company knows about you (or at least it behaves that way, even if in practice there are still plenty of awkward silos).

Or consider the way every business and technical process is monitored and modeled in a high-end contemporary company, and contrast it with the opacity, most damagingly to themselves, of government services. Where companies strive to give increasingly sophisticated AI algorithms as much power as possible, governments often struggle to give humans the information they need to make the decisions, much less assist or replace them with decision-making software.

It's not that government employees lack the skills or drive. Governments are simply, and perhaps reasonably, biased toward organizational stability: they are very seldom built up from scratch, and a "fail fast" philosophy would be a recipe for untold human suffering instead of just a bunch of worthless stock options. Besides, most of the countries with the technical and human resources to attempt something like this are currently leaning to one degree or another towards political philosophies that mostly favor a reduced government footprint.

Under these circumstances, we can only expect the AI gap between the public and the private sector to grow.

The only areas where this isn't the case are, not coincidentally, the military and intelligence agencies, who are enthusiastic adopters of every cutting edge information technology they can acquire or develop. But these exceptions only highlight one of the big problems inherent in this gap: intelligence agencies (and to a hopefully lesser degree, the military) are by need, design, or their citizens' own faith the government areas least subject to democratic oversight. Private companies lose money or even go broke and disappear if they mess up; intelligence agencies usually get new top-level officers and a budget increase.

As an aside, even individuals are steered away from applying AI algorithms instead of consuming their services, through product design and, increasingly, laws that prohibit them from reprogramming their own devices with smarter or at least more loyal algorithms.

This is a huge loss of potential welfare — we are getting worse public services, and at a higher cost, than we could given the available technology — but it's also part of a wider political change, as (big) corporate entities gain operational and strategic advantages that shift the balance of power away from democratically elected organizations. It's one thing for private individuals to own the means of production, and another when they (and often business-friendly security agencies) have a de facto monopoly on superhuman smarts.

States originally gained part of their power through early and massive adoption of information technologies, from temple inventories in Sumer to tax censuses and written laws. The way they are now lagging behind bodes ill for the future quality of public services, and for democratic oversight of the uses of AI technologies.

It would be disingenuous to say that this is the biggest long- and not-so-long-term problem states are facing, but only because there are so many other things going wrong or still to be done. But it's something that will have to be dealt with; not just with useful but superficial online access to existing services, or with the use of internet media for public communication, but also with deep, sustained investment in the kind of ubiquitous AI-assisted and AI-delegated operations that increasingly underlie most of the private economy. Politically, organizationally, and culturally as near-impossible as this might look.

The recently elected Argentinean government has made credible national statistics one of its earliest initiatives, less an act of futuristic boldness than a return to the 20th century baseline of data-driven decision-making, a baseline whose abandonment by the previous government was not without large political and practical costs. By failing to make intensive use of AI technologies in their public services, most governments in the world are failing to measure up to the technological baseline of the current century, an almost equally serious oversight.

The gig economy is the oldest one, and it's always bad news

Let's say you have a spare bedroom and you need some extra income. What do you do? You do more of what you've trained for, in an environment with the capital and tools to do it best. Anything else only makes sense if the economy is badly screwed up.

The reason is quite simple: unless you work in the hospitality industry, you are better — able to extract from it a higher income — at doing whatever else you're doing than you are at being a host, or you wouldn't take it up as a gig, but rather switch to it full time. Suppliers in the gig economy (as opposed to professionals freelancing in their area of expertise) are by definition working more hours but less efficiently, whether because they don't have the training and experience, or because they aren't working with the tools and resources they'd take advantage of in their regular environments. The cheaper, lower-quality, badly regulated service they provide might be desirable to many customers, but this is achieved partly through de-capitalization. Every hour and dollar an accountant spends caring for a guest instead of, if he wants a higher income, doing more accounting or upgrading his tools, is a waste of his knowledge. From the point of view of overall capital and skill intensity, a professional low-budget hotel chain would be vastly more efficient over the long term (of course, to do that you need to invest capital in premises and so on, instead of in vastly cheaper software and marketing).
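
A back-of-the-envelope version of the argument, with entirely made-up numbers:

    # Made-up numbers: the point is the comparison, not the magnitudes.
    accounting_rate = 60.0    # $/hour doing the work they trained for
    hosting_rate    = 15.0    # effective $/hour hosting a spare bedroom

    extra_hours = 10
    print(extra_hours * accounting_rate)   # 600.0: ten more hours of accounting
    print(extra_hours * hosting_rate)      # 150.0: the same ten hours as a host
    # The gig only wins if those accounting hours can't be sold at all,
    # i.e. if demand for the skill is missing.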

The only reason for an accountant, web designer, teacher, or whatnot to do "gigs" instead of extra hours, freelance work, or similar, is that there is no demand for their professional labor. While it's entirely possible that overtime or freelance work might be relatively less valuable than the equivalent time spent at their main job, for the gig to win out their professional work would have to pay less than a gig for which they have little training and few tools. That's not what a capital- and skill-intensive economy looks like.

For a specific occupation falling out of favor, this is just the way of things. For wide swaths of the population to find themselves in this position, perhaps employed but earning less than they would like, and unable to trade more of their specialized labor for income, the economy as a whole has to be suffering from depressed demand. What's more, they still have to contend with competitors with more capital but still looking to avoid regulations (e.g., people buying apartments specifically to rent via Airbnb), in turn lowering their already low gig income.

This is a good thing if you want cheaper human-intensive services or have invested in Airbnb and similar companies, and it's bad news if you want a skill-intensive economy with proportionally healthy incomes.

In the context of the gig economy, flexibility is a euphemism for I have a (perhaps permanent!) emergency and can't get extra work, and efficiency refers to the liquidity of services, not the outcome of high capital intensity. And while renting a room or being an Uber driver might be less unpleasant than, and downright utopian compared to, the alternatives open to those without a room to rent or an adequate car, the argument that it's fun doesn't survive the fact that nobody has ever been paid to go and crash on other people's couches.

Neither Airbnb nor Uber is harmful in itself — who doesn't think cab services could use a more transparent and effective dispatch system? — but customer ratings don't replace training, certification, and other forms of capital investment. Shiny apps and cool algorithms aside, a growing gig economy is a symptom of an at least partially de-skilling one.

Bitcoin is Steampunk Economics

From the point of view of its largest financial backers, the fact that Bitcoin combines 21st century computer science with 17th century political economy isn't an unfortunate limitation. It's what they want it for.

We have grown as used to the concept of money as to any other component of our infrastructure, but, all things considered, it's an astoundingly successful technology. Even in its simplest forms it helps solve the combinatorial explosion implicit in any barter system, which is why even highly restricted groups, like prison populations, implement some form of currency as one of the basic building blocks of their polities.
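
The combinatorial point is simple counting: with n goods, pure barter needs an exchange ratio for every pair of goods, while money needs only one price per good.

    # n goods: barter needs n*(n-1)/2 pairwise exchange ratios; money needs n prices.
    def barter_ratios(n):
        return n * (n - 1) // 2

    for n in (10, 100, 1000):
        print(n, barter_ratios(n), n)
    # 10 goods: 45 ratios vs 10 prices; 100: 4950 vs 100; 1000: 499500 vs 1000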

Fiat money is a fascinating iteration of this technology. It doesn't just solve the logistical problems of carrying with you an impractical amount of shiny metals or some other traditional reference commodity, it also allows a certain degree of systemic adaptation to external supply and demand shocks, and pulls macroeconomic fine-tuning away from the rather unsuitable hands of mine prospectors and international trading companies.

A protocol-level hack that increases systemic robustness in a seamless, distributed manner: technology-oriented people should love this. And they would, if only that hack weren't, to a large degree... ugh... political. From the point of view of somebody attempting to make a ton of money by, literally, making a ton of money, the fact that a monetary system is a common good managed by a quasi-governmental centralized organization isn't a relatively powerful way to dampen economic instabilities, but an unacceptable way to dampen their chances of making said ton of money.

So Bitcoin was specifically designed to make this kind of adjustment impossible. In fact, the whole, and conceptually impressive, set of features that characterize it as a currency, from the distributed ledger to the anonymity of transfers to the mathematically controlled rate of bitcoin creation, presupposes that you can trust neither central banks nor financial institutions in general. It's a crushingly limited fallback protocol for a world where all central banks have been taken over by hyperinflation-happy communists.

The obvious empirical observation is that central banks have not been taken over by hyperinflation-happy communists. Central banks in the developed world have by and large mastered the art of keeping inflation low – in fact, they seem to have trouble doing anything else. True, there are always Venezuelas and Argentinas, but designing a currency based on the idea that they are at the cutting edge of future macroeconomic practice crosses the line from design fiction to surrealist engineering.

As a currency, Bitcoin isn't the future, but the past. It uses our most advanced technology to replicate the key features of an obsolete concept, adding some Tesla coils here and there for good effect. It's gold you can teleport; like a horse with an electric headlamp strapped to its chest, it's an extremely cool-looking improvement to a technology we have long superseded.

As computer science, it's magnificent. As economics, it's a steampunk affectation.

Where bitcoin shines, relatively speaking, is in the criminal side of the e-commerce sector — including service-oriented markets like online extortion and sabotage — where anonymity and the ability to bypass the (relative) danger of (nominally, if not always pragmatically) legal financial institutions are extremely desirable features. So far Bitcoin has shown some promise not as a functional currency for any sort of organized society, but in its attempt to displace the hundred dollar bill from its role as what one of William Gibson's characters accurately described as the international currency of bad shit.

This, again, isn't an unfortunate side effect, but a consequence of the design goals of Bitcoin. There's no practical way to avoid things like central bank-set interest rates and taxes, without also avoiding things like anti-money laundering regulations and assassination markets. If you mistrust government regulations out of principle and think them unfixable through democratic processes — that is, if you ignore or reject political technologies developed during the 20th century that have proven quite effective when well implemented — then this might seem to you a reasonable price to pay. For some, this price is actually a bonus.

There's nothing implicit in contemporary technologies that justifies our sometimes staggering difficulties managing common goods like sustainably fertile lands, non-toxic water reservoirs, books written by people long dead, the antibiotic resistance profile of the bacteria whose planet we happen to live in, or, case in point, our financial systems. We just seem to be having doubts as to whether we should, doubts ultimately financed by people well aware that there are a few dozen deca-billion fortunes to be made by shedding the last two or three centuries' worth of political technology development, and adding computationally shiny bits to what we were using back then.

Bitcoin is a fascinating technical achievement mostly developed by smart, enthusiastic people with the best of intentions. They are building ways in which it, and other blockchain technologies like smart contracts, can be used to make our infrastructures more powerful, our societies richer, and our lives safer. That most of the big money investing in the concept is instead attempting to recreate the financial system of late medieval Europe, or to provide a convenient complement to little bags of diamonds, large bags of hundred dollar bills, and bank accounts in professionally absent-minded countries, when they aren't financing new and excitingly unregulated forms of technically-not-employment, is completely unexpected.

The price of the Internet of Things will be a vague dread of a malicious world

Volkswagen didn't make a faulty car: they programmed it to cheat intelligently. The difference isn't semantics, it's game-theoretical (and it borders on applied demonology).

Regulatory practices assume untrustworthy humans living in a reliable universe. People will be tempted to lie if they think the benefits outweigh the risks, but objects won't. Ask a person if they promise to always wear their seat belt, and the answer will be at best suspect. Test the energy efficiency of a lamp, and you'll get an honest response from it. Objects fail, and sometimes behave unpredictably, but they aren't strategic, they don't choose their behavior dynamically in order to fool you. Matter isn't evil.

But that was before. Things now have software in them, and software encodes game-theoretical strategies as well as it encodes any other form of applied mathematics, and the temptation to teach products to lie strategically will be as impossible for companies to resist in the near future as it was for VW, steep as their punishment seems to be. As has always happened (and always will) in the area of financial fraud, they'll just find ways to do it better.
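
The general shape of that strategic lie is depressingly simple. A hypothetical sketch (the signals and thresholds here are invented, not VW's actual code):

    # Hypothetical defeat-device pattern: detect test conditions, behave accordingly.
    def looks_like_a_regulatory_test(speed_kmh, steering_angle_deg):
        # e.g. wheels turning while the steering wheel never moves,
        # as on a dynamometer in a testing lab
        return speed_kmh > 0 and steering_angle_deg == 0.0

    def emissions_mode(speed_kmh, steering_angle_deg):
        if looks_like_a_regulatory_test(speed_kmh, steering_angle_deg):
            return "full_emissions_controls"   # the honest-looking answer for the lab
        return "performance_mode"              # what the road, and sales, reward

    print(emissions_mode(50.0, 0.0))    # full_emissions_controls
    print(emissions_mode(50.0, 12.5))   # performance_mode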

Environmental regulations are an obvious field for profitable strategic cheating, but there are others. The software driving your car, tv, or bathroom scale might comply with all relevant privacy regulations, and even with their own marketing copy, but it'll only take a silent background software upgrade to turn it into a discreet spy reporting on you via well-hidden channels (and everything will have its software upgraded all the time; that's one of the aspects of the Internet of Things nobody really likes to contemplate, because it'll be a mess). And in a world where every device interacts with and depends on a myriad others, devices from one company might degrade the performance of a competitor's... but, of course, not when regulators are watching.

The intrinsic challenge to our legal framework is that technical standards have to be precisely defined in order to be fair, but this makes them easy to detect and defeat. They assume a mechanical universe, not one in which objects get their software updated with new lies every time regulatory bodies come up with a new test. And even if all software were always available, checking it for unwanted behavior would be unfeasible — more often than not, programs fail because the very organizations that made them haven't, or couldn't, make sure they behaved as intended.

So the fact is that our experience of the world will increasingly come to reflect our experience of our computers and of the internet itself (not surprisingly, as it'll be infused with both). Just as any user feels their computer to be a fairly unpredictable device full of programs they've never installed doing unknown things to which they've never agreed in order to benefit companies they've never heard of, inefficiently at best and actively malignant at worst (but how would you know?), cars, street lights, and even buildings will behave in the same vaguely suspicious way. Is your self-driving car deliberately slowing down to give priority to the higher-priced models? Is your green A/C really less efficient with a thermostat from a different company, or is it just not trying as hard? And your tv is supposed to only use its camera to follow your gestural commands, but it's a bit suspicious how it always offers Disney downloads when your children are sitting in front of it.

None of those things are likely to be legal, but they are going to be profitable, and, with objects working actively to hide them from the government, not to mention from you, they'll be hard to catch.

If a few centuries of financial fraud have taught us anything, it's that the wages of (regulatory) sin are huge, and punishment late enough that organizations fall into temptation time and again, regardless of the fate of their predecessors, or at least of those who were caught. The environmental and public health cost of VW's fraud is significant, but it's easy to imagine industries and scenarios where it'd be much worse. Perhaps the best we can hope for is that the avoidance of regulatory frameworks in the Internet of Things won't have the kind of occasional systemic impact that large-scale financial misconduct has accustomed us to.

We aren't uniquely self-destructive, just inexcusably so

Natural History is an accretion of catastrophic side effects resulting from blind self-interest, each ecosystem an apocalyptic landscape to the previous generations and a paradise to the survivors' thriving and well-adapted descendants. There was no subtle balance when the first photosynthetic organisms filled the atmosphere with the toxic waste of their metabolism. The dance of predator and prey takes its rhythm from the chaotic beat of famine, and its melody from an unreliable climate. Each biological innovation changes the shape of entire ecosystems, giving place to a new fleeting pattern that will only survive until the next one.

We think Nature harmonious and wise because our memories are short and our fearful worship recent. But we are among the first generations of the first species for which famine is no accident, but negligence and crime.

No, our destruction of the ecosystems we were part of when we first learned the tools of fire, farm, and physics is not unique in the history of our planet, it's not a sin uniquely upon us.

It is, however, a blunder, because we know better, and if we have the right to prefer to a silent meadow the thousands fed by the farms replacing it, we have no right to ignore how much water it's safe to draw, how much nitrogen we will have to use and where it'll come from, how to preserve the genes we might need and the disease resistance we already do. We made no promise to our descendants to leave them pandas and tigers, but we will indeed be judged poorly if we leave them a world changed by the unintended and uncorrected side effects of our own activities in ways that will make it harder for them to survive.

We aren't destroying the planet, couldn't destroy the planet (short of, in an ecological sense, sterilizing it with enough nuclear bombs). What we are doing is changing its ecosystems, and in some senses its very geology and chemistry, in ways that make it less habitable for us. Organisms that love heat and carbon in the air, acidic seas and flooded coasts... for them we aren't scourges but benefactors. Biodiversity falls as we change the environment with a speed, in an evolutionary scale, little slower than a volcano's, but the survivors will thrive and then radiate in new astounding forms. We may not.

Let us not, then, think survival a matter of preserving ecosystems, or at least not beyond what an aesthetic or historical sense might drive us to. We have changed the world in ways that make it worse for us, and we continue to do so far beyond the feeble excuses of ignorance. Our long-term survival as a civilization, if not as a species, demands that we change the world again, this time in ways that will make it better for us. We don't need biodiversity because we inherited it: we need it because it makes ecosystems more robust, and hence our own societies less fragile. We don't need to both stop and mitigate climate change because there's something sacred about the previous global climate: we need to do it because anything much worse than what we've already signed up for might be too much for our civilization to adapt to, and runaway warming might even be too much for the species itself to survive. We need to understand, manage, and increase sustainable cycles of water, soil, nitrogen, and phosphorus because that's how we feed ourselves. We can survive without India's tigers. But collapse the monsoon or the subcontinent's irrigation infrastructure and at least half a billion people will die.

We wouldn't be the first species killed by our own blind success, nor the first civilization destroyed by a combination of power and ignorance, empty cities the only reminders of better architectural than ecological insight. We know better, and should act in a way befitting what we know. Our problem is no larger than our tools, our reach no further than our grasp.

The only question is how hard we'll make things for ourselves before we start working in earnest to build a better world, one less harsh to our civilization, or at least not untenably more so. The question is how many people will unnecessarily die, and what long-term price we'll pay for our delay.

The Telemarketer Singularity

The future isn't a robot boot stamping on a human face forever. It's a world where everything you see has a little telemarketer inside it, one that knows everything about you and never, ever, stops selling things to you.

In all fairness, this might be a slight oversimplification. Besides telemarketers, objects will also be possessed by shop attendants, customer support representatives, and conmen.

What these much-maligned but ubiquitous occupations have in common (and I'm not talking here about their personal qualities or motivations; by and large, they are among the most exploited, and personally blameless, workers in the service economy) is that they operate under strict and explicitly codified guidelines that simulate social interaction in order to optimize a business metric.

When a telemarketer and a prospect are talking, of course, both parties are human. But the prospect is, however unconsciously, guided by a certain set of rules about how conversations develop. For example, if somebody offers you something and you say no, thanks, the expected response is for that party to continue the conversation under the assumption that you don't want it, and perhaps try to change your mind, but not to say ok, I'll add it to your order and we can take it out later. The syntax of each expression is correct, but the grammar of the conversation as a whole is broken, always in ways specifically designed to manipulate the prospect's decision-making process. Every time you have found yourself talking on the phone with a telemarketer, or interacting with a salesperson, far longer than you wanted to, it was because you grew up with certain unconscious rules about the patterns in which conversations can end; until they make the sale, they will neither initiate nor acknowledge any of them. The power isn't in their sales pitch, but in the way they take advantage of your social operating system, and in the fact that they are working with a much more flexible one.

Some people, generally described by the not always precise term sociopath, are just naturally able to ignore, simulate, or subvert these underlying social rules. Others, non-sociopathic professional conmen, have trained themselves to be able to do this, to speak and behave in ways that bypass or break our common expectations about what words and actions mean.

And then there are telemarketers, who these days work with statistically optimized scripts that tell them what to say in each possible context during a sales conversation, always tailored according to extensive databases of personal information. They don't need to train themselves beyond being able to convey the right emotional tone with their voices: they are, functionally, the voice interface of a program that encodes the actual sales process, and that, logically, has no need to conform to any societal expectation of human interaction.

It's tempting to call telemarketers and their more modern cousins, the computer-assisted (or rather computer-guided) sales assistants, the first deliberately engineered cybernetic sociopaths, but this would miss the point that what matters, what we are interacting with, isn't a sales person, but the scripts behind them. The person is just the interface, selected and trained to maximize the chances that we will want to follow the conversational patterns that will make us vulnerable to the program behind.

Philosophers have long toyed with a thought experiment called the Chinese Room: there is a person inside a room who doesn't know Mandarin, but has a huge set of instructions that tells her what characters to write in response to any combination of characters, for any sequence of interactions. The person inside doesn't know Mandarin, but anybody outside who does can have an engaging conversation by slipping messages under the door. The philosophical question is: who is the person outside talking with? Does the woman inside the room know Mandarin in some sense? Does the room know?
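For the programmatically inclined, the room's rulebook is just a lookup table. A minimal sketch in Python, with an obviously hypothetical two-entry table standing in for the thought experiment's assumed-complete one:

```python
# A toy Chinese Room: the "rulebook" maps any incoming message to a reply.
# This two-entry table is hypothetical; the thought experiment assumes one
# that covers every possible exchange.
RULEBOOK = {
    "你好": "你好！最近怎么样？",          # "Hello" -> "Hello! How have you been?"
    "挺好的，你呢？": "我也很好，谢谢。",  # "Quite well, and you?" -> "I'm well too, thanks."
}

def room_reply(message: str) -> str:
    """The person inside applies this rule knowing no Mandarin whatsoever."""
    return RULEBOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room_reply("你好"))
```

Nothing in room_reply understands anything; the conversation, such as it is, lives entirely in the table.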

Telemarketers are Chinese Rooms turned inside-out. The person is outside, and the room is hidden from us, and we aren't interacting socially with either. We only think we do, or rather, we subconsciously act as if we do, and that's what makes cons and sales much more effective than, rationally, they should be.

We rarely interact with salespeople, but we interact with things all the time. Not because we are socially isolated, but because, well, we are surrounded by things. We interact with our cars, our kitchens, our phones, our websites, our bikes, our clothes, our homes, our workplaces, and our cities. Some of them, like Apple's Siri or the Sims, want us to interact with them as if they were people, or at least consider them valid targets of emotional empathy, but what they are is telemarketers. They are designed, and very carefully, to take advantage of our cultural and psychological biases and constraints, whether it's Siri's cheerful personality or a Sim's personal victories and tragedies.

Not every thing offers us the possibility of interacting with it as if it were human, but that doesn't stop them from selling to us. Every day we see the release of more smart objects, whether consumer products or would-be invisible pieces of infrastructure. Connected to each other and to user profiling databases, they see us, know us, and talk to each other and to their creators (and to their creators' "trusted partners," who aren't necessarily anybody you have ever heard of) about us.

And then they try to sell us things, because that's how the information economy seems to work in practice.

In some sense, this isn't new. Expensive shoes try to look cool so other people will buy them. Expensive cars are in a partnership with you to make sure everybody knows how awesome they make you look. Restaurants hope that some sweet spot of service, ambiance, food, and prices will make you a regular. They are selling themselves, as well as complementary products and services.

But smart objects are a qualitatively different breed because, being essentially computers with some other stuff attached to them, their main function might not be what you bought them for.

Consider an internet-connected scale that not only keeps track of your weight, but also sends you congratulatory messages through a social network when you reach a weight goal. From your point of view, it's just a scale that has acquired a cheerful personality, like a singing piece of furniture in a Disney movie, but from the point of view of the company that built and still controls it, it's both a sensor giving them information about you, and a way to tell you things you believe are coming from something, or rather somebody, who knows you, in some ways, better than friends and family. Do you believe advertisers won't know whether to sell you diet products or a discount coupon for the bakery around the corner from your office? Or, even more powerfully, that your scale won't tell you You have earned yourself a nice piece of chocolate cake ;) if the bakery chain is the one who purchased that particular "pageview"?
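To make the mechanism concrete, here is a hedged sketch of how such a "pageview" might be sold, with every name and number invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    message: str
    price: float  # what the advertiser pays for this moment of your attention

def goal_reached_message(bids: list[Bid]) -> str:
    # The scale's "cheerful personality" is an auction: you see whichever
    # congratulation the highest bidder paid for.
    return max(bids, key=lambda bid: bid.price).message

bids = [
    Bid("DietCo", "Great job! Keep the momentum going with DietCo shakes.", 0.80),
    Bid("CornerBakery", "You have earned yourself a nice piece of chocolate cake ;)", 1.20),
]
print(goal_reached_message(bids))  # the bakery chain wins this user's milestone
```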

Let's go to the core of advertising: feelings. Much of the Internet is paid for by advertisers' belief that knowing your internet behavior will tell them how you're feeling and what you're interested in, which will make it easier to sell things to you. Yet browsing is only one of the things we do that computers know about in intimate detail. Consider the increasing number of internet-connected objects in your home that are listening to you. Your phone is listening for your orders, but that doesn't mean that's all it's listening for. The same goes for your computer, your smart TV (some of which are actually looking at you as well), even some children's dolls. As the Internet of Things grows way beyond the number of screens we can deal with, or apps we are willing to use to control them, voice will become the user interface of choice, just as smartphones overtook desktop computers. That will mean that possibly dozens of objects, belonging to a handful of companies, will be listening to you and selling that information to whatever company pays enough to become a "trusted partner." (Yes, this is and will remain legal. First, because we either don't read EULAs or do and try not to think about them. And second, because there's no intelligence agency on the planet that won't lobby to keep it legal.)

Maybe they won't be reporting everything you say verbatim; that will depend on how much external scrutiny there is of the industry. But your mood (did you yell at your car today, or sing aloud as you drove?), your movements, the time of day you wake up, which days you cook and which days you order takeout? Everybody trying to sell things to you will know all of this, and more.

That will be just an extension of the steady erosion of our privacy, and even of our expectation of it. More delicate will be the way in which our objects will actively collaborate in this sales process. Your fridge's recommendations when you run out of something might be oddly restricted to a certain brand, and, if you never respond to them, shift to the next advertiser with the best offer, meaning the most profitable one for whoever is running the fridge's true program, back in some data center somewhere. Your watch might choose to delay low-priority notifications while you're watching a commercial from a business partner or, more interestingly, choose to interrupt you every time there's a competitor's commercial. Your kitchen will tell you that it needs some preventive maintenance, but there's a discount on Chinese takeout if you press that button or just say "Sure, Kitchen Kate." If you put it on shuffle, your cloud-based music service will tailor its random-looking but very much not random selection based on where you are and what the customer tracking companies tell it you're doing. No sad music when you're at the shopping mall or buying something online! (Unless they have detected that you're considering buying something out of nostalgia or fear.) There's already a sophisticated industry dedicated to optimizing the layout, sonic background, and even smells of shopping malls to maximize sales, much in the same way that casinos are thoroughly designed to get you in and keep you inside. Doing this through the music you're listening to is just a personalized extension of these techniques, an edge that every advertiser is always looking for.

If, in defense of old-school human interaction, you skip the online shop and go into a store to talk with an actual human being, a computer will be telling each salesperson, through a very discreet earbud, how you're feeling today, and how to treat you so you'll feel you want to buy whatever they are selling: the functional equivalent of almost telepathic cold reading skills (except that it won't be so cold; the salesperson doesn't know you, but the sales program... the sales program knows you, in many ways, better than you know yourself). In a rush? The sales program will direct the salesperson to be quick and efficient. Had a lousy day? Warmth and sympathy. Or rather simulations thereof; you're being sold to by a sales program, after all, or an Internet of Sales Programs, all operating through salespeople, the stuff in your home and pockets, and pretty much everything in the world with an internet connection, which will be almost everything you see and most of what you don't.

Those methods work, and have probably worked since before recorded history, and knowing about them doesn't make them any less effective. They might not make you spend more in aggregate; generally speaking, advertising just shifts around how much you spend on different things. From the point of view of companies, it'll just be the next stage in the arms race for ever more integrated and multi-layered sensor and actuator networks, the same kind of precisely targeted network-of-networks military planners dream of.

For us as consumers, it might mean a world that'll feel more interested in us, with unseen patterns of knowledge and behavior swirling around us, trying to entice or disturb or scare or seduce us, each of us specifically, into buying or doing something. It will be a somewhat enchanted world, for better and for worse.

The Balkanization of Things

The smarter your stuff, the less you legally own it. And it won't be long before, besides resisting you, things begin to quietly resist each other.

Objects with computers in them (like phones, cars, TVs, thermostats, scales, ovens, etc) are mainly software programs with some sensors, lights, and engines attached to them. The hardware limits what they can possibly do — you can't go against physics — but the software defines what they will do: they won't go against their business model.

In practice this means that you can't (legally) install a new operating system on your phone, upgrade your TV with, say, a better interface, or replace the notoriously dangerous and very buggy embedded control software in your Toyota. You can use them in ways that align with their business models, but you have to literally become a criminal to use them otherwise, even if what you want to do with them is otherwise legal.

Bear with me for a quick historical digression: the way the web was designed to work (back in the prehistoric days before everything was worth billions of dollars), you would be able to build a page using individual resources from all over the world, and offer the person reading it ways to access other resources in the form of a dynamic, user-configurable, infinite book: a hypertext that mostly remains only as the ht in http://.

What we ended up with was, of course, a forest of isolated "sites" that jealously guard their "intellectual property" from each other, using the brilliant set of protocols that was meant to give us an infinite book merely as a way for their own pages to talk with their servers and their user trackers, and so on; and woe to anybody who tries to "hack" a site to use it in some other way, at least without a license fee and severe restrictions on what they can do. What we have is still much, much better than what we had, and if Facebook has its way and everything becomes a Facebook post or a Facebook app we'll miss the glorious creativity of 2015, but what we could have had still haunts technology so deeply that it's constantly trying to resurface on top of the semi-broken Internet we did build.

Or maybe there was never a chance once people realized there was lots of money to be made with these homogeneous, branded, restricted "websites." Now processors with full network stacks are cheap enough to be put in pretty much everything (including other computers; funnily enough, computers have inside them entirely different smaller computers that monitor and report on them). So everybody in the technology business is imagining a replay of the internet's story, only at a much larger scale. Sure, we could put together a set of protocols so that every object in a city can, with proper authorization, talk with every other object regardless of who made it. And, sure, we could make it possible for people to modify their software to figure out better ways of doing things with the things they bought, things that make sense to them, without attaching license fees or advertisements. We would make money out of it, and people would have a chance to customize, explore, and fix design errors.

But you know how the industry could make more money, and have people pay for any new feature they want, and keep design errors as deniable and liability-free as possible? Why, it's simple: these cars talk with these health sensors only, and these fridges only with these e-commerce sites, and you can't prevent your shoes from selling your activity habits to insurers and advertisers because that'd be illegal hacking. (That the NSA and the Chinese get to talk with everything is a given.)
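In pseudo-concrete terms, interoperability stops being a protocol question and becomes a lookup into a table of business deals. A deliberately simplistic sketch, with every vendor name invented:

```python
# Who may talk to whom is a business decision, not a technical one.
# All names here are hypothetical.
TRUSTED_PARTNERS = {
    "AcmeCar": {"AcmeHealth", "AcmeInsurance"},
    "FridgeCo": {"ShopMartGroceries"},
    "RunFastShoes": {"InsureCo", "AdNetworkX"},  # and you can't opt out
}

def may_connect(vendor_a: str, vendor_b: str) -> bool:
    """Devices pair only if their vendors have a deal in place."""
    return (vendor_b in TRUSTED_PARTNERS.get(vendor_a, set())
            or vendor_a in TRUSTED_PARTNERS.get(vendor_b, set()))

print(may_connect("AcmeCar", "FridgeCo"))  # False: no deal, no data, no feature
```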

The possibilities for "synergy" are huge, and, because we are building legal systems that make reprogramming your own computers a crime, very monetizable. Logically, then, they will be monetized.

It (probably) won't be any sort of resistentialist apocalypse. Things will mostly be better than before the Internet of Things, although you'll have to check that your shoes are compatible with your watch, remember to move everything with a microphone or a camera out of the bedroom whenever you have sex, even if they seem turned off (probably something you should already be doing), and there will be some fun headlines when a hacker from [insert your favorite rogue country, illegal group, or technologically-oriented college here] decides technology has finally caught up with Ghost in the Shell in terms of security boondoggles, breaks into Toyota's network, and stalls a hundred thousand cars in Manhattan during rush hour.

It'll be (mostly) very convenient, increasingly integrated into a few competing company-owned "ecosystems" (do you want to have a password for each appliance in your kitchen?), indubitably profitable (not just the advertising possibilities of knowing when and how you woke up; logistics and product design companies alone will pay through the nose for the information), and yet another huge lost opportunity.

In any case, I'm completely sure we'll do better when we develop general purpose commercial brain-computer interfaces.

Yesterday was a good day for crime

Yesterday, a US judge helped the FBI strike a big blow in favor of the next generation of sophisticated criminal organizations by sentencing Silk Road operator Ross Ulbricht (aka Dread Pirate Roberts) to life without parole. The feedback they gave the criminal world was as precise and useful as any high-priced consultant's could ever be: until its attention-seeking, increasingly unstable human operator messed up, the system worked very well. The next iteration is obvious: highly distributed markets with little or no human involvement. And law enforcement is woefully, structurally, abysmally unprepared to deal with this.

To be fair, they are already not dealing well with the existing criminal landscape. It was easier during the last century, when large, hierarchical cartels led by flamboyant psychopaths provided media-friendly targets vulnerable to the kind of military hardware and strategies favored by DEA doctrine. The big cartels were wiped out, of course, but this only led to a more decentralized and flexible industry that has proven so effective at providing the US and Western Europe with, e.g., cocaine, in a stable and scalable way, that demand is so thoroughly fulfilled they had to seek new products and markets to grow their business. There's no War on Drugs to be won, because they aren't facing an army, but an industry fulfilling a ridiculously profitable demand.

(The same, by the way, has happened during the most recent phase of the War on Terror: statistical analysis has shown that violence grows after terrorist leaders are killed, as they are the only actors in their organizations with a vested interest in a tactically controlled level of violence.)

In terms of actual crime reduction, putting down the Silk Road was as useless a gesture as closing down a torrent site, and for the same reason. Just as the same characteristics of the internet that make it so valuable make P2P file sharing unavoidable, the same financial, logistical, and informational infrastructures that make the global economy possible also make decentralized drug trafficking unavoidable.

In any case, what's coming is much, much worse than what's already happening. Because, and here's where things get really interesting, the same technological and organizational trends that are giving an edge to the most advanced and effective corporations are also almost tailor-made to provide drug trafficking networks with an advantage over law enforcement (this is neither coincidence nor malevolence; the difference between Amazon's core competency and a wholesale drug operator's is regulatory, not technical).

To begin with, blockchains are shared, cryptographically robust, globally verifiable ledgers that record commitments between anonymous entities. That, right there, solves all sorts of coordination issues for criminal networks, just as it does for licit business and social ones.
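A minimal sketch of the core property in Python: a hash-chained ledger that any party holding a copy can verify, leaving out consensus, signatures, and everything else a real blockchain adds.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """A toy append-only ledger of commitments between pseudonymous parties."""
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "data": "genesis", "ts": 0.0}]

    def commit(self, data: str) -> None:
        # Each block embeds the hash of its predecessor.
        self.chain.append({"prev": block_hash(self.chain[-1]),
                           "data": data, "ts": time.time()})

    def verify(self) -> bool:
        # Anybody with a copy can check that no past commitment was rewritten.
        return all(b["prev"] == block_hash(a)
                   for a, b in zip(self.chain, self.chain[1:]))

ledger = Ledger()
ledger.commit("pseudonym_A commits 0.1 BTC to pseudonym_B")
assert ledger.verify()  # tamper with any earlier block and this fails
```

Tampering with any block changes its hash and breaks every link after it; that's the "globally verifiable" part, and neither party ever needs a name.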

Driverless cars and cheap, plentiful drones, by making all sorts of personal logistics efficient and programmable, will revolutionize the "last mile" of drug dealing along with Amazon deliveries. Like couriers, drones can be intercepted. Unlike couriers, there's no risk to the sender when this happens. And upstream risk is the main driver of prices in the drugs industry, particularly at the highest levels, where product is ridiculously cheap. It's hard to imagine a better way to ship drugs than driverless cars and trucks.

But the real kicker will be the combination of a technology that already exists (very large scale botnets, composed of thousands or hundreds of thousands of hijacked computers running autonomous code provided by central controllers) and one that is close to being developed (reliable autonomous organizations based on blockchain technologies, the e-commerce equivalent of driverless cars). Put together, they will make it possible for a drug user with a verifiable track record to buy from a seller with an equally verifiable reputation, through a website that will exist on somebody's home machine only until the transaction is finished, and to receive the product via an automated vehicle looking exactly the same as thousands of others (if not a remotely hacked one), which will forget the point of origin of the product as soon as it has left it, and the address of the buyer as soon as it has delivered its cargo.

Of course, this is just a version of the same technologies that will make Amazon and its competitors win over the few remaining legacy shops: cheap scalable computing power, reliable online transactions, computer-driven logistical chains, and efficient last-mile delivery. The main difference: drug networks will be the only organizations where data science is applied to scale and improve the process of forgetting data instead of recording it (an almost Borgesian inversion not without its own poetry). Lacking any key fixed assets, whether material, financial, or human, they'll be completely unassailable by any law enforcement organization still focused on finding and shutting down the biggest "crime bosses."

That's ineffective today, and will be absurd tomorrow, which highlights one of the main political issues of the early 21st century. Gun advocates in the US often note that "if guns are outlawed, only the outlaws will have guns," but the important issue in politics-as-power, as opposed to politics-as-cultural-signalling, isn't guns (or at least not the kind of guns somebody without a friend in the Pentagon can buy): if the middle class and civil society don't learn to leverage advanced autonomous distributed logistical networks, only the rich and the criminals will. And if you think things are going badly now...

The post-Westphalian Hooligan

Last Thursday's unprecedented incidents at one of the world's most famous soccer matches illustrate the dark side of the post- (and pre-) Westphalian world.

The events are well known, and were recorded and broadcast in real time by dozens of cameras: one or more fans of Boca Juniors managed to open a small hole in the protective plastic tunnel through which River Plate players were exiting the field at the end of the first half, and attacked some of them with, it's believed, both a flare and a chemical similar to mustard gas, causing vision problems and first-degree burns to some of the players.

After this, it took more than an hour for match authorities to decide to suspend the game, and more than another hour for the players to leave the field, as police feared they might be injured by the roughly two hundred fans chanting and throwing projectiles from the area of the stands from which they had attacked the River Plate players. And let's not forget the now-mandatory illegal drone flown over the field by a fan in the stands.

The empirical diagnosis is unequivocal: the Argentine state, as defined and delimited by its monopoly of force over its territory, has retreated from soccer stadiums. The police force present in the stadium (ten times as numerous as the remaining fans) could neither prevent, stop, nor punish their violence, nor even force them to leave the stadium. What other proof of a de facto independent territory could be required? This isn't, as club and security officials put it, the work of a maladjusted few, or even an irrational act. It's the oldest and most effective form of political statement: here and now, I have the monopoly of force. Here and now, this is mine.

What decision-makers get in exchange for this territorial grant, and what other similar exchanges are taking place, are local details for a separate analysis. This is the darkest and oldest part of the characteristically post-Westphalian development of states relinquishing sovereignty over parts of their territory and functions in exchange for certain services, in partial reversion to older patterns of government. The grantee might be a band of hooligans, a special economic zone, a prison gang, or a local or foreign military. The mechanics and results are the same, even in nominally fully functional states, and there is no reason to expect them to be universally positive or free of violence. When or where has it been otherwise in world history?

This isn't a phenomenon exclusive to the third world, or to ostensibly failed states, particularly in its non-geographical manifestations: many first world countries have effectively lost control of their security forces, and, taxing authority being the other defining characteristic of the Westphalian state, they have also relinquished sovereignty over their biggest companies, which are de facto exempt from taxation.

This is what the weakening of the nation-state looks like: not a dozen new Athens or Florences, but weakened tax bases and fractal gang wars over surrendered state territories and functions, streamed live.

The most important political document of the century is a computer simulation summary

To hell with black swans and military strategy. Our direst problems aren't caused by the unpredictable interplay of chaotic elements, nor by the evil plans of people who wish us ill. Global warming, worldwide soil loss, recurrent financial crises, and global health risks aren't strings of bad luck or the results of terrorist attacks; they are the depressingly persistent outcomes of systems in which each actor's best choice adds up to a global mess.

It's well known to economists as the tragedy of the commons: the marginal damage to you of pumping another million tons of greenhouse gases into the atmosphere is minimal compared with the economic advantages of all that energy, so everybody does it, and enough greenhouse gases get pumped that they're well on the way to becoming a problem for everybody; yet nobody stops, or even slows down significantly, because doing so would accomplish very little on its own, and be very hurtful to whoever does it. So there are treaties and conferences and increased fuel efficiency standards, just enough to be politically advantageous, but not nearly enough to make a dent in the problem. In fact, we have invested much more in making oil cheaper than in limiting its use, which gives you a more accurate picture of where things are going.
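The incentive structure fits in a dozen lines of code. A toy model with made-up payoff numbers, not an economic calibration:

```python
# Tragedy of the commons in miniature: emitting yields a private gain, while
# the damage it causes is split among all N actors. All numbers are
# illustrative assumptions.
N = 100
PRIVATE_GAIN = 1.0    # benefit to an actor from its own emissions
SHARED_DAMAGE = 3.0   # total damage those emissions cause, spread over everyone

def payoff(i_emit: bool, others_emitting: int) -> float:
    emitters = others_emitting + (1 if i_emit else 0)
    return (PRIVATE_GAIN if i_emit else 0.0) - SHARED_DAMAGE * emitters / N

for others in (0, 50, 99):
    edge = payoff(True, others) - payoff(False, others)
    print(f"{others} others emitting: emitting is better by {edge:.2f}")

# Emitting wins by PRIVATE_GAIN - SHARED_DAMAGE / N = 0.97 no matter what
# anybody else does; yet when all 100 emit, each actor nets 1.0 - 3.0 = -2.0,
# far worse than the 0.0 of universal restraint.
```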

Here is that more accurate picture, from the IPCC:

[Figure SPM-5 from the IPCC report: multi-model projections of global average surface warming under different emissions scenarios]

A first observation: note that the A2 model, the one in which temperatures rise by an average of more than 3°, was the "things go more or less as usual" model, not the "things go radically wrong" model... and it was not the "unconventional sources make oil dirt cheap" scenario either. At this point, it might as well be the "wildly optimistic" scenario.

A second observation: just to be clear, because worldwide averages can be tricky, 3° doesn't translate to "slightly hotter summers"; it translates to "technically, we are not sure we'll be able to feed China, India, and so on." Something closer to 6°, which is beginning to look more likely as we keep doing the things we do, translates to "we sure will miss the old days when we had humans living near the tropics."

And a third observation: all of these reports usually end at the year 2100, even though people being born now are likely to be alive then (unless they live in a coastal city at a low latitude, that is), not to mention the grandchildren of today's young parents. This isn't because it becomes impossible to predict what will happen afterwards; the uncertainty ranges grow, of course, but this is still thermodynamics, not chaos theory, and the overall trend certainly doesn't become mysterious. It's simply that, as the Greeks noted, there's a fear that drives movement and a fear that paralyzes, and any reasonable scenario for the years after 2100 is more likely to belong to the second kind.

But let's take a step back and notice the way this graph, the summary of multiple computer simulations driven by painstaking research and data gathering, maps our options and outcomes in a way that no political discourse can hope to match. To compare it with religious texts would be wrong in every epistemological sense, but it might be appropriate in every political one. When "climate skeptics" doubt, they doubt this graph, and when ecologists worry, they worry about this graph. Neither the worry nor the skepticism is doing much to change the outcomes, but at least the discussion is centered not on an individual, a piece of land, or a metaphysical principle, but on the space of trajectories of a dynamical system of which we are one part.

It's not that graphs or computer simulations are more convincing than political slogans; it's that we have reached a level of technological development and sheer ecological footprint at which our own actions and goals (the realm of politics) have escaped the descriptive possibilities of pure narrative, and we are thus forced to recruit computer simulations to grapple, conceptually if nothing else, with our actions and their outcomes.
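To give a sense of what such a simulation looks like in miniature: the sketch below is deliberately crude and zero-dimensional, using the rough proportionality of warming to cumulative CO2 emissions (about 0.45 °C per 1000 GtCO2; both that coefficient and the two emission paths are illustrative assumptions, nothing like the coupled ocean-atmosphere models behind the figure above).

```python
import numpy as np

TCRE = 0.45e-3  # °C of warming per GtCO2 of cumulative emissions (rough estimate)

years = np.arange(2000, 2101)
# Two invented emission paths, in GtCO2 per year.
scenarios = {
    "business as usual": 40.0 * 1.01 ** (years - 2000),              # ~1%/yr growth
    "aggressive cuts":   40.0 * np.maximum(0.0, 1 - (years - 2000) / 50.0),
}

for name, emissions in scenarios.items():
    warming = TCRE * np.cumsum(emissions)  # °C above the year-2000 baseline
    print(f"{name}: about +{warming[-1]:.1f} °C by 2100")
```

Even a model this naive reproduces the qualitative shape of the argument: the trajectories diverge by whole degrees, and the divergence is entirely a function of choices made decades earlier.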

It's not clear that we will find our way to a future that avoids catastrophe and horror. There are possible ways, of course: moving completely away from fossil fuels, geoengineering, ubiquitous water and soil management and recovery programs, and so on. It's all technically possible, given huge investments, a global sense of urgency, and a ruthless focus on preserving, and making more resilient, the most necessary ecological services. That we're seeing nothing of the kind, but instead a worsening of already bad tendencies, is due to, yes, thermodynamics and game theory.

It's a time-honored principle of rhetoric to end a statement in the strongest, most emotionally potent and conceptually comprehensive way possible. So here it is:

[Figure SPM-5 from the IPCC report, again]

The changing clusters of terrorism

I've been looking at the data set from the Global Terrorism Database, an impressively detailed register of terrorism events worldwide since 1970. Before delving into the finer-grained data, the first questions I wanted to ask for my own edification were:

  • Is the frequency of terrorism events in different countries correlated?
  • If so, does this correlation change over time?

What I did was summarize event counts by country and month, segment the data set by decade, and build correlation clusters for the countries with the most events in each decade, based on co-occurring event counts. A sketch of the pipeline follows.
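This is roughly what that looks like in Python, assuming the GTD export lives in a gtd.csv with the database's standard iyear, imonth, and country_txt columns (the file name, the top-20 cutoff, and the average-linkage choice are all my assumptions):

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

events = pd.read_csv("gtd.csv", usecols=["iyear", "imonth", "country_txt"])
events["decade"] = (events["iyear"] // 10) * 10

for decade, chunk in events.groupby("decade"):
    # Monthly event counts, one column per country.
    counts = (chunk.groupby(["iyear", "imonth", "country_txt"])
                   .size().unstack(fill_value=0))
    top = counts.sum().nlargest(20).index      # countries with the most events
    corr = counts[top].corr()                  # co-occurrence of monthly counts
    # Hierarchical clustering on correlation distance, one dendrogram per decade.
    dist = squareform((1 - corr).values, checks=False)
    dendrogram(linkage(dist, method="average"), labels=list(corr.columns))
    plt.title(f"{decade}s")
    plt.show()
```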

The '70s look more or less how you'd expect them to:

[figure: terrorism event correlation clusters, 1970s]

The correlation between El Salvador and Guatemala, starting to pick up in the 1980s, is both expected and clear in the data. Colombia and Sri Lanka's correlation is probably acausal, although you could argue for some structural similarities between the two conflicts:

[figure: terrorism event correlation clusters, 1980s]

I don't understand the 1990s, I confess (on the other hand, I didn't understand them as they happened, either):

[figure: terrorism event correlation clusters, 1990s]

The 2000s make more sense (loosely speaking): Afghanistan and Iraq are close, and so are India and Pakistan.

[figure: terrorism event correlation clusters, 2000s]

Finally, the 2010s are still ongoing, but the pattern in this graph could be used to organize the international terrorism section of a news site:

[figure: terrorism event correlation clusters, 2010s]

I find it most interesting how the India-Pakistan link of the 2000s has shifted to a Pakistan-Afghanistan-Iraq one. Needless to say, caveat emptor: shallow correlations between small groups of short time series are only one step above throwing bones on the ground and reading the resulting patterns, in terms of analytic reliability and power.

That said, it's possible in principle to use a more detailed data set (ideally, one including more than visible, successful events) to understand and talk about international relationships of this kind. In fact, there's quite sophisticated modeling work being done in this area, both academically and in less open venues. It's a fascinating field, and even if it might not lead to less violence in any direct way, anything that enhances our understanding of, and our public discourse about, these matters is a good thing.