Decentralized Medicine (DeMed): how to survive as a doctor and thrive as a patient in a world of ubiquitous AI

Consider your body as heir to a billion years of increasingly stubborn self-adjusting chemical reactions. “Being alive” is not a fixed state: it’s what we say when any number of feedback loops, from cellular metabolism to childhood development, are operating within more or less life-like bounds. Every time one of those processes shifts in one direction or another — say, your blood sugar level falls or the concentration of NAD+ in your cells rises — this impacts other systems in ways that tend to move things back to their usual paths.
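
The feedback-loop idea can be sketched as a toy simulation: a variable (say, blood glucose) is knocked away from its usual value, and a negative-feedback response proportional to the deviation pulls it back toward a set point. The numbers and the single-loop structure are invented for illustration and have no physiological meaning.

```python
# Toy model of a homeostatic feedback loop: a disturbance pushes a
# variable away from its set point, and a proportional negative-feedback
# response nudges it back. Illustrative only; no physiological accuracy.

def simulate(set_point=100.0, gain=0.3, steps=50):
    level = 140.0  # start well above the set point (a disturbance)
    history = []
    for _ in range(steps):
        error = level - set_point
        level -= gain * error  # response proportional to the deviation
        history.append(level)
    return history

trace = simulate()
print(round(trace[0], 1), round(trace[-1], 1))  # converges toward 100
```

“Being alive,” in the essay’s framing, is the regime where loops like this keep converging; “dying” is when the disturbance outruns the gain.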

It’s a very sophisticated and only partly mapped bag of kludges, but kludges they are. There’s much they can’t handle well enough to keep working (and then we die), and even at their best, their normal operation progressively damages those very systems (something we call “aging”) (and then we die).

Fortunately, we have developed external loops of information-action, for example the “infection-antibiotic” and “high cholesterol-statins” loops, which contribute, limited as they are, to maintaining our systems in a somewhat better condition for a somewhat longer period of time.

There’s much that can be said, and I often do say it, about what is and isn’t part of this body of knowledge and tools, where the gaps are, and how we might best approach filling them, but I wanted to talk here about the structural problems of this “medical loop.” Seen as a stabilization system (i.e., one for “keeping alive”) — a view of medicine as a form of applied cybernetics — it has three distinct and serious problems:

  • The information flow between the different components is narrow and sporadic. Outside the specific context of a serious medical condition, the medical system knows little about any given individual, and this information is updated infrequently, either through occasional checkups or when the individual gets hit by something bad enough that they notice it. We aren’t much better at understanding our own health status: the human nervous system is very, very bad at sensing many physiological problems before they have become catastrophic.
  • The largest concentrations of knowledge are the ones furthest away in response time from the problem. Our physiology “knows” nothing about medicine, we can know a bit more but have much less information about our bodies, and medical institutions know a lot but have slow and narrow access to our physiological information. So the quickest reactions are handled by the least-prepared components of the system, which applies as well to the logistical aspects of these responses.
  • Systemic responsibility, or what we might call attention, is also very badly distributed with respect to triggering access to medical knowledge and tools. Our bodies are, to continue with this unified map, in the business of staying alive 24/7, but a liver cannot get a medical consultation except through the last-ditch process of triggering a medical emergency. A doctor can have all the relevant knowledge and tools, but they only interact with us upon deliberate consultation or emergency. We, suspending for a bit the concept of psychobiological unity, are the ones with most of the responsibility for initiating access to medical resources, but we don’t have a lot of physiological information, we don’t have a lot of knowledge to interpret it, and we don’t have time. Our personal lives are organized around the assumption that most of the time our bodies are working well enough on their own (there are ableist assumptions there that definitely deserve analysis, but this isn’t the place), so, except for emergencies, it’s not something we are supposed to have to pay attention to.
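
The first of these mismatches, sparse sampling, has a simple quantitative consequence: a slowly drifting biomarker crosses a danger threshold long before the next scheduled checkup notices it. The sketch below compares continuous monitoring with an annual sample; the drift rate and threshold are invented for illustration and carry no clinical meaning.

```python
# A biomarker drifts upward over three years. Continuous monitoring
# flags it the day it crosses a threshold; an annual checkup only
# catches it at the next scheduled sample. Toy numbers throughout.

def first_detection(values, threshold, sample_every):
    """Day index of the first sampled value above threshold, else None."""
    for day, value in enumerate(values):
        if day % sample_every == 0 and value > threshold:
            return day
    return None

# drifts by 0.1/day from a baseline of 50; crosses 80 just after day 300
values = [50 + day / 10 for day in range(3 * 365)]
threshold = 80.0

continuous = first_detection(values, threshold, sample_every=1)
annual = first_detection(values, threshold, sample_every=365)
print(continuous, annual)  # → 301 365: the checkup lags by about two months
```

The gap between the two detection days is the price of sporadic information flow, which is exactly what a continuously attentive system would eliminate.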

These structural mismatches aren’t accidental. They are reasonable adaptations to cognitive and informational costs. As long as knowledge and attention could only be active in humans and it was expensive to replicate, having it centralized in experts made sense, even at the cost of reduced effectiveness; hence the desirability of a “royal physician” as a perk of rulership or wealth in many cultures.

Besides being inefficient in terms of individual outcomes, this structure is also detrimental to medical practice as an activity. Most doctors are forced to multitask between too many patients, not paying as much attention to each as they would like, working without the full information they know would lead to optimal results, and with less time and resources for continuous improvement than the speed of progress warrants.

The traditional, pre-computing pattern of cognitive costs made these mismatches unavoidable, but the supply of cognitive resources is now going through an exponential explosion. The full impact of AI on medicine can’t be understood without considering how it exponentially reduces both the marginal cost of a new doctor’s worth of medical knowledge and the marginal cost of an hour of a doctor’s attention.

The latter is the subtler but more important change. Consider the rise to near-ubiquity of security cameras: it wasn’t driven by any particular level of intelligence in them, but by the fact that they watch all the time; even if nobody is looking through them now, their attention and memory are relentless. This changed the dynamics of site security in ways that modified even our cultural understanding of public space: essentially, through the mass production of attention.

So what would a de- (and, from the point of view of patients, re-)centralized medicine look like? It would certainly not be the same as “patient-centered” practice as currently understood, which is about changing practices within the existing structural distribution of knowledge, attention, and tools.

I see two key elements, both of them still in early development, characterizing this new structure. The first would be a sort of personal attentional engine — you can imagine it as an app, but it need not be — programmed exclusively with the goal of keeping you as healthy as possible. This isn’t the same as offering health knowledge or tips, allowing easier medical appointments, or helping acquire medicines; those are all relevant actions, but in existing systems they are mostly initiated by the user and built around very different control loops. A relevant analogy would be algorithmic curation in social networks: you don’t have to tell the platform what you are interested in, nor ask for tips about what to read or watch. The system is built to relentlessly (if it were a human, we might say “psychotically”) keep you in it so it can sell more ads, without and sometimes despite whatever you might want. So picture that same level of inhuman dedication focused on constantly acquiring as much information as possible about your physiology from any sources available, crossing it with the latest medical knowledge, and doing whatever it can automatically — while enthusiastically suggesting and explaining to you the things it cannot do itself — to help keep you as healthy as possible. Every person would have a 24/7 royal physician that never sleeps, just as most of us have a 24/7 personal biographer that follows us around (whether we want them or not).
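
The engine’s core loop can be sketched in a few lines: ingest readings, cross them against a knowledge base, act automatically where allowed, and surface everything else as an explained suggestion. Every name, threshold, and rule here is invented for illustration; a real system would draw on clinical guidelines and far richer models.

```python
# Sketch of the attentional engine's sense-assess-act loop.
# All rules and readings are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Finding:
    signal: str
    severity: str       # "attention" or "urgent"
    automatable: bool   # may the engine act without asking?

def review(readings, rules):
    """Flag any reading outside its (low, high) reference range."""
    findings = []
    for signal, value in readings.items():
        low, high, automatable = rules[signal]
        if not (low <= value <= high):
            severity = "urgent" if value > high * 1.2 else "attention"
            findings.append(Finding(signal, severity, automatable))
    return findings

def act(findings):
    """Split findings into automated actions and explained suggestions."""
    actions, suggestions = [], []
    for f in findings:
        if f.automatable and f.severity != "urgent":
            actions.append(f"schedule follow-up test for {f.signal}")
        else:
            suggestions.append(f"{f.signal} looks off ({f.severity}); consider a consultation")
    return actions, suggestions

rules = {"resting_hr": (50, 90, True), "glucose": (70, 110, False)}
readings = {"resting_hr": 102, "glucose": 95}
actions, suggestions = act(review(readings, rules))
print(actions, suggestions)  # the borderline heart rate becomes an automated follow-up
```

The important design choice is in `act`: the engine does what it is permitted to do on its own, and everything else becomes an explanation addressed to you, not a silent decision.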

That last observation points towards one of the key requirements for this sort of program, and indeed for any software system: who does it work for? Whose interests is it programmed around? Today, in the archetypal case, it’s the company’s founders first, then general shareholders, then advertisers, and then, perhaps, users, up to the point where they interfere with higher-level priorities. Culture and politics have begun to catch up with this, developing a healthier level of skepticism that will certainly have an impact on the future development of medical systems.

This is one issue where medical institutions have an advantage over Big Tech: the health system, at least some of it, in some countries, still has a level of societal trust that tech companies have to a degree burned away, and would therefore be better placed to build these systems, although they have been slower to figure out that they should, and what to build and how. The window of reputational advantage is closing rapidly, but it is still an important one.

The second key element to make decentralized medicine work would be an ecosystem of specialized medical resources, including things like expert systems and advanced procurement infrastructures, that can be called upon by such an attentional engine without having to route through the current bottlenecks. Here, recent developments in cryptographic protocols and next-generation logistics will be invaluable: medicine has to be closely regulated by its very nature, but the individual can be protected from time and complexity overhead through the use of modern distributed architectures.
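
One way to picture the cryptographic side of that ecosystem: each registered expert system signs its responses, and the engine refuses any answer whose signature does not verify. The sketch below uses HMAC with a shared key purely as a stand-in; a real deployment would use public-key certificates, and every name and payload here is invented.

```python
# Toy illustration of consulting a "certified" expert system: the
# service signs its answer, and the engine verifies before trusting it.
# HMAC with a shared secret stands in for real certificate-based signing.
import hashlib
import hmac
import json

REGISTRY_KEY = b"demo-shared-secret"  # hypothetical; never hardcode real keys

def sign(payload: dict) -> str:
    """Sign a canonical JSON serialization of the payload."""
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(REGISTRY_KEY, message, hashlib.sha256).hexdigest()

def expert_system(query: dict) -> dict:
    """Stand-in certified service: produces and signs an assessment."""
    payload = {"query": query, "assessment": "within reference range"}
    return {"payload": payload, "signature": sign(payload)}

def engine_consult(query: dict) -> str:
    """The attentional engine trusts only responses that verify."""
    response = expert_system(query)
    expected = sign(response["payload"])
    if not hmac.compare_digest(expected, response["signature"]):
        raise ValueError("untrusted response: bad signature")
    return response["payload"]["assessment"]

print(engine_consult({"test": "lipid_panel", "ldl": 95}))
```

The point is structural: regulation lives in the registry and the signatures, so the individual’s engine can route around institutional bottlenecks without routing around oversight.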

So we have a world where your medical app (say, developed by a network of public research hospitals) pushes you to get your regular blood checks, automatically handling scheduling and authorization, analyzes the results by consulting cryptographically certified expert systems in the context of the rest of your medical file and the current scientific consensus, and automatically buys best-price generic drugs for you, shipped to your home as needed, all of it keeping you fully informed and in control but without requiring you to micromanage the process.

Is this a world with any role for the individual doctor?

I believe it’d be the best of the likely worlds for the individual doctor. By eliminating the need for routine activities, it would allow them to focus more of their attention on the cases flagged by the system as more difficult or delicate, as well as on studying and improving clinical processes per se, and, this is key, on pushing new knowledge and insights back into the system. Just as the current generation of system administrators blends research, tool development, and the analysis of specific cases into a single discipline of constant improvement at previously unthinkable speed, scale, and detail, the next iteration of medical practice will be a hybrid one where every person is constantly paid attention to, with what or who is paying attention to them shifting as their needs change, but always with the full set of knowledge and tools at their disposal. This would also be a more resource-efficient medical system, a key requirement given the speed at which most societies are aging and the still slow development of treatments against the root mechanisms of age-related illness.

If it sounds like a strange world, it’s less because of its level of constant surveillance and algorithmic self-management — we’ve come to assume this to be almost a default — than because we are still unfamiliar with the idea that these tools can be deployed, as it were, from our point of view. We mostly know artificial intelligence as part of a questionably trustworthy infrastructure we interact with out of convenience or lack of options, built, however imperfectly implemented, around interests very much other than ours. De-centralizing artificial intelligence, as not just infrastructure or tool but an extension of our personal possibilities — a form of self-driven cyborgization no less significant for being psychological rather than physical — would be a similarly seismic change in our perception of the world and our experience of it.