What AI designers can learn from Dracula

(Not the vampire, the novel about how they killed him.)

This is a commonplace in literary criticism, but most adaptations downplay how much Dracula is a story about information management.

From the beginning, Dracula doesn’t keep Jonathan Harker in his castle, with all the risks that involves, just for the fun of it: he’s extracting the information about London he will need to operate there. And Van Helsing has, and needs to have, two very different sorts of expertise, medicine and vampirology. Without his medical reputation he wouldn’t have been called in or listened to; without his knowledge of vampires (necessarily built through long, specialized, and entirely self-directed research collating other people’s knowledge) he wouldn’t have had anything useful to say.

But Mina Harker (née Murray) is, in the modern sense, the brains of the operation. When most of the characters’ paths finally cross, she’s the one who sees the need to collate all their information into a single chronological narrative. Assembling personal journals and newspaper clippings, she rearranges the information in a way that makes transparent to everybody what has been happening, and therefore gives them a chance to make it stop.

By doing this information collation and analysis, Mina turns a horror story into a monster-hunting adventure.

The Count has a direct advantage over his neighbors back in Transylvania, but (and this is of course as imperialist as you’d expect an 1897 British novel to be) against the British, with some help from Americans in the shooting department, the situation is very different. Dracula has the upper hand in London only while he’s unknown or poorly understood: in the cold light of day, pun intended, he has too many weaknesses to prevail against an organized party prepared for him, so at last he tries, and fails, to escape. If he had found and destroyed all the journals, or simply killed Mina right away, he probably would have won.

What sort of AI would have helped catch him? A reflexive response might handwave toward putting all the texts in a single place and looking for patterns, but which texts? A newspaper database would only have mentions of a weird ship and some deaths, but those are single events, not a pattern; without a hypothesis to look for, there is no link. The hypothesis might come from Van Helsing’s knowledge, but he’s not an avid newspaper reader, and newspapers don’t really carry his sort of expertise. And never mind the inclusion of unpublished first-hand knowledge like Jonathan’s journal, available to Mina for her ad hoc analysis but not in any common repository.

It’s not that you couldn’t, conceptually, do what Mina did with some careful use of large language models. You could, but first you would have to put all those disparate sources together in a single place. There had never been a vampire in England before; there was no data set that would have identified Dracula or helped figure out his patterns and plans. Mina’s genius lay in identifying this need, acquiring the necessary data (“multi-modal” and “cross-silo,” in today’s terms), and collating it in a way that made clear what was happening, a necessary step toward making it stop. This is not common in horror tales, or in business practice for that matter: as a rule of thumb, in most companies, if one type of information “belongs” to one C-level executive and another type to another, the chances of the company building an AI that takes advantage of both are quite low.
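The collation step itself is almost trivially mechanical once you have the data in one place; that is rather the point. A minimal sketch, with entirely invented dates, sources, and entries standing in for the novel’s journals, letters, and clippings:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical records standing in for Mina's sources: personal journals,
# letters, and newspaper clippings, each originally kept in its own "silo".
@dataclass
class Record:
    when: date
    source: str
    text: str

journals = [
    Record(date(1893, 5, 3), "Jonathan's journal", "Arrived at the castle."),
    Record(date(1893, 8, 19), "Seward's diary", "Renfield grows agitated."),
]
clippings = [
    Record(date(1893, 8, 8), "The Dailygraph", "Derelict ship runs aground at Whitby."),
]
letters = [
    Record(date(1893, 8, 12), "Mina to Lucy", "Lucy walked in her sleep again."),
]

# Mina's move: merge the silos and sort into one chronological narrative,
# so that cross-source patterns become visible at all.
timeline = sorted(journals + clippings + letters, key=lambda r: r.when)

for r in timeline:
    print(f"{r.when} [{r.source}] {r.text}")
```

The hard part is everything the sketch assumes away: knowing these particular silos need to be joined, getting access to each, and having a hypothesis that makes the merged timeline mean something.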

One of the main tasks of an AI designer is to work across these frontiers. Companies are usually good at dealing with things they know they have to deal with and have a name for, and they flounder when they get hit by something they can’t see no matter how much data they look at, because no data set has ever been built for it before.

This is hard. Dracula isn’t killed by the police or the army, not because they lack the capability to destroy him but because they lack the capability to conceptualize him; he’s killed because the people going after him had enough personal resources (once more, this is an 1897 British novel) to mount a private international high-tech chase. They would never have been able to convince the authorities to do it.

It’s the same for AI designers in companies attempting new things. If you’ll excuse the continued militaristic analogy (the links between AI and the military are much older than computers, but that’s a different tale and a different problem), the usual mental model is that the company is fighting a war and the AI is expected to lift the proverbial fog, provide total visibility, and therefore help win with minimal risk and cost.

That’s not the sort of story they are in. If you’re at the cutting edge, or attempting to get there, doing new things in new ways, you aren’t fighting a war; you’re dealing with a vampire. What’s worse, it’s the beginning of Dracula and you have no idea what it looks like.

Any new AI that’s more than an iterative improvement on a previous one (and this is particularly true not of nine-figure foundation models but of the smaller, specialized tools) will need not new algorithms but new questions, not bigger data but stranger combinations of data sets. Like every form of engineering in the making, it’s a matter of imagination as much as of computational brute force.

Choosing to go after monster problems isn’t for the faint-hearted. There are no deploy-and-forget pipelines. No recipes. The monster can kill the company before the company has figured out how to kill it; if you lose your nerve and stop searching in the darkness, that’s almost a guarantee.

But there are few things as interesting, and, if you win, the reward can be as unprecedented as the path.