After Trolls, Elves. Or not.

Most societally adversarial uses of bots aim at disruption: they are loud, they are aggressive, and they go for numbers and reach, polluting the informational environment, generating as much emotional distress as possible, and breaking the rules of community discourse. Generative AI makes this especially scalable (the main use of generative AI is cheap, low-quality generic content, and this fits the bill), but troll farms had already refined this ugly art to a spectacular low point.

Rather than an attack on the platforms, we can see this as the culmination of what social networks were built for; in an environment pushed by the business model to favor crude measures of “reach” and “engagement,” the troll is the optimal organism.

We now have the possibility, or the threat, of a strategy complementary to the troll’s: the elf (made-up terminology; if you know the proper term, let me know). In its most modern form, it’s a bot that targets individuals, analyzes their posts, likes, and so on, and starts mirroring their behavior in a way calculated to be attractive. The most professionalized influencers are cyborg versions of this, carefully engineered to maximize generic engagement; but with sufficient resources you can build a model of each individual with a high likelihood of gaining their trust, which makes the bots very powerful later on for changing opinions and modifying behaviors.
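To make the mirroring loop concrete, here’s a deliberately toy sketch in Python. Everything in it is an assumption for illustration: the Post record, the vocabulary-overlap scoring, and the canned candidate replies all stand in for whatever scraping and generative stack an actual operator would use.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def style_profile(history: list[Post]) -> Counter:
    """Crude stand-in for a real user model: count the target's vocabulary."""
    words: Counter = Counter()
    for post in history:
        words.update(post.text.lower().split())
    return words

def mirror_score(candidate: str, profile: Counter) -> float:
    """Toy proxy for 'calculated to be attractive': overlap with the
    target's own vocabulary, normalized by candidate length."""
    tokens = candidate.lower().split()
    return sum(profile[t] for t in tokens) / max(len(tokens), 1)

def pick_reply(candidates: list[str], profile: Counter) -> str:
    """The elf posts whichever candidate mirrors the target most closely."""
    return max(candidates, key=lambda c: mirror_score(c, profile))

history = [Post("target", "weird indie games are underrated"),
           Post("target", "I only trust small open source projects")]
profile = style_profile(history)
print(pick_reply(["check out my crypto fund",
                  "small indie games are so underrated"], profile))
```

A real operator would swap the word counter for a learned model of the target and a generative model for the candidate replies; the point is only which objective gets optimized: the target’s affinity, not reach.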

We all know how much influence our friends have on us, most of all the close ones we feel mirrored by. A rabbit hole can radicalize, a troll army can disrupt, but a single elf in your innermost circle can change your mind.

“High likelihood” is a contextual term: if I can test and send fifty bots over time to try to befriend you, I don’t need a high probability of success for any single one in order to succeed overall. And coordinated bots can do things that are harder for humans, like interacting with each other in ways designed to maximize their target’s empathy. You like popular people? The bot can be popular with other bots. You have a pattern of empathizing with bullied people? The bot can be bullied by other bots.
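The arithmetic behind that is simple compounding. As a back-of-the-envelope sketch (the 5% per-bot figure is an assumption for illustration, and so is the independence of the attempts):

```python
# Probability that at least one of n independent approach attempts succeeds,
# given a per-attempt success probability p: 1 - (1 - p)^n.
def at_least_one_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Even a modest 5% per-bot chance compounds across fifty attempts:
print(at_least_one_success(0.05, 50))  # ~0.92
```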

It’s a strategy of moving slowly and building things (in this case, relationships), so it’s not surprising that it’s probably not the first tool in most techies’ arsenal: the concept of human relationships most “tech visionaries” seem to have oscillates between Uncanny Valley ideas of “community” and downright transactional views of how people interact with each other. Let’s not forget that Facebook began as what could be described as a very sketchy data-mining tool for pick-up artists; that tells you quite a bit about the worldview behind social networks as a platform category.

So maybe it’s a good thing, in this case and only in this case, that so much of the worst of tech culture has such a poor grasp of human interaction. I don’t mean that everyone working in tech companies does; but it’s true of enough of them, and with more than enough economic and cultural power, to determine how platforms work. Platforms quickly and systematically become psychologically poisonous in ways that are superficially driven by technical constraints but are ultimately reflections of their builders’ psychology and culture; yet we can still find trust and support in the direct, one-to-one relationships with other individuals that we can form there.

At least for now.