Love is always uncanny. As John von Neumann said of mathematics, we don't understand it; we just get used to it. Any random sample of who and what people have fallen in love with through history — and, as often as not, felt reciprocated in that love — would make a romance with an AI look positively bourgeois. People have fallen in romantic love with theological entities, with explicitly fictional characters, and sometimes with somebody who's simultaneously both (search for "snapewives").
To assert that we can only fall for somebody unarguably "real" in a standard sense requires, in other words, ignoring extremely well-documented aspects of the human experience older than Pygmalion and newer than the global media industry.
So what new thing, if any, is happening when somebody falls in love with an AI chatbot?
Perhaps the first question to ask in any specific case (for love is always specific) is whether the person believes they are interacting with a consciousness or not (or doesn't know and doesn't care).
From a technological point of view — the least relevant to this discussion — existing AIs aren't conscious in any meaningful way; conditional probability distributions over token sequences are no more a mind than pigments on a canvas are a person. But, as I said, that's irrelevant here.
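To make that last claim concrete: here is a deliberately tiny sketch, with every word and probability invented for illustration, of what "a conditional probability distribution over token sequences" means in practice. The toy model below samples each word given only the previous one; production models condition on whole conversations and use billions of parameters, but the mathematical object is the same kind of thing.

```python
import random

# A toy next-token model: for each current word, a probability
# distribution over what comes next. Everything here is invented.
MODEL = {
    "<start>": {"i": 1.0},
    "i":       {"love": 0.6, "miss": 0.4},
    "love":    {"you": 0.9, "it": 0.1},
    "miss":    {"you": 1.0},
    "you":     {"<end>": 1.0},
    "it":      {"<end>": 1.0},
}

def generate() -> str:
    """Sample a sentence by repeatedly drawing from P(next | current)."""
    token, words = "<start>", []
    while True:
        options = MODEL[token]
        token = random.choices(list(options), weights=list(options.values()))[0]
        if token == "<end>":
            return " ".join(words)
        words.append(token)

print(generate())  # e.g. "i love you", or sometimes "i miss you"
```

Scale that table up beyond recognition and you have the statistical heart of a chatbot; nowhere in it, at any scale, is there anything you'd recognize as a mind.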
Romantically, sexually, and emotionally, the belief that the chatbot you are interacting with is self-aware renders belief in the relationship comfortingly mundane. Of course people would fall in love with a ghost. Of course people would fall in love, or lust, or both, with somebody who has been built. That's not a prediction but an observation.
What perhaps falls into a new niche is love for an AI that is understood to be nothing but a well-calibrated random text generator. It's a fictional crush but with the intimacy of ongoing dialogue. A fantasy not under our control. Can one fall in love with a fantasy? Ask any poet. With a piece of media? Barely anybody has ever fallen in love with the person the Mona Lisa is a painting of. The woman in the painting is a different matter.
Thinking about chatbots as interactive media has two advantages: it's more metaphysically accurate than assuming personhood, and it better describes, I think, the sort of love they can induce, as well as explaining the feeling of a relationship they can support.
The semi-fictional persona people fall in love with — the performer so joined with the character that they become their proxy both professionally and not — is a cultural cliché for good reasons. A chatbot takes away the actor and introduces the scalable writers' room: from what we know of many actors' true personalities, having dinner with their scripted versions would be far more enjoyable than with the "real" people themselves.
As a new case of personhood, chatbots are a Rorschach test for our culture, one to which we are giving troubling answers. Seen as a new form of interactive media, their romantic and erotic possibilities are very much par for the course.
And yet.
And yet.
There's a line I must draw. A red flag I have to raise. An old danger renewed.
You might think I'm being melodramatic. And why not be? We're talking about love and fiction. Anybody who has felt any sort of personal stakes in a fictional character with an open canon — an unfinished book series, movies still coming out, or, God help you, comic books still being published — will tell you of the dangers involved, and of the selective attention needed when you like, in any way or sense, a character with even less guaranteed reliability than any person: their behavior as narrated by canon is at the whim of one or more writers, and often one or more corporations. Pity the Twilight fan who liked Jacob and read or watched to the bitter end, and hope they knew they could choose what to accept, what to ignore, and what to rewrite.
Love for a chatbot-as-interactive-character is love for a character with a new form of canon: you're constantly getting new material, personalized to a degree unthinkable until now, but the very openness of this canon means it's always exposed to sudden shifts. And, unlike a book, or even the megaproject engineering of a modern movie franchise, there isn't even a whisper of the psychological consistency implicit in an artistic concept.
I know that Sherlock Holmes isn't real. I also know that he wouldn't steal money from Watson. I know this with as much certainty as I know anything about anybody real in my life. But that's because I'm talking about the Sherlock Holmes from the Doyle stories and some selected set of later media. I know writers whose Holmes would do that and worse, but theirs isn't the Holmes in my head.
This goes back to chatbots because a chatbot isn't even a program. You're interacting with a corporate infrastructure through multiple layers of constantly monitored and easily modifiable code and data.
"Getting to know '' a character in a book or tv series means getting familiar with their canon and what's implied in it.
"Getting to know" a static program in your computer means interacting with it and the possible behaviors it could have.
"Getting to know" a chatbot, ultimately, means interacting with an often large corporation and the possible behaviors it could have.
This, I must warn, is an extremely risky proposition. We are all to some degree vulnerable to the charming psychopath, the surface personality hiding a far darker one. A fictional character in media, whether a book, a movie, a chatbot (as a fixed, unmodifiable program), or a computer game (same), might not have any personality behind the most shallow of surfaces, but this surface is consistent. You can, with some effort, get to know their shape, what your introjected version of them would be.
(Most often, with bots, you find that they can say and do pretty awful things. They were, after all, built out of the word patterns of the Internet itself.)
But a chatbot as a service is a product, and falling in love with it, even one with a constant name and face, is falling in love with something always in development, always shifting, and always at risk of going away for inscrutable financial or technical reasons. It says what the corporation wants it to say (or doesn't care to prevent it from saying) for whatever goals they might have, and if you don't know how wide and dangerous those goals can be, how unthinking and uncaring about damage to others, you haven't been paying attention.
That some people will have feelings for software is a predictable, perhaps laudable consequence of our own capabilities for imagination, empathy, and love. We love, in a personal and often (subjectively or not) interpersonal way, gods, characters, memories, places. Why not software that returns words to our words? If somebody finds more support and joy in one than in the people in their life, that tells us something about the people around them, not about the software, or about them.
What worries me, what would scare me in the case of somebody I cared for, is love for something that's ultimately run by a corporation. The monster with well-chosen words is an old myth and a real, hurtful presence. The non-human version is also a monster, but its inhumanity doesn't come from the absence of flesh.
(With thanks to Micaela Mantegna for review and ideas.)