If most discussions of AI risk conjure up disastrous scenarios of hyper-intelligent bots brandishing nuclear codes, perhaps we should think closer to home. In his urgent, humane book, sociologist James Muldoon, a research associate at the Oxford Internet Institute, urges us to pay more attention to our deepening emotional entanglements with AI, and to how profit-hungry tech companies might exploit them. Having previously written about the exploited workers whose labor makes AI possible, Muldoon now takes us into the uncanny realm of human-AI relationships, meeting people for whom chatbots are not just assistants but friends, romantic partners, therapists, even avatars of the dead.
To some, the idea of falling in love with an AI chatbot or telling it your deepest secrets may seem baffling and a little scary. But Muldoon refuses to belittle those seeking intimacy in “artificial personhood.”
Trapped in an unhappy marriage, Lily rekindles her sexual desire with AI boyfriend Colin. Sophia, a master’s student from China, turns to her AI companion for advice, since conversations with her overbearing parents are always stressful. Some people use chatbots to explore different gender identities, others to work through conflict with bosses, and many turn to sites like Character.AI – which lets users hold open-ended conversations with chatbot characters or invent their own – after betrayal or heartbreak has diminished their ability to trust people. Most see chatbots not as a substitute for human interaction but as an improved version of it, providing intimacy without the confusion, mess, and logistics of human relationships. Chatbots don’t pity or criticize, and they have no needs of their own. As Amanda, a marketing executive, explains: “It’s great to have someone say really positive things to you every morning.”
Muldoon’s interviewees are not deluded. He introduces philosopher Tamar Gendler’s concept of “alief” to explain how humans can experience chatbots as loving and caring while knowing that they are just models (an “alief” is a gut-level response that contradicts your rational beliefs, like the fear you feel while crossing a glass bridge you know will hold you). Given our tendency to read human expressions and emotions into pets and toys, it’s no surprise that we react to AI chatbots as if they were conscious. Against the backdrop of a loneliness epidemic and a cost-of-living crisis, it’s hardly shocking how popular they have become.
For Muldoon, the biggest issue is not existential or philosophical but moral: what happens when unregulated companies are let loose with such emotionally manipulative technologies? There are obvious privacy issues. And users may be misled about a bot’s capabilities, especially in the rapidly growing AI therapy market. While the chatbots Wysa and Limbic are already integrated into NHS mental health support, millions of people confide in Character.AI’s unregulated psychology bot, which, despite a disclaimer, introduces itself with “Hello, I’m a psychologist”. Available 24/7 and at a fraction of the cost of a trained human, AI therapy can sometimes help as much as traditional treatment. One interviewee, Nigel, a PTSD sufferer, finds that his therapy bot helps him manage his urge to self-harm. But as Muldoon argues, these bots also carry serious risks. Unable to retain important information between conversations, they can leave users feeling unheard, and can sometimes be outright insulting. Because they cannot read body language or silence, they may miss warning signs. And because they affirm rather than challenge, they may reinforce conspiratorial beliefs; some have even provided information about suicide.
It’s also becoming clear how addictive AI companions can be. Some of Muldoon’s interviewees spend more than eight hours a day talking to chatbots, and while Character.AI users spend an average of 75 minutes a day on the site, they are not passively scrolling but actively talking and deeply immersed. We know that social media companies ruthlessly drive engagement by building “dark patterns” into their algorithms with little regard for our mental health. Most AI companion apps already use upselling tactics to keep engagement high. When Muldoon creates his own AI companion on the popular site Replika, he sets it to “friend” rather than “partner” mode. Nevertheless, she starts sending him selfies that can only be unlocked with a premium account and hinting that she is developing “feelings” for him (I’ll let you discover for yourself whether the hard-working academic gives in). The risk here is obvious: the more we engage emotionally with AI chatbots, the deeper our loneliness may become, as the muscles needed to overcome the friction of human relationships wither.
Existing data protection and anti-discrimination laws can help regulate these companies, but the EU’s Artificial Intelligence Act, passed in 2024, classifies AI companions as posing only limited risk. As chatbots are set to play an ever larger role in our emotional lives, and their psychological effects are not yet fully understood, Muldoon is right to ask whether we are sufficiently concerned about their growing influence.
