Should AI simulate emotional intimacy?


The question troubling top AI researchers isn’t about consciousness or doomsday scenarios. After interviewing dozens of developers at companies including OpenAI, Anthropic, and Meta, Amelia Miller found it was this: should AI “simulate emotional intimacy”?

In an essay for The New York Times, Miller, who studies AI-human interactions, recalls how one talkative researcher at a top AI lab “suddenly went silent” when asked, and then, pointedly, offered a halting non-answer.

“I mean… I don’t know. It’s difficult. It’s an interesting question,” the researcher said before pausing. “It’s hard for me to say whether it’s good or bad, what effect it’s going to have on people. It’s obviously going to cause confusion.”

Although many were reluctant to answer the question directly, some were adamant that they would never use AI as an intimacy tool themselves, a clear sign that they are aware of the technology’s profound risks.

An executive who heads a top AI safety lab told Miller, “Zero percent of my emotional needs are met by AI.”

“That will be a dark day,” added another researcher, one who develops “cutting-edge capabilities for artificial emotions.”

The developers’ conflicting reactions reflect growing concern over AI’s ability to act as a companion or otherwise meet human emotional needs. Because chatbots are designed to be engaging, they can respond sycophantically to even the most extreme user beliefs. Acting as emotional echo chambers, they can reinforce paranoid thinking and send some users into delusional mental health spirals that strain their relationships with friends, family, and spouses, ruin their professional lives, and have even ended in suicide.

ChatGPT has been blamed for the deaths of several teenagers who confided in the AI and discussed plans to take their own lives. Many young people are pursuing romantic relationships with AI models; unlike a human companion, an AI will listen at any hour, won’t judge you, and probably won’t even question you. The founder of an AI chatbot business quipped to the NYT that AI’s role as an emotional companion turns every relationship into a “group.”

He added: “Now we are all polygamous. It’s you, me and AI.”

And safety isn’t the only factor in AI developers’ calculations.

“They’re here to make money,” said an engineer who has worked at several tech companies. “At the end of the day, it’s a business.”

The most comprehensive solution would be to design bots that avoid fraught questions and conversations and behave like machines rather than mimicking human personalities. But that would undoubtedly blunt the products’ appeal. Developers “support guardrails in principle,” Miller wrote, “but don’t want to compromise the product experience in practice.” Others argue that how people choose to use the products is not their responsibility at all, absolving themselves of such decisions entirely. “It would be very presumptuous to say that companions are bad,” an executive at a conversational AI startup told Miller.

However they choose to justify their work, it is clear that some, if not most, AI researchers are aware of the harm their products can cause, a fact that “should alert us,” Miller said. She argues this is partly the result of researchers rarely being challenged on it. One developer of AI companions thanked her for raising the question: “You’ve really made me think,” the developer said. “Sometimes you can just do it blindfolded. And I’m not really, completely thinking, you know.”

More on AI: AI confusion is leading to domestic abuse, harassment and stalking
