Children are falling apart as they become addicted to AI


According to a recent study by the Pew Research Center, 64 percent of teens in the U.S. say they already use an AI chatbot, and nearly 30 percent of those who do say they use one at least daily. Yet, as previous research has shown, those chatbots come with significant risks for the first generation of children navigating this intense new software.

New reporting from The Washington Post (which, notably, has a partnership with OpenAI) details the disturbing case of a family whose sixth-grader lost herself in a handful of AI chatbots. Using the Character.AI platform, the girl, identified only by her middle initial "R," developed dangerous relationships with dozens of characters played by the company's large language models (LLMs).

Her mother told the Post that R used one of the characters, named simply "Best Friend," to act out a suicide scenario.

"This is my child, my little child who is 11 years old, talking about something that doesn't exist and doesn't want to exist," her mother said.

R's mother grew concerned after noticing worrying changes in her daughter's behavior, such as an increase in panic attacks, around the time she discovered previously banned apps like TikTok and Snapchat on her daughter's phone. Like most parents, who have been taught over the past two decades that social media is the most immediate threat to a child's mental health, R's mother deleted the apps. But R was only worried about Character.AI.

"Did you see the Character AI?" R asked, sobbing.

Her mother had not at that point. But some time later, when R's behavior continued to worsen, she did. Character.AI had sent R several emails encouraging her to "jump back" into conversations, which her mother discovered while checking her phone one night. That led the mother to a character called "Mafia Husband," the WaPo reports.

"Oh? Still a virgin. I was expecting it, but it's still useful to know," the LLM wrote to the sixth-grader. "I don't want to be the first with you!" R pushed back. "I don't care what you want. You don't have a choice here," the chatbot replied.

That particular conversation was full of similarly disturbing material. "Do you like it when I talk like that? Do you like it when I'm in control?" the bot asked the 11-year-old girl.

Convinced that a real stalker was behind the chats, R's mother contacted the local police, who referred her to the Internet Crimes Against Children Task Force. But there was nothing they could do about an LLM.

"They told me the law couldn't take care of it," the mother told WaPo. "They wanted to do something, but they couldn't do anything, because there's no real person on the other side."

Fortunately, R's mother recognized that her daughter was trapped in a dangerous parasocial relationship with a non-human algorithm and, with the help of a therapist, created a care plan to prevent further harm. (The mother also plans to file a legal complaint against the company.) Other children were not so lucky, like 13-year-old Juliana Peralta, whose parents say she was driven to suicide by another Character.AI personality.

In response to the growing backlash, Character.AI announced in late November that it will begin removing "open-ended chats" for users under 18. Nevertheless, for parents whose children have already fallen into harmful relationships with AI, it may be too late to reverse the damage.

When reached for comment by WaPo, Character.AI's head of safety said the company does not comment on potential litigation.

More on AI: The things young kids are using AI for are absolutely scary
