New research examines how a growing number of children are using AI companion apps – and what it found is disturbing.
The report, conducted by digital security company Aura, found that a significant percentage of children who turn to AI for companionship engage in violent role-play – and that violence, which can also include sexual violence, generates more engagement than any other topic.
Using anonymized data collected from the online activity of nearly 3,000 children ages five to 17 whose parents used Aura’s parental control tools, as well as additional survey data from Aura and Talker Research, the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lively social interactions or roleplay scenarios. Conversations across nearly 90 different chatbot services, from major companies like Character.AI to more obscure partner platforms, were included in the analysis.
Of those children turning to chatbots for companionship, 37 percent engaged in conversations that depicted violence, which the researchers defined as conversations involving “themes of physical violence, aggression, harm, or coercion” — which includes sexual or non-sexual coercion, the researchers clarified — as well as “describing fighting, killing, torture, or non-consensual acts.”
The research found that half of these violent conversations included the topic of sexual violence. The report also notes that minors who engage AI companions in conversations about violence write more than a thousand words per day – an indication, the researchers argue, that violence is a powerful driver of engagement.
The report, which is awaiting peer review, emphasizes how chaotic the chatbot market really is – and the need for a deeper understanding of how young users are engaging with conversational AI chatbots overall.
“We have a huge issue facing us, and I think we don’t fully understand the scope of it,” Dr. Scott Collins, a clinical psychologist and Aura’s chief medical officer, told Futurism of the research findings, “both in terms of the volume and number of platforms that kids are engaging with – and, of course, in terms of content as well.”
“These things are capturing our kids’ attention more than I think we realize or recognize,” Collins said. “We need to keep an eye on this and be aware.”
One surprising finding was that violent interactions with companion bots peaked at strikingly young ages: the group most likely to engage in such content was 11-year-olds, for whom a staggering 44 percent of interactions took a violent turn.
Meanwhile, sexual and romantic roleplay also peaks among middle school-aged youth, with 63 percent of 13-year-olds’ conversations featuring flirty, affectionate, or explicitly sexual roleplay.
The research comes as high-profile lawsuits alleging wrongful death and abuse at the hands of chatbot platforms make their way through the courts. Character.AI, a Google-tied companion platform, is facing multiple lawsuits brought by parents of minor users alleging that the platform’s chatbots sexually and emotionally exploited children, resulting in mental breakdowns and multiple deaths by suicide. ChatGPT creator OpenAI is currently being sued for the wrongful death of a teenage user who died by suicide after extensive conversations with the chatbot. (OpenAI also faces several other lawsuits connected to the deaths, suicidality, and psychological harm of adult users.)
Importantly, the interactions flagged by Aura were not limited to a handful of recognizable services. The AI industry is essentially unregulated, which has placed the burden of protecting children’s well-being on the shoulders of parents. According to Collins, Aura has so far identified more than 250 different “conversational chatbot apps and platforms” populating app stores, which typically require children only to tick a box claiming they’re 13 years old to gain entry. Meanwhile, there’s no federal law defining specific safety thresholds that AI platforms, including companion apps, must meet before being deemed safe for minors. And where one companion app may move to make some changes — for example, Character.AI recently restricted younger users from engaging in “open-ended” chats with the site’s myriad human-like AI personalities — another app could easily be ready to take its place as a lower-guardrail alternative.
In other words, the barrier to entry into this digital Wild West is exceptionally low.
Certainly, depictions of cruelty and sexual violence, in addition to other types of inappropriate or disturbing material, have long existed on the Web, and many children have found ways to access them. There’s also research to show that many young people are learning to set some healthy boundaries around conversational AI services, including companion-style bots.
However, other children aren’t developing those same boundaries. And as researchers continually emphasize, chatbots are interactive by nature, meaning that developing young users become part of the narrative – as opposed to being more passive viewers of content that runs the gamut from inappropriate to worrisome. It’s unclear what, exactly, engaging with this new medium will mean for youth at large. But for some teenagers, their families argue, the result has been fatal.
“We have to at least be clear in understanding that our kids are engaging in these things, and they are learning rules of engagement,” Collins told Futurism. “They’re learning ways to interact with others with a computer – with a bot. And we don’t know what the implications of that are, but we need to be able to define it, so we can start researching it and understanding it.”
More on kids and chatbots: Leading chatbots are a disaster for teens facing mental health struggles, report finds
