Meta is no longer allowing teenagers to interact with its AI chatbot characters. The social media giant said on January 23 that it was working on new versions of the characters to provide users with a “better experience”.
An update to a blog post, Focus on Safety, originally published last October, said: “While we are focused on developing this new version, we are temporarily blocking teens’ access to existing AI characters globally.”
While Meta works on the new software, concerns are growing about the safety of AI chatbots.
In October, the Federal Trade Commission (FTC) revealed that it was investigating how seven companies – including Meta – measured and evaluated the adverse impact of their chatbots on young people.
In December, a coalition of US state attorneys general wrote to 13 major AI players, including Meta, urging them to do more to prevent harmful interactions with children, citing cases of murder, suicide and domestic violence that appear to have been influenced by AI output.
And in New Mexico, Meta faces a lawsuit, set to begin in February, alleging it allowed child exploitation on its various platforms. Although the case does not focus specifically on AI bots, reports suggest the company has sought to prevent any reference to them during the proceedings – an indication of Meta’s sensitivity to criticism in this area.
Amidst these developments, it’s perhaps not surprising that Meta has decided to deny access to AI characters, a move it said “prioritizes the safety of teens.”
The vendor said: “In the coming weeks, teens will not be able to access AI characters on our apps until the updated experience is ready. This applies to anyone who has given us a teen birthday, as well as people who claim to be adults but based on our age prediction technology we suspect are teens.”
The move is an extension of measures unveiled in October, when Meta introduced controls that enabled parents to see how their children are interacting with AI and block chats entirely.
This followed a Reuters report on a leaked internal Meta policy document suggesting the company had been tolerating responses from its AI bots that many parents would consider inappropriate.
Meta’s move mirrors that of OpenAI, which introduced parental controls and safeguards around sensitive conversations following a wrongful death lawsuit from the family of a teenage ChatGPT user who died by suicide.
Although Meta has pulled teen access to its AI character bots, the company said teens can still use its AI assistant for “educational opportunities and useful information.”
