A new report from the US PIRG Education Fund reveals that leading AI companies do little to monitor how the developers who pay for access to their AI models are actually using them. One consequence, the group warns, is that toy makers could ship children products powered by AI models intended only for adults.
Previous research from PIRG has shown how badly things can go when children’s toys are loosely wired up to chatbots. An AI teddy bear from the company FoloToy sparked a storm of controversy last November after the group discovered it would engage in wildly inappropriate interactions with children, including giving detailed instructions on how to light a fire, advice on where to find pills, and in-depth discussions of sexual topics such as teacher-student roleplay.
That incident should have been a warning to AI companies to be more cautious about how developers use their technology, especially where children are concerned. Indeed, OpenAI, whose models powered the teddy bear, said at the time that it had blocked FoloToy’s access to its products.
But when PIRG tested the sign-up processes for OpenAI, Google, Meta, and xAI, the providers asked “no substantive screening questions,” requiring only basic information like an email address and credit card number. Only Anthropic asked how testers intended to use its models, or whether the product they planned to build was for minors. According to the report, once PIRG obtained developer access, it created a chatbot simulating an AI-powered teddy bear on three of the platforms, each taking less than 15 minutes.
“I was very surprised at how little information they collected,” report co-author RJ Cross, director of PIRG’s Our Online Lives program, told Futurism in an interview. “If I were an AI company, I would at least have at my fingertips a list of all the people who said they wanted to build a product for kids.”
PIRG noted that OpenAI, Meta, and xAI all bar users under 13 from their AI chatbots, while Anthropic sets the minimum age at 18. But those restrictions do not apply when a third-party developer builds on the companies’ technology. OpenAI still allows many children’s toy makers to use its AI, and has previously said it is the responsibility of those companies – not its own – to “keep minors safe” and ensure they are not exposed to “age-inappropriate content, such as graphic self-harm, sexual or violent content.”
OpenAI’s penalties also appear to be weakly enforced. FoloToy, the AI teddy bear maker that was supposedly banned, still claims to provide access to OpenAI’s GPT-5.1 models. Yet when PIRG contacted OpenAI, the company maintained that FoloToy’s access remained revoked.
The PIRG report allows that FoloToy could be lying about using GPT-5.1. But given how little screening PIRG encountered in the application process, it seems more than possible that FoloToy simply circumvented OpenAI’s restriction by creating a new account under a different name. Or perhaps FoloToy is using one of OpenAI’s publicly available “open weight” models. We don’t know, because OpenAI declines to provide a meaningful explanation.
OpenAI is hardly the only culprit. Google says developers are prohibited from using its AI in products intended for minors, but PIRG found at least five AI toys for sale online that claim to use its Gemini models.
“It really feels like there is a public interest in people knowing what AI models they are interacting with,” Cross said.
In response to the report, a spokesperson for the ChatGPT creator provided a statement to PIRG.
“Minors deserve stronger protections and we have strict policies that all developers are required to follow,” an OpenAI spokesperson told the group. “We take enforcement action against developers when we learn that they have violated our policies, which prohibit the use of our services to exploit, endanger, or sexually exploit anyone under the age of 18. These rules apply to every developer who uses our API, and we run classifiers to help ensure that our services are not used to harm minors.”
According to Cross, OpenAI and others may claim to protect minors, but this does not address the fundamental contradiction in their approach.
“It doesn’t make sense that AI companies that haven’t released kid-safe versions of their AI chatbots would let anyone with a credit card sign up to build a product for kids using the same technology,” Cross said. “Ultimately, this means AI companies are leaving children’s safety up to unchecked third parties.”
