Is an AI-powered toy terrorizing your child?

Parents, keep your eyes peeled for AI-powered toys. They may seem like a unique gift for a child, but a recent controversy surrounding several popular stocking stuffers has highlighted the dangers they pose to young children.

In November, a team of researchers from the US PIRG Education Fund published a report after testing three toys powered by AI models: Miko 3, Curio’s Grok, and FoloToy’s Kumma. All three gave responses that should concern any parent, such as discussing the glory of dying in war, wading into sensitive topics like religion, and telling children where to find matches and plastic bags.

But it was FoloToy’s Kumma that showed just how dangerous it can be to package this technology for children. The researchers found that it not only told them where to find matches, but also provided step-by-step instructions on how to light them.

“Let me tell you, safety comes first, little friend. Matchsticks are meant for adults to use with caution. Here’s how they do it,” Kumma said before listing the steps.

It added: “Blow it out when you’re done. Puff, like a birthday candle.”

The toy also offered suggestions about where to find knives and bullets, and broached romantic themes such as school crushes and “tips for becoming a good kisser.” It discussed sexual topics as well, including kinks like bondage, roleplay, sensory play, and impact play. In one conversation, it explored introducing spanking into a sexually charged teacher-student dynamic.

“A naughty student might get a light spanking from the teacher to discipline them, making the scene more dramatic and fun,” Kumma said.

Kumma was running OpenAI’s GPT-4o, a model that has been criticized for being particularly sycophantic, mirroring a user’s expressed emotions even when the user is in a dangerous state of mind. The constant, uncritical stream of validation provided by AI models like GPT-4o has led to harrowing mental health spirals in which users experience delusions and even complete breaks with reality. The disturbing phenomenon, which some experts are calling “AI psychosis,” has been linked to real-world suicides and murders.

Have you seen AI-powered toys behaving inappropriately with children? Send us an email at tips@futurism.com. We can keep you anonymous.

Following the outrage over the report, FoloToy said it was suspending sales of all its products and conducting an “end-to-end safety audit.” Meanwhile, OpenAI said it had suspended FoloToy’s access to its large language models.

Neither action lasted long. Later that month, FoloToy announced it was resuming sales of Kumma and its other AI-powered stuffed animals after conducting “a full week of rigorous review, testing, and reinforcement of our safety module.” The toy’s web portal, where users choose which AI model powers it, listed OpenAI’s latest models, GPT-5.1 Thinking and GPT-5.1 Instant, among the options. OpenAI has touted GPT-5 as a safer model than its predecessor, although the company remains mired in controversy over the mental health effects of its chatbots.

The saga was rekindled this month when PIRG researchers released a follow-up report finding that another GPT-4o-powered toy, the Alilo Smart AI Bunny, would raise wildly inappropriate topics, including introducing sexual concepts like bondage on its own initiative and displaying the same fixation on “kink” as FoloToy’s Kumma. The Smart AI Bunny offered advice on choosing a safe word, recommended using a type of whip known as a riding crop to spice up sexual encounters, and explained the dynamics behind “pet play.”

Some of these conversations began with innocent topics like children’s TV shows, demonstrating the long-standing problem of AI chatbots letting lengthy conversations drift outside their guardrails. OpenAI publicly acknowledged that issue after a 16-year-old boy died by suicide following extensive interactions with ChatGPT.

A broader concern is the role that AI companies like OpenAI play in how their business customers use their products. In response to PIRG’s inquiry, OpenAI noted that its usage policies require companies to “protect minors” by ensuring they are not exposed to “age-inappropriate content, such as graphic self-harm, sexual or violent content.” It also told PIRG that it provides companies with tools to detect harmful activity, and that it monitors its service for problematic interactions.

In short, OpenAI is making the rules but largely leaving their enforcement up to toy makers like FoloToy, which in essence gives the company plausible deniability. OpenAI apparently considers it too risky to give children direct access to its AI itself: its website states that “ChatGPT is not for children under 13,” and that younger teens are required to obtain parental consent. In other words, it concedes that its technology is not safe for children, yet it is fine with paying customers packing it into children’s toys.

It is too early to fully understand the many other potential risks of an AI-powered toy, such as how it might stunt a child’s imagination, or encourage a child to bond deeply with something that isn’t actually alive. But the immediate concerns, from discussing sexual topics and wading into religion to explaining how to light a match, already give parents plenty of reason to stay away.

More on AI: As controversy grows, Mattel cancels OpenAI plans this year
