In an age when more and more young children are connected to digital devices, YouTube is bombarding them with AI.
After examining over 1,000 YouTube Shorts recommended to young children by the video platform, the New York Times found that the algorithm is pushing AI-generated content that explicitly targets “toddlers” and “preschoolers.”
Beyond being meaningless, the videos are often presented under the guise of being educational. Two common topics are teaching kids about the alphabet and about animals – subjects, conveniently, that provide threadbare structures for churning out low-effort slop.
Calling the videos educational is also a stretch. In one video highlighted by the NYT, a sticky fluid is squeezed into a glass of water before transforming into various animals representing each letter of the alphabet – the only catch being that the animals are bizarre chimeras with mermaid tails. In another, set to an off-key rendition of “Old MacDonald Had a Farm,” a giant egg bursts out of a barn door before being hatched by a horse of impossible proportions. And in yet another abbreviated alphabet, a quail transforms into an aerial drone, and a rhinoceros turns into a dump truck bearing the megafauna’s head.
At best, these videos are a mindless regurgitation of “Cocomelon”-style content. But in the worst case, experts fear, they could actively harm children’s cognitive development.
“To me, the pointlessness of these videos is a big problem, because they just grab attention,” Jenny Radesky, MD, a developmental behavioral pediatrician and associate professor of pediatrics at the University of Michigan Medical School, told the NYT. “And then the worst-case scenario is that it’s so full of fantasy and attention-grabbing elements that it’s cognitively overwhelming for the child.”
The hyper-realistic scenes used in several of the AI videos – including some of the examples highlighted by the NYT – carry another downside: Radesky speculates they may disrupt a young child’s ability to separate imagination from reality.
Nor is this a fringe issue; YouTube’s algorithm seems surprisingly eager to recommend slop. In its tests, the NYT started by watching popular kids’ channels, then scrolled through Shorts. AI visuals appeared in more than 40 percent of the videos surfaced during fifteen-minute sessions. Strikingly, instead of recommending more traditional children’s content, the algorithm gravitated toward AI by default.
“When I was watching channels like ‘Ms. Rachel’ and ‘Bluey,’ I was expecting to see content that would be more along the lines of those programs, more ‘Bluey’ shorts,” said Arijeta Lajka, the NYT reporter behind the investigation, in an interview on the newspaper’s Hard Fork podcast. “And that’s not really what I was seeing.”
While not all parents may have the same objections to AI-generated imagery, it’s hard to deny what the technology is good at and exactly what it’s being used for here: quickly creating short-form, often absurd content with no plot or message – exactly the opposite of what experts say a child should be watching. Rachel Barr, a developmental psychologist and director of Georgetown University’s Early Learning Project, explained to the NYT that children instead learn best from media with a clear narrative, and with characters and scenes that relate to real life.
By that standard, a hyperreal clip of animals leaping off a diving board offers a child few real-life elements to learn from. In theory, someone could create a thoughtful educational video for children with AI, but that’s not what’s driving YouTube Shorts’ proliferating, rapidly multiplying views.
“At least when you’re watching a normal cartoon, there can be moments of relative peace, or a story can unfold over a few minutes,” journalist Casey Newton, sounding somewhat unnerved, said during the Hard Fork interview with Lajka. “When you’re just showing raw visual stimuli and bombarding a child with it, it doesn’t seem like that’s probably good for them.”
Newton speculates that slop makers favor the alphabet because the subject lends itself to stringing together a bunch of short clips while maintaining “some kind of coherence.”
At the moment, the long-term effects of watching AI slop Shorts are unclear, even if the videos are shamelessly designed to be as addictive and mind-numbing as possible. “These seem to me to be something that really gets ingrained in your brain,” Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina, told the NYT. “It could even be harmful, but we need more data.”
Other research has suggested that different forms of AI use, such as relying on chatbots, may dent cognitive skills like critical thinking even in adults. And still more research has pointed to negative effects of exposing children to “brain rot” content, such as a possible connection between screen time and ADHD diagnoses.
YouTube requires creators to disclose whether they used AI to create “realistic content,” but it has no AI-labeling requirement for the cartoonish style favored by Shorts targeting children. That puts the burden on parents to closely monitor what their kids watch on an app that offers an endless scroll of content. The simplest solution would be to keep children off these apps entirely, but that ignores the reality that many parents rely on digital devices to keep their kids entertained.
More on AI: Children’s toys are being shipped with adult AI