Generative AI relies on a vast trove of training material, composed primarily of human-written content scraped from the Internet.
Scientists are still trying to understand what will happen when these AI models run out of that human-made content and must instead rely on synthetic, AI-generated data, closing a potentially dangerous loop. Studies have found that AI models begin to degrade when trained on this AI-generated data, which can eventually turn their neural networks to mush. As AI iterates on recycled content, it produces increasingly dull and often outright poor output.
There is also the question of what will happen to human culture as AI systems digest and produce AI content indefinitely. As AI executives promise that their models are capable of replacing creative jobs, what will future models be trained on?
In an insightful new study published in the journal Patterns this month, an international team of researchers found that a text-to-image generator, when connected to an image-to-text system and run in a repeated loop, eventually converges on "very normal-looking images" in what they dubbed "visual elevator music."
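To make the setup concrete, here is a minimal sketch of the kind of closed feedback loop the study describes: a captioner and a generator passing outputs back and forth with no retraining in between. The model functions are hypothetical stand-ins for whatever off-the-shelf text-to-image and image-captioning systems one might plug in; nothing here is the researchers' actual code.

```python
# Sketch of the closed generation/captioning loop described in the study.
# text_to_image and image_to_text are hypothetical stand-ins, not real APIs.

def text_to_image(prompt: str) -> object:
    """Stand-in for a text-to-image model (e.g. a diffusion model)."""
    raise NotImplementedError("plug in a real generator here")

def image_to_text(image: object) -> str:
    """Stand-in for an image-captioning model."""
    raise NotImplementedError("plug in a real captioner here")

def run_feedback_loop(seed_prompt: str, iterations: int) -> list[str]:
    """Alternate generation and captioning with no retraining in between.

    The study reports that, over many iterations, the descriptions drift
    toward generic, "statistically average" imagery.
    """
    prompt = seed_prompt
    history = [prompt]
    for _ in range(iterations):
        image = text_to_image(prompt)   # generate an image from the caption
        prompt = image_to_text(image)   # caption the generated image
        history.append(prompt)          # track how the description drifts
    return history
```

The key point the study makes is that no weights are updated anywhere in this loop; the convergence comes purely from repeated inference.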
“This finding shows that, even without additional training, autonomous AI feedback loops naturally gravitate toward common attractors,” they wrote. “Human-AI collaboration, rather than fully autonomous creation, may be necessary to preserve diversity and surprise in an increasingly machine-generated creative landscape.”
As Rutgers University computer science professor Ahmed Elgammal writes in an essay about the work for The Conversation, this is another piece of evidence that generative AI may already be causing a state of "cultural stagnation."
Recent studies show that "generative AI systems move toward self-homogenization when used autonomously and repeatedly," he argued. "They also suggest that AI systems currently operate this way by default."
“Convergence on a set of dull, stock images occurred without retraining,” Elgammal said. “No new data added. Nothing learned. The collapse emerged solely from repeated use.”
This is a particularly worrying situation, given that a tidal wave of AI slop is already drowning out human-generated content on the Internet. While proponents of AI argue that humans will always be the "final arbiter of creative decisions," according to Elgammal, recommendation algorithms are already pushing AI-generated content to the top, a homogenizing force that could significantly hinder creativity.
“The risk is not only that future models may train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, describable, and traditional,” the researchers wrote.
It remains to be seen to what extent existing creative mediums, from photography to theater, will be affected by the advent of generative AI, or whether they can coexist peacefully with it.
However, it's a worrying trend that demands attention. Elgammal argued that to stave off this process of cultural stagnation, AI models need to be designed to encourage or reward "deviation from the norms."
"If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward a statistically average output," he concluded. "One thing is clear from the study: in the absence of such interventions, generative AI will continue to gravitate toward mediocre and uninspired content."
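As one illustration of what "resisting convergence" might look like in practice, a generation loop could select for novelty rather than probability. The following is a hypothetical sketch, not anything proposed in the study or by Elgammal: the embedding and candidate-generation functions are stand-ins, and the distance-from-centroid criterion is just one possible intervention.

```python
# Hypothetical sketch: pick the candidate output farthest from the running
# average of past outputs, instead of the most probable one, to push a
# generation loop away from homogenization. Stand-in functions throughout.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a text embedding model."""
    raise NotImplementedError("plug in a real embedder here")

def generate_candidates(prompt: str, n: int) -> list[str]:
    """Stand-in: sample n candidate outputs for a prompt."""
    raise NotImplementedError("plug in a real generator here")

def pick_most_novel(prompt: str, history: list[str], n: int = 8) -> str:
    """Select the candidate most distant from the centroid of past outputs."""
    if not history:
        history = [prompt]  # seed the history so the centroid is defined
    candidates = generate_candidates(prompt, n)
    centroid = np.mean([embed(h) for h in history], axis=0)
    # Larger distance from the historical centroid = more "deviant" output.
    return max(candidates, key=lambda c: np.linalg.norm(embed(c) - centroid))
```

Whether such a heuristic would actually preserve the "diversity and surprise" the researchers describe is an open question; it simply shows that convergence toward the average is a design choice, not an inevitability.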
More on Generative AI: San Diego Comic-Con Quietly Bans AI Art