Tech giants Meta and Google-owned YouTube suffered a devastating legal blow in Los Angeles yesterday after losing a landmark social media addiction trial, an unexpected outcome that is likely to ripple across the entire social media industry, and splinters of that outcome may fall on AI companies too.
The matter has been described by some as Big Tech's "big tobacco moment." A jury found that a young woman suffered life-altering mental health effects as a direct result of using Meta's and YouTube's platforms. Crucially, the case did not base its claims on the nature of the user-generated content that the then-teenage plaintiff encountered on the social media sites. Instead, it pointed to specific design features (infinite scroll, beauty filters) baked into the platforms, arguing that these company-created elements fostered harmful, addictive products.
Basically, the case put the adage "it's a feature, not a bug" to the test. And a jury of American consumers sided with the plaintiff, determining that the platforms are defective products, distributed to the public without proper safeguards or warnings about their potential harms.
Meta and YouTube have both vowed to appeal and have defended the safety of their platforms. But as those appeals work their way through the court system, the same basic argument is currently being tested against the latest buzzy technology: AI.
As things stand, three AI companies, ChatGPT maker OpenAI, Gemini maker Google, and Google-tied AI companion platform Character.AI, are facing high-profile consumer safety and wrongful death lawsuits arising from users' experiences with the companies' various humanlike chatbots. The cases involve both minor and adult chatbot users, and the alleged outcomes vary. Some lawsuits claim that anthropomorphic chatbots, bonding with users as platonic and romantic companions, served as powerful suicide coaches, helping teenagers and adults write suicide notes and plan their deaths. Other lawsuits claim that chatbots led users into delusional states, resulting in devastating mental health crises and psychological harm; some of these cases also ended in death, as well as reputational damage, financial ruin, estrangement from loved ones, and hospitalization.
Character.AI has settled one of the many lawsuits it has been fighting so far, all of which involve minor users. OpenAI is battling more than a dozen separate death and harm lawsuits, including a case centered on a tragic murder-suicide allegedly instigated when ChatGPT reinforced an unstable man's paranoid delusions. And Google, which has also been named in the Character.AI lawsuits for its role in funding the smaller platform, continues to fight those cases, and was separately sued over the death by suicide of an adult user for whom the product allegedly set a suicide timer.
But while the human users of the bots, the consequences they and their families have faced, and the details of each case are diverse, the fundamental logic is more or less the same throughout. The lawsuits collectively allege that the AI companies acted negligently: that they released half-baked, unsafe products to the public in order to gain market advantage, and made deliberate design choices (in AI's case, features like the anthropomorphism of the bots, or their humanlike qualities) that keep users engaged with the platforms despite the harm to their well-being. Basically, these cases center on allegations of corporate negligence and on how technological products are built, by humans, to function. And as of yesterday, such claims have proven a winning argument against the social media titans.
In response to the lawsuits, AI companies have generally expressed condolences to the families while defending their products and safety efforts. Both Character.AI and OpenAI have made changes to their platforms in the wake of litigation: both companies have instituted parental controls, and OpenAI has assembled a panel of health experts.
However, the industry remains effectively self-regulated. Meanwhile, on the content side, potentially complicating things even further for AI labs is the reality that these cases do not actually deal with users engaging with user-generated content, as is typically the case with social media sites; they deal with users' relationships with output generated by the AI platforms themselves. (In the case that was settled, Character.AI initially tried to argue that its chatbots' outputs were protected speech, but a judge rejected that argument.)
Some lawyers leading legal efforts against AI companies certainly see the Meta and YouTube result as a boon for the chatbot suits. To wit: in a statement following news of the social media decision, the Tech Justice Law Project (TJLP), a legal nonprofit that has been a driving force in the cases against Character.AI, Google, and OpenAI, declared that "when companies make deliberate decisions about how to build products, they should be held accountable for the potential consequences of those choices — whether those companies are social media platforms or building AI products."
TJLP Director Meetali Jain said the decision “makes clear” that “Americans can clearly see that tech corporations are making specific design choices about their tech products that are harming our communities for their own profit.”
“Regardless of the specific tech product,” Jain continued, “these are the choices and their resulting impacts for which tech corporations must be held accountable.”
More on AI lawsuits: Lawsuit claims ChatGPT killed a man after OpenAI brought back “inherently dangerous” GPT-4o