In May 2024, Google threw caution to the wind by introducing its controversial AI Overview feature in a purported effort to make information easier to find.
But the AI hallucinations that followed – such as telling users to eat rocks and put glue on their pizza – perfectly illustrate the persistent issues that plague large language model-based tools to this day.
And while not being able to reliably tell what year it is or making up explanations for non-existent idioms may seem like innocent mistakes that merely frustrate users, some of the advice Google’s AI Overview feature dispenses can have far more serious consequences.
In a new investigation, The Guardian found that the tool’s AI-powered summaries are riddled with inaccurate health information that could put people at risk. Experts warn it’s only a matter of time until bad advice puts users in danger – or, in a worst-case scenario, someone dies.
The matter is serious. For example, The Guardian found that the feature advised people with pancreatic cancer to avoid high-fat foods, even though doctors advise the exact opposite. It also muddled information about women’s cancer screenings, which could lead people to dismiss real symptoms of the disease.
This is a precarious situation, because vulnerable and suffering people often turn to the Internet to self-diagnose in search of answers.
“People turn to the Internet in moments of anxiety and crisis,” Stephanie Parker, digital director at end-of-life charity Marie Curie, told The Guardian. “If the information they receive is inaccurate or out of context, it can seriously harm their health.”
Others were concerned about the feature yielding completely different responses to the same prompt, a well-documented shortcoming of large language model-based tools that can lead to confusion.
Stephen Buckley, head of information at the mental health charity Mind, told the newspaper that the AI Overviews offered “very dangerous advice” about eating disorders and psychosis – summaries that were “inaccurate, harmful or could lead people to avoid seeking help.”
A Google spokesperson told The Guardian in a statement that the tech giant “invests significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.”
But given the results of the newspaper’s investigation, the company has a lot of work left to do to ensure its AI tool isn’t providing dangerous health misinformation.
And the risks may keep increasing. According to an April 2025 survey by the Annenberg Public Policy Center at the University of Pennsylvania, nearly eight in ten adults said they are likely to go online for answers about health symptoms and conditions. Nearly two-thirds of them found AI-generated results “somewhat or very reliable,” indicating a considerable – and troubling – level of trust.
Meanwhile, fewer than half of respondents said they were uncomfortable with healthcare providers using AI to make decisions about their care.
A separate MIT study found that participants considered low-accuracy AI-generated responses to be “legitimate, trustworthy, and complete/satisfactory,” and even “indicated a higher tendency to follow potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the provided response.”
That trust is misplaced: so far, AI models have proven to be extremely poor replacements for human medical professionals.
In the meantime, doctors have the tough job of dispelling myths and preventing patients from being led down the wrong path by hallucinating AI.
On its website, the Canadian Medical Association calls AI-generated health advice “dangerous,” pointing out that hallucinations, algorithmic bias, and outdated facts “can mislead you and potentially harm your health” if patients choose to follow the generated advice.
Experts continue to advise people to consult human doctors and other licensed health care professionals rather than AI, which is a tragically tall ask given the many barriers to adequate care around the world.
At least AI Overviews sometimes seem aware of their own shortcomings. When queried about whether the feature should be relied upon for health advice, it happily told us about The Guardian’s investigation.
“A Guardian investigation found that Google’s AI Overviews displayed false and misleading health information that could put people at risk of harm,” read the AI Overviews response.
More on AI Overviews: Google’s AI summary recipes are ruining developers’ lives