How ‘trusted authority’ Google AI Overviews are putting public health at risk

Study shows Google AI Overviews cite YouTube more than any medical site for health questions

Do I have the flu or Covid? Why do I wake up feeling tired? What is causing my chest pain? For more than two decades, typing a medical question into the world’s most popular search engine has generated a list of links to websites with answers. Search those health-related questions on Google today and the answers will likely be written by artificial intelligence.

Google chief executive Sundar Pichai first laid out the company’s plans to incorporate AI into its search engine in May 2024 at its annual conference in Mountain View, California. Starting that month, he said, US users would see a new feature, AI Overviews, that would provide AI-generated summaries on top of traditional search results. It is the biggest change to Google’s core product in a quarter of a century. By July 2025, the technology had expanded to more than 200 countries in 40 languages, serving AI Overviews to 2 billion people every month.

With the rapid rollout of AI Overviews, Google is racing to protect its traditional search business, which generates about $200 billion (£147 billion) a year, before advanced AI rivals can derail it. “We are moving forward in AI and shipping at an incredible pace,” Pichai said last July. AI Overviews in particular were “performing well”, he said.

But experts say AI Overviews carry risks. They use generative AI to provide snapshots of information about a topic or question, adding conversational answers on top of traditional search results in the blink of an eye. They may cite sources, but they do not necessarily know when a source is wrong.

Google’s chief executive, Sundar Pichai, hopes AI Overviews can help maintain its online search revenue. Photograph: Kylie Cooper/Reuters

Within weeks of the feature launching in the US, users encountered falsehoods across a range of topics. One AI Overview said that Andrew Jackson, the seventh US president, graduated from college in 2005. Liz Reid, Google’s head of search, responded to the criticism in a blog post. She acknowledged that “in some cases” AI Overviews had misinterpreted language on web pages and presented incorrect information. “At the scale of the web, with billions of queries coming in every day, it’s natural to have some oddities and errors,” she wrote.

But when those questions are about health, accuracy and context are essential and non-negotiable, experts say. Google is facing growing scrutiny over its AI Overviews for medical questions after a Guardian investigation found people were being put at risk of harm from inaccurate and misleading health information.

The company says AI Overviews are “reliable”. But the Guardian found that some medical summaries served up inaccurate health information and put people at risk of harm. In one case, which experts called “really alarming”, Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this is exactly the opposite of what should be recommended, and could increase patients’ risk of dying from the disease.

In another “alarming” example, the company provided false information about vital liver function tests, which could mislead people with serious liver disease into thinking they were healthy. The ranges the AI Overview described as normal may differ significantly from what is actually considered normal, experts said. The summaries could lead seriously ill patients to mistakenly believe their test results are fine and skip follow-up appointments.

AI Overviews about women’s cancer tests also provided “grossly inaccurate” information, which experts said could result in people dismissing real symptoms.

Google initially tried to downplay the Guardian’s findings. Despite the assessments of the physicians the Guardian consulted, the company said the AI Overviews that worried experts linked to reputable sources and recommended seeking expert advice. A spokesperson said: “We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.”

However, within days the company had removed some AI Overviews for health-related questions flagged by the Guardian. A spokesperson said: “We do not comment on individual removals within Search. In cases where an AI Overview misses some context, we work to make broader improvements, and we also take action under our policies where appropriate.”

While experts have welcomed the removal of some AI summaries for health-related questions, many remain concerned. “Our big concern with all of this is that we’re flagging a single search result and Google can turn off AI Overviews for that, but it’s not dealing with the bigger issue of AI Overviews for health,” says Vanessa Hebditch, director of communications and policy at the British Liver Trust, a liver health charity.

“There are still plenty of examples of Google AI Overviews that are giving people inaccurate health information,” says Sue Farrington, president of the Patient Information Forum, which promotes evidence-based health information for patients, the public and healthcare professionals.

A new study has raised further concerns. When researchers analyzed responses to more than 50,000 health-related searches in Germany to see which sources AI Overviews relied on most, one result immediately stood out: the most cited domain was YouTube.

“This matters because YouTube is not a medical publisher,” researchers wrote. “It’s a general-purpose video platform. Anyone can upload content there (for example, board-certified physicians, hospital channels, but also wellness influencers, life coaches, and creators with no medical training).”

Experts say that in medicine, it is not only a question of where answers come from, or how accurate they are, but also how they are presented to users. “With AI Overviews, users are no longer faced with sources that they can compare and critically evaluate,” says Hannah van Kolfschooten, a researcher in AI, health and law at the University of Basel. “Instead, they are presented with a single, confident, AI-generated answer that projects medical authority.

“This means the system does not simply reflect health information online, but actively reconstitutes it. When that answer is built on sources that were never designed to meet medical standards, such as YouTube videos, it creates a new form of unregulated medical authority online.”

Google says AI Overviews are designed to surface information supported by top web results, and to include links to pages that support the information presented in the summary. The company told the Guardian that people can use these links to dig deeper into a topic.

But the single blocks of text in AI Overviews, which combine health information from multiple sources, can cause confusion, says Nicole Gross, an associate professor in business and society at the National College of Ireland.

“Once exposed to an AI summary, users are much less likely to do further research, meaning they are deprived of the opportunity to critically evaluate and compare information or even deploy their common sense when it comes to health-related issues.”

Experts also raised other concerns with the Guardian. They say that even when AI Overviews provide accurate facts about a specific medical topic, the summaries cannot distinguish between strong evidence from randomized trials and weaker evidence from observational studies. They added that the summaries can also leave out important caveats about that evidence.

Listing such claims next to each other in an AI Overview may also give the impression that some are better founded than they actually are. And as AI Overviews evolve, the answers may change even when the science has not. “It means people are getting different answers depending on how they search, and that’s not good enough,” says Athena Lamnisos, chief executive of the Eve Appeal cancer charity.

Google told the Guardian that the links included in AI Overviews were dynamic and changed based on the information most relevant, useful and timely to a search. If AI Overviews misinterpret web content or miss some context, the company said, it uses those errors to improve its systems and takes action where appropriate.

The biggest concern, Gross says, is that bogus and dangerous medical information or advice in AI Overviews “gets translated into the patient’s everyday practices, routines, and lives, even in customized forms”. “In health care, it can become a matter of life and death.”
