Google AI puts users at risk by downplaying health disclaimers in AI Overviews

Google is putting people at risk of harm by minimizing the safety warnings that flag that its AI-generated medical advice could be wrong.

When answering questions about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help rather than relying solely on its summary. “AI Overviews will tell people when it is important to seek expert advice or verify the information presented,” Google has said.

But the Guardian found that the company does not include any such disclaimer when users are first given medical advice.

Google only shows the warning when users choose to request additional health information by clicking a button labelled “Show more”. Even then, the safety label appears only below all of the additional AI-generated medical advice, in a smaller, lighter font.

“This is for informational purposes only,” the disclaimer tells users who click through for more details after the initial summary and scroll to the very end of the AI Overview. “For medical advice or diagnosis, consult a professional. AI responses may include inaccuracies.”

Google did not deny that its disclaimers do not appear when users are first given medical advice, or that they appear below the AI Overview in a smaller, lighter font. AI Overviews “encourage people to seek professional medical advice”, a spokesperson said, and often mention seeking medical help “when appropriate” in the summary itself.

AI experts and patient advocates presented with the Guardian’s findings said they were concerned. They said the disclaimer serves an important purpose and should be prominently displayed when users are first given medical advice.

“The absence of disclaimers when users are initially given medical information creates many serious dangers,” said Pat Pataranutaporn, an assistant professor, technologist and researcher at the Massachusetts Institute of Technology (MIT) and a world-renowned expert on AI and human-computer interaction.

“First, even the most advanced AI models today still provide misinformation or exhibit sycophantic behavior, prioritizing user satisfaction over accuracy. In the context of health care, this could be really dangerous.

“Second, the issue is not just about AI limitations – it’s about the human side of the equation. Users may not provide all the necessary context or may misinterpret their symptoms and ask the wrong questions.

“Disclaimers act as an important intervention point. They disrupt this automatic trust and prompt users to engage more seriously with the information they receive.”

Gina Neff, a professor of responsible AI at Queen Mary University of London, said “the problem of poor AI Overviews is by design” and that Google is responsible for it. “AI Overviews are designed for speed, not accuracy, and this leads to inaccuracies in health information, which can be dangerous.”

In January, a Guardian investigation revealed that Google’s AI Overviews were putting people at risk of harm from inaccurate and misleading health information.

Neff said the investigation’s findings show why prominent disclaimers are necessary. “Google gets people to click through before they get any disclaimers,” she said. “People who read quickly may think the information they get from AI Overviews is better than it is, and we know this can lead to serious mistakes.”

Following the Guardian report, Google removed AI overviews from some but not all medical searches.

Sonali Sharma, a researcher at Stanford University’s Center for AI in Medicine and Imaging (AIMI), said: “The key issue is that these Google AI Overviews appear at the top of the search page and often provide a complete answer to the user’s question at the moment they are trying to get information as quickly as possible.

“For many people, because that single summary is there immediately, it creates a sense of reassurance that discourages searching further, scrolling through the full summary, or clicking ‘Show more’, where a disclaimer may appear.

“I think one of the harms that can occur in the real world is that AI Overviews can often contain partly true and partly false information, and it becomes very difficult to tell what is accurate unless you are already familiar with the subject matter.”

A Google spokesperson said: “It is incorrect to suggest that AI Overviews do not encourage people to seek professional medical advice. Beyond the explicit disclaimer, AI Overviews often refer to seeking medical attention directly within the overview, when appropriate.”

Tom Bishop, head of patient information at blood cancer charity Anthony Nolan, called for urgent action. “We know misinformation is a real problem, but when it comes to health misinformation, it’s potentially dangerous,” Bishop said.

“That disclaimer needs to be shown more prominently, so that people step back and think… ‘Is this something I need to check with my medical team rather than act on it? Can I take it at face value or do I really need to look at it in more detail and see how this information relates to my own specific medical condition?’ Because that’s the key here.”

He added: “I’d want this disclaimer to be at the top. I’d want it to be the first thing you see. And ideally it would be the same size font as you see there, not something that’s smaller and easier to miss.”
