‘Very dangerous’: a Mind mental health expert on Google’s AI Overviews

A year-long commission of inquiry into AI and mental health has been launched by Mind after a Guardian investigation revealed how Google’s AI Overviews, which are shown to 2 billion people every month, give people “very dangerous” mental health advice.

Here, Rosie Weatherly, information content manager at the largest mental health charity in England and Wales, describes the risks posed to people by AI-generated summaries that appear above search results on the world’s most visited website.

“Over three decades, Google designed and delivered a search engine where trusted and accessible health content could rise to the top of results.

“Searching online for information was not perfect, but it generally worked well. Users had a good chance of clicking through to a trusted health website that answered their query.

“AI Overviews have replaced that richness with a clinical-sounding summary that gives the illusion of certainty.

“It’s a tempting trade-off, but not a responsible one. And it often ends the information-seeking journey prematurely, leaving the user with, at best, a half-baked answer.

“I set myself and my team of mental health information specialists at Mind a task: to spend 20 minutes doing a search using the questions we know people with mental health problems use. None of us needed 20.

“Within two minutes, Google had presented me with an AI Overview that assured me starvation was healthy. It told one colleague that mental health problems are caused by chemical imbalances in the brain. Another was told that her imagined stalker was real, and a fourth was presented with a false claim of a 60% benefit for mental health conditions. Needless to say, none of the above is true.

Rosie Weatherly said that, during a test conducted by Mind experts, Google’s AI Overviews gave misinformation, including that starvation is healthy. Photograph: Jill Mead/The Guardian

“In each of these examples we are seeing how AI Overviews flatten information about highly sensitive and nuanced areas into neat answers. And when you strip out important context and nuance and present information the way AI Overviews do, almost anything can seem plausible.

“This process is particularly harmful to those who are likely to be in crisis at some level.

“A multibillion-dollar company like Google, which profits from AI Overviews, should dedicate more resources to providing accurate information. When individuals, organizations or indeed journalists flag harmful overviews, its response seems limited to reactively retraining or removing them. This whack-a-mole style of problem-solving does not seem serious, and it is not commensurate with the size and resources of a company that profits from them.

“Search engines have evolved to protect people from instant access to the most harmful search results, such as suicide methods. But if you search while unwell, the risk remains that you will be served harmful inaccuracies and half-truths, presented in cool and reassuring copy as unquestionable, neutral facts carrying the seal of approval of the world’s largest search engine.

“When searching for crisis information, AI Overviews randomly aggregated contradictory signposting into long lists.

“AI probably has huge potential to improve lives, but right now the risks are really worrying. Google only protects you from the potential flaws of AI Overviews if it thinks you are at serious risk. People need and deserve credible, empathetic, careful and nuanced information at all times.”
