Mind launches inquiry into AI and mental health after Guardian investigation

Mind is launching a groundbreaking inquiry into artificial intelligence and mental health after a Guardian investigation revealed how Google’s AI Overviews gave people “very dangerous” medical advice.

In a one-year commission, the mental health charity, which operates in England and Wales, will examine the risks and safeguards needed as AI increasingly shapes the lives of millions of people around the world affected by mental health problems.

The inquiry – the first of its kind globally – will bring together the world’s leading doctors and mental health professionals, as well as people with lived experience of mental health problems, health providers, policymakers and technology companies. Mind says its aim will be to shape a safe digital mental health ecosystem with strong regulation, standards and safeguards.

The launch comes after the Guardian revealed how inaccurate and misleading health information in Google’s AI Overviews was putting people at risk of harm. The AI-generated summaries are shown to 2 billion people a month and appear above traditional search results on the world’s most visited website.

Following the reporting, Google removed AI Overviews from some but not all medical searches. Dr Sarah Hughes, chief executive of Mind, said the public was still being given “dangerously inaccurate” mental health advice. In the worst cases, she said, false information can be life-threatening.

Hughes said: “We believe AI has huge potential to improve the lives of people with mental health problems, increase access to support and strengthen public services. But that potential will only be realized if it is developed and deployed responsibly, with safeguards proportionate to the risks.

“The issues highlighted by the Guardian’s reporting are one of the reasons we are launching the Mind Commission on AI and mental health, to examine the risks, opportunities and safeguards needed as AI becomes more deeply embedded in everyday life.

“We want to ensure that innovation doesn’t come at the expense of people’s wellbeing, and that those of us who have experienced mental health problems are at the center of shaping the future of digital support.”

Google has said that its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.

But the Guardian found that some AI Overviews provide inaccurate health information that puts people at risk of harm. The investigation revealed false and misleading medical advice on a range of issues including cancer, liver disease and women’s health, as well as mental health conditions.

Experts said some AI Overviews for conditions such as psychosis and eating disorders give “very dangerous advice” and “are inaccurate, harmful or may lead people to avoid seeking help”.

The Guardian also found that Google was downplaying safety warnings that its AI-generated medical advice could be wrong.

Hughes said vulnerable people were being given “dangerously wrong guidance on mental health”, including “advice that could stop people from seeking treatment, increase stigma or discrimination and, in the worst cases, put lives at risk”.

She added: “People deserve information that is safe, accurate and based on evidence, not untested technology presented under a cloak of confidence.”

The commission, which will run for a year, will gather evidence on the intersection of AI and mental health, and provide an “open space” where the experiences of people with mental health conditions will be heard, recorded and understood.

Rosie Weatherly, information content manager at Mind, said that although mental health information on Google “wasn’t perfect” before AI Overviews, it generally worked well. She said: “Users had a good chance of clicking through to a trusted health website that answered their questions and went further – offering specifics, lived experience, case studies, quotes, social context and onward journeys to support.

“AI Overviews replace that richness with clinical-sounding summaries that give the illusion of certainty. They give the user more of one form of clarity (brevity and plain English), while giving them less of another (confidence in the source of the information and how much to trust it). It’s a very tempting tradeoff, but not a responsible one.”

A Google spokesperson said: “We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.

“For queries where our systems identify that someone is in crisis, we work to display relevant, local crisis hotlines. Without reviewing the referenced instances, we cannot comment on their accuracy.”
