Millions of Americans are talking to AI instead of going to the doctor, and it’s giving them horrifically flawed medical advice

by ai-intensify
While Google’s AI may no longer be recommending eating rocks or confidently telling users to put glue on their pizza, even state-of-the-art AI chatbots are woefully incompetent at providing medical advice.

In a new study published this week in the journal JAMA Network Open, researchers asked 21 frontier large language models (LLMs) to “play doctor” when faced with realistic symptoms that a real patient might ask about.

The results painted a grim picture. The AIs’ failure rate exceeded 80 percent when they were given vague symptoms that could match more than one condition, and even on more straightforward cases that included physical exam findings and laboratory results, they still failed 40 percent of the time. The researchers also found that, unlike human practitioners, “LLMs fail prematurely on single answers,” resulting in “weak performance” across all models.

“Despite continued improvements, off-the-shelf large language models are not ready for unsupervised clinical-grade deployment,” said corresponding author Marc Succi, MD, associate chair of innovation and commercialization at Massachusetts General Hospital, in a statement. “Differential diagnoses are at the heart of clinical reasoning and are the basis of the ‘art of medicine’ that AI cannot currently replicate,” he said.

Translated into the real world, an AI that jumps to conclusions without the full picture can have disastrous consequences. If, say, a person asks a chatbot about a rash or a sudden-onset cough, they may receive misleading information and potentially dangerous advice.

The results highlight the significant risks of relying on AI for life-or-death health advice, a worrying trend that is already underway across the country. According to a recent survey by the West Health-Gallup Center on Healthcare in the US, one in four American adults, the equivalent of 66 million people, are already seeking medical advice from ChatGPT and other chatbots like it.

Respondents often said they were seeking information both before and after seeing a health care professional. In many cases, though, they abandoned real-world medical help altogether after talking to a chatbot. Of those who sought health advice from AI, 14 percent, the equivalent of more than nine million Americans, said they never saw a provider they would otherwise have seen had it not been for the technology.

According to the survey, 27 percent said they did not want to pay for doctor visits because of AI consultations, while 14 percent said they were unable to afford them. Some participants said they did not have the time or the ability to see a doctor.

“Artificial intelligence is already reshaping the way Americans seek health information, make decisions, and connect with providers, and health systems must keep pace,” Tim Lash, president of the West Health Policy Center, said in a statement.

Overall, both studies paint a dire picture of the current health care landscape in America. Not only are millions of Americans overly reliant on AI tools, but those tools are often giving them flawed advice, and users are choosing not to seek help from far more knowledgeable professionals.

AI has already faced a huge amount of criticism from experts for providing poor medical advice, from Google’s AI Overviews delivering dangerously inaccurate or out-of-context information to transcription tools used by doctors inventing non-existent drugs.

Even when the information it provides is incorrect, AI gives patients a sense of certainty. Nearly half of respondents in the latest survey said that talking to a chatbot about medical problems made them feel more confident when talking to a provider, 22 percent said it helped them identify issues earlier, and 19 percent said it allowed them to avoid unnecessary tests or procedures.

At the same time, many Americans remain deeply skeptical of AI’s medical advice. Nearly a third of participants who said they consulted AI for health-related issues said they did not trust the tool. One in ten respondents said AI had given them potentially unsafe advice.

One thing is certain: the AI industry is in strong need of regulatory oversight.

More on AI and medical advice: Frontier AI models are doing absolutely weird things when asked to diagnose medical X-rays
