Giving your health information to a chatbot is, not surprisingly, a terrible idea

Every week, more than 230 million people ask ChatGPT for health and wellness advice, according to OpenAI. The company says many people see chatbots as a “sidekick” that helps them navigate the maze of insurance, file paperwork, and become better self-advocates. In return, it hopes you’ll trust its chatbot with details about your diagnosis, medications, test results, and other private medical information. But while talking to a chatbot may start to feel a bit like a doctor’s visit, it isn’t one. Tech companies are not bound by the same obligations as medical providers, experts tell The Verge, and it would be wise to consider carefully whether you want to hand over your records.

Health and wellness is rapidly emerging as a major battleground for AI labs, and a test of how willing users are to welcome these systems into their lives. This month, two of the industry’s biggest players made bold moves into the medical field. OpenAI released ChatGPT Health, a dedicated tab within ChatGPT designed for users to ask health-related questions, calling it a more secure and personalized environment. Anthropic launched Claude for Healthcare, a “HIPAA-ready” product that can be used by hospitals, health providers, and consumers. (Notably absent is Google, whose Gemini chatbot is one of the world’s most capable and widely used AI tools, although the company did announce an update to its MedGemma medical AI model for developers.)

OpenAI actively encourages users to share sensitive information with ChatGPT Health — medical records, lab results, and health and wellness data from apps like Apple Health, Peloton, Weight Watchers, and MyFitnessPal — in exchange for deeper insights. The company states that users’ health data will be kept confidential, will not be used to train AI models, and is protected by measures meant to keep it secure and private. OpenAI says ChatGPT Health conversations will also be held in a separate part of the app, with users able to view or delete health “memories” at any time.

OpenAI’s assurances that it will keep users’ sensitive data safe are complicated by the company’s launch of a similar-sounding product, with stricter security protocols, around the same time as ChatGPT Health. That tool, called ChatGPT for Healthcare, is part of a wider range of products sold to businesses, hospitals, and physicians who work directly with patients. OpenAI’s suggested uses include streamlining administrative tasks, such as drafting clinical letters and discharge summaries, and helping physicians gather the latest medical evidence to improve patient care. As with other enterprise-grade products the company sells, it offers more security than what general consumers, especially free users, receive, and OpenAI says it is designed to comply with the privacy obligations required of the medical sector. Given the similar names and launch dates (ChatGPT for Healthcare was announced the day after ChatGPT Health), it’s easy to confuse the two and assume the consumer-facing product has the same level of security as the more medically oriented one. Many people I spoke to while reporting this story did just that.

Even if you trust a company’s pledge to protect your data… it may change its mind.

Not every security assurance should be taken at face value, however. Users of tools like ChatGPT Health have little recourse against misuse of their data or violations of usage and privacy policies, experts tell The Verge. Since most states have not enacted comprehensive privacy laws, and there is no comprehensive federal privacy law, data protection for AI tools like ChatGPT Health “will largely depend on what companies promise in their privacy policies and terms of use,” says Sarah Gerke, a law professor at the University of Illinois Urbana-Champaign.

Even if you trust a company’s pledge to protect your data (OpenAI says it encrypts health data by default), it may simply change its mind. “Although ChatGPT states in its current terms of use that it will keep this data confidential and will not use it to train its models, you are not protected by the law, and it is allowed to change the terms of use over time,” explains Hannah van Kolfschooten, a digital health law researcher at the University of Basel in Switzerland. “You have to trust that ChatGPT doesn’t do that.” Carmel Schachter, assistant clinical professor of law at Harvard Law School, agrees: “There is very limited protection. Some of it is their word, but they can always go back and change their privacy practices.”

Schachter says assurances that a product complies with data protection laws governing the healthcare sector, like the Health Insurance Portability and Accountability Act, or HIPAA, shouldn’t provide much comfort either. While HIPAA is useful as a guide, she points out, a company that voluntarily brings itself into compliance faces little risk if it fails to follow through; voluntary compliance is not the same as being legally bound. “The importance of HIPAA is that if you mess up, there’s enforcement.”

There’s a reason medicine is a highly regulated field

The concern goes beyond privacy, too. There’s a reason medicine is a highly regulated field: errors can be dangerous, even fatal. There is no shortage of examples of chatbots confidently delivering false or misleading health information, such as when a man developed a rare condition after asking ChatGPT about removing salt from his diet and the chatbot suggested he replace it with sodium bromide, a compound historically used as a sedative. Or when Google’s AI Overviews wrongly advised people with pancreatic cancer to avoid high-fat foods, the exact opposite of what they should do.

To address this, OpenAI has been clear that its consumer-facing tool is designed to be used in close collaboration with physicians and is not intended for diagnosis or treatment. Tools designed to diagnose and treat are classified as medical devices and are subject to strict regulations, such as clinical trials to prove they work and safety monitoring once deployed. Although OpenAI openly acknowledges that one of ChatGPT’s key use cases is supporting users’ health and well-being (remember the 230 million people seeking advice every week), its claim that the tool is not intended as a medical device carries a great deal of weight with regulators, Gerke explains. “The manufacturer’s declared intended use is a key factor in medical device classification,” she says, meaning that companies that say their tools are not for medical use will largely escape oversight, even if the products are being used for medical purposes. This underlines the regulatory challenges posed by technologies such as chatbots.

For now, at least, this disclaimer keeps ChatGPT Health out of the scope of regulators like the Food and Drug Administration, but van Kolfschooten says it’s fair to ask whether such a tool should actually be classified as a medical device and regulated as such. It’s important to look at how it’s being used, she explains, not just at what the company says. When announcing the product, OpenAI suggested that people could use ChatGPT Health to help interpret lab results, track health behaviors, or reason through treatment decisions. If a product is doing all that, one could plausibly argue it falls under the US definition of a medical device, she says, suggesting that Europe’s stronger regulatory framework may be why the product is not yet available there.

“When a system feels personalized and has an aura of authority, medical disclaimers will not necessarily challenge people’s trust in the system.”

Despite saying that ChatGPT should not be used for diagnosis or treatment, OpenAI has gone to great lengths to show that ChatGPT is a pretty competent doctor, and to encourage users to tap it for health-related questions. The company highlighted health as a key use case when launching GPT-5, and CEO Sam Altman invited a cancer patient and her husband on stage to discuss how the tool helped her understand her diagnosis. The company says it evaluates ChatGPT’s medical skills with HealthBench, a benchmark it developed with more than 260 physicians across dozens of specialties that “tests how well AI models perform in realistic health scenarios,” though critics note it is not very transparent. Other studies, often small, limited, or run by the company itself, also hint at ChatGPT’s clinical potential, showing that in some cases it can pass medical licensing exams, communicate better with patients, outperform doctors in diagnosing disease, and even help doctors make fewer mistakes when used as a tool.

Van Kolfschooten says OpenAI’s efforts to present ChatGPT Health as an authoritative source of health information could weaken any disclaimers, including those telling users not to rely on it for medical purposes. “When a system feels personalized and has an aura of authority, medical disclaimers will not necessarily challenge people’s trust in the system.”

Companies like OpenAI and Anthropic are hoping to earn that trust as they jockey for position in what they see as the next big market for AI. Statistics showing how many people already use AI chatbots for health suggest they may be onto something, and given serious health disparities and the difficulty many people face in accessing even basic care, that could be a good thing. At least, it could be, if that trust is well founded. We trust healthcare providers with our personal information because the profession has earned that trust. It’s not yet clear whether an industry with a reputation for moving fast and breaking things has earned the same.
