OpenAI has launched ChatGPT Health, which captures your entire medical record, but warns not to use it for “diagnosis or treatment.”


AI chatbots may be extremely popular, but they've been known to dish out seriously strange and potentially dangerous health advice amid a flood of readily available misinformation, and experts are worried.

Their advent has turned countless users into armchair experts who often rely on outdated, misrepresented, or completely made-up advice.

A recent investigation by The Guardian, for example, found that Google's AI Overviews, which accompany most search results pages, serve up plenty of inaccurate health information that could pose serious risks if followed.

But unfazed by repeated warnings from experts that AI's health advice shouldn't be trusted, OpenAI is doubling down, launching a new feature called ChatGPT Health that will draw on your medical records to generate responses that are "more relevant and useful to you."

Yet despite being "designed in close collaboration with physicians" and built on "strong privacy, security and data controls," the feature is "designed to support, not replace, medical care." In fact, it ships with a ridiculously self-defeating warning: the health feature is "not intended for diagnosis or treatment."

“ChatGPT Health helps people take a more active role in understanding and managing their health and wellness – supporting, not replacing, physician care,” the company’s website reads.

In practice, users will almost certainly use it for exactly the kind of health advice that OpenAI warns against in the fine print, which is likely to cause fresh embarrassments for the company.

That would only aggravate the company's existing problems. As Business Insider reports, ChatGPT is "turning everyone into amateur lawyers and doctors," to the dismay of legal and medical professionals.

Miami-based medical malpractice attorney Jonathan Friedin told the publication that people are using chatbots like ChatGPT to fill out his firm's client contact forms.

“We’re seeing a lot of callers who think they have a case because ChatGPT or Gemini told them the doctor or nurse was below the standard of care in a number of different ways,” he said. “While this may be true, it does not necessarily translate into a viable case.”

Then there's the fact that users are willing to surrender their medical histories, including highly sensitive personal information, a decision OpenAI is now actively encouraging with ChatGPT Health, even though federal legislation like HIPAA doesn't apply to consumer AI products.

For example, billionaire Elon Musk encouraged people to upload their medical data to his ChatGPT competitor Grok last year, leading to a flood of confusion as users received hallucinated diagnoses after sharing their X-rays and PET scans.

Given the AI industry's spotty track record on privacy protection and its struggles with major data leaks, all of these risks are as relevant as ever.

"New AI health tools promise to empower patients and promote better health outcomes, but health data is the most sensitive information people can share and must be protected," Andrew Crawford, senior counsel at the Center for Democracy and Technology, told the BBC.

"Especially as OpenAI moves to explore advertising as a business model, it is important to ensure separation between this type of health data and the memories ChatGPT retains from other conversations," he said. "Since it is up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information at real risk."

"ChatGPT is bound only by its own disclosures and promises, so without any meaningful limits like regulation or law, ChatGPT can change its terms of service at any time," Sarah Geoghegan, senior counsel at the Electronic Privacy Information Center, told The Record.

Then there are also concerns over highly sensitive data, such as reproductive health information, being handed over to police against users' wishes.

"How does OpenAI handle [law enforcement] requests?" Crawford told The Record. "Do they just hand over the information? Are the users informed in any way?"

He added, “There are a lot of questions out there that I still don’t have good answers to.”

More on AI and health advice: Google’s AI caught giving dangerous “health” advice
