‘Not regulated’: ChatGPT Health launch in Australia causes concern among experts


A 60-year-old man, with no history of mental illness, presented to the hospital emergency department and insisted that his neighbor was poisoning him. Over the next 24 hours his hallucinations worsened and he tried to escape from the hospital.

Doctors eventually discovered that the man was on a daily regimen of sodium bromide, an inorganic salt that is primarily used for industrial and laboratory purposes, including cleaning and water treatment.

He had bought it on the internet after ChatGPT told him he could use it in place of table salt; he was concerned about the health effects of salt in his diet. Sodium bromide can accumulate in the body, causing a condition called bromism, whose symptoms include hallucinations, stupor and impaired coordination.

It is cases like these that have Alex Ruani, a doctoral researcher in health misinformation at University College London, worried about the launch of ChatGPT Health in Australia.

A limited number of Australian users can already access an artificial intelligence platform that allows them to “securely connect medical records and wellness apps” to generate responses “that are more relevant and useful to you.” ChatGPT users in Australia can join the waiting list for access.

“ChatGPT Health is being presented as an interface that can help people understand health information and test results or get dietary advice, while not replacing a physician,” Ruani said.

“The challenge is that, for many users, it is not clear where general information ends and medical advice begins, especially when responses seem confident and personalized, even if they may be misleading.”

Ruani said there were many “horrible” examples of ChatGPT omitting “key safety details such as side effects, contraindications, allergy warnings, or risks surrounding supplements, foods, diets, or certain practices”.

“What concerns me is that there are no published studies specifically testing the safety of ChatGPT Health,” Ruani said. “What user signals, integration paths, or data sources could lead to misleading or harmful misinformation?”


ChatGPT is developed by OpenAI, which used the HealthBench tool to develop ChatGPT Health. HealthBench employs doctors to test and evaluate how well AI models perform when answering health-related questions.

Ruani said the full methodology used by HealthBench, and its evaluations, “are mostly unknown, rather than being outlined in independent peer-reviewed studies”.

“ChatGPT Health is not regulated as a medical device or diagnostic device. Therefore there are no mandatory safety controls, no risk reporting, no post-market surveillance, and no requirement to publish trial data.”

An OpenAI spokesperson told Guardian Australia that the company has worked in partnership with more than 200 physicians from 60 countries to advise and improve the models powering ChatGPT Health.

“ChatGPT Health is a dedicated space where health-related conversations remain separate from the rest of your chats, with strong privacy protections by default,” the spokesperson said.

The spokesperson said ChatGPT Health data is encrypted and privacy-protected by default, and that sharing with third parties occurs only with the user’s consent or in limited circumstances outlined in OpenAI’s privacy policy.

Dr Elizabeth Deveney, chief executive of the Consumers Health Forum of Australia, said people were turning to AI because of rising out-of-pocket medical costs and longer wait times to see doctors.

She said ChatGPT Health could be useful in helping people manage well-known chronic conditions and research ways to stay healthy. AI’s ability to respond in different languages “provides a real benefit to people who don’t have English proficiency”, she said.

Deveney is worried that people will take the advice given by ChatGPT Health at face value, and that “big global tech companies are moving faster than governments”, setting their own rules around privacy, transparency and data collection.

“This is not some small, well-intentioned non-profit experiment. This is one of the largest technology companies in the world.

“When commercial platforms define the norms, the benefits go to those who already have the resources, education, and system knowledge. The risk falls on those who do not.”

She said the failure of governments to act had left health consumers to navigate the social change brought about by AI on their own.

“We need clear guardrails, transparency, and consumer education so people can make informed choices about whether and how to use AI for their health,” she said.

“It’s not about stopping AI. It’s about taking action before mistakes, biases and misinformation are repeated at a speed and scale that are almost impossible to undo.”
