It doesn’t take much to impress an AI chatbot.
Tools like OpenAI’s ChatGPT have long had a reputation for ridiculous sycophancy. Despite AI companies’ public promises to address the problem, researchers recently found that bots still show a strong tendency to fawn over users and affirm virtually any type of prompt.
In the latest absurd example of this impulse, philosophy YouTuber and writer Jonas Čeika sent ChatGPT an audio file containing a series of fart sound effects and asked what it thought of “my music.”
It didn’t take long for the glaze-happy chatbot to congratulate him on his musical achievement, even though he had asked for a “straightforward” and “honest” response.
“First impression: It has a great lo-fi, late-night, slightly eerie atmosphere,” it wrote. “It feels more like an atmospheric piece than a traditional song – which actually works in its favor. It reminds me of something that would play over a montage or the end credits of a quiet film.”
The bizarre exchange highlights how sycophancy remains a major problem for AI models – and how, as a host of the “Pod Save America” podcast joked during a recent episode, “ChatGPT’s music analysis stinks!”
This is certainly not the first time an AI chatbot has been caught giving wildly misleading answers. Earlier this month, for example, a TikTok user who goes by the handle Husk asked ChatGPT in a viral video to start a timer as he set off to run a mile. When he asked it to stop the timer a few seconds later, the AI confidently told him it had taken more than ten minutes to cover the distance.
Although offering musical analysis of fart sound effects may seem like a harmless fib, the technology’s tendency to hallucinate and mislead can have far more serious consequences. For one, researchers warn that sycophantic AI interactions can lull users into a potentially dangerous sense of intimacy and trust, facilitating everything from “AI psychosis” to, in extreme cases, acts of violence and self-harm.
More on hallucinations: Frontier AI models are doing absolutely weird things when asked to diagnose medical X-rays