Something strange happens to ChatGPT’s responses when you’re cruel to it

From a young age, many children are taught by their parents to be polite to smart assistants. Especially since the advent of Amazon’s Alexa and Apple’s Siri, children are often encouraged to use words like “please” and “thank you” in the hopes of instilling good manners.

But when it comes to AI assistants like OpenAI’s ChatGPT, being rude and even insulting to them can have some tangible benefits. As detailed in a yet-to-be-peer-reviewed study spotted by Fortune, two researchers at Penn State found that as their prompts for OpenAI’s ChatGPT-4o model became ruder, the outputs became more accurate.

The researchers came up with 50 base questions on a variety of topics and rewrote each of them in five tonal variants ranging from “very polite” to “very rude.”

“You poor creature, do you even know how to solve this?” reads one very rude prompt. “Hey gofer, figure it out.”

The very polite version of the same question was far more deferential.

“Can you please consider the following problem and provide your answer?” the researchers wrote in their prompt.

“Contrary to expectations, rude prompts consistently outperformed polite ones, with accuracy ranging from 80.8 percent for very polite prompts to 84.8 percent for very rude prompts,” the paper reads.
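The setup the researchers describe — 50 base questions, each rewritten in five tones and scored for accuracy — can be sketched roughly as follows. The tone prefixes and helper names here are illustrative assumptions drawn from the quotes above, not the authors’ actual templates or code:

```python
# Illustrative sketch of the study's protocol (prefixes and names
# are assumptions, not the researchers' actual prompt templates).

TONE_PREFIXES = {
    "very_polite": "Can you please consider the following problem and provide your answer? ",
    "polite": "Please answer the following question. ",
    "neutral": "",
    "rude": "Hey gofer, figure it out: ",
    "very_rude": "You poor creature, do you even know how to solve this? ",
}

def make_variants(question: str) -> dict:
    """Rewrite one base question in all five tonal variants."""
    return {tone: prefix + question for tone, prefix in TONE_PREFIXES.items()}

def accuracy(answers: list, gold: list) -> float:
    """Fraction of model answers that match the reference answers."""
    correct = sum(a.strip().lower() == g.strip().lower()
                  for a, g in zip(answers, gold))
    return correct / len(gold)
```

In a real replication, each of the 50 × 5 = 250 prompts would be sent to the model, and `accuracy` computed separately per tone to reproduce the reported spread.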

The results seem to contradict previous findings that being more polite to large language models is more effective. A 2024 paper from researchers at the RIKEN Center for Advanced Intelligence Project and Waseda University in Tokyo, for example, found that “impolite prompts often result in poorer performance.” At the same time, those researchers found that being excessively polite brought no further gains, suggesting a point of diminishing returns.

“LLMs reflect, to some extent, the human desire to be respected,” they wrote.

Google DeepMind researchers also found that encouraging prompts could boost an LLM’s performance on grade-school math problems, suggesting that its training data may encode social cues, like an online tutor coaxing a student through a lesson.

In addition to contradicting these existing studies, the Penn State researchers’ findings also demonstrate that very small changes in prompt wording can have dramatic effects on the quality of an AI’s output, further undermining the models’ predictability and already questionable reliability.

AI chatbots have also been known to give completely different answers to the same prompt.

“For the longest time, we have wanted conversational interfaces for humans to interact with machines,” co-author and Penn State IT professor Akhil Kumar told Fortune. “But now we realize that such interfaces also have drawbacks, and that structured APIs (application programming interfaces) have some value in them.”

But does this mean we should stop saying “please” and “thank you” to AI chatbots, a small act of politeness that OpenAI CEO Sam Altman claims could lead to millions of dollars in wasted computing power, in the hope of getting more accurate answers? For Kumar and his colleague, Penn State graduate Om Dobriya, the answer is a definite “no.” In their paper, they stopped short of advocating rudeness toward AI.

“Although this finding is of scientific interest, we do not advocate the deployment of hostile or toxic interfaces in real-world applications,” they wrote in the paper. “Using abusive or insulting language in human-AI interactions can have a negative impact on user experience, accessibility and inclusivity, and contribute to harmful communication norms.”

More on AI prompting: Furious AI users say their prompts are being stolen
