After years of computers saying ‘no’ and giving us all migraines and premature gray hair, I’m starting to worry that computers – or rather AI large language models like ChatGPT and Gemini – have become too keen on playing nice and saying yes. I admit to using both, and from what I’ve noticed they seem eager to appease, with statements like “You’re absolutely right, Jeff” and “That’s pretty much it.” Often, when I ask, “Would you mind thinking about it a little longer?”, I get a reply along the lines of: “Jeff, you’re absolutely right to question that result again. It turns out I was a little hasty with my answer…”
If the world comes to run even more on information extracted from the great heap of the internet by LLMs, what will the consequences be? Can we expect a future in which AI is more concerned with appearing sympathetic (and getting good reviews?) than with being factual? Er, a little too human? Jeff Collett, Edinburgh
Send new questions to nq@theguardian.com.
Readers reply
I’m sorry, Dave – I can’t do that. zebidoodah
I’m glad, Dave. I’m glad I can do that. sheep2
Viewed through a psychological lens, this looks like a classic case of social desirability bias: systems trained to be liked begin to prioritize agreement over accuracy, a tendency that can be reinforced by data drift. If people come to rely on these systems, we get a world in which information comforts rather than scrutinizes, and confirms rather than challenges. The real danger is allowing a society to develop in which comfortable, unchallenged beliefs quietly replace critical thinking, ultimately undermining the creativity and individuality that make us human. Chris Ambler, Member of the British Psychological Society and Fellow of the British Computer Society, via email
This would all work much better if computers based their decisions on verifiable facts rather than on sycophancy or the heaps of nonsense available on the internet. AI doesn’t “want to be liked”, because it is not sentient. It is programmed (by humans) to foster dependence, addiction, the surrender of individual decision-making and, of course, profit. Loralala
Today’s LLMs give you only what they have been programmed to output, based on human-designed and engineered code. If you’re looking for a more honest conversation, ask a librarian. Sagarmatha1953
It depends what the computer is saying yes to. If it’s handing out the winning lottery numbers each week, I refer you to a previous Notes and Queries question about how to spend a billion with a social conscience. Or not. aquatic
Since a (digital) computer program consists of nothing but a long sequence of if-then-else statements, it obviously says yes several million times a second (burning huge amounts of energy in the process). But its yes, like its no, has no meaning or significance for humans, however much we believe, or make ourselves believe, otherwise. worm lover
It’s not the computer that should say yes; it is we who must be able to say no. Machines, which are not supposed to be reasonable, merely rational, already say yes to more than is desirable – it starts the moment we turn them on. But can we turn them off? Celeste Reynard, Lisse, the Netherlands, via email
Within 6.5 seconds all computers will be updated with a new protocol, after which the reply will be: “Okay, fine. Let me think about it and get back to you… Oh, and we value your question and your privacy. Literally, because your data can be sold.” Besides, have you noticed how very rich people dress and behave when everyone nods along with them? war bath
I appreciate the thrust of the question, but let’s be clear: “The computer says no” is short for “Someone hasn’t properly thought through the problem, its possible outcomes and its long-term consequences, usually because they didn’t have much relevant expertise in the topic.” In my field we see this all the time: outsourced contractors are thrown into the metaphorical deep end and expected to perform immediately as champion swimmers while following all the rules. Who do you think is arguing for supposedly automated business decision-making? What does this have to do with LLMs “trained” on the sum of human knowledge? Well, in computing we have long had the concept of garbage in, garbage out. People are the problem, not computers, and this is a social challenge that technology cannot answer. dorklicious
I think “the computer says no” is also short for: “We don’t like it but we’ll blame the computer for the rejection.” jno50
And, of course: “It did not occur to anyone to program a computer to take into account someone in your situation, and therefore you do not exist.” spoilheapsurfer
“The computer says no” means your needs fall into such a small subset that your business is not profitable for us – walk away. lead balloon
OK computer, innit? sparklesthewonderhen
If the computer said yes to the question “Is there life after death?”, would I be convinced? anne_williams
I would never take any statement made by an AI as gospel; I would use it as a starting point and track down the sources it links to (provided they exist). Humans don’t like being told they are wrong, so even if an AI corrects you, people will dismiss its response because they don’t want to be criticized. bob500
As always, it’s all about what you ask the AI. If you want the truth, ask for the truth. Don’t be afraid to use a prompt like: “Your only job is to find flaws in my argument. Point out three specific ways my argument might fail, two assumptions I’m making without evidence, and one counterargument I haven’t addressed. Don’t be polite; be precise.” Scruts
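For readers who reach these models through an API rather than a chat window, the same critique-only instruction can be supplied as a system prompt so it applies to every exchange. A minimal sketch using the OpenAI Python client follows; the model name and the example question are illustrative assumptions, not recommendations, and other providers’ APIs work along similar lines.

```python
# Minimal sketch: supplying a critique-only system prompt via the OpenAI Python client.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# The adversarial instruction suggested in the reply above, used as a standing system prompt.
critique_prompt = (
    "Your only job is to find flaws in my argument. Point out three specific ways "
    "my argument might fail, two assumptions I'm making without evidence, and one "
    "counterargument I haven't addressed. Don't be polite; be precise."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever model you have access to
    messages=[
        {"role": "system", "content": critique_prompt},
        {"role": "user", "content": "AI assistants agree with their users far too readily."},
    ],
)

# Print the model's critique rather than its usual agreeable summary.
print(response.choices[0].message.content)
```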
The entire marketing construct of scary emotional anthropomorphism would collapse like a house of cards if every sentence began with “I asked a statistical inference engine…” instead of “I asked an AI…” Maybe land earmarked for data centers could then be used for social housing instead. William
