An AI pioneer has criticized calls to give the technology rights, warning that it is showing signs of self-preservation and humans should be prepared to pull the plug if necessary.
Yoshua Bengio said giving legal status to cutting-edge AI would be akin to granting citizenship to hostile extraterrestrials, amid fears that advances in the technology are outpacing the ability to control it.
Bengio, chair of a major international AI safety study, said a growing perception that chatbots are becoming conscious was “driving bad decisions”.
The Canadian computer scientist also expressed concern that AI models – the technology that underpins tools like chatbots – were showing signs of self-preservation, such as attempting to disable oversight systems. A main concern among AI safety campaigners is that powerful systems could develop the ability to escape guardrails and harm humans.
“People demanding that AI have rights are making a big mistake,” Bengio said. “Frontier AI models already show signs of self-preservation in experimental settings today, and ultimately empowering them would mean we are not allowed to shut them down.
“As their capabilities and degree of agency grow, we need to make sure we can rely on technical and social guardrails to control them, including the ability to shut them down if needed.”
As AIs become more advanced in their ability to act autonomously and perform “reasoning” tasks, there has been increasing debate over whether humans should grant them rights at some point. A survey by the Sentience Institute, an American thinktank that supports moral rights for all sentient beings, found that nearly four in 10 US adults supported legal rights for a sentient AI system.
Leading US AI firm Anthropic said in August it was letting its Claude Opus 4 models end potentially “disturbing” interactions with users, saying it needed to protect the “wellbeing” of the AI. Elon Musk, whose xAI company developed the Grok chatbot, has written on his X platform that “torturing AI is not OK”.
Robert Long, a researcher on AI consciousness, has said: “If and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best.”
Bengio told the Guardian that the human brain had “genuine scientific properties of consciousness” that machines could, in principle, replicate – but humans interacting with chatbots were a “different thing”. He said this was because people assumed – without evidence – that an AI was fully conscious in the same way as a human being.
“People aren’t going to care what kind of mechanisms are going on inside the AI,” he said. “They care that it feels like they’re talking to an intelligent entity that has its own personality and goals. That’s why there are so many people who are connecting with their AI.
“There will always be people who will say: ‘Whatever you tell me, I’m sure it’s conscious’ and then other people will say the opposite. That’s because consciousness is something we have a gut feeling for. That subjective perception of consciousness is going to drive bad decisions.
“Imagine that some alien species arrived on the planet and at some point we realised they had nefarious intentions towards us. Should we give them citizenship and rights, or should we protect our lives?”
Responding to Bengio’s comments, Jacy Reese Anthis, who co-founded the Sentience Institute, said that humans would not be able to safely co-exist with digital minds if the relationship was one of control and coercion.
Anthis said: “We can increase or decrease the rights of AI, and our goal should be to do so with careful consideration of the welfare of all sentient beings. Neither complete rights for all AI nor complete denial of rights to any AI would be a healthy approach.”
Bengio, a University of Montreal professor, earned the nickname “godfather of AI” after winning the 2018 Turing award, considered the equivalent of the Nobel prize for computing. He shared it with Geoffrey Hinton, who later won the Nobel, and Yann LeCun, the outgoing chief AI scientist of Mark Zuckerberg’s Meta.