AI consciousness is an alarm bell in the security debate

The concern expressed by Yoshua Bengio that advanced AI systems might one day resist being shut down deserves careful consideration (AI is showing signs of self-preservation and humans should be ready to pull the plug, Pioneer says, December 30). But treating such behavior as evidence of consciousness is dangerous: it encourages anthropomorphism and distracts from the human design and governance choices that actually determine AI behavior.

Many systems can protect their continued operation. A laptop's low-battery warning is, in this sense, a form of self-preservation, yet no one takes it as evidence that the laptop wants to live: the behaviour is purely instrumental, without experience or awareness. Treating self-preservation as a sign of consciousness reflects not any inner awareness in the machine, but the human tendency to attribute intentions and emotions to artifacts.

Crucially, consciousness is neither necessary nor relevant to legal status: corporations have rights without a mind. If AI needs regulation, it is because of its impact and power, and to address human accountability, not because of speculative claims about machine consciousness.

The comparison with extraterrestrial intelligence is even more misleading. If extraterrestrial beings exist, they would be autonomous entities beyond human creation or control. AI systems are the opposite: intentionally designed, trained, deployed, and shut down by humans, with any impact occurring through human decisions.

There is a deeper point here that the article largely ignores: AI systems, like all computing systems, are Turing machines with inherent limitations. Learning and scale do not remove these limitations, and claims that consciousness or self-preservation could emerge from them would require an explanation, currently lacking, of how subjective experiences or real goals arise from symbol manipulation.

We must take AI risks seriously. But doing this requires conceptual clarity. Confusing designed self-maintenance with conscious self-preservation risks misdirecting both public debate and policy. The real challenge is not whether machines will survive, but how humans choose to design, deploy, and operate systems whose power comes entirely from us.
Professor Virginia Dignum
Director, AI Policy Lab, Umeå University, Sweden

I was leisurely reading my favorite year-end newspaper when I came across your articles on Yoshua Bengio’s concerns about Artificial Intelligence and the work of AI safety researchers in California (Office block where AI ‘doomers’ gather to predict apocalypse, 30 December).

I have to admit to being terrified that some of the science-fiction horrors predicted during my 84-year lifetime are now upon us, and that the world is probably going to sit back and watch itself be taken over by machines at best, or destroyed by them at worst.

The humans running this process are interested only in power and unimaginable profits; the deniers are complacent; and the rest of us can only keep our fingers crossed in the hope that enough governments will have the strength, courage and awareness to say: "Stop!" Sadly, looking at our current world leaders, I won't hold my breath.
John Robinson
Lichfield

Reading in your article that "there is a need to ensure that we can rely on technical and social guardrails to control [AI], including the ability to turn them off if needed", I was reminded of the letter from Gerry Rees (29 December) referring to the short story Answer by Fredric Brown, dating from 1954.

The computer's reply that there is now a god prompts the questioner to attempt to shut it down, but a bolt from the sky kills the questioner and flips the switch back on. An AI trained on a large language model will, perhaps, have "read" this story as part of its training and, as a result, have ready answers to any of the security measures suggested above.
Eric Skidmore
Gipsy Hill, London

Do you have an opinion on anything you have read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.
