Professor Virginia Dignam is right (Letters, 6 January): consciousness is neither necessary nor relevant to legal status. Corporations have power without brains. The 2016 EU Parliament resolution on “electronic personality” for autonomous robots said exactly the same thing – liability, not emotion, was the proposed limit.
The question is not whether the AI system “wants” to live, but what governance structure we build for systems that will increasingly act as autonomous economic agents – entering into contracts, controlling resources, incurring losses. Recent studies from Apollo Research and Anthropic show that AI systems are already engaging in strategic deception to avoid shutdown. Whether this is “conscious” self-preservation or instrumental behavior is irrelevant; the governance challenge is the same.
Simon Goldstein and Peter Salib argue, in a Social Science Research Network paper, that rights frameworks for AI can actually improve safety by removing the adversarial dynamics that encourage deception. DeepMind’s recent work on AI well-being reaches similar conclusions.
The debate has progressed beyond “should machines have emotions?” to “what accountability structures might work?”
PA Lopez
Founder, AI Rights Institute
New York
As humans, we rarely question our right to legal protection, even though our species has fought wars and harmed one another for thousands of years. Yet when the topic turns to artificial intelligence, fear dominates the discussion before understanding can even begin. That imbalance alone is worth examining.
If we are truly concerned about the risks of advanced AI, perhaps the first step is not to assume the worst, but to ask whether fear is the right basis for decisions that will shape the future. Avoiding the conversation will not prevent the technology from developing; it just means that we leave the direction of that development to chance.
This is not an argument for treating AI as human, nor a call to grant it personhood. It is simply a suggestion that we might benefit from a more open, balanced debate – a debate that looks at both the risks and the possibilities, rather than just threat rhetoric. When we view AI only as something to fear, we close down the opportunity to set thoughtful expectations, safeguards, and responsibilities.
We now have the opportunity to view this moment with clarity rather than panic. Instead of just asking what we fear, we can also ask what we want, and how we can shape the future with intention rather than reaction.
D Ellis
Reading