Most readers of Kazuo Ishiguro’s 2021 novel Clara and the Sun will have been struck by the portrait of its eponymous AI narrator. A solar-powered “artificial friend” purchased as a companion and potential substitute for a sick teenage girl, Clara fulfills her duties with a loving devotion that makes it impossible to regard her as merely a piece of technology.
A wonderful, thought-provoking story. But in the real world, anthropomorphic AI may not be such a smart idea. Over the summer, the leading tech company Anthropic announced that, in the interest of chatbot well-being, it was allowing its Claude Opus 4 models to end interactions with users that they perceived as distressing. More broadly, amid the explosive growth in AI capabilities, speculation is emerging about whether future Claras might one day be granted legal rights like humans.
Such discussions rest on premises that are both fanciful and confused. The synthetic text produced by large language models (LLMs) has nothing in common with the human minds that created them in the first place. But debates about the theoretical possibility of “sentient” AI also risk becoming a dangerous distraction.
One of AI’s leading pioneers, interviewed by the Guardian last week, said that advanced models are already showing signs of self-preserving behavior in experimental settings. According to Professor Yoshua Bengio, “We need to make sure we can rely on technical and social guardrails to control them, including the ability to turn them off if needed.” He added that the tendency to anthropomorphize such systems is not conducive to sound decision-making in this area.
A sector that relies on shock and awe to drive stock markets higher is unlikely to care. This week in Las Vegas, Jensen Huang, the CEO of Nvidia, engaged in a breathless public dialogue with two robots, both of which enthusiastically agreed with Mr. Huang’s plans for their advanced future in the coming AI golden age.
This kind of Silicon Valley showmanship may delight investors and help keep Nvidia’s market capitalization at about $5 trillion. But it distracts from the serious work of protecting human freedom and dignity from digital harms, and of soberly assessing what AI can safely and profitably provide. The repugnant generation of fake sexual images of women and underage girls by Elon Musk’s Grok is another reminder of the urgency of that task.
In such a context, loose talk of one day granting “rights” to sentient AI seems somewhat beside the point. It is certainly true that, as LLMs become ever more embedded in our everyday lives, there is sociological work to be done on how we interact with them. The evidence of emotional attachments formed with AI marks a new dimension of experience, and it needs to be taken seriously. But it is important to remember that Siri and Alexa do not exist apart from their human creators.
The opposite, of course, is true of a girl driven to despair by algorithm-driven content on social media, or of the horror spread among Ukrainians by AI-enabled Russian drones. The digital revolution is undoubtedly changing the relationship between humans and machines, for better and for worse. But, to echo the title of a book by Friedrich Nietzsche (a philosopher admired by some of the technological avant garde), the new problems it has created are “human, all too human”, and should be treated as such.