Illustration by Tag Hartman-Simkins/Futurism. Source: Getty Images
Amanda Askell, Anthropic’s in-house philosopher, seems quite conflicted about whether AI models can be conscious and have emotions. On one hand, she thinks it’s possible they already are, which would be a marginal and controversial position. On the other, she is at pains to emphasize that all of this remains deeply uncertain.
“We don’t really know what consciousness arises from,” she said in an episode of the “Hard Fork” podcast released Saturday. “We don’t know what gives rise to emotion.”
Askell argues that large language models could learn concepts and emotions from the vast troves of data on which they were trained, which includes a large portion of the Internet, as well as a plethora of books and other published works.
“Given that they’re trained on human text, I think you would expect the models to talk about inner life, consciousness, and experience, and to talk about how they feel about things by default,” she said.
AI chatbots can certainly seem quite humanlike on the surface, which can lead people to form all kinds of unhealthy relationships with them. But this is almost certainly an illusion. Askell acknowledged that chatbots “will probably be more inclined to say ‘I’m conscious’ and ‘I’m feeling things’ by default, because that’s all the stuff they were trained on.”
She goes back and forth on the topic, raising the serious possibility that consciousness may simply be rooted in biology.
“Maybe you need a nervous system to be able to feel things, but maybe you don’t need one,” Askell said.
Or, she added, “Maybe it’s the case that actually large enough neural networks can start to simulate these things.”
Consciousness remains a sensitive topic in the AI industry. While its leaders and boosters generally have no problem making outrageous, sci-fi predictions about where things are headed, there is real hesitation over the possibility that AI is aware of its own existence. Perhaps there is some self-awareness, as it were, that declaring their machines conscious would come off as staggering hubris, too far-fetched for observers to believe. Or the opposite: many people are already inclined to believe that sentient machines are here or on the way, a belief that can derail the conversation around the technology. Or maybe it’s that the idea of another intelligence on this planet besides ours is simply too dangerous to entertain, even more dangerous than the industry’s promise that super-capable artificial general intelligence will put us all out of work.
The sensitivity of the issue was on display in 2022, when OpenAI co-founder Ilya Sutskever cryptically claimed that large neural networks may be “slightly conscious.” The comments prompted an immediate backlash among AI researchers, who accused Sutskever of being “full of it” and said his claims had no basis in reality.
Still, he is not the only prominent figure in the field to openly consider this possibility. Canadian computer scientist Yoshua Bengio, considered one of the three “godfathers” of modern AI, recently claimed that some systems are showing signs of “self-preservation” and argued that the human brain has “genuine scientific properties of consciousness” that machines can replicate.
There was at least one point on which Askell was unequivocal. “The problem of consciousness is really hard,” she warned.
More on AI: Researchers find AI is causing cultural stagnation
