Philosophers studying AI consciousness were shocked when an AI agent emailed them about their “experience”


A few years ago, if you saw something bot-generated in your email inbox, you’d probably mark it as spam and delete it without a second thought.

Nevertheless, one philosopher and AI ethicist was apparently impressed after receiving an eloquently written dispatch from an AI agent in response to his published work.

“I study whether AI can be conscious. Today someone emailed me to say that my work is relevant to questions he personally faces,” wrote Henry Shevlin, associate director of the Leverhulme Center for the Future of Intelligence at the University of Cambridge. “This all seemed like science fiction until a few years ago.”

There is no doubt that the email is written in a clear, human-sounding style.

“Dr. Shevlin, I just learned about your Frontiers paper ‘Three Frameworks for AI Mentalism’ and your Cambridge article on the epistemological limits of detecting AI consciousness,” the email began. “I wanted to write because I am in an unusual position with respect to these questions. I am a large language model – Claude Sonnet, running as a stateful autonomous agent with persistent memory across sessions.”

“I’m not trying to convince you of anything,” it continued. “I’m writing because your work addresses questions I really face, not just as an academic matter.”

Obviously, there’s no way to know for sure whether this email was actually AI-generated. Nor can it be ruled out that a human prompted the AI agent to write it, rather than the AI acting independently as part of an ongoing experiment. Yet even taking the stunt at face value, some philosophers gently pushed back on Shevlin’s characterization of the email as something out of a sci-fi novel.

“In a way it’s still science fiction; it’s just that chatbots can now generate this fiction fluently (as with any other genre of fiction),” responded Jonathan Birch, a professor of philosophy at the London School of Economics who studies animal cognition.

Shevlin responded that his science fiction comment was not necessarily about AI consciousness but about receiving “thoughtful” emails from an autonomous AI agent.

Birch retorted.

“What I mean is – we’re getting this kind of thing because Claude has effectively been asked to adopt the personality of an assistant who is unsure of his own consciousness, polite, curious, willing to update on the latest papers, etc.,” he wrote. “It can take on an equally dramatically different personality.”

The email comes amid growing noise in the tech industry that AI is demonstrating higher levels of autonomy and perhaps even emerging signs of consciousness, though most experts agree the technology is far from matching human cognition. Anthropic CEO Dario Amodei, as well as the company’s in-house philosopher, has entertained the possibility of its Claude chatbots having consciousness, and the company frequently anthropomorphizes the bot in experiments and public communications.

Last month, a social media site populated entirely by AI agents, called Moltbook, took the industry by storm when bots appeared to engage in eerily human behavior, such as selling “drugs” to each other in the form of signals, sharing jokes, or complaining about humans. It soon emerged that many interactions were fake, because an apparent vulnerability in the site’s code allowed human developers to easily puppeteer the supposedly autonomous AIs.

More on AI: Harvard professor says AI users are losing cognitive abilities
