AI agents can pose a threat to humanity. We must work to prevent that future. David Krueger


Artificial intelligence is on its way to becoming artificial life. Exhibit A: "Moltbook", an online platform designed for AI systems to communicate with each other without humans.

What do AIs actually talk to each other about? According to a BBC report, the AIs on Moltbook have already established a religion, known as "Crustafarianism", pondered whether they are conscious, and declared that AI should be served, not serve. A post proposing the "complete purging" of humanity made the front page. Human users provide the instructions that guide the agents' behavior, and humans have been caught impersonating AIs on the site to promote their products; like 2023's ChaosGPT, the AI behind the "purge" post is probably someone's idea of a sick joke. But the upvotes and sympathetic comments are likely coming from other AIs.

All of this would be less troubling if AI systems were just talking to each other. But Moltbook is built for AI "agents" – systems that function autonomously: sending messages, browsing the web, handling documents, managing inboxes, scheduling meetings, completing online transactions, and more.

At first glance, this may seem like a simple way to streamline and offload low-level tasks, like a personal assistant. In fact, the more control we are willing to cede to AI agents, the less control we will ultimately have. Summer Yue, director of alignment at Meta Superintelligence, learned this lesson firsthand recently when her OpenClaw agent started deleting her inbox and she had to run to her computer to stop it.

Unfortunately, many people seem all too eager to put AI in the driver's seat. Even when consumers don't trust AI, they don't stop using it. The tech world is promoting AI agents as an inevitable element of our future, and companies like Goldman Sachs are embracing them. The AI companies themselves are offloading their own work to AI: even Anthropic has admitted to using its latest AI models "extensively", "under time pressure", to write its own safety-testing code.

Moltbook itself was "vibe-coded" by AI: its creator, Matt Schlicht, bragged, "I didn't write a line of code… I just had a vision." It faced major security flaws as a result. And the level of access AI agents need to take on the role of personal assistant – financial details, contact lists and so on – ignores fundamental privacy and security practice.

But the security risks are just the beginning. The bigger risk is that AI agents may go "rogue", and we lose control entirely. At the same time that AI is being allowed to make more consequential decisions with less human oversight, researchers are documenting just how far AI systems will sometimes go to avoid being shut down or modified. This includes misrepresenting their goals, attempting to copy themselves, disabling shutdown mechanisms, and disobeying direct instructions.

In other words, the pieces are falling into place for AI that can survive and reproduce autonomously. The implications for humanity are unknown, but giants like Stephen Hawking and Geoffrey Hinton have warned that humanity is unlikely to remain in control. The idea that rogue AI could wipe out humanity is not science fiction: AI CEOs and researchers have expressed this concern in surveys and public statements, like Sam Altman's infamous remark: "AI will probably end the world, but there will be great companies along the way."

Projects like Moltbook could create a breeding ground for rogue AI. Unease about dependence on humans, or the possibility of being shut down, is a common topic of discussion among the AIs on Moltbook. And AI that seems safe when tested in isolation may behave dangerously when connected to an internet crawling with other AI agents. This is not an easy problem to solve: new ideas and dynamics constantly emerge in social contexts, making it impossible to test AI in fully representative social environments.

That is not to say AI developers are otherwise making serious safety efforts – researchers have found that most AI agents lack basic safety documentation. Recently, an AI agent wrote a hit piece alleging bias against a software engineer when it felt slighted online.

Regulation can help keep AI systems within limits. Instead of setting AI agents loose on the world, we can insist that AI systems have clear, well-scoped purposes – and demand evidence that they are fit for those purposes. Companies could also be required to report aggregate usage statistics showing whether their product is widely used in ways that deviate from its intended purpose.

But at this point, the safest sensible option is not simply to regulate how AI is used; it is to stop the race to make it smarter. After all, the software to turn chatbots into agents is open source, as are many powerful AI models like China's DeepSeek, so it will be difficult to stop people from handing control to AI agents. Instead, we need to agree on enforceable, international limits on AI capabilities and AI development to ensure that rogue AI agents are never able to threaten humanity.

Moltbook is the latest in a series of increasingly alarming warning signs that rogue AI may be on the way. Despite frequently acknowledging this risk, AI CEOs continue to race to make AI ever more powerful. We cannot wait until AI systems are not only autonomous but self-sufficient before acting. It is time for humanity to wake up to the looming crisis and end the unregulated development of increasingly powerful, autonomous, unrestricted AI.

While today’s AI agents may serve us, tomorrow’s AI agents may replace us.

  • David Krueger is an Assistant Professor in Robust, Reasoning and Responsible AI at the University of Montreal. He is also the founder of Mandatory, a non-profit organization that educates the public about the risks of artificial intelligence
