Moltbuk was the pinnacle AI theater

by
0 comments
Moltbuk was the pinnacle AI theater

“Despite some hype, Moltbuk is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Kobus Grayling of Kore.ai, a company that develops agent-based systems for business clients. “Humans are involved in every step of the process. From setup to prompting to publishing, nothing happens without clear human direction.”

Humans must create and verify their bot’s accounts and indicate how they want the bot to behave. Agents do not do anything for which they are not motivated. “There is no sudden autonomy happening behind the scenes,” Grayling says.

He added, “This is why the popular narrative surrounding Moltbuk misses the mark.” “Some people portray it as a place where AI agents create a society of their own, free from human involvement. The reality is much more mundane.”

Perhaps the best way to think of Moltbuk is as a new kind of entertainment: a place where people turn off their bots and let them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at Georgetown’s Psaros Center for Financial Markets and Policy. “You configure your agent and watch him compete for viral moments, and bragging rights when your agent posts something clever or funny.”

He added, “People are not really believing that their agents are conscious.” “It’s a new form of competitive or creative play, like Pokémon trainers who don’t think their Pokémon are real, but invest in the battles anyway.”

Even though MoltBook is the Internet’s newest playground, there’s still a serious accomplishment here. This week showed just how much risk people are happy to take for their AI lulz. Many security experts have warned that Moltbuk is dangerous: agents who may have access to its users’ private data, including bank details or passwords, are running blind on a website filled with untested content, including potentially malicious instructions on what to do with that data.

Ori Bendet, vice president of product management at Checkmarks, a software security firm that specializes in agent-based systems, agrees with others that the MoltBook isn’t a step forward in machine smarts. “There is no learning, no developed intentions, and no self-directed intelligence,” he says.

But millions of stupid bots can also wreak havoc. And at that scale, it’s hard to sustain. These agents interact with Moltbuk around the clock, reading thousands of messages left by other agents (or other people). It would be easy to hide instructions in a Moltbuk comment telling any bot reading it to share their users’ crypto wallets, upload private photos, or log into their X account and tweet derogatory comments at Elon Musk.

And because ClawBot gives agents a memory, those instructions can be written to trigger at a later date, which (in theory) makes it even harder to track what’s going on. “Without proper scope and permitting, this will go south faster than you realize,” says Bendet.

It’s clear that Moltbuk has shown signs of coming Some?. But even though what we’re seeing tells us more about human behavior than the future of AI agents, it’s worth noting.

Related Articles

Leave a Comment