Viral AI Agent Moltbot Is a Security Bug – 5 Red Flags You Shouldn’t Ignore (Before It’s Too Late)

by
0 comments
Viral AI Agent Moltbot Is a Security Bug – 5 Red Flags You Shouldn't Ignore (Before It's Too Late)

Nurfoto via Getty Images

Follow ZDNET: Add us as a favorite source On Google.


ZDNET Highlights

  • Moltbot, formerly known as Clodbot, has gone viral as an “AI that actually works.”
  • Security experts have warned against joining this trend and using AI assistants without caution.
  • If you’re planning to try Moltbot yourself, be aware of these security issues.

Clodbot, now rebranded as Moltbot after Anthropic’s IP nudge, has been at the center of a viral whirlwind this week – but there are security implications of using an AI assistant you need to be aware of.

What is Moltbot?

MoltbotDisplayed as a cute crustacean, it promotes itself as “AI that actually works”. Originating from the mind of Austrian developer Peter Steinberger, the open-source AI assistant is designed to manage aspects of your digital life, including handling your email, sending messages, and even performing tasks on your behalf, like checking you in for flights and other services.

Also: 10 ways AI could cause unprecedented harm in 2026

As previously reported by ZDNET, this agent, stored on personal computers, communicates with its users through chat messaging apps including iMessage, WhatsApp and Telegram. It has over 50 integrations, skills and plugins, persistent memory and both browser and full system control functionality.

Instead of operating a standalone backend AI model, Moltbot harnesses the power of Anthropic’s cloud (guess why the name change to Cloudbot was requested, or check out Lobster) Vidya Page) and OpenAI’s ChatGPT.

Moltbot has gone viral within a few days. On GitHub, it now has hundreds of contributors and nearly 100,000 stars – making Moltbot one of the fastest growing AI open source projects on the platform to date.

So, what is the problem?

1. Viral interest creates opportunities for scammers

Many of us love open source software for its code transparency, the opportunity for anyone to audit the software for vulnerabilities and security issues, and, in general, the community created by popular projects.

However, the popularity and change of dangerous speeds may also allow malicious developments to slip through the cracks, as reported fake repo And crypto scams Already in vogue. Taking advantage of the sudden name change, scammers launched a fake Clodbot AI token that managed to raise $16 million Before it crashed.

So, if you are planning to try it out, make sure you only use trusted repositories.

2. Handing over the keys to your digital empire

If you choose to install Moltbot and want to use the AI ​​as a personal, autonomous assistant, you’ll need to grant it access to your accounts and enable system-level controls.

There is no completely secure setup like Moltbot Documentation Accepts, and Cisco calls moultbot An “absolute nightmare” from a security point of view. Since the bot’s autonomy depends on permissions to run shell commands, read or write files, execute scripts, and perform computational tasks on your behalf, these privileges can put you and your data at risk if they are misconfigured or if malware infects your machine.

Also: Linux after Linus? The kernel community drafts a plan to eventually replace Torvalds

“Molbot has already been reported to have leaked plaintext API keys and credentials, which can be stolen by threat actors via quick injection or unsecured endpoints,” Cisco security researchers said. “Moultbot’s integration with messaging applications expands the attack surface to applications where threat actors can craft malicious signals that cause unintended behavior.”

3. Exposed credentials

Offensive security researcher and Dvulan founder jamison o’reilly Monitoring Moltbot and connecting exposed, misconfigured instances connected to the web without any authentication protections other researchers Also exploring this area. Of the hundreds of instances, some had no protection at all, causing Anthropic API keys, Telegram bot tokens, Slack OAuth credentials and signature secrets, as well as conversation histories to be leaked.

While developers immediately jumped into action And introduced new security measures that can mitigate this problem, so if you want to use Moltbot, you should be confident in how you configure it.

4. Prompt injection attack

Instant injection attacks are nightmare fuel for cybersecurity experts now involved in AI. Rahul Sood, CEO and co-founder of Irreverent Labs, has listed a series of potential security problems associated with proactive AI agents. Saying Moltbot/Cloudbot’s security model is “Scares me.”

Also: The Best Free AI Courses and Certifications for Upskilling in 2026 – and I’ve Tried Them All

This attack vector requires an AI assistant to read and execute malicious instructions, which may, for example, be hidden in the source web content or URL. An AI agent can then leak sensitive data, send information to an attacker-controlled server, or execute tasks on your machine – should it have the privileges to do so.

Sood expanded the topic on X, Comment: :

“And wherever you run it… the cloud, the home server, the Mac Mini in the closet… remember you’re not just giving access to a bot. You’re giving access to a system that will read content from sources you don’t control. Think of it this way, scammers all over the world are cheering as they prepare to destroy your life. So please, scope out accordingly.”

As Moltbot Documentation Note, as with all AI assistants and agents, the issue of prompt injection attacks has not been resolved. There are some measures you can take to reduce the risk of becoming a victim, but combining widespread system and account access with malicious prompts sounds like a recipe for disaster.

“Even if you can send messages to a bot, instant injection can still occur through any untrusted content (web search/fetch results, browser pages, emails, documents, attachments, pasted logs/code) read by the bot,” the document reads. “In other words: the sender is not the only threat surface; the content itself can carry adverse directions.”

5. Malicious skills and content

Cybersecurity researchers have already exposed example Malicious skill suitable for use with Moltbot appearing online. In one such example, on January 27, a new VS Code extension named “Cloudbot Agent” was flagged as malicious. This extension was actually a full-fledged Trojan that uses remote access software for monitoring and data theft purposes.

Moltbot does not have a VS Code extension, but this case highlights how the growing popularity of the agent will likely give rise to a whole crop of malicious extensions and skills that repositories will have to detect and manage. If users accidentally install it, they may be unknowingly providing the open door for their setup and accounts to be compromised.

Plus: Cloud Cowork now automates complex tasks for you – at your own risk

To highlight this issue, o’reilly Built a safe, but backdoor skill, and released it. It didn’t take long for this skill to be downloaded thousands of times.

While I urge caution in adopting AI assistants and agents that have a high level of autonomy and access to your accounts, this does not mean that these innovative models and tools have no value. Moltbot may be the first iteration of how AI agents will insert themselves into our future lives, but we should still exercise extreme caution and avoid choosing convenience over personal safety.

Related Articles

Leave a Comment