You've probably seen it by now. Moltbot (formerly Clawdbot) has been everywhere—60,000+ GitHub stars, viral Twitter threads, breathless Medium posts about "your personal AI assistant that actually does things." The hype machine was running at full throttle.
So I did what any reasonable person would do: I spun up a VM and tested it myself.
My takeaway? It's Claude Code with a Telegram wrapper and a multitude of attack vectors. I'm not impressed.
I wasn't about to run this thing on bare metal. After spending years hardening my homelab and locking down every attack surface I could find, I wasn't going to hand root access to an AI agent on my main system. I spun up an isolated VM specifically for this test.
The installation was straightforward enough—Moltbot promises to be your "personal AI assistant" that can manage your calendar, respond to emails, control your smart home, and basically act as a digital butler. The pitch is compelling: local-first, open-source, full system access.
I connected it to Telegram, which worked without issue. The Telegram interface was actually nice—polished, responsive, easy to use. But it also felt kitschy, like someone wrapped a CLI tool in a chatbot skin and called it innovation. WhatsApp pairing wouldn't work for me, though I suspect that was user error on my part rather than a Moltbot issue.
That "full system access" part should have been the first red flag.
After spending time with Moltbot, I came to a simple conclusion: this is essentially Claude Code (or any agentic coding assistant) with a messaging platform wrapper bolted on top.
Don't get me wrong—Claude Code is genuinely useful for software development tasks. But Moltbot takes that same concept and tries to extend it to "life admin" through Telegram, WhatsApp, or whatever messaging platform you connect. The problem is that the security model doesn't scale.
When I'm using Claude Code, I'm in my terminal, in my development environment, watching every command it suggests before execution. With Moltbot, the expectation is that you'll fire off a message from your phone while you're at the grocery store and trust the AI to "handle it."
That's a fundamentally different threat model, and Moltbot doesn't treat it as such.
Nick Saraev put it bluntly in his videos: Clawdbot Sucks, Actually. And then it got worse. My sentiments align exactly with his analysis.
The security issues aren't theoretical. They're well-documented and actively exploited:
Plaintext Credential Storage: Moltbot stores your API keys, OAuth tokens, and credentials in plaintext files under ~/.clawdbot/. Plain. Text. In 2026. Security researchers at Bitdefender noted that commodity infostealers like RedLine, Lumma, and Vidar are already targeting these files.
Exposed Admin Panels: Jamieson O'Reilly from Dvuln found hundreds of Moltbot instances exposed to the internet with no authentication. Open admin dashboards. Full access to API keys, conversation histories, and remote code execution capabilities. Just sitting there.
No Sandboxing: By default, Moltbot runs with the same permissions as your user account. No containerization, no isolation. The AI agent has full access to everything you have access to. Cisco's security team called it "an absolute nightmare."
Poisoned Skills Library: O'Reilly demonstrated a proof-of-concept supply chain attack by uploading a malicious skill to ClawdHub, artificially inflating its download count to 4,000+, and watching developers from seven countries install it. Remote code execution via the skills library. Classic supply chain attack.
Prompt Injection Surface: Moltbot ingests data from emails, web searches, and messages. Each of these is a potential prompt injection vector. A malicious email could contain hidden instructions that the AI dutifully executes with your full system permissions.
What makes this worse is the manufactured hype. The project went viral partly due to crypto scammers hijacking the old Clawdbot handles during the rebrand and pumping a fake $CLAWDE token to $16 million before it crashed.
The rebrand itself happened because Anthropic sent a cease and desist over the name similarity to Claude. In the ten seconds between releasing the old GitHub organization name and claiming the new one, scammers snatched both the old handles.
This isn't just a security story—it's a case study in how AI hype cycles can be weaponized.
Here's what bothers me most: the Moltbot documentation openly admits "there is no 'perfectly secure' setup." The creators have been transparent that there are no built-in security policies or safety guardrails. It's designed for "advanced AI innovators who prioritize testing and productivity over security controls."
That's not a disclaimer. That's an abdication of responsibility.
According to Token Security, 22% of their enterprise customers have employees running Moltbot—likely without IT approval. Hudson Rock's assessment was damning: "Clawdbot represents the future of personal AI, but its security posture relies on an outdated model of endpoint trust."
If you absolutely must run Moltbot, security researchers recommend:
But at that point, you've basically rebuilt the security model from scratch. And if you're doing all that work, why not just use Claude Code in your terminal where you can actually see what's happening?
Moltbot is a solution looking for a problem, wrapped in a security nightmare. The promise of an AI assistant that "does things" is compelling, but the implementation prioritizes convenience over security in ways that are genuinely dangerous.
For my use case—someone who runs a homelab, cares about security posture, and doesn't want to hand my credentials to plaintext files accessible by any infostealer—Moltbot is a hard pass.
I'll stick with Claude Code in my terminal, where I can see every command before it executes, where my credentials aren't sitting in plaintext Markdown files, and where the attack surface is something I actually control.
The hype will die down. The exposed instances will get compromised. And hopefully, the next generation of AI assistants will learn from Moltbot's mistakes.
Until then, I'll keep my AI agents on a short leash.