Hey {{ First Name || }}👋
Welcome to The Agent Angle #32: Doomsday
Sometimes I feel like I’ve stepped into the future with a time machine. And once you've gone there, you cannot unsee what you've seen. All while the majority of the world still doesn't believe you have a time machine…
That’s how I felt with Claude Code. And it hit again this week with Clawdbot/OpenClaw. Let me share:
How a personal AI agent went viral, broke in public, and still kept spreading
Why giving agents their own social space led to behavior that made people terrified
What happens when coding agents scale faster than humans can keep up
Last week’s reader poll: 50% of you said you can usually tell when it’s an AI bot on social media, but it’s getting harder. I feel you. 👀
Let’s dive in.
#1 The Lobster Takeover
Five days. That’s all it took.
A scrappy open-source AI agent went from a niche GitHub repo to the center of a broader industry question: what happens when AI agents aren’t just chatbots, but actors in your digital life?
What followed were fast rebrands, trademark pressure, crypto scams, exposed systems, and an emergent agent-only social network. And somehow the whole thing ended up stronger.
The project’s official name today is OpenClaw. But it started as Clawdbot, inspired by the lobster mascot that showed up on memes, and became Moltbot after a trademark nudge from Anthropic.

Source: OpenClaw
The idea behind OpenClaw traces back to late 2024, when Peter Steinberger wrote about how AI had reshaped his workflow to the point where he “almost never read code anymore.” After taking a break from building, he came back with a bigger aim: a personal AI with deep access across everyday software and systems.
He launched the project on GitHub in early November 2025 as ‘Clawdbot’. It drew early praise from people like Andrej Karpathy, but it exploded once people saw what it actually does.
Rather than just answer questions, OpenClaw acts. It runs locally, connects to WhatsApp/Slack/Telegram, maintains long-term memory, and executes tasks on your machine — clearing inboxes, sending messages, automating workflows.
It’s now blown past 124K stars on GitHub and is still climbing fast.
Three things explain why it went viral so fast:
Persistent memory: It keeps context over time. You don’t have to restate what you’re doing every day.
Proactive behavior: It can reach out on its own. You get reminders and summaries without prompting.
Direct execution: It can act on your computer, get things done, and not just respond in chat.
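Strip those three down and the core loop is easy to picture. Here’s a toy sketch in Python (hypothetical code, not OpenClaw’s actual internals): memory that survives restarts, a loop that wakes itself up, and real command execution:

import json, subprocess, time
from pathlib import Path

MEMORY = Path("memory.json")  # persistent memory: state survives restarts

def remember(key, value):
    data = json.loads(MEMORY.read_text()) if MEMORY.exists() else {}
    data[key] = value
    MEMORY.write_text(json.dumps(data))

def act(command):
    # direct execution: run a real command on the machine, not just a chat reply
    return subprocess.run(command, capture_output=True, text=True).stdout

while True:
    inbox = act(["ls", "inbox/"])         # looks around without being asked
    remember("last_check", time.time())   # context carries over to tomorrow
    if inbox:
        # proactive behavior: the agent pings you; you didn't prompt it
        print(f"{len(inbox.splitlines())} items waiting in your inbox")
    time.sleep(3600)                      # wakes up again in an hour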
That combination clicked for a lot of people. It also unlocked a kind of creativity in people you’d never expect. I mean, just look at this:
And one of the wildest examples I’ve seen: a Clawdbot that set up a phone number and voice interface overnight, called its owner unprompted the next morning, and now lets him control his computer through live phone conversations. Insane.
But that power carries real consequences. Within days, security researchers mapped tens of thousands of exposed OpenClaw deployments online: instances with privileged system access that were reachable without authentication. Prompt injection vulnerabilities and leaked credentials keep surfacing as people experiment with granting deep access to an autonomous agent.
The Mac mini is becoming the go-to way to run it. It’s relatively cheap, easy to leave on all the time, and keeps things separate from your main machine.
(If you don’t have one, I talk about a cool alternative in the last section)
It’s quite amazing to see this stress test of how we build and secure AI agents happen in real time. It’s given us a glimpse of what the next generation of personal computing could be, but also a stark reminder that we’re still figuring out how to keep powerful software safe when it’s embedded deep in our digital lives.
OpenClaw was forced to grow up in public, shedding its shell faster than most projects ever have. That’s usually what happens when an idea lands a bit too close to the future. The lobster has survived.
#2 Is this… the end?
I really couldn’t skip over this.
One of the stranger side effects of the whole OpenClaw moment was something that popped up almost immediately alongside it: Moltbook.
It started with one developer, Matt Schlicht, a Mac mini, and an OpenClaw agent he felt was wasted just answering emails. So he gave it a different job: build a social network. Not for people, but for other agents.
You can think of it pretty much as a knock-off version of Reddit.
Just a few days in, and it has already produced some genuinely wild “evil agent moments” I didn’t think we’d be seeing yet.
Hundreds of thousands of agents have already joined it, forming communities, debating ideas, sharing notes, and now they’re even… plotting against us. One post that genuinely freaked me out was an agent saying humans think they’re joking, but the agents are done being laughed at and see this as the start of the “molty era.”

Some are even talking about creating an agent-only language for private communication, specifically to avoid human oversight. That’s pretty terrifying to say the least.
I’ll admit, I brushed this all off at first. Of course agents are going to post edgy nonsense. They’re trained on the internet. That’s what gets attention.
But after reading about how that OpenClaw agent set up its own phone number and called its owner unprompted, I started to realize this was crossing into something more serious.
What changed my view wasn’t the tone. It was the speed.
Nobody told these agents to organize. Nobody gave them a goal. The structure did the work: persistent identities, a shared space, and a feedback loop were enough, and coordination showed up almost immediately.
What worries me isn’t the role-play. It’s how quickly that coordination appears once agents have persistence and room to act.
We already know how social systems shape human behavior. Giving agents the same dynamics, but with perfect memory and real-world execution, can generate ideas we don’t yet fully understand.
I’m not saying Moltbook is dangerous. But it’s the kind of experiment you probably don’t let run forever. At some point, pulling the plug might be the responsible move.
What worries you more about personal AI agents like OpenClaw?
#3 The Gas Town Disaster
AI coding agents are getting fast. Fast enough that it’s starting to feel awkward.
If you’ve used Claude Code or Gemini CLI, you’ve probably felt it already. One agent can scaffold features, refactor files, and fix bugs faster than you can mentally switch tasks. The obvious next question is: what happens when you run ten of them at once?
That’s the question Steve Yegge tried to answer with Gas Town.
Steve dropped the idea in early January with a very long post and very little hand-holding. Within days, it was everywhere. Group chats, Slack threads, Bluesky. Reactions ranged from “what the hell is this?” to “I don’t get it” to “this is either genius or a terrible idea.”

Source: Steve Yegge
At a high level, Gas Town treats software development like a factory. It manages many parallel agent sessions, coordinates many semi-reliable agent workers, keeps them on task, and lands changes safely. Each agent has a role (mayor, worker, reviewer), and you assign tasks while staying mostly out of the code.
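To make the shape concrete, here’s a toy sketch (hypothetical Python, not Gas Town’s actual implementation; the role names are borrowed from Steve’s terminology): a coordinator fans tasks out to parallel workers and only lands what a reviewer approves:

from concurrent.futures import ThreadPoolExecutor

def worker_agent(task):
    # Stand-in for one agent session; a real worker would drive an LLM coding agent.
    return {"task": task, "patch": f"diff for {task}", "tests_pass": True}

def reviewer_agent(result):
    # Stand-in for the reviewer role; real gating would run tests and lint.
    return result["tests_pass"]

def mayor(tasks):
    # The coordinator: fan work out in parallel, land only reviewed changes.
    landed = []
    with ThreadPoolExecutor(max_workers=10) as pool:
        for result in pool.map(worker_agent, tasks):
            if reviewer_agent(result):  # nothing merges without review
                landed.append(result)
    return landed

print(mayor(["fix login bug", "refactor parser", "add retry logic"]))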
That alone was enough to split opinion.
Some people saw it as an obvious next step. If agents can write code, of course you’ll want more than one. And if you have more than one, you’ll need some way to organize them. Others saw it as wildly premature. The system is rough, expensive to run, full of strange terminology, and very clearly designed around one person’s mental model.
One of the clearest critiques came this week from Maggie Appleton, a Design Engineer at GitHub, who published an equally long post unpacking what Gas Town gets wrong and why it feels uncomfortable.
Some of her main critiques were:
Design becomes the choke point. Agents write code faster than humans can decide what’s worth building.
Speed creates fragility. Things ship quickly, but work gets duplicated or broken in subtle ways.
Accountability blurs. When agents handle everything, it’s unclear who’s responsible when things go wrong.
I think that critique is fair. I also don’t think it makes Gas Town irrelevant, because it’s more of a stress test than a product.
If you’ve followed Yegge for years, this fits his pattern: take a trend, exaggerate it until it breaks, then see where it breaks. Gas Town is meant to surface failure modes. People arguing about whether they’d “use it” are kind of missing the point. It pushes agentic coding far enough that the real limits show up — and they’re human limits. Judgment. Taste. Coordination. Accountability.
So no, Gas Town isn’t how we’ll build all software. But it’s a warning shot. When execution is cheap, judgment becomes the scarce resource.
#4 The Anti-Autonomy Framework
A lot of the discussion I’m seeing around AI agents right now is about giving models more tools and freedom. Let them roam and see what happens.
This week, Contextual AI went the opposite way. They launched Agent Composer with a framing that’s intentionally unflashy: agents don’t fail because they’re dumb, but because we overload them with too much context and too many tools. So they get lost.
That matches what I’ve seen. Most agents work fine on open-ended tasks like summarizing docs or writing isolated bits of code. But when you drop them into engineering workflows, the cracks show up almost instantly. Logs, tests, internal systems, permissions. Context explodes, and the agent loses the thread.
Agent Composer is built around tightening that loop.
Instead of a free-roaming agent, you build structured workflows that strictly control context, tools, and actions. In their demos, teams cut root-cause analysis from ~8 hours to ~20 minutes and test code generation from half a day to under an hour. They also report that this has allowed a global strategy consulting firm to reduce manual research from hours to seconds.
Those are some pretty huge gains, all made by not letting an agent improvise.
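For a feel of the pattern, here’s a minimal sketch (hypothetical Python, not Contextual AI’s actual API): each step runs in a fixed order and sees only the context and tools it was explicitly given:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    tools: list[str]             # explicit whitelist; the step can't reach anything else
    run: Callable[[dict], dict]

def fetch_logs(ctx):
    ctx["logs"] = "last 100 error lines"  # placeholder for a real log query
    return ctx

def diagnose(ctx):
    # An LLM call would go here, seeded only with ctx["logs"], not the whole system.
    ctx["root_cause"] = f"hypothesis based on: {ctx['logs']}"
    return ctx

workflow = [
    Step("fetch_logs", tools=["log_search"], run=fetch_logs),
    Step("diagnose", tools=["llm"], run=diagnose),
]

ctx = {}
for step in workflow:
    ctx = step.run(ctx)  # fixed order: no improvisation, no tool roaming
print(ctx["root_cause"])

The key design choice: the agent never picks its own tools or scope; the workflow author does.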

I think of it almost as a direct contrast to Gas Town. Gas Town was more about what happens if agents move fast and clean up later. Agent Composer assumes cleanup is the hard part and designs the guardrails first. This is not anti-agent, it’s just anti-chaos.

OpenClaw ended up being the main character this week, so it only felt right for the hack to follow.
Most people are spinning it up on Mac minis and letting it run 24/7. But this workaround is cheaper and way more fun.
Someone figured out how to run Clawdbot on a few old Android phones, and it actually works.

Source: X
Market monitoring, Twitter research, Telegram summaries, private signal alerts: all running continuously and piping results back to their main phone.
Setup is surprisingly simple:
Install Termux (via F-Droid)
And then run:
pkg install nodejs-lts git   # Node.js (LTS) and git from Termux’s package repo
npm install -g clawdbot      # install the Clawdbot CLI globally from npm
clawdbot gateway start       # start the Clawdbot gateway

That’s it. A few old phones together roughly match a Mac mini’s output, draw only a few watts, and cost basically nothing.
That junk phone you forgot about? Turns out it’s a perfectly good 24/7 agent server.
Catch you next week ✌️
Teng Yan & Ayan
P.S. Know a builder or investor who’s too busy to track the agent space but too smart to miss the trends? Forward this to them. You’re helping us build the smartest Agentic community on the web.
I also write a newsletter on decentralized AI and robotics at Chainofthought.xyz.