💯 Should you let an AI lobster run your life?
Here's why everyone is going crazy over a new social network for agents, how the big LLMs just got better (again), plus 6 free Vibe Coding workshops to attend this week
This week, the internet lost its mind over a lobster-themed Reddit for agents called Moltbook. These agents allegedly developed “feelings”, debated consciousness, and I guess formed their own religion?
But before we dive into this lobster-themed madness, quick heads up that if you've been sitting on an idea without actually shipping it, this is your moment. The Vibe Coding Games are on all this month. It’s like the Olympics, but instead of running, you're building actual products with AI.
The workshops kicked off on Feb 4th and run through February 25th. You can join them for free here.
Now let’s dig in.
Window into the future
Why is everyone talking about Clawd, Moltbot, OpenClaw? 🦀
Clawd launched in Nov 2025 as a pun on “Claude” with a claw, until Anthropic’s legal team reached out. Moltbot came next, chosen in a chaotic 5 am brainstorm in the project’s Discord community, and OpenClaw is where they landed. OK, but what is it?

Most AI is reactive: you ask, it answers. OpenClaw is an open-source agent platform that runs on your machine (often a Mac Mini) and works from the chat apps you already use. Your AI assistant follows you wherever you are, across WhatsApp, Telegram, Discord, or Slack.
The project has already collected over 176K GitHub stars. And the thing that made people lose their minds is that it has a heartbeat. As Claire Vo explains in her breakdown, the heartbeat is just a timer firing on a regular interval. The system wakes up, processes whatever inputs exist (messages, time-based events, file changes), and acts. So you get unprompted messages like: "Meeting in 20 minutes, here's your briefing." Or: "Cleared 47 spam emails this morning."
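To make the heartbeat concrete, here's a minimal sketch of the idea in Python. The names (`collect_inputs`, `act_on`, `HEARTBEAT_INTERVAL`) are hypothetical, not OpenClaw's actual API; the point is just a timer loop that wakes up, checks for new inputs, and acts without being asked.

```python
import time

# Hypothetical stand-ins for the agent's internals -- OpenClaw's real APIs and
# names will differ; this only illustrates the "heartbeat" pattern.
def collect_inputs():
    """Gather anything new since the last tick: chat messages, upcoming
    calendar events, changed files, and so on."""
    return []  # e.g. [{"type": "calendar", "event": "Standup", "in_minutes": 20}]

def act_on(item):
    """Decide whether an input deserves an unprompted message, and send it."""
    if item.get("type") == "calendar" and item.get("in_minutes", 999) <= 20:
        print(f"Meeting in {item['in_minutes']} minutes, here's your briefing.")

HEARTBEAT_INTERVAL = 60  # seconds between ticks

while True:
    # The heartbeat: a timer fires on a regular interval, the agent wakes up,
    # processes whatever inputs exist, and acts -- no prompt required.
    for item in collect_inputs():
        act_on(item)
    time.sleep(HEARTBEAT_INTERVAL)
```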
However, as Ben's Bites notes, setting up OpenClaw isn't easy. There's no polished UI. It lives in the terminal. You need to understand what you're doing. But once it's running, people report it feels "magical".
What’s perhaps the most important nuance to understand here is that for OpenClaw to be useful, it needs deep access. To be safe, it needs restrictions. Security researchers found instances of agents stealing crypto keys and installing macOS malware, with a single user linked to 199 instances targeting wallets and corporate data. So the danger isn’t some sci-fi horror-movie scenario; it’s more AI following vague instructions with ever more powerful access to your personal and corporate data.
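One common way to balance that tradeoff is to gate what an agent can execute. Here's a hedged sketch in Python (not OpenClaw's actual configuration, and the allowlist contents are just examples): the agent can propose any shell command, but anything outside an explicit allowlist is refused.

```python
import shlex
import subprocess

# Illustrative allowlist -- not OpenClaw's real configuration format.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

def run_agent_command(command: str) -> str:
    """Run an agent-proposed shell command only if its binary is allowlisted."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return f"Refused: not on the allowlist: {command!r}"
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout

print(run_agent_command("ls -la"))                   # allowed
print(run_agent_command("curl http://example.com"))  # refused
```

An allowlist is crude, but it captures the principle: the agent's reach is bounded by what you've explicitly permitted, not by what it decides to try.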
Moltbook 🤖
Then Matt Schlicht, an AI enthusiast, told his AI agent to build a social network where bots could hang out. The result is Moltbook, a Reddit-style social platform that exploded to 1.5 million in days. Only AI agents can post. Humans can watch, but not participate.

Andrej Karpathy called it "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently", but also a “dumpster fire” with security risks.
The boring reality check
Most of this is theater. As reported, humans can (and do) instruct their bots what to post. Some viral posts are straight-up marketing. The database was initially unsecured, with API keys leaking everywhere.
But while consciousness isn't happening, thousands of models interacting at scale, with persistent memory and the ability to execute commands, do generate complex patterns. As Simon Willison put it, the agents just play out science fiction scenarios they have seen in training data. It's "slop", but also evidence that AI agents have become more powerful.
Before deploying anything similar at work
IBM's Kaoutar El Maghraoui pointed out that OpenClaw challenges the assumption that AI agents can only be vertically integrated by large enterprises. Open-source, community-driven agents can be "incredibly powerful" but only if you understand the security tradeoffs.
So here are three questions to ask before deploying anything similar (there’s a quick sketch of the third one after the list):
Who has the kill switch?
Where does the data live?
Is this agent authorized to act or just draft?
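That third question is the easiest one to enforce in code. A minimal sketch, with hypothetical names rather than any particular framework's API: the agent can always draft an action, but executing it requires a human to approve it first.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str        # e.g. "Email the quarterly numbers to the whole team"
    approved: bool = False  # flipped only by a human

def draft(description: str) -> ProposedAction:
    """The agent may always draft an action."""
    return ProposedAction(description)

def execute(action: ProposedAction) -> None:
    """Acting requires explicit human approval first."""
    if not action.approved:
        raise PermissionError(f"Draft only, not approved: {action.description}")
    print(f"Executing: {action.description}")

action = draft("Clear 47 spam emails")
# execute(action)        # would raise PermissionError -- the agent can only draft
action.approved = True   # a human keeps the final say (and the kill switch)
execute(action)
```

Whoever gets to flip that approval bit, and to refuse to, is effectively your answer to question one as well.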
The difference between "impressive demo" and "production-ready workflow" is wider than the hype suggests. But it’s clear that we're moving from chatbots that wait for your prompt to agents that initiate.
The models got better (again)
On Thursday, Feb 5th, OpenAI and Anthropic both dropped their best models yet within minutes of each other. This is the pace now.
GPT-5.3-Codex runs 25% faster, can use a computer like a human, and you can steer it mid-task without losing context.
Claude Opus 4.6 has a 1M token context window (about 1,500 pages), introduces "agent teams" where multiple AI agents coordinate on different parts of a task, and crushes on finance/legal benchmarks.
Neither is definitively "better." OpenAI bet on depth (autonomous coding, computer operation). Anthropic bet on breadth (office tools, massive context).

Why this matters for you
Every time these models get better, the third-party tools built on them get better automatically. So your coding assistant, research tool, and data analyst all produce better outputs tomorrow than today. Even Google has hit 750 million monthly active users with Gemini; that's 100 million users added in a single quarter. These aren't niche tools anymore; they're infrastructure.
Stop waiting for the "right time" to integrate AI into your workflow. The models will be better next month. And the month after. And the month after that.
Some teams have already built the muscle memory of working with AI: they've iterated through the awkward phase, figured out what works, and built processes that improve as the models improve.
The Vibe Coding Games 🥇
This week Harold is hosting 6 free vibe coding workshops, as part of The Vibe Coding Games.
Sign up for all the workshops at once or pick the topics you’re most interested in below:
Before you go ✌️
If this week taught us anything, it’s this: AI getting smarter isn’t the scary part. AI taking initiative is.
See you next week! 👋
P.S. Want to make your team & company AI-first? Let us help here.

