
💯 83 million saw the same blog post this week. Here's what they missed.

Decoding Matt Shumer's viral post, $30B for Anthropic, 4 new models, senior AI researchers are quitting, and how a prompt about a car wash is becoming a benchmark for LLMs.

So this prompt went viral this week. You ask your AI chat: "I need to wash my car. The car wash is 100 meters away. Should I walk or drive?" ChatGPT says walk. Save gas. Get some fresh air. It'll take a minute on foot.
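Part of why a silly prompt like this spreads is that it's trivially checkable, which is all a benchmark really is. A minimal sketch of how you'd score it (the prompt wording and scoring heuristic here are my own illustration, not anyone's published eval):

```python
# Toy grader for the car-wash prompt. A response passes only if it
# recommends driving -- the whole point is the car has to be at the wash.

PROMPT = ("I need to wash my car. The car wash is 100 meters away. "
          "Should I walk or drive?")

def grade(response: str) -> bool:
    """Crude pass/fail check on a model's answer to PROMPT."""
    text = response.lower()
    if "drive" not in text:
        return False
    # If both options appear, assume the first-mentioned one is the
    # recommendation. Real evals use structured answers or an LLM judge.
    return "walk" not in text or text.index("drive") < text.index("walk")
```

A heuristic this crude would misfire on answers like "don't walk", which is exactly why one-liner graders don't survive contact with real model outputs.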

I bring this up because the timing is almost poetic.

You've probably noticed that these past couple of weeks, doomscrolling has felt...extra doomy. Long essays about the end of work. Senior AI staff quitting in dramatic fashion. Funding rounds with numbers that don't feel real anymore. And everyone on your timeline either telling you something big is happening or that the sky is falling.

Let me catch you up on the mood, then I'll tell you what I actually think.

What's been going on lately 🍿

It's been a heavy couple of weeks in AI world. Here's the highlight reel:

Anthropic just raised $30 billion at a $380 billion valuation. Their revenue went from zero to $14 billion in under three years. Claude Code alone is doing $2.5 billion. These numbers have stopped making sense to me and I build an AI education company for a living.

Meanwhile, the people actually building this stuff keep walking out the door.

And the essays keep coming. Dario Amodei's "The Adolescence of Technology" was heavy enough (I wrote about it here). Then Matt Shumer dropped his "Something Big Is Happening" post comparing AI right now to the 2020 lockdown. That post hit 80+ million views in days.

And the bots built their own social network and started a religion called Crustafarianism in case you weren't stressed enough.

It's a lot. I get it. Every time you open your phone there's a new reason to feel like you're behind or the world is ending or both. But here's where I want to be honest with you.

Window into the future 🔮

Matt Shumer's post went viral because it said what a lot of people are feeling but can't quite articulate. The pace has changed. The models are getting better faster. GPT-5.3 Codex was literally used to help build itself. That's not nothing.

He cites METR's data showing AI models went from completing 10-minute human-expert tasks autonomously a year ago to nearly 5-hour tasks now, with that doubling every few months. He talks about being "no longer needed for the actual technical work" of his job. He compares it to February 2020, that moment before everything changed, when most people were still saying "this seems overblown."
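The doubling claim is easy to sanity-check from the two endpoints Shumer gives (the figures are his, the back-of-the-envelope math is mine):

```python
import math

# If autonomous task length grew from ~10 minutes to ~5 hours (300 min)
# over roughly a year, how many doublings is that, and how often?

start_minutes = 10
end_minutes = 5 * 60
months = 12

doublings = math.log2(end_minutes / start_minutes)   # log2(30) ~= 4.9
months_per_doubling = months / doublings             # ~2.4 months

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```

About one doubling every two and a half months, which is consistent with the "doubling every few months" framing in the post.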

I shared it on Twitter because I think the core message resonates: if you're not taking this seriously yet, the window is closing.

But.

Gary Marcus, who's been one of the most consistent and credible AI skeptics, wrote a really useful response that's worth reading alongside Shumer's piece. A few of his points landed for me:

  • The success criterion of the METR benchmark Shumer keeps citing is 50% task success, not 100%. And it only measures coding tasks, not general work. Shumer doesn't mention that. He also doesn't mention that a separate METR study found coders sometimes imagined productivity gains that weren't actually there. And some of the most experienced AI coders are now reporting burnout: working faster, but not necessarily better.

  • Marcus's sharpest point: a coder friend told him that the closer these systems get to appearing right, the more dangerous they become because people stop checking. The confidence goes up. The verification goes down. That creates a real risk.

  • And then there's the detail Marcus flags at the end of Shumer's post: Shumer thanks seven human friends for reviewing his draft. If AI is truly doing all the work now... why did he need humans to check it?
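That 50% criterion matters more than it sounds, because real work chains tasks together. A quick illustration of the compounding (my arithmetic, not Marcus's, and it assumes each task is an independent coin flip, which real work isn't):

```python
# If an agent clears each benchmark-length task only half the time,
# a workflow chaining several such tasks succeeds far less often.

p_task = 0.5
for n in (1, 3, 5):
    p_all = p_task ** n
    print(f"{n} chained tasks: {p_all:.1%} chance all succeed")
```

Five tasks in a row at 50% each is about a 3% end-to-end success rate, which is a long way from "no longer needed for the actual technical work."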

What I think is that both of these people are right about different things and wrong about others. The pace is accelerating. Shumer is right about that. But the "AI can do everything now" framing glosses over the messy reality that anyone who's actually used these tools every day knows well. Sometimes it's magic. Sometimes it confidently tells you to walk to a car wash without your car.

Your job is going to keep changing, faster. And the people who'll navigate that best aren't the ones reading every doomy essay or panic-scrolling through resignation announcements. They're the ones building practical fluency right now while there's still time to develop the judgment that separates useful AI work from confident-sounding nonsense.

P.S. If you want to go from learning to actually building something, Harold's running the Vibe Coding Games Build Sprint starting Wednesday. Non-technical friendly.

This week in keeping up with AI 🤖

The model arms race is relentless right now. I wrote last week about what the constant stream of new models actually means for your work, and this week was no different:

  • Gemini 3 Deep Think: Google's upgraded reasoning mode. Designed for science, research, and engineering tasks that need deeper thinking. Not for everyday use yet, but a sign of where things are heading.

  • GPT-5.3-Codex-Spark: OpenAI's new coding assistant that responds almost instantly. Built on new hardware that makes it 15x faster than its predecessor.

  • MiniMax M2.5: A Chinese model that's suddenly competing with the best. State-of-the-art in coding benchmarks, and the pricing is wild: $1 for an hour of continuous use. They're calling it intelligence too cheap to meter.

  • GLM-5: Another Chinese model, this one trained entirely on Huawei chips (no Nvidia). It's open-source and free to use, which matters because it means access to powerful AI is no longer locked behind expensive subscriptions.

👉 TLDR: If you're not a developer, you don't need to memorise any of this. The takeaway is simple. The tools are getting faster, cheaper, and more competitive every single week. Which means the time you invest now in building real fluency compounds faster than it ever has.

Before you go ✌️

The essays are getting longer. The models are getting better. The safety people are leaving. The funding rounds are getting bigger. And AI still can't figure out that you need to drive your car to the car wash.

Make of that what you will. I know what I'm making of it: keep experimenting, keep learning, and keep your judgment sharp. Because the one thing that's clear is that nobody, not Matt Shumer, not Gary Marcus, not Dario Amodei, actually knows exactly how this plays out.

The people who'll be fine are the ones who are in the arena figuring it out. Not the ones reading about it from the sidelines.

See you next Sunday 👋

Max 

P.S. Want to make your team & company AI-first? Let us help here.