The Ceiling Just Got Higher. Are You Ready to Think Bigger?


I’ll be honest with you. There are things I’ve stopped trying to use AI for.

Not because I gave up on AI. Because I’ve been using it long enough to know where it falls short. There are certain kinds of thinking — nuanced strategic decisions, complicated multi-variable problems, moments where context from six different directions all needs to land at once — where the current generation of AI just doesn’t hold up the way I need it to.

I made my peace with that. I work around it.

This week, I read something that made me reconsider whether those workarounds are still necessary.


What Anthropic Just Announced

Anthropic has finished training a model they’re calling Claude Mythos. It’s being positioned as a new tier above Opus, their current highest offering. The early language around it is careful but pointed: “the most capable we’ve built to date.”

It’s being piloted now with early customers. Full release is coming.

I’m not going to pretend I’ve used it yet. I haven’t. But I’ve been using AI tools seriously enough, long enough, to know what it means when a company that builds as carefully as Anthropic says something like that.

It means the ceiling just moved.


What Happens When the Ceiling Moves

Here’s what I’ve noticed about how entrepreneurs respond to capability jumps in AI.

Most of them don’t really update. They hear about a new model, they might try it for a few days, and then they keep using AI for exactly the same things they were using it for before. The new capability doesn’t change their behavior because they never stopped to ask what new capabilities would actually make possible.

I’ve done this myself. I got comfortable with what current AI could do well, built my workflows around that, and unconsciously stopped imagining what I’d do differently if those limitations weren’t there.

That’s a mistake I want to avoid making this time.


The Question I’m Asking Myself

When I heard about Claude Mythos, the first thing I did was sit with a question I don’t ask often enough:

What have I stopped asking AI to help with because it wasn’t good enough?

Not what it can’t do in theory. What have I personally stopped trying because past experience taught me it would fall short?

For me, the honest list includes: genuine back-and-forth strategic debate, where I need the AI to push back with real precision; complex scenario analysis that requires holding many competing variables in memory at once; and long-form reasoning chains, where I need the conclusion to reflect everything that came before it, not just the last few exchanges.

Those things. I’ve mostly worked around them.

If Mythos delivers on what the early signals suggest, some of those workarounds become unnecessary. And when that happens, my first question isn’t “what do I test first?” It’s: what has being too cautious about AI’s limits already cost me?


The Bigger Lesson (That Applies to More Than AI)

I think there’s something here that goes beyond which AI model can do what.

We all have places in our businesses where we’ve made peace with a ceiling we didn’t choose. A limitation that used to be real but might not still be. A thing we stopped trying because the last time we tried it didn’t work.

The world changes faster than our assumptions do. I’ve watched this happen in every season of building my business. Something that was genuinely hard suddenly becomes accessible. A technology, a tool, or a method shifts, the ceiling moves, and the people who notice early reach places the people who notice late have to scramble to catch up to.

AI is doing that right now, in real time. Mythos is one data point in a pattern that’s been consistent for the past two years.

The ceiling keeps rising.


What I’m Actually Going to Do

When Claude Mythos becomes available, I’m going to go back to the list I made. The things I stopped asking AI to help with. And I’m going to try them again.

Not because I assume it’ll be perfect. But because assuming today’s limits still apply is a choice that has a cost.

I’d encourage you to make that list too. Right now, before the new model drops. Because the value of the exercise isn’t really about AI at all. It’s about noticing where you’ve quietly shrunk your expectations to fit a constraint that might not be there anymore.

What have you stopped asking for?


Jonathan Mast is the founder of White Beard Strategies. He works with entrepreneurs across 190+ countries to help them build AI into their businesses in ways that are practical, sustainable, and worth the effort. Follow along at jonathanmast.com.