
What If AI Isn't a Bubble?

by Josh Levine
Nov 24, 2025 3:39:16 PM

When The Crash Might Be The Softer Landing

The past few years have felt like watching the ground shift in real time. AI capabilities that seemed impossible eighteen months ago are now routine. The technology moves faster than most of us can track, and nobody's quite sure where any of this lands.

The financial story has been just as unprecedented as the technical one. Hyperscalers are making trillion-dollar infrastructure bets. Capital is flooding in at scale. Four hundred billion dollars on AI infrastructure this year alone. The financial patterns feel historic, even if we're not quite sure what they mean yet.

There's a familiar frame for this. A bubble happens when capital floods into a sector faster than fundamentals can justify, valuations climbing on speculation rather than revenue, money chasing promises instead of profits.

By most serious accounts, the debate has already moved past whether this is a bubble. The question is just how much of a bubble we're in, and how bad the correction will be when it comes. Bubble as worst case, the outcome to avoid.

But there's a different question worth asking. What if in AI, the bubble is actually the preferable scenario? Historically, bubbles mean crashes. But bubbles also correct. Markets reset, things return to something recognizable. If this isn't a bubble, if it's real capabilities changing how value gets created on a permanent basis, that's a different terrain entirely.


Why AI Is In A Bubble

Start with the obvious. The comparisons to the dot-com era write themselves, and they're not unfair. In the late 90s, capital flooded into internet companies based on the promise of a web-enabled future. The business models were unclear, the path to profitability uncertain, but the investment kept coming anyway. Today's AI boom follows a similar script.

Four hundred billion dollars is going into AI infrastructure this year, while consumer spending on AI services sits at twelve billion annually. Companies are pouring in more than thirty times what consumers actually pay.

Safe Superintelligence, an AI startup led by OpenAI's former chief scientist, closed a funding round this year at a thirty-two billion dollar valuation. The company's name doubles as its complete product description. No product, no roadmap, and no revenue. That is the 'future promise over present reality' pattern in its raw form.

Figure AI tells a different version of the same story. The humanoid robotics startup went from a two point six billion dollar valuation in early 2024 to thirty-nine billion by fall 2025. Last year it had no revenue and only dozens of robots in production. The valuation multiplied while the underlying business barely changed.

Perplexity hit an eighteen billion dollar valuation mid-year on roughly one hundred fifty million in annualized revenue. More than a hundred times sales for a three-year-old search startup. Even by high-growth standards, that revenue multiple is hard to justify.

In each case, prices have run far ahead of current revenue or real-world usage. That gap between story and fundamentals is what people usually mean by a bubble.
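The gap is easy to quantify. A back-of-envelope sketch using the figures cited above (all rounded, as reported):

```python
# Rough multiples from the figures cited above (all approximate, as reported).
infra_spend = 400e9         # annual AI infrastructure spend
consumer_spend = 12e9       # annual consumer spending on AI services
perplexity_valuation = 18e9
perplexity_revenue = 150e6  # annualized revenue
figure_early, figure_late = 2.6e9, 39e9  # Figure AI valuation, early 2024 vs fall 2025

print(f"Infrastructure vs consumer spend: {infra_spend / consumer_spend:.0f}x")
print(f"Perplexity revenue multiple: {perplexity_valuation / perplexity_revenue:.0f}x")
print(f"Figure AI valuation growth: {figure_late / figure_early:.0f}x")
# Infrastructure vs consumer spend: 33x
# Perplexity revenue multiple: 120x
# Figure AI valuation growth: 15x
```

No financial sophistication is needed to see the pattern: every ratio is an order of magnitude or more.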

We've seen this before. Valuations disconnect from sustainable revenue, money chasing promises instead of profits. Then the correction comes. Valuations collapse, layoffs ripple through the sector, markets reset. It's painful, but at least there's a pattern we recognize. Price discovery happens, what survives rebuilds on clearer foundations, and the world keeps turning with fewer unicorns and more realistic expectations.


Why AI Isn't a Bubble

But there are concrete differences worth examining.

Fed Chair Jerome Powell made one distinction explicit in October 2025: unlike the dot-com era, today's AI companies "actually have business models and profits." OpenAI is on track for over $20 billion in annual recurring revenue. Anthropic is growing rapidly. These aren't companies hoping to figure out monetization later. They're selling products people are paying for.

The big cloud providers report the same pattern. Microsoft attributes meaningful Azure growth to AI services. Google Cloud says its generative AI revenue more than doubled year-over-year. Enterprise customers are signing multi-year contracts.

The infrastructure is real. The money isn't going into vaporware or speculative promises. It's buying GPUs, building data centers, and creating compute capacity that exists regardless of which companies survive. North American data-center vacancy has fallen to record lows, about 1.6% overall and under 1% in Northern Virginia, as tenants pre-lease power years ahead of delivery.

And capabilities are measurably improving on consistent curves. METR, an independent organization, tracks how long AI systems can work autonomously without human intervention. That metric has doubled every seven months for years, moving from minute-long tasks toward multi-hour projects. The pattern is verifiable and consistent.
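What a fixed doubling time implies is worth spelling out, because compound growth is easy to underestimate. A quick sketch of the seven-month doubling curve, using a hypothetical one-minute starting horizon for illustration:

```python
# Compound-growth math for the seven-month doubling trend cited above.
# The one-minute starting horizon is a hypothetical round number, not METR's data.
import math

def horizon_minutes(months_elapsed, start_minutes=1.0, doubling_months=7.0):
    """Task horizon after a given elapsed time, assuming steady doubling."""
    return start_minutes * 2 ** (months_elapsed / doubling_months)

# How long until a 1-minute horizon reaches a full 8-hour workday (480 minutes)?
months_needed = 7.0 * math.log2(480 / 1.0)
print(f"~{months_needed:.0f} months")  # ~62 months, a bit over five years
```

The point isn't the specific numbers; it's that on a steady doubling curve, minute-scale tasks become day-scale tasks within a few years.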

When you look at outcomes rather than lab scores, the same pattern shows up. Klarna's customer-service bot handled two-thirds of chats in its first month, doing work the company equated to roughly 700 full-time agents, and Klarna credited it with a material profit lift.

OpenAI's GDPval evaluation measured performance across 1,320 real-world tasks in 44 occupations, graded by experienced professionals in blind tests. The latest frontier models are approaching expert-level performance on a large subset of tasks. Real estate jobs are in the set: property managers, sales agents, and brokers.

Not all AI investment is equal, either. The frontier labs (OpenAI, Anthropic, Google) building foundational models are on very different footing than the broader ecosystem of startups. Some of those startups will fail. That doesn't mean the underlying technology isn't real.


What Does The Money Tell Us?

If the capabilities are real and the revenue exists, there's still one pattern we haven't addressed. And it's the strangest one.

Look at where the money actually flows. OpenAI burns billions on compute, pays Nvidia for chips. Nvidia invests billions in AI companies. Those companies buy infrastructure from Microsoft, Google, Amazon, who are themselves building AI. The money moves in circles between the same handful of players. It's the kind of setup that should make you suspicious, because it looks exactly like financial engineering at its most absurd.

This should collapse. It has all the markers of an incestuous capital loop that can't sustain itself.

But if we take seriously the evidence that capabilities are real and business models exist, then maybe this circular flow isn't proof of fakery. Maybe it's revealing what value creation looks like when it reorganizes.

The companies building the infrastructure capture the value. That value flows back into more infrastructure. And as AI pushes the cost of labor down, the organizations capturing those efficiency gains have no reason to distribute them. The value concentrates where it's created, and the loop reinforces itself.

What could this look like? The starter-home market disappears while luxury markets persist. High-end services thrive while mid-tier professions scramble. Value accrues in one part of the economy and the rest becomes a rounding error. If you're positioned outside that loop, you're competing in markets where margins compress, and cost structures work against you.

The pattern that looks unsustainable might not be. It might just be what the economy looks like when it restructures, and it keeps working because the underlying capabilities are real. We default to calling it a bubble, not because the evidence demands it, but because the alternative is too strange to process. When nothing else about AI makes sense (when the labs can't fully explain how their own systems work, when capabilities shift monthly), calling it a bubble at least gives us a story we recognize.


Zooming Out

When people talk about AI preparedness today, the conversation typically centers on a few key areas. Developing prompting skills. Integrating tools into workflows. Staying current with capabilities as they evolve. Understanding which use cases make sense for your business.

That foundation matters. These aren't optional skills; they're table stakes. If you're not building literacy with AI tools now, you're setting yourself up to fall behind in ways that will be difficult to recover from.

But there's a meta-skill that often gets overlooked: learning to position yourself when conditions are simultaneously uncertain and consequential. AI is moving fast and restructuring economic patterns while the underlying technology itself remains partially unexplainable, even to the people building it. You're making decisions where the inputs are shifting.

The conventional response is to focus on what's immediately actionable and ignore the strategic questions until things settle. But you can make informed positioning decisions without perfect prediction. There's value in understanding what to pay attention to.

Today, that could mean looking beyond tool proficiency to where value is actually flowing. If your client base is connected to infrastructure spending (tech workers, AI-adjacent services, high-end markets), you're positioned differently than if you're serving compressed-margin middle markets. You could be running the most AI-optimized operation in your market, but if you're serving clients whose purchasing power is shrinking while your costs track to the infrastructure economy, the math works against you. That's one version of the strategic question: not just "am I using AI effectively?" but "am I paying attention to which economy I'm operating in?"

Tomorrow, the relevant patterns will be different. The skill isn't nailing today's specific assessment. It's building the capacity to notice when the ground shifts and reassess accordingly, because it will shift, and recognizing that change early creates advantage.
