Beyond the Boom

January 5, 2026

"The trouble with our times is that the future is not what it used to be" - Paul Valéry


The current discourse around artificial intelligence gives the impression that it’s on an uninterrupted upward course. That depiction has held remarkably well over the past decade, which explains much of the capital invested in the field. However, it’s no longer obvious that the next stage of development will resemble the last one.

There’s a widely accepted presumption that exponential returns will be persistently propelled by larger models and continued adoption, but growing evidence points in a different direction. This may be the onset of a long plateau in which raw capability gains lag and economic effects diffuse gradually through deployment and integration rather than erupting in visible breakthroughs. The future feels difficult to forecast precisely because we’re still so early, and people are struggling to grasp how far we’ve come in just a few years. What we can observe is that early leaps fostered overconfidence and masked the reality that true scalability tends to require years of unglamorous refinement. Digitization and cloud migration both took decades, and AI will likely be no different.1

Progress now reflects a growing stack of bottlenecks. Reported gains from frontier labs like OpenAI and Anthropic are an opaque mixture of genuine capability improvements, measurement artifacts, and benchmark-driven inflation.2 The first of many challenges came from the quick saturation of available data. In addition, optimizing for benchmarks rewards incremental performance gains that don’t reliably translate into general usefulness. At the same time, experimentation has emerged as an overlooked constraint inside the AI labs themselves. There’s no shortage of ideas from intelligent researchers, but designing, executing, and interpreting rigorous experiments has become exceptionally difficult under current compute constraints. Training runs can fail silently or only partially, which makes it hard to distinguish genuine model shortcomings from flawed evaluation or training instability.3 Meanwhile, application-layer startups struggle with adoption rather than scale. Enterprise leaders are deploying AI but report frustration with lackluster productivity gains when integrating it into legacy infrastructure.4 The “picks and shovels” of the era, like compute suppliers, data tooling, and evaluation platforms, may fare better on average in a plateau, as demand would persist even as hype deflates.5

Scaling laws are the empirical relationships that underwrite much of today’s AI optimism. Models given more data, parameters, and compute see their performance improve in a predictable way. For investors, these laws have provided an unusually clear roadmap for where to deploy capital and what to expect from larger training runs. However, by mid-2025, evidence of diminishing returns had mounted. Scaling laws say little about the tapering of marginal improvements from larger models or about rising inference costs. Synthetic datasets can introduce noise that erodes gains, and energy constraints cap feasible training runs at scale.6 Longer training cycles and heavier compute requirements stretch development timelines, decelerating iteration and compounding execution risk.7 The edge now may come from data quality, product design, and clever engineering rather than raw scale alone.
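The shape of these relationships can be sketched with a toy Chinchilla-style power law. The coefficients below are the fits published by Hoffmann et al. (2022); treat this as an illustration of diminishing returns, not a prediction for any particular model.

```python
# Illustrative Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta
# Coefficients are the Hoffmann et al. (2022) fits; assumed here purely
# for illustration, not as claims about any specific frontier model.

E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit constants
ALPHA, BETA = 0.34, 0.28       # diminishing-returns exponents

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters and D training tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Each 10x scale-up (tokens scaled proportionally) buys a smaller
# absolute reduction in loss:
for scale in (1e9, 1e10, 1e11, 1e12):
    print(f"{scale:.0e} params: loss ~ {loss(scale, 20 * scale):.3f}")
```

Under these fits, each tenfold increase in model size shaves off roughly half as much loss as the previous tenfold increase did, which is the tapering of marginal improvements the paragraph describes.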

Counterintuitively, periods that feel like plateaus have historically been where sustainable value can accumulate. Backpropagation was developed amid the major AI winter of the late 20th century and later became the workhorse algorithm behind modern deep learning.8 We’ll also continue to see a “sailing ship” effect, where the introduction of AI accelerates innovation in incumbent technologies.

Much of the anxiety around AI right now centers on the prospect of a dramatic correction. Are we in a bubble? What might happen in the event of a sudden collapse in valuations? How widely would a downturn be felt across the economy? These concerns largely stem from the sense that AI companies have been priced for perfection. I’m not convinced that a reset would play out the way past ones have, or that the current concentration of AI investment would have a direct impact on the broader economy.

A bubble is usually thought of as evidence of some form of collective error, whether that means brazenness, herd behavior, or outright irrationality. But bubbles can also be understood as aggressive bets placed under extreme uncertainty, where the cost of missing out appears larger than the cost of being wrong.9 The upside is infinite while the downside, in theory, is limited to equity write-downs.

If you gain, you gain all. If you lose, you lose nothing. Wager then, without hesitation, that He exists
Blaise Pascal

Importantly, this behavior is not driven purely by irrationality. The incentive structure of venture capital rewards markups even when underlying timelines and value are uncertain. Deploying into perceived category leaders and inflating valuations enables firms to justify larger funds. Frontier labs took in over $80 billion of funding in 2025 alone at preposterous revenue multiples.10 The cycle reinforces itself, with bigger funds demanding bigger bets. As long as AI’s scale can absorb those bets, accountability can be deferred until exits materialize or falter. Paul Graham described a prescient example from the dot-com boom: Yahoo’s “bogus” earnings both arose from and perpetuated a circular loop in which investors funded startups chasing the next Yahoo, those startups spent the funding on Yahoo ads, and the inflated earnings spurred a new round of startup investment.11 A clear parallel can be drawn to the dynamics of AI startup funding and rapid ARR growth today, which are producing massive overvaluations. An important caveat Graham offers is that smart people get sucked into these vicious cycles because a really big bubble needs something solid at its center.

History suggests that periods like this one, of apparent overinvestment in shared infrastructure, are necessary. Some bubbles finance assets that no single rational actor would build alone.12 This temporary relaxation of capital discipline has many precedents; Amazon, for example, benefited from a post-dot-com environment with a concentration of talent and cheap, underutilized infrastructure.13

A lot of AI investment is already sunk into compute, talent, and long-lived assets.14 Write-downs would expose overvaluation and compress multiples, extending liquidity timelines. Impaired growth projections could painfully contract price-to-free-cash-flow ratios for hyperscalers like Microsoft and Google, dragging private valuations with them, particularly if AI capex fails to meet return expectations.15 Shockwaves would likely spread through broader markets by way of index exposure and reduced risk appetite. Unlike credit-fueled bubbles, however, this one lacks leverage amplification, which would limit the broader macroeconomic impact. Losses would be real, but largely contained.
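The mechanics of multiple compression can be seen in a toy Gordon-growth sketch. The discount and growth rates below are assumed round numbers for illustration, not estimates for any particular company.

```python
# Toy Gordon-growth illustration of multiple compression: a steady-state
# fair price-to-FCF multiple is approximately 1 / (r - g), where r is the
# required return and g is long-run free-cash-flow growth. All numbers
# here are assumed for illustration only.

def fcf_multiple(discount_rate: float, growth_rate: float) -> float:
    """Steady-state price-to-free-cash-flow multiple."""
    assert discount_rate > growth_rate, "model only valid when r > g"
    return 1.0 / (discount_rate - growth_rate)

r = 0.10  # assumed required return
for g in (0.08, 0.06, 0.04):
    print(f"growth {g:.0%}: ~{fcf_multiple(r, g):.1f}x FCF")
```

At an assumed 10% discount rate, trimming expected growth from 8% to 6% halves the fair multiple (50x to 25x), which is why impaired growth projections contract valuations so painfully.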

The more meaningful effects would be visible within the venture ecosystem itself. Companies built on fragile differentiation, thin unit economics, or purely narrative momentum would struggle. Venture firms in need of liquidity would quickly feel pressure. Meanwhile, infrastructure providers and investors with longer-duration capital would quietly benefit. Corrections can be healthy in their reallocation of labor by steering talent towards projects with clearer economic grounding. The assets financed during the boom would not disappear; they would be repriced, consolidated, or wait for use cases and demand to catch up.

Recognizing an important trend turns out to be easier than figuring out how to profit from it
Paul Graham

AI may not generate venture-scale outcomes on conventional timelines, but enormous value can be created without producing the kinds of exits the market has been trained to expect. Returns may accrue downstream through adoption and integration. Nick Bostrom noted in 2006 that once something is actually useful enough and common enough it’s not labeled AI anymore. Implementation is likely to be the bottleneck for years. Organizations will have to redesign obsolete processes, retrain employees, and ultimately accept that productivity gains can arrive unevenly. None of this shows up cleanly in quarterly metrics, but it is where lasting economic impact is actually realized.

The question has shifted from assessing AI’s revolutionary potential to examining whether venture capital is properly synchronized with its cadence of value creation. What passed for a correction in 2021 now looks more like a partial regime shift than a full clearing event. Risk capital never fully left the system. David Cahn argues that we have been “bubble hopping” since the dot-com era, with capital repeatedly rotating toward the next new thing without materially updating underlying priors.16 AI has functioned less as a resolution to that cycle than as a deferral mechanism. The scale of its perceived upside absorbed a risk appetite that might otherwise have forced widespread repricing. This allowed capital to remain deployed even as exit markets stalled, leaving a growing class of highly valued yet economically unresolved companies stranded.

The market can stay irrational longer than you can stay solvent
John Maynard Keynes

If AI deploys over decades, the firms that perform best will not be those with the strongest conviction about its ceiling, but those structured to endure ambiguity. Some companies will reach meaningful scale quickly by perfecting distribution and, more importantly, timing. But for most of the industry, returns will favor patience with deployment-heavy businesses and a willingness to underwrite value that compounds quietly. One remedy for the uncertainty would be an end to the trend of forever-private companies. Public markets are good at valuing companies, and if the coming year brings as many IPOs as people expect, I think some of these moonshot valuations will come back to Earth.

Defining technologies rarely announce themselves clearly. They transform economies through long periods of adjustment, experimentation, and disappointment, punctuated by moments of clarity that only appear obvious in retrospect. Mistimed expectations, rather than overinvestment, will be the greater risk. For investors, that distinction matters: those who sync to the rhythm will win asymmetrically.

  1. Rechtman, Yoni. “Implementation Is the Task of Our Time.” 99% Derisible, 12 Dec. 2025, https://99d.substack.com/p/implementation-is-the-task-of-our. Accessed 31 Dec. 2025.
  2. Leech, Gavin. “AI in 2025: Gestalt.” Argmin Gravitas, 8 Dec. 2025, https://www.gleech.org/ai2025. Accessed 31 Dec. 2025.
  3. Catanzaro, Sarah. “The AI Research Experimentation Problem.” Amplify, 26 Aug. 2025, https://www.amplifypartners.com/blog-posts/the-ai-research-experimentation-problem. Accessed 31 Dec. 2025.
  4. MIT NANDA. The GenAI Divide: State of AI in Business 2025. By Aditya Challapally, Chris Pease, Ramesh Raskar, and Pradyumna Chari, preliminary findings from Project NANDA, July 2025, pp. 1–20.
  5. Nathan, Allison, ed. “AI: In A Bubble?” Top of Mind, Issue 143, Goldman Sachs Global Investment Research, 22 Oct. 2025, https://www.goldmansachs.com/pdfs/insights/goldman-sachs-research/ai-in-a-bubble/report.pdf.
  6. Shumailov, Ilia, et al. “AI models collapse when trained on recursively generated data.” Nature, vol. 631, no. 8022, 24 July 2024, pp. 755–759, https://doi.org/10.1038/s41586-024-07566-y.
  7. Edelman, Yafah, and Anson Ho. “Compute Scaling Will Slow Down Due to Increasing Lead Times.” Epoch AI, 2025, https://epoch.ai/gradient-updates/compute-scaling-will-slow-down-due-to-increasing-lead-times.
  8. Aschenbrenner, Leopold. Situational Awareness: The Decade Ahead. Situational Awareness AI, June 2024, situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf.
  9. McCormick, Packy. “Infinity Mission.” Not Boring, 29 Oct. 2024, https://www.notboring.co/p/infinity-missions. Accessed 31 Dec. 2025.
  10. Bogoslaw, David. “Funding for AI Dominated in VC in 2025: Crunchbase.” Venture Capital Journal, 17 Dec. 2025, www.venturecapitaljournal.com/funding-for-ai-dominated-in-vc-in-2025-crunchbase.
  11. Graham, Paul. “What the Bubble Got Right.” Paul Graham, Sept. 2004, paulgraham.com/bubble.html.
  12. Thompson, Ben. “The Benefits of Bubbles.” Stratechery, 5 Nov. 2025, https://stratechery.com/2025/the-benefits-of-bubbles/. Accessed 31 Dec. 2025.
  13. Goetzmann, William N. “Bubble Investing: Learning from History.” NBER Working Paper 21693, 2015, https://doi.org/10.3386/w21693.
  14. Nathan, Allison, ed. “Gen AI: Too Much Spend, Too Little Benefit?” Top of Mind, Issue 129, Goldman Sachs Global Investment Research, 25 June 2024, https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai--too-much-spend%2C-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf.
  15. Kawa, Luke. “If This Really Is an AI Bubble, Let’s See Some More Inflation.” Sherwood News, 26 Dec. 2025, sherwood.news/markets/2026-charts-to-watch-hyperscaler-valuations-ai-bubble-boom-inflation-capex.
  16. Cahn, David. “It’s All One Bubble.” David Cahn’s Substack, 28 Jan. 2025, https://dcahn.substack.com/p/its-all-one-bubble. Accessed 31 Dec. 2025.