
The AI Bubble: Why It's Different

  • Writer: James Purdy
  • 3 days ago
  • 7 min read

Key Takeaways

  • AI valuations look inflated, but the underlying value is real and measurable across enterprise, education, logistics, medicine, and creative industries

  • The dotcom and 2008 bubbles collapsed because the value being traded was theoretical or synthetic, while AI is built on deployed infrastructure, operational use, and documented productivity gains

  • Nearly four out of five organizations now use AI in production environments, the fastest large-scale technology adoption ever recorded

  • Any future correction is more likely to result from physical constraints such as energy capacity, semiconductor bottlenecks, and infrastructure limits than from a discovery that AI has no substance

Every day I find a new use for AI that saves me time or improves my work, and every day I see another headline warning that we are in an AI bubble about to burst. It feels like living in two different worlds at once. On one hand, I have not met a single person under seventy who does not use AI in some form. On the other, commentators keep insisting we are reenacting a version of dotcom optimism, as if ChatGPT were just the modern GeoCities. Meanwhile, Sundar Pichai and Sam Altman openly acknowledge market irrationality while simultaneously investing billions into new infrastructure. This tension is not a contradiction. It is the signal that something different is happening.

 The Article in 30 Seconds

This article examines whether the current AI surge resembles past market bubbles and arrives at a simple conclusion. It does not. The dotcom era collapsed when investors discovered that many companies had no viable business models behind their valuations. The 2008 financial crisis collapsed because synthetic financial instruments hid the true value of the assets they represented. By contrast, today's AI expansion is fueled by real infrastructure, real enterprise adoption, and real productivity gains backed by measurable data. This does not mean AI is risk-free. The danger now comes from physical constraints like energy, chips, and compute limits that even trillion-dollar firms cannot spend their way past. In other words, this wave will not burst like the last two. It will bend, correct, and consolidate, but the underlying transformation is already too embedded to disappear.

 The Bubble Case: Familiar Warning Signs

 Several financial indicators do resemble classic speculative excess. Market concentration has reached its highest point in over fifty years, with the Magnificent Seven accounting for 37 percent of the S&P 500. Between December 2024 and November 2025, these companies produced 83 percent of all market gains. Nvidia alone added $1.25 trillion in valuation, Alphabet added $1.04 trillion, and Microsoft added $0.61 trillion.

Nvidia illustrates the scale of this expansion. The company has grown from under $20 billion in market value in 2015 to $4.48 trillion today, driven almost entirely by AI compute demand. OpenAI holds a $500 billion valuation despite remaining unprofitable. Robotics funding reached $4.2 billion in 2024, often for companies with prototypes rather than commercial products.

 The energy projections are equally striking. Data centers currently consume an estimated 415 TWh per year, or about 1.5 percent of global electricity use. The International Energy Agency projects this will rise to 945 TWh by 2030, roughly equal to Japan’s annual consumption. Large AI facilities can draw as much power as 100,000 homes, and the largest hyperscale installations require significantly more.
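To make these figures concrete, here is a minimal arithmetic sketch using only the numbers cited above. Two inputs are my assumptions rather than the article's: treating 2024 as the baseline year for the 415 TWh estimate, and using roughly 1.2 kW of average continuous draw per home to translate the "100,000 homes" comparison into facility terms.

```python
# Rough, illustrative arithmetic on the data-center energy figures cited above.
# Assumptions (not from the article): 2024 is the baseline year for the 415 TWh
# estimate, and an average home draws about 1.2 kW on a continuous basis.

current_twh = 415      # estimated data-center consumption today (TWh/year)
projected_twh = 945    # IEA projection for 2030 (TWh/year)
years = 2030 - 2024    # assumed span between the two figures

# Implied compound annual growth rate in electricity demand
cagr = (projected_twh / current_twh) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")          # ~14.7% per year

# What "as much power as 100,000 homes" means as a facility-level draw
avg_home_kw = 1.2
facility_mw = 100_000 * avg_home_kw / 1000
print(f"Equivalent facility draw: ~{facility_mw:.0f} MW")   # ~120 MW
```

Under those assumptions, demand grows at roughly 15 percent per year through 2030, and a "100,000 homes" facility corresponds to something on the order of 100 MW of continuous load.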

Major institutions have begun to warn of parallels with earlier bubbles. The International Monetary Fund has drawn comparisons to the late 1990s internet boom, arguing that both periods saw valuations and wealth creation reach unprecedented levels. Sam Altman has also acknowledged the excess, stating in August 2025: “Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes.”

 The Contrarian Analysis: Real Foundations

 This is where the comparison to earlier bubbles begins to break down. Unlike dotcom-era companies built on theoretical revenue models, or the 2008 crisis driven by synthetic financial products, AI rests on measurable adoption, documented productivity gains, and large-scale physical infrastructure.

Enterprise adoption is the strongest signal. Seventy-eight percent of organizations now use AI in at least one operational function, up from 55 percent in 2023. Among enterprises with more than 10,000 employees, adoption rises to 87 percent. This is the fastest enterprise technology diffusion ever recorded. For comparison, cloud computing took approximately a decade to reach similar adoption.

The productivity data is equally clear. MIT researchers studying more than five thousand customer support agents found a 14 percent average productivity increase when using generative AI, and a 34 percent increase for novice workers. Harvard Business School documented 38 percent performance improvements among consultants using GPT-4, with a 12.2 percent increase in tasks completed. McKinsey estimates AI could contribute between $2.6 and $4.4 trillion to global GDP annually, with software engineering productivity improving by up to 45 percent.

Multiple institutions report similar findings. The Federal Reserve Bank of St. Louis found workers saved an average of 5.4 percent of work hours using generative tools. Upwork’s Research Institute identifies average productivity gains of 40 percent, with 77 percent of C-suite leaders confirming operational improvements. Teachers using AI report saving roughly six hours per week, showing gains extend beyond corporate environments.
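To translate the percent-of-hours figures into something tangible, the sketch below converts them into hours per week. The 40-hour week is my assumption for illustration; the studies cited above do not specify it.

```python
# Translate the cited productivity percentages into hours per week,
# assuming a standard 40-hour work week (assumption for illustration only).

hours_per_week = 40

# St. Louis Fed figure: 5.4% of work hours saved with generative tools
fed_savings_hours = 0.054 * hours_per_week
print(f"5.4% of hours saved ≈ {fed_savings_hours:.1f} hours/week")   # ~2.2 hours

# Teachers reporting ~6 hours saved per week, expressed as a share of the week
teacher_hours_saved = 6
teacher_share = teacher_hours_saved / hours_per_week
print(f"6 hours/week ≈ {teacher_share:.0%} of a 40-hour week")       # ~15%
```

In other words, the Fed's figure amounts to roughly two hours per person per week, while the teachers' reported savings are closer to fifteen percent of a standard week.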

The infrastructure investments represent durable physical assets rather than speculative valuations. Since January 2025, more than $500 billion in new data center projects have been announced. Microsoft plans to spend $80 billion in fiscal year 2025, Meta expects $60 to $65 billion, and the Stargate Project estimates $500 billion over four years. These are large, permanent investments involving land, power grid expansion, semiconductors, and long-term assets with multi-decade utility.
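A rough annualized view of the commitments named above is sketched below. The spreading of Stargate's $500 billion evenly over four years, the use of the midpoint of Meta's range, and the decision to ignore the mix of fiscal and calendar years are all my simplifications, so treat the result as an order-of-magnitude illustration rather than a precise total.

```python
# Rough annualized view of the capital commitments cited above.
# Assumptions (mine, for illustration): Stargate's $500B is spread evenly
# over four years, Meta's $60-65B range is taken at its midpoint, and the
# mix of fiscal and calendar years is ignored.

commitments_billion_per_year = {
    "Microsoft (FY2025 plan)": 80,
    "Meta (2025 midpoint)": (60 + 65) / 2,
    "Stargate ($500B / 4 years)": 500 / 4,
}

total = sum(commitments_billion_per_year.values())
for name, amount in commitments_billion_per_year.items():
    print(f"{name}: ${amount:.1f}B per year")
print(f"Combined: roughly ${total:.0f}B per year from these three alone")  # ~$268B
```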

Data center spending rose 34 percent in 2024 to $282 billion, the highest on record, according to Synergy Research Group. These are not website valuations but infrastructure that will retain value regardless of equity market volatility.

Finally, usage metrics confirm genuine demand. ChatGPT processes an estimated 2.5 billion prompts per day and has 800 million weekly active users. Google Gemini reports 450 million monthly users. More than 92 percent of Fortune 500 companies use ChatGPT in some capacity, and 77 percent of banking executives consider AI essential for operations.

Sector-level penetration shows the breadth of integration. Financial services report 89 percent adoption, healthcare 78 percent, technology companies 94 percent, and manufacturing 68 percent. These applications range from fraud detection to medical imaging, software automation, predictive maintenance, and logistics optimization.

Taken together, this data points to a transformation that is already underway. The investments are not speculative bets on possible future value. They are part of a rapid and widespread deployment of technology that is already reshaping the global economy.

 The difference between a speculative bubble and a structural transformation has direct consequences for leaders. If this were a classic asset bubble, the safest move would be to sit on the sidelines, wait for a crash, and then enter at a discount. The evidence points in a different direction. AI has already crossed the point where delay reduces risk on paper but increases competitive risk in practice.

This is especially clear in education and public institutions. While some systems hesitate and wait for valuations to normalize, others are already using AI to redesign curriculum workflows, automate routine administration, and improve student support. The question is no longer whether AI will reshape these environments. The question is whether institutions choose to shape that process or simply respond to it after the fact.

The constraints that could trigger market corrections are real. Energy capacity, semiconductor supply, regulatory action, and local resistance to new infrastructure will influence the pace and cost of deployment. What they will not do is reverse the documented productivity gains or remove AI from workflows where it is already embedded. If anything, a correction is likely to slow expansion at the edges while the core systems continue to deepen.

This creates a slightly unusual strategic environment. Equity markets may become more volatile even as the operational dependence on AI increases. A future correction might make certain tools cheaper to adopt just as their capabilities improve. Organizations that treat current conditions as a reason to postpone serious planning may find themselves permanently behind those that started building stable, responsible AI practices early. 

Reality Check

That brings me back to the basic disconnect that started this piece. In my own work, I see AI used every day in quiet, practical ways that make people more effective. I also speak with education leaders who are already relying on AI for tasks they would struggle to reverse without major disruption. At the same time, much of the public conversation is still trying to fit this into old stories about GeoCities and mortgage-backed securities.

The bubble narrative persists because we do not yet have a better shared framework for understanding technology that scales this fast and runs through both software and physical infrastructure. The simpler reading is that we are watching two things at once: speculative excess in financial markets and a genuine shift in how work gets done. The first will correct. The second is already too integrated to unwind.

This is why my own focus has been less about predicting prices and more about building practical guidance for institutions. Someone still has to write the meeting agendas, draft the oversight processes, and decide how a school or public agency will use AI on Monday morning with the staff and tools it actually has.

If you are in that position and need help building something stable and defensible, that is the work my new book is meant to support. The Stop-Gap AI Compliance Guide is written for institutions that cannot wait for perfect regulation but still want a clear operational framework for responsible AI use. It is not a theoretical roadmap. It is a way to turn this moment into predictable routines and documented decisions.

You can always reach out to me directly if you want to talk through how that might look in your context.

 

About the author

Ryan James Purdy is the author of the Stop-Gap AI Policy and Compliance Guides and works with education leaders and public institutions on practical AI governance and implementation strategies.

 

References and sources

 A short selection of recent sources informing this article:


  1. BBC News, interview with Sundar Pichai https://www.bbc.com/news/articles/cwy7vrd8k4eo

  2. CNBC, IMF and Bank of England on AI bubble risk https://www.cnbc.com/2025/10/09/imf-and-bank-of-england-join-growing-chorus-warning-of-an-ai-bubble.html

  3. Yahoo Finance, Magnificent Seven concentration in the S&P 500 https://finance.yahoo.com/news/magnificent-seven-makes-one-third-140006761.html

  4. Forbes, S&P 500 concentration and diversification risk https://www.forbes.com/sites/investor-hub/article/sp-500-weight-mag-7-stocks-diversification-risk/

  5. Business Insider, Bill Gates and Sam Altman on whether AI is a bubble https://www.businessinsider.com/ai-bubble-debate-business-leaders-sam-altman-bill-gates-2025-11

  6. Wharton School, 2025 AI Adoption Report https://knowledge.wharton.upenn.edu/special-report/2025-ai-adoption-report/

  7. Apollo Technical, AI productivity statistics and study summaries https://www.apollotechnical.com/27-ai-productivity-statistics-you-want-to-know/

  8. FullView, global AI and chatbot usage statistics https://www.fullview.io/blog/ai-statistics

  9. CNBC, big tech plans to spend more than 300 billion dollars on AI in 2025 https://www.cnbc.com/2025/02/08/tech-megacaps-to-spend-more-than-300-billion-in-2025-to-win-in-ai.html

  10. Carbon Brief, data centre energy use and emissions https://www.carbonbrief.org/ai-five-charts-that-put-data-centre-energy-use-and-emissions-into-context/

  11. Bruegel, DeepSeek and structural shifts in AI efficiency https://www.bruegel.org/policy-brief/how-deepseek-has-changed-artificial-intelligence-and-what-it-means-europe

  12. Nvidia, financial results for fiscal 2025 http://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-fourth-quarter-and-fiscal-2025

