Following on from last week’s commentary on the Tech sector, this week looks at how big AI infrastructure CAPEX is, what revenues need to be generated for payback, and over what timescales. There is much dispute among research boutiques as to whether this is a bubble waiting to burst or simply the start of something bigger. The likes of Macrostrategy argue the former: AI – especially LLM-based AI – is the biggest and most dangerous bubble ever, exceeding the scale of the Dotcom and 2008 bubbles by 17X and 4X respectively! Their arguments centre on cheap capital, speculative hype and round-tripping behaviour (e.g. Nvidia):
- Big Tech monopolies (like the Mag-7) are deploying LLMs to preserve their monopolies – not because of commercial viability.
- Most LLMs lack true “intelligence” and, instead, rely on pattern-matching representations that become less effective with scale.
- LLMs are used mainly for low-value tasks. The combination of “High Compute Cost + Low Pricing Power = Unsustainable Economics”.
- Nvidia’s receivables have increased significantly and stand at some 85% of quarterly revenues largely on the back of selling chips in return for “future compute”.
- The Nvidia effect adds some 3% to US GDP via CAPEX and the corresponding wealth effect (i.e. rise in share price and its impact on share portfolios). They project a loss to US GDP of some 3% to 6% if this bubble bursts resulting in an immediate recession.
- The cash burn is very high, and the scale needed for profitability is distant. The level of user monetisation is not there.
- Big Tech valuations are at risk of a major reset.
By contrast, a Goldman Sachs (GS) report argues “the AI spending boom is not too big” and gives the following reasons:
- AI CAPEX is still small vs other booms and is running at $250bn annually. This is still below CAPEX on Cloud at its peak phase and well below the Dotcom-era telecom investment (about $700bn pa).
- They forecast AI revenue of $1.2tn by 2030 and believe the return on capital employed is plausible and comparable to the early cloud or mobile cycles.
- Critically, they believe broad-based adoption is accelerating, with 35% of enterprises globally having started some level of GenAI integration in 2025 (vs 15% in 2023). The US, South Korea and UAE lead!
- While the likes of OpenAI are running at a loss, others (e.g. Nvidia, AWS, MS Azure) are already monetising the infrastructure layer.
- The cost curve is improving, with cost per inference/token down some 60% to 80% y/y. [“Tokens” are pieces of words/punctuation, and cost per inference/token is the price of processing them: total cost = the number of input/output tokens times their respective price rates.]
- They found no evidence of systemic round-tripping. They did acknowledge Nvidia’s growing receivables, but attributed that to longer payment cycles – not to some “scheme-driven demand fabrication”.
- They project an increase of +1.3% to global GDP by 2030.
- Big Tech valuations are supported by growth and productivity gains.
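The token-cost arithmetic described in the bullets above can be sketched in a few lines. The function and the per-million-token prices below are illustrative assumptions, not any vendor’s actual rate card; the 70% decline is the mid-point of the 60% to 80% y/y range cited in the text.

```python
# Illustrative token-cost arithmetic (hypothetical prices, not a real rate card).

def inference_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    """Total cost = input/output token counts times their respective
    per-million-token price rates, as defined in the text."""
    return (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m

# A hypothetical request: 2,000 input tokens and 500 output tokens,
# priced at $3.00 and $15.00 per million tokens respectively.
cost_now = inference_cost(2_000, 500, 3.00, 15.00)

# Applying the mid-point (70%) of the reported 60-80% y/y cost decline.
cost_next_year = cost_now * (1 - 0.70)

print(f"cost today:     ${cost_now:.4f}")       # $0.0135
print(f"cost next year: ${cost_next_year:.4f}")
```

The point of the sketch: per-request costs are tiny, so the economics hinge entirely on volume and on how fast the price per token keeps falling.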
The commercial success of GenAI will inevitably come down to commercials, i.e. fee-paying users. A recent DB study showed European spending on ChatGPT has stalled since May. OpenAI (ChatGPT’s maker) claims over 800mn weekly active users but, of these, only some 20mn are believed to be fee-paying. The impediments to a user moving from the free version to a fee-paying one are key. As highlighted last week:
- Over 70% of AI usage is non-work related i.e. it lives in a “shadow AI economy”. Its usage is not part of a structured, integrated workflow process that boosts productivity.
- The winners will be those companies that can customise and integrate workflow processes. That means working with external partners on a project-by-project basis and treating AI tools as business solutions rather than as SaaS (Software as a Service).
How much revenue needs to be generated for breakeven?
- At an industry-wide level, Global Data Centre CAPEX is expected to exceed $250bn pa from 2024 to 2026. Nvidia alone is expected to pump in $70bn pa (= $350bn over the next 5 years) on AI infrastructure.
- Assuming 5-year, straight-line depreciation, that is a depreciation charge of $50bn pa.
- As a rule of thumb, Operating Expenditure (OPEX) runs at about 30% to 50% of CAPEX pa. Assuming a mid-point of 40% on $250bn, this equates to $100bn pa in OPEX.
- So, total cost ($50bn + $100bn) is $150bn pa – this is the revenue that needs to be earned each year to break even.
- What revenue streams could support this? AI cloud services (e.g. Azure, AWS, Google Gemini); AI chip sales (e.g. Nvidia, AMD, Intel); Enterprise AI workloads (e.g. Robotics, Biotech, Finance, Defence and more); Consumer AI (e.g. searches, image generation, personal assistants).
How big is the known AI/Cloud/AI-enabled revenue data?
- As of 2024, the total Global AI market size was $279.22bn. Of this, Global Cloud AI was $87.27bn, US AI market was $146.09bn and OpenAI’s annualised run-rate was $12bn.
- Google One (the subscription component) had reached 150mn users.
- The difficulty in projecting revenue data is that (1) some numbers overlap (e.g. “AI market” could include hardware, software, services, consulting, etc.), so it is not a simple case of aggregating them without double-counting; (2) the numbers are broad-based and not broken down by “AI chip sales used in data centres” vs “AI services/subscriptions, etc.”; and (3) AI companies are not exactly forthcoming about the split between revenue that is truly AI vs GenAI.
- The fragmented nature of current AI revenue means growth assumptions (CAGR, Compound Annual Growth Rate) rely on estimates from market researchers – and they can be wide-ranging! However, they land as follows: Global AI Market +20% to +35% pa for the next 5y to 10y; Cloud AI +40% pa to 2030; and company/product figures suggest triple-digit growth (OpenAI’s jump to a run-rate of $12bn marks a particularly high growth rate).
What is the actual revenue being earned?
- Nvidia’s AI-specific revenue alone is expected to hit $60bn to $70bn (this includes H100s, GH200s and so on).
- OpenAI’s revenue run-rate is $12bn.
- Amazon/AWS AI/Bedrock is around $15bn to $20bn.
- Azure AI (Microsoft) is about $15bn (and accelerating).
- Google Cloud AI/Gemini is around $10bn to $12bn (and accelerating).
- Others (Meta, AI SaaS, China) around $20bn.
- Total estimated AI revenue in 2024 is between $130bn and $140bn.
- At these growth rates – and assuming they hold (a big assumption!) – breakeven over the next two years is possible (total cost of $150bn pa, i.e. $50bn depreciation plus $100bn OPEX, against CAPEX of $250bn pa).
- It all comes down to user numbers! At the end of the day, this is where developers and investors are at odds! ChatGPT has reported weekly active users of between 400mn and 700mn in 2025. However, paid-user estimates drop to between 10mn and 20mn. Revenue estimates are quite wide but average around $10bn (to mid-2025). Full-year 2025 revenue is projected at $12.7bn, and OpenAI expects to reach cashflow positivity by 2029.
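The “breakeven over the next two years” claim above can be stress-tested by compounding the 2024 revenue estimate forward at the quoted CAGR range until it crosses the $150bn pa total-cost line. The helper below is a sketch using only figures from the text ($130bn to $140bn base, +20% to +35% pa growth).

```python
# Years of compound growth needed for estimated AI revenue to cross
# the $150bn pa breakeven level, using the ranges quoted in the text.

def years_to_breakeven(revenue_bn, cagr, target_bn=150):
    """Compound revenue_bn at cagr per year until it reaches target_bn."""
    years = 0
    while revenue_bn < target_bn:
        revenue_bn *= (1 + cagr)
        years += 1
    return years

# Low/high 2024 base ($bn) against low/high Global AI Market CAGR.
for base in (130, 140):
    for cagr in (0.20, 0.35):
        n = years_to_breakeven(base, cagr)
        print(f"base ${base}bn at {cagr:.0%} pa -> breakeven in {n} year(s)")
```

Even the most conservative combination ($130bn base, +20% pa) crosses $150bn within a year of compounding, which is why the text treats two-year breakeven as plausible if (and only if) the growth rates hold.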
What if it is all hype and it goes horribly wrong – how badly would it be felt in markets?
There are two types of effects – direct and indirect. The direct effect is zero, because the likes of OpenAI are not publicly listed companies – their financial performance does not directly impact the market, so the true risk becomes “Narrative Collapse + Write-Offs”. The indirect impact, however, could be significant should markets get the slightest hint that CAGRs (Compound Annual Growth Rates) are going to fall short! In that case, the entire Tech sector will be seen as having been a bubble, and investors will exit at alarming speed! Two of the largest players in the OpenAI ecosystem are Microsoft (which owns about 49% of OpenAI for an investment of some $13bn, and integrates ChatGPT and GPT APIs into the likes of Copilot, Azure AI, Office 365 and Bing – these all generate revenue for Microsoft, so the success of OpenAI directly boosts Microsoft’s cloud and software revenues) and Nvidia (which provides the GPUs and AI chips that OpenAI relies on – imagine how many chips go into OpenAI). Here’s a stab at what might happen, using historical reference:
| Scenario | What Happens | Why It Matters | Potential Stock Price Decline | Driver of Decline | Assumptions | Historical Reference |
| --- | --- | --- | --- | --- | --- | --- |
| OpenAI succeeds slowly | No material EPS gain for MSFT/NVDA, but the story holds. Market tolerates lag in profits. | CAPEX is seen as forward-thinking, patient capital. No write-offs. | 5–10% | Mild multiple compression | AI optimism cools, but GenAI adoption continues at a steady pace. | Tech pullbacks like the 2014–2016 cloud transition (~10%) |
| OpenAI stalls / slow uptake | Uptake slows. Monetisation lags. Investors begin to fear AI was overhyped. | Narrative collapse risk emerges. CAPEX called into question. ETF flows reverse. | 40–60% | Panic selling, multiple de-rating, risk-off unwind | GenAI expectations reset. CAPEX not justified by user traction. | Dotcom bust (2000–02 Nasdaq -78%) |
| OpenAI fails dramatically | Catastrophic collapse of the AI narrative. Loss-making, ethical/legal shocks, or financial failure. | Forced write-offs. Tech sector contagion. Institutional exit from AI exposure. | 60–80% | Widespread capitulation, systemic sentiment shock | No path to GenAI monetisation. Full narrative collapse akin to the Pets.com era. | Dotcom bust (individual stocks -80% to -100%) |
MARKET SUMMARY...
