Does Nvidia’s $5-trillion feat raise the spectre of an AI winter?

We examine how sustainable the surge really is.

Why is Nvidia in the headlines again?

Nvidia’s rise has been meteoric. The chipmaker crossed a $1 trillion market cap in May 2023, doubled to $2 trillion by February 2024, touched $3 trillion in June 2024, and overtook Apple and Microsoft at $4 trillion in July 2025 before hitting $5 trillion on 30 October. Analysts now forecast a market cap of $8-10 trillion by 2030, fuelled by surging investment in AI infrastructure.
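As a rough sanity check on those milestones, the implied compound annual growth rates can be worked out directly from the figures above. The dates are approximated to month granularity; this is a back-of-the-envelope sketch, not a forecast:

```python
# Rough CAGR check on the market-cap milestones cited above.
# Values in trillions of dollars; dates approximated to months.

def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values."""
    return (end_value / start_value) ** (1 / years) - 1

# $1T (May 2023) -> $5T (Oct 2025): roughly 2.4 years
years_elapsed = 2 + 5 / 12
past_growth = cagr(1.0, 5.0, years_elapsed)

# $5T (Oct 2025) -> $8T-$10T forecast (2030): roughly 4.2 years
years_ahead = 4 + 2 / 12
implied_low = cagr(5.0, 8.0, years_ahead)
implied_high = cagr(5.0, 10.0, years_ahead)

print(f"Past CAGR:    ~{past_growth:.0%} a year")
print(f"Implied CAGR: ~{implied_low:.0%}-{implied_high:.0%} a year")
```

The takeaway: the run-up so far required roughly 95% annual growth, while the 2030 forecasts imply a much more modest low-to-high-teens rate from here.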

Nvidia’s fiscal year 2025 revenue of $130.5 billion, up 114% from the previous year, is projected to exceed $200 billion in fiscal 2026. Some, including Fundstrat Global Advisors, are predicting $1 trillion of annual revenue by the end of the decade.

What explains this rapid rise?

Nvidia’s ascent to the ranks of the Big Tech elite stems from the dominance of its high-performance graphics processing units (GPUs), from the current Blackwell series to the upcoming Rubin data-centre chips, in training and running large language models, image generators, and other AI workloads.

Once a gaming-chip maker, Nvidia has evolved into an AI infrastructure powerhouse, building not just chips but a full ecosystem around its CUDA platform, combining hardware, software, and deep data-centre partnerships.

Nearly 89% of revenue now comes from its data-centre business, while its gaming and AI PC unit contributes around 10%. The company’s professional visualisation, automotive, and robotics divisions remain small but are growing rapidly. With an estimated 85-90% share of the AI chip market, Nvidia has struck supply deals with global giants such as Nokia, Samsung, Tesla, Siemens, Hyundai, Reliance Industries, and the Tata Group.

It is also investing billions in partnerships with OpenAI, Microsoft, Oracle, and Intel to produce chips and build large-scale AI infrastructure, including AI factories and supercomputers. In the US, Nvidia is collaborating with the Department of Energy’s national labs to strengthen the country’s AI research and industrial capabilities. It is also working with the UK, France and Germany as the EU commits €20 billion to establish 20 AI factories—five of them “gigafactories”—to expand computing capacity tenfold.

As geopolitical tensions with China ease, Nvidia could see stronger-than-expected growth as chip exports to the region pick up once again.

How big is the chip and AI infrastructure market?

The broader market for AI infrastructure, valued at $47.2 billion in 2024, is projected to grow from $60.2 billion in 2025 to nearly $500 billion by 2034, according to Precedence Research. Gartner forecasts global AI spending will hit $1.5 trillion in 2025 and surpass $2 trillion later in the decade, driven by the integration of AI into everyday products such as smartphones, PCs, and enterprise systems.

Nvidia itself estimates global spending on AI infrastructure will reach $3-4 trillion by 2030. McKinsey projects that by 2030, the world’s data centres will require around $6.7 trillion of investment to meet surging demand for computing power, rising to $7 trillion when non-AI workloads—such as web hosting, enterprise software, and data storage—are included. The consulting firm expects global data-centre capacity to nearly triple by 2030, with AI workloads accounting for about 70% of that growth.

If so, why are enterprises still wary of GenAI and agentic AI projects?

There are good reasons for this skepticism. Last year, for instance, Australia’s Department of Employment and Workplace Relations commissioned Deloitte to evaluate an automated compliance system for job seekers and submit a report. Last month, the consulting firm refunded the Australian government $290,000 after admitting to using GenAI to produce an error-riddled report for which it had charged $440,000.

Last July, Gartner predicted that by the end of 2025, 30% of GenAI projects would be abandoned after proof of concept, citing poor data quality, inadequate risk controls, rising costs, and unclear business value. Greyhound Research’s CIO Pulse 2025 found that 64% of organisations in India were yet to scale even half of their AI pilots.

Similarly, an MIT study in August stirred debate when it found that despite $30–40 billion in enterprise GenAI investment, 95% of organisations had seen no measurable return. The research noted that tools like ChatGPT and Copilot boost individual productivity, not corporate profitability. It said most projects stumble because of fragile workflows, poor contextual learning, and weak integration with daily operations.

Yet, the spending spree continues. In October, OpenAI announced that Nvidia would take an equity stake worth up to $100 billion and co-develop massive GPU-powered data centres. Partnering with Oracle, OpenAI also unveiled plans for five large-scale facilities—part of the $500 billion ‘Stargate’ project—expected to run hundreds of thousands of Nvidia chips. CEO Jensen Huang estimates Nvidia’s products account for about 70% of capital spent on new AI data centres. And OpenAI isn’t alone – Meta, Google, and others are also ramping up infrastructure investments at a record pace.

Could this boom trigger an AI bubble, followed by an AI winter?

On 29 October, Nvidia co-founder and CEO Jensen Huang brushed off bubble concerns, saying the company’s latest chips were expected to generate half a trillion dollars in revenue. “I don’t believe we’re in an AI bubble,” he told Bloomberg Television. “All of these different AI models—we’re using plenty of services and paying happily to do it.”

A Wharton study in October found that 75% of large US firms already see a return on investment (ROI) from AI, and four in five expect positive returns in two to three years. The study, now in its third year, noted that two-thirds of firms invest over $5 million a year in GenAI, with more than 10% allocating $20 million or more. Nearly 72% track structured, business-linked ROI metrics such as profitability, throughput and productivity to ensure measurable outcomes.

Regardless of how you view these contrasting studies, companies spending aggressively could face margin pressure, slower payback, or weaker demand, while a shift in investor sentiment could easily deflate valuations and momentum.

The risk of a correction is real since the semiconductor industry’s average price-to-earnings (P/E) ratio currently hovers around 45, leaving little room for disappointment. To justify such valuations and massive infrastructure bets, companies must show sustained revenue growth from AI applications, not just infrastructure. They’ll need tangible productivity or margin gains and repeatable business models driven by software, services, and recurring usage—not merely hardware sales.
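To make the valuation maths concrete, here is an illustrative sketch of what a P/E of 45 implies. Only the $5-trillion market cap and the P/E of 45 come from this article; the "normalised" multiple of 25 is an assumption chosen purely for illustration:

```python
# What a price-to-earnings (P/E) ratio of ~45 implies, roughly.
# market_cap and pe_ratio are from the article; normalised_pe is
# an illustrative assumption, not a claim about fair value.

market_cap = 5_000_000_000_000  # $5 trillion
pe_ratio = 45

# Annual earnings implied by the current valuation.
implied_earnings = market_cap / pe_ratio  # ~$111 billion a year

# If the multiple compressed to an assumed ~25 with earnings flat,
# the market cap consistent with that multiple would be:
normalised_pe = 25
implied_cap = implied_earnings * normalised_pe

print(f"Implied annual earnings: ~${implied_earnings / 1e9:.0f}B")
print(f"Cap at P/E {normalised_pe}: ~${implied_cap / 1e12:.1f}T")
```

In other words, a de-rating of the multiple alone, with no change in earnings, would wipe out a large share of the current valuation, which is why sustained earnings growth matters so much here.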

There’s another factor, too. An AI-focused data centre consumes as much power as 100,000 homes, and the largest under construction will use up to 20 times more, according to the International Energy Agency (IEA). The IEA projects global data-centre electricity demand to more than double—and quadruple for AI-optimised centres—by 2030. As the AI boom remains both energy-intensive and capital-heavy, soaring infrastructure costs could deepen any market correction if returns fail to keep pace.
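The IEA's "100,000 homes" comparison can be translated into grid terms with one assumption, an average household draw of roughly 10,500 kWh a year (a typical US figure; actual averages vary widely by country):

```python
# Converting the IEA's "100,000 homes" comparison into megawatts.
# avg_home_kwh_per_year is an assumption (~US household average);
# the 100,000-home and 20x figures are from the article.

HOURS_PER_YEAR = 8760
avg_home_kwh_per_year = 10_500
avg_home_kw = avg_home_kwh_per_year / HOURS_PER_YEAR  # ~1.2 kW continuous

homes = 100_000
typical_dc_mw = homes * avg_home_kw / 1000  # megawatts
largest_dc_mw = typical_dc_mw * 20          # the IEA's 20x case

print(f"Typical AI data centre: ~{typical_dc_mw:.0f} MW")
print(f"Largest under construction: ~{largest_dc_mw / 1000:.1f} GW")
```

Under that assumption, a typical AI data centre draws on the order of 120 MW continuously, and the largest facilities under construction would approach the output of a couple of large power plants.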

Investors, meanwhile, are also betting on a world without major regulatory or geopolitical shocks such as export bans or data localisation mandates. If any of these assumptions fail, the gap between expectation and execution could widen quickly, cooling the euphoria and ushering in another ‘AI winter’ – a phase of disillusionment that follows inflated expectations.
