NEXTDC Update: GTC 2026 reinforces infrastructure as the defining factor in AI execution

Mar 27, 2026


At NVIDIA’s GTC 2026 keynote, CEO Jensen Huang delivered a clear and direct message to global industry leaders: the AI era is no longer emerging, it is scaling. For boards, CEOs and senior infrastructure strategists, the implications are immediate. Infrastructure is now central to how organisations manage risk, capture growth and sustain competitive advantage.

As Huang stated, “AI is now infrastructure. It is becoming as fundamental as electricity and the internet.” This framing signals a shift from discretionary technology investment to essential capability.

Demand is accelerating beyond conventional planning horizons

One of the most consequential signals from GTC was the scale of projected demand. NVIDIA has revised its outlook for AI infrastructure to approximately US$1 trillion, effectively doubling earlier expectations. Huang linked this to the rapid expansion of compute requirements, noting that “the amount of computation we’re doing has increased a million-fold in just a few years.”

While such figures should be interpreted directionally, the underlying point is clear: demand for high-performance compute, power and connectivity is outpacing traditional infrastructure planning cycles.

For leadership teams, this introduces a new strategic constraint. The limiting factor is no longer access to AI models, but access to infrastructure that can support them at scale.

From models to production systems

A central theme of the keynote was the transition from model training to large-scale inference. AI is moving into continuous operation, embedded in enterprise workflows and customer-facing systems.

Huang described this shift succinctly: “We are moving from training AI to producing intelligence.”

This transition changes the role of infrastructure. AI workloads now require persistent, always-on environments capable of delivering consistent performance under sustained load. In effect, digital infrastructure is becoming a production system, not just a hosting environment.

The rise of AI factories

NVIDIA’s concept of “AI factories” captures this evolution. These are purpose-built facilities designed to generate intelligence at industrial scale, combining high-density compute, advanced networking and continuous throughput.

As Huang explained, “These data centres are no longer just storing information. They are factories generating tokens, generating intelligence.”

For boards and executives, this reframes infrastructure as a direct contributor to enterprise output. Decisions about location, power, cooling and connectivity are now directly linked to productivity, innovation and growth.

Top 10 announcements from NVIDIA GTC 2026

1. AI infrastructure demand revised to ~US$1 trillion

NVIDIA materially increased its outlook for global AI infrastructure investment, highlighting the pace of capacity expansion required.
Implication: Demand is exceeding traditional planning cycles, putting pressure on power, land and deployment timelines.

2. Introduction of the Vera Rubin architecture

NVIDIA unveiled its next-generation GPU platform, Vera Rubin, following Blackwell with further performance gains.
Implication: Faster hardware cycles will require infrastructure that can support rising power density and cooling demands.

3. Blackwell enters large-scale deployment

The Blackwell platform is now moving into widespread production and deployment.
Implication: Immediate demand for high-density, AI-ready environments is accelerating as customers scale.

4. AI factories defined as a new model

Data centres were reframed as “AI factories” producing intelligence at industrial scale.
Implication: Facilities are becoming production systems, increasing the importance of performance, uptime and throughput.

5. Shift to large-scale inference

The focus is moving from training to continuous inference at scale.
Implication: Infrastructure must support always-on, low-latency workloads with sustained efficiency.

6. Liquid cooling becomes standard

Liquid cooling was reinforced as a baseline requirement for next-generation AI infrastructure.
Implication: Facilities not designed for advanced cooling risk obsolescence or costly retrofits.

7. Full-stack AI integration

NVIDIA expanded its integrated stack across compute, networking and software.
Implication: Infrastructure must support tightly integrated systems with precise performance and thermal requirements.

8. Advancements in AI networking

High-speed interconnects such as NVLink were highlighted as critical to scaling AI systems.
Implication: Connectivity within and between data centres is increasingly as important as compute.

9. Growth of AI-native infrastructure providers

New AI-native and GPU-as-a-service players are scaling alongside hyperscalers.
Implication: Competition for capacity will favour providers that can deliver at speed and scale.

10. AI as foundational infrastructure

AI was positioned as a core economic platform, comparable to electricity or the internet.
Implication: Investment and regulation will intensify, increasing the importance of sovereign, compliant environments.


Operational certainty: accelerating with lower risk

In this context, operational certainty becomes a primary consideration.

As infrastructure complexity increases, so does execution risk. Delays in capacity delivery, power availability or system integration can materially impact business outcomes. GTC reinforced that scaling AI is not only a technology challenge, but a delivery challenge.

Organisations need infrastructure platforms that can be deployed at speed and at scale, without compromising safety, security or performance. Proven delivery models, integrated design and disciplined execution are becoming critical factors in reducing risk.

Future readiness: designing for continuous change

The pace of innovation outlined at GTC also highlights the importance of future-ready infrastructure.

Next-generation systems, including liquid-cooled environments and increasingly dense compute architectures, are redefining baseline requirements. At the same time, tighter integration between hardware, software and networking is increasing the dependency on optimised physical environments.

Infrastructure decisions made today must accommodate:

  • rising power density and cooling requirements
  • evolving AI architectures and deployment models
  • increasing regulatory expectations around security and sovereignty
  • integration with regional and global connectivity ecosystems

Future readiness is therefore less about predicting specific technologies and more about ensuring infrastructure can adapt without fundamental redesign.

Strategic advantage: infrastructure as a differentiator

Perhaps the most important takeaway from GTC is that infrastructure is becoming a source of strategic advantage.

Huang noted that “the next industrial revolution will be powered by AI factories,” underscoring the link between infrastructure capability and economic competitiveness.

Organisations that can secure access to scalable, compliant and high-performance infrastructure will be better positioned to innovate and grow. Those that cannot may face constraints that limit their ability to compete.

This is particularly relevant in Australia, where proximity to Asia, strong regulatory frameworks and access to land and energy create a foundation for serving both domestic and regional demand.

Implications for decision-makers

For boards and senior executives, the signals from GTC 2026 point to a need for deliberate, infrastructure-led strategy.

Key considerations include:

  • aligning infrastructure investment with long-term demand trajectories
  • prioritising partners that can build at speed and at scale, with proven operational integrity
  • ensuring alignment with regulatory and sovereignty requirements
  • treating infrastructure as a strategic enabler of growth and resilience

As Huang’s keynote makes clear, the AI era will be defined not just by models, but by the infrastructure that enables them.

NEXTDC’s continued investment in high-density, AI-ready environments reflects this shift. We are supporting customers with the operational certainty, future readiness and strategic advantage required to compete in the next phase of the digital economy.

Contact us to discuss how infrastructure strategy is central to your organisation’s future.
