
The AI Power Surge: What Every CIO, CTO and Executive Needs to Know About Infrastructure Readiness

Written by NEXTDC | May 18, 2025

With next-gen AI systems like NVIDIA’s GB200 drawing up to 2,700W per module, power and cooling are no longer just infrastructure details; they’re critical to your organisation’s success.

AI workloads are scaling faster than most organisations expected. Chips are getting more powerful, models are increasing in size, and the infrastructure that supports it all is under pressure. For CIOs, CTOs, and organisational leaders, this is no longer a future risk; it’s a present-day priority.

Whether you're building a generative AI model, launching a GPU-as-a-Service platform, or deploying machine learning at the edge, your data centre partner must be able to keep pace. The real question is: can they?

The Problem: AI Chips Are Outpacing Infrastructure

The rapid advancement of AI hardware is pushing data centre infrastructure to its limits. Each new generation of AI chips consumes roughly two to three times more power than its predecessor, intensifying demands on power delivery and cooling systems. For example, the NVIDIA A100 GPU consumes around 250 watts under heavy load, while the newer H100 GPU demands up to 700 watts, nearly three times the power. The latest Blackwell generation pushes this even further: the B200 GPU draws around 1,000 watts, and the GB200 Grace Blackwell Superchip, which pairs two Blackwell GPUs with a Grace CPU, can reach 2,700 watts per module.

Source: NVIDIA Media Assets, NVIDIA GB200 Grace Blackwell Superchip
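
To put those generation-over-generation jumps side by side, here is a minimal back-of-envelope sketch in Python. It uses only the approximate per-device figures quoted above; actual draw varies by SKU, configuration, and workload.

```python
# Approximate per-device power figures quoted in this article (watts).
# Real-world draw varies by SKU, configuration and workload.
gpu_power_watts = {
    "A100 (SXM)": 250,         # ~250 W under heavy load
    "H100 (SXM)": 700,         # up to ~700 W
    "B200": 1_000,             # ~1,000 W per GPU
    "GB200 Superchip": 2_700,  # two Blackwell GPUs + a Grace CPU, up to ~2,700 W per module
}

previous = None
for name, watts in gpu_power_watts.items():
    if previous is None:
        note = "baseline"
    else:
        note = f"{watts / previous[1]:.1f}x vs {previous[0]}"
    print(f"{name:>16}: {watts:>5,} W  ({note})")
    previous = (name, watts)
```

The point is not the exact numbers but the trend: each generational step multiplies the power, and therefore the heat, that your facility has to handle.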

This surge in power consumption translates directly to increased heat density. Traditional air-cooled data centres, designed for racks consuming 5–15 kW, now face racks with AI GPU clusters drawing 120–140 kW. Managing such extreme heat output requires more efficient cooling methods. Liquid cooling has become essential, as it removes heat more effectively than air, reduces water usage, and supports higher rack densities. For instance, the NVIDIA H100 PCIe GPU requires cooling solutions capable of dissipating at least 300W per chip, often necessitating liquid cooling systems to maintain optimal operating temperatures.
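
For a sense of how those per-chip figures compound at rack level, the sketch below works through a purely illustrative configuration; the superchip count and the 20% overhead allowance are assumptions for the example, not a product specification.

```python
# Illustrative rack heat-load estimate. The configuration is an assumption for the
# example: 36 GB200-class superchips per rack, plus a 20% allowance for networking,
# fans/pumps and power-conversion losses. Virtually all of this power ends up as
# heat the cooling system must remove.
superchips_per_rack = 36       # assumed for illustration
watts_per_superchip = 2_700    # per-module figure quoted earlier in this article
overhead_factor = 1.20         # assumed allowance for switches, pumps, losses

compute_kw = superchips_per_rack * watts_per_superchip / 1_000
rack_kw = compute_kw * overhead_factor

print(f"Compute load:            {compute_kw:.0f} kW")
print(f"Estimated rack load:     {rack_kw:.0f} kW")
print(f"Traditional 10 kW racks: {rack_kw / 10:.0f} racks' worth of heat in one footprint")
```

At roughly 117 kW for this assumed configuration, a single rack already sits an order of magnitude beyond what conventional air-cooled halls were designed for, which is why direct-to-chip liquid cooling shifts from optional to mandatory.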

To visualise the shift: if your existing data centre supports 10kW racks, moving to 120kW racks is not a small step; it’s a leap across a chasm. For comparison, a typical Australian family home uses about 20 kilowatt-hours (kWh) of electricity per day. A single AI rack drawing 150kW consumes more electricity in ten minutes than a home does in an entire day, and over a full day it draws as much as roughly 180 homes. Multiply that by dozens or hundreds of racks, and you're looking at power demands comparable to small towns or national infrastructure segments. This is the new reality data centres must be designed for. That leap involves not only new equipment but also new planning, design, energy supply, and even floor load considerations.
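
The household comparison is simple energy arithmetic; this short sketch makes the working explicit, using the 150kW rack and 20kWh-per-day home figures from the paragraph above.

```python
# Energy arithmetic behind the household comparison above.
rack_power_kw = 150      # single high-density AI rack, as in the example above
home_daily_kwh = 20      # typical Australian household, per the figure quoted above

minutes_to_match_home = home_daily_kwh / rack_power_kw * 60
rack_daily_kwh = rack_power_kw * 24
homes_equivalent = rack_daily_kwh / home_daily_kwh

print(f"One rack matches a home's daily usage in: {minutes_to_match_home:.0f} minutes")
print(f"One rack over a full day: {rack_daily_kwh:,.0f} kWh (~{homes_equivalent:.0f} homes)")
print(f"100 such racks: {rack_daily_kwh * 100 / 1_000:,.0f} MWh per day")
```

That works out to about eight minutes per household-day, roughly 3,600 kWh per rack per day, and around 360 MWh per day for a 100-rack deployment, which is why energy supply has become a board-level question.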

Despite advances such as liquid cooling, many existing data centres struggle to adapt. Their power and cooling infrastructures were not designed for such intense workloads, leading to bottlenecks and inefficiencies that threaten AI deployment at scale.

The Hidden Business Risk

Power limitations, inadequate cooling, and delays in infrastructure readiness pose serious risks to organisations betting on AI. Consider this: if your GPUs arrive before your infrastructure is ready, how much productivity, or competitive advantage, are you losing every month?

Organisations may face delays of 6 to 12 months waiting for AI-ready capacity. Others may rush to retrofit, driving up costs and compromising reliability. Overloaded cooling systems can also lead to performance throttling or even hardware failure, risking uptime for critical workloads.

A real-world example: imagine a medical research institute preparing to launch an AI-driven diagnostics platform that analyses real-time patient scans. If their infrastructure provider can't deliver the required power and cooling in time, they may miss critical go-live dates, delay partnerships with hospitals, and lose first-mover advantage to competitors who secured AI-ready capacity earlier, simply because their colocation provider can’t deliver liquid cooling until the next calendar year. That’s six months of delay, and six months of giving competitors a head start.

These challenges aren’t just technical. They directly affect speed to market, return on investment, and your ability to lead with innovation. Falling behind means giving ground to competitors. If you can't get your infrastructure ready in time, you're not just late — you're locked out of the next wave of AI transformation.

What Technology Leaders Should Be Asking

Now is the time to pressure-test your infrastructure partner. Use this checklist in your next strategy meeting:

  • Can your facilities support rack densities of 150kW to 300kW, and is your provider already designing for what’s next, with capabilities up to and beyond 600kW?

  • Are liquid cooling systems available, and proven in production?

  • How do you manage heat without driving up water consumption?

  • What’s your roadmap for supporting chips like the B200 and GB200?

  • Can you deliver high-density space with minimal lead time?

  • What’s your plan for energy efficiency and sustainability at scale?

These aren’t just technical questions — they’re strategic ones. The answers determine whether you can compete in an AI-first future.

NEXTDC: Infrastructure Built for What’s Next

NEXTDC’s growth story is unfolding in real time, fuelled by the rapid rise of AI. In March 2025, the company announced a historic 30% increase in contracted utilisation, bringing the total to 228MW, a record milestone. The surge is being driven by AI-native and hyperscale customers, particularly in Victoria, where contracted utilisation has reached 161% of built capacity.

But this isn’t just a headline. It marks a structural shift. The largest AI deployments in NEXTDC’s history are now underway, and the forward order book has grown by 54% since 31 December 2024. 

To keep pace with the rapid evolution of AI chips and the increasing performance demands of modern workloads, NEXTDC is accelerating investment in next-generation data centre infrastructure. Our facilities are purpose-built to support the extreme power and cooling requirements of advanced AI systems, from high-density air cooling to cutting-edge liquid and immersion cooling.

We’re actively engineering solutions capable of supporting up to 600kW per rack, ensuring our customers have the flexibility and performance to scale as AI technologies and organisational needs evolve. Designed for what’s NEXT, our infrastructure is built to power the future of intelligent, high-performance computing across Australia and the Asia-Pacific.

NEXTDC’s Australian facilities are built and operated onshore, ensuring sovereign resilience and compliance by design. At the same time, we’re expanding our regional footprint with new campuses in Tokyo, Kuala Lumpur, and New Zealand.

This strategy isn’t just forward-looking, it’s already delivering results. The unprecedented growth in contracted utilisation shows that global and local organisations are turning to NEXTDC to power the next wave of AI. Our ongoing investment in capacity, cooling, and connectivity is meeting the moment for those ready to scale.

Whether you're an AI-first startup or a global hyperscaler, NEXTDC is ready to help you move faster, with infrastructure built for what’s next.

Power What’s Next with NEXTDC’s AI-Ready Infrastructure

NEXTDC is a certified NVIDIA DGX-Ready Data Center provider, delivering infrastructure purpose-built for AI and high-performance computing. From advanced liquid and immersion cooling to secure, sovereign facilities engineered for AI scalability, our data centres support the most demanding next-generation workloads.

With deep interconnectivity, national reach, and world-class operational resilience, NEXTDC enables you to deploy with confidence, innovate at scale, and stay ahead of what’s next.

Let’s build the future of AI infrastructure — together.

Connect with us → nextdc.com/contact