2025 AI Infrastructure: Key Insights from Uptime Institute Survey

Written by NEXTDC | Jun 3, 2025

Uptime Institute AI Infrastructure Survey 2025: Building Tomorrow’s Intelligence on Today’s Foundation

Artificial Intelligence is no longer just a concept; it's changing the way organisations work, compete, and grow. But to power this transformation, organisations need the right infrastructure for the future.

 


The Uptime Institute’s 2025 AI Infrastructure Survey provides an urgent executive briefing: the strategic build-out for AI is in full swing, and proactive engagement is paramount to securing future competitive advantage. Based on comprehensive data from 519 global data centre owner/operators, the survey reveals a market adapting at speed, driving significant shifts in capital expenditure and demanding innovative approaches to manage AI's extreme density and power requirements.

For organisational leaders yet to embark on their AI infrastructure transformation, this report serves as a critical prompt: the foundational shift is underway across the industry, and readiness is key to harnessing AI's full potential.

 

Why Your AI Infrastructure Needs to Scale by 2026

AI is no longer a future concept; it's driving present-day competitive advantage, and its adoption curve is accelerating at an unprecedented pace:

  • 32% of global data centre operators are already deploying AI inference workloads, integrating AI into their critical business operations.
  • An additional 45% plan to implement AI soon, indicating a massive wave of imminent demand.

This means nearly 4 out of 5 organisations are either actively leveraging AI today or are in advanced stages of building the foundational capabilities for it. For leaders, this isn't just an IT trend; it's a critical indicator of market direction and competitive differentiation. Those who scale their infrastructure proactively will gain first-mover advantages, while those who delay risk significant strategic and operational setbacks.

This transformation requires not just compute power, but a complete rethinking of infrastructure strategy, power density, and cooling capabilities to remain relevant and competitive.

“Every company is becoming a technology company. And every company will become an AI company.”
— Jensen Huang, CEO, NVIDIA, Source: GTC 2024 Keynote

Why Your AI Strategy Must Stay Local

While the public cloud often dominates discussions, the reality for AI workloads is far more nuanced, driven by critical organisational and technical considerations. The Uptime Institute survey reveals a significant trend:

  • 46% of AI workloads are hosted on-premises, retaining direct control within organisational firewalls.
  • 34% leverage colocation facilities, balancing external expertise with dedicated infrastructure.
  • Just 14% currently rely on the public cloud for AI inference, indicating a prevailing preference for alternative models.

This distribution isn't arbitrary. It's a calculated response to the top factors influencing AI inference location, as identified by the Uptime Institute survey:

  • Data Sovereignty (46%): For leaders, this isn't just about data location; it's about mitigating regulatory risks, ensuring compliance with evolving data residency laws, and safeguarding intellectual property. Maintaining control over sensitive AI training data and models is a non-negotiable for many enterprises.
  • Ability to Reuse Existing Infrastructure (50%): A strategic financial decision. Leveraging existing data centres and server investments provides a cost-effective pathway to AI adoption, optimising capital expenditure and accelerating time-to-value by avoiding significant greenfield cloud investments.
  • Power Availability (37%) & Proximity to Data/Integrated Applications (29%): These factors directly impact operational efficiency and organisational agility. Critical AI workloads often require guaranteed power delivery and ultra-low latency access to source data and integrated systems, crucial for real-time decision-making, customer experience, and application performance.
  • Overall Cost (30%): While cloud offers scalability, the sheer density and continuous nature of AI workloads often lead to unpredictable or prohibitive operational costs (OpEx) in the public cloud, making on-premises or colocation a more financially sustainable long-term strategy for many organisations.
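To illustrate the cost reasoning behind that last factor, here is a minimal break-even sketch comparing continuous cloud GPU spend against owned hardware running in colocation. Every figure in it (the $2/GPU-hour rate, the server capex, the rack fees) is a hypothetical placeholder for illustration, not a quote from any provider or a number from the survey.

```python
# Illustrative cost comparison: continuous AI inference in the cloud
# vs. owned hardware in colocation. All prices are hypothetical
# placeholders, not quotes from any provider.

HOURS_PER_YEAR = 24 * 365

def cloud_cost(gpu_hourly_rate: float, gpus: int, years: int) -> float:
    """Total spend for GPUs rented 24/7 over the given horizon."""
    return gpu_hourly_rate * gpus * HOURS_PER_YEAR * years

def colo_cost(server_capex: float, annual_colo_fee: float, years: int) -> float:
    """One-off hardware purchase plus recurring colocation fees."""
    return server_capex + annual_colo_fee * years

# 8 GPUs running 24/7 for 3 years at an assumed $2/GPU-hour
print(cloud_cost(2.0, 8, 3))          # ~$420k
# vs. an assumed $250k server plus $30k/yr in rack fees
print(colo_cost(250_000, 30_000, 3))  # $340k
```

The point of the sketch is the structure, not the numbers: for workloads that run continuously rather than in bursts, rental costs scale linearly with hours while owned infrastructure amortises, which is why many operators in the survey favour on-premises or colocation.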

Ultimately, effective AI deployment isn't merely about compute power. It's a complex interplay of compliance, performance, total cost of ownership, and strategic control over physical location: factors that directly impact your organisation's risk profile, financial health, and ability to innovate at speed.

Power & Cooling: AI’s Infrastructure Breaking Point

AI workloads are not just digitally demanding; they are physically transformative, imposing power and heat loads far beyond anything traditional IT infrastructure has experienced. The exponential energy consumption of GPUs, the core of AI processing, is creating a new category of infrastructure challenge.

The Uptime Institute survey reveals the intensity of this shift:

  • 27% of AI training racks exceed 50kW
  • Even AI inference workloads are intensifying, with many racks now operating in the 31–50kW range, signifying a universal challenge across the AI lifecycle.
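As a back-of-envelope illustration of how quickly racks reach these densities, the sketch below estimates total rack draw from GPU count and thermal design power. All inputs (server counts per rack, a ~700 W GPU TDP, a 40% overhead factor for CPUs, memory, networking and power-supply losses) are assumptions for illustration, not figures from the survey.

```python
# Rough rack-power estimate for an AI training deployment.
# All figures below are illustrative assumptions, not vendor specs.

def rack_power_kw(servers_per_rack: int,
                  gpus_per_server: int,
                  gpu_tdp_w: float,
                  overhead_factor: float = 1.4) -> float:
    """Estimate total rack draw in kW.

    overhead_factor accounts for CPUs, memory, NICs, fans and PSU
    losses on top of raw GPU TDP (assumed ~40% here).
    """
    gpu_draw_w = servers_per_rack * gpus_per_server * gpu_tdp_w
    return gpu_draw_w * overhead_factor / 1000

# Example: 4 servers x 8 GPUs at an assumed 700 W TDP each
print(rack_power_kw(4, 8, 700))  # ~31 kW, already in the 31-50kW band
```

Even this modest hypothetical configuration lands in the 31-50kW range the survey reports for inference racks; doubling the server count pushes it past the 50kW threshold that 27% of training racks already exceed.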

This unprecedented demand is forcing executive action:

  • 52% of operators are urgently upgrading their power infrastructure. This isn't just maintenance; it's a strategic capital investment to prevent outages, enable scale, and secure future AI capabilities.
  • 51% are modernising cooling systems to manage the massive heat generated. Failure here risks equipment damage, operational downtime, and ultimately, an inability to execute AI-driven strategies.

For organisational leaders, this translates directly into strategic risk and financial considerations. Underestimating these power and cooling demands can lead to:

  • Delayed AI initiatives: Inability to deploy critical AI applications.
  • Increased operational costs: Higher energy bills and potential retrofitting expenses.
  • Compromised uptime: Risk of outages due to overloaded systems.
  • Limited growth potential: A physical ceiling on your organisation's AI adoption.

Addressing this "breaking point" requires not just technical fixes, but a comprehensive, strategic infrastructure plan that accounts for the long-term energy and thermal requirements of your AI roadmap.

“The data centre is no longer a warehouse for servers. It’s an AI factory.”
Ronnie Vasishta, SVP of Telecom, NVIDIA
(Source: NVIDIA AI Factories Brief)

Connectivity & Ecosystem: AI Factories Need More Than Power

AI thrives in highly connected environments. That's why location, latency, and ecosystem access are now mission-critical. As AI workloads scale across regions, from federated learning and multi-region training to real-time inference at the edge, being milliseconds closer to the data path can unlock real business value.

NEXTDC’s data centres are strategically located in every major Australian capital city, with direct access to international subsea cable stations, providing a distinct performance advantage into Asia and the Pacific. This proximity enables faster data movement, reduced latency, and more efficient AI operations at scale.

Whether you're delivering GPU-as-a-Service, building a global AI platform, or offering cross-border AI insights, being physically closer to the network core translates directly into faster model training, lower operating costs, and superior customer experiences.

Colocation with NEXTDC also unlocks:

  • Global connectivity via subsea cable systems for real-time collaboration and data movement
  • Dense digital ecosystems including research institutions, hyperscale cloud platforms, and enterprise partners
  • Sovereign-grade infrastructure aligned with compliance needs in healthcare, defence, and education
  • Ultra-low latency and massive bandwidth for high-performance workloads like distributed inference, LLM training, and digital twins

AI needs more than just compute power; it needs to be connected, localised, and ecosystem-enabled. NEXTDC delivers the infrastructure advantage to make that possible.

Strategic Drivers: Why Leaders Are Investing Now

Organisations aren't building AI infrastructure merely to keep pace; they are strategically leveraging it to lead, innovate, and secure their future competitive position. The latest Uptime Institute survey reveals that investment in AI infrastructure is directly tied to achieving core business objectives, moving beyond theoretical capabilities to tangible, quantifiable outcomes.

The top drivers for implementing AI infrastructure projects highlight this strategic imperative:

  • 50%: Improve operational efficiency. This isn't just about faster processes; it's about optimising resource utilisation, reducing waste, streamlining workflows, and significantly lowering long-term operational costs. Leaders see AI infrastructure as key to unlocking new levels of productivity across the enterprise.
  • 49%: Create new products and services. For almost half of surveyed organisations, AI infrastructure is the foundational engine for innovation. It enables the rapid development and deployment of cutting-edge, AI-powered offerings that open new revenue streams and redefine market landscapes.
  • 41%: Enhance customer experience. Investing in robust AI infrastructure allows for real-time personalisation, predictive support, and seamless interactions, directly translating to increased customer satisfaction, loyalty, and competitive differentiation.
  • 28%: Boost employee productivity. AI tools, supported by dedicated infrastructure, empower teams with faster insights, automate repetitive tasks, and enable more strategic work, leading to improved talent utilisation and a more agile workforce.
  • 25%: Differentiate ourselves in the market. Beyond incremental gains, a quarter of leaders view AI infrastructure as a direct means to forge a unique competitive edge, setting their organisation apart through superior capabilities and service delivery.

While these top drivers focus on growth and innovation, it's critical to note that AI infrastructure investments also underpin foundational organisational resilience and risk mitigation. The Uptime Institute survey also shows that organisations are investing to:

  • Protect and maintain existing products and services (17%)
  • Reduce labour costs (16%) and other costs (13%)
  • Meet regulatory or legal requirements (12%)

This holistic view underscores that AI is no longer just a technological frontier; it is an organisation-critical capability and a strategic investment. The robust, scalable infrastructure you commit to today will not only enable your AI ambitions but will fundamentally define your organisation's agility, competitive posture, and long-term viability in tomorrow's digital economy.

Bottom Line: What CIOs and CTOs Need to Know

The AI infrastructure race is on. And whether you’re ready or not, your organisation is already being benchmarked by how well it can support:

  1. High-density GPU workloads — can your facilities handle 30kW, 50kW, 100kW or 300kW per rack?
  2. Thermal and electrical resilience — are your cooling and power systems built for continuous inference or model training?
  3. Data gravity and sovereignty — is your infrastructure located where AI data is collected, processed, and legally allowed to live?
  4. Subsea-enabled proximity — can your cross-border workloads execute quickly, securely, and close to your user base?

For CIOs and CTOs, this is no longer just an IT project; it's a strategic infrastructure decision that impacts customer experience, product speed, and global competitiveness.

Download the Survey. Talk to NEXTDC. Get AI-Ready.

Want to see how your infrastructure stacks up?

→ Build where AI thrives. Start your journey with NEXTDC.