Blog

AI at Scale: How Data Centre Connectivity Drives Performance, Security and Sustainability

Written by NEXTDC | Jun 2, 2025

By Sean Rinas, Head of Network Operations

As artificial intelligence moves from theory to production, infrastructure is being stretched in new and unfamiliar ways. Models are growing larger, and workloads are shifting closer to the edge. Meanwhile, the pressure to deliver insights in real time is putting every layer of the digital stack under scrutiny, including the network.

For organisations chasing AI-driven transformation, the conversation is no longer about whether their data centre is powerful enough. It is about data centre connectivity: a fresh look at whether critical infrastructure is connected in the right ways, with the speed, reliability and ecosystem reach to keep up with demand.

Because, for AI to work, interconnection must work first.

Close to the data, close to the action

AI inferencing relies on fast, uninterrupted access to data (and not just from one source). You’re pulling from cloud services, partners, edge locations and internal platforms, often simultaneously. The ability to move that data across environments without delay is critical to the speed and success of your models.

That’s why location matters. When your infrastructure is physically close to the cloud on-ramps, AI accelerators and specialist service providers you depend on, you reduce latency and increase throughput. It’s a bit like putting everything within arm’s reach, including the ability to scale up or down without starting from scratch.

Our S6 Sydney Data Centre is built with exactly that in mind. It’s Australia’s first purpose-built AI factory, certified from Day 1 for the deployment of NVIDIA DGX-ready reference data centre architecture utilising high-density power and advanced liquid cooling. We’re seeing customers across a range of industries scoping and using it to train, infer and deploy AI models at production scale. Today, it is ready to support rack densities of up to 130kW, with nearby AXON access at up to 100Gbps. Our whole national footprint is designed so that this NVIDIA-certified capability can also be offered, where needed, in Melbourne, Brisbane, Perth, Adelaide, Darwin and beyond.

Of course, AI doesn’t just need compute; it also needs access. That’s why we’ve made it easier to plug into a range of emerging neocloud, GPU-as-a-Service and Infrastructure-as-a-Service models, which are fast becoming the default for teams that don’t want the cost or complexity of owning and hosting their own hardware. These environments only add value if they’re reachable at low latency, with predictable performance and secure interconnects. This is exactly what our AXON virtual interconnection platform is designed to deliver.

Security that offers certainty at every layer

AI models are only as trustworthy as the environment they run in. That means protecting both the data and the infrastructure it moves through.

Security needs to start at the network level. With AXON, our customers control their interconnections end-to-end, with encrypted pathways, geo-redundant routing and private connectivity across our national platform. These are baseline requirements, especially for workloads handling sensitive data or operating under regulatory scrutiny.

For many organisations, this also means knowing exactly where their data is and being confident that it’s staying in the right jurisdiction. Interconnection through sovereign infrastructure helps eliminate uncertainty and reduces the compliance burden, particularly when working across hybrid or multi-cloud environments.

The operational value of this can’t be overstated. When you’re rolling out AI across business units or customer-facing services, downtime, packet loss or routing failures can cause more than frustration; they can also open the door to substantial risk. That’s why our focus has always been on building infrastructure that stays up, stays fast and stays secure, no matter what’s happening on the outside.

Reducing overhead with sustainable data centres

Then there is the all-important conversation about AI, data centres and sustainability. The infrastructure required to support AI isn’t small. It draws power and creates heat. If it’s not deployed thoughtfully, it can drive up both operating costs and environmental impact.

We’ve taken a long-term view of that challenge. Our facilities are NABERS-rated, energy-efficient, and optimised for water conservation. We also see interconnection as a key part of the sustainability equation. When data can move directly between platforms without unnecessary hops, detours or long-haul transfers, you save more than time. You also reduce the energy intensity of the entire workflow.

On top of that, flexible access to neocloud infrastructure services, like GPUaaS, means organisations can avoid overbuilding, reduce waste and consume only what they need, when they need it. As sustainability expectations rise, these architectural choices really matter.

NEXTDC is where AI lives (and thrives)

Whether you’re still experimenting or already running AI at scale, your ability to extract value will depend on the strength of your interconnection strategy. Not just how much traffic you can move, but how confidently, how securely, and how sustainably you can move it.

At NEXTDC, that’s what we do best. We create operational certainty, we remove friction, we connect what matters and we build infrastructure that scales with your ambitions, not against them.

If you're reviewing your current setup, or want to understand what good looks like, our Interconnection Excellence Checklist is a practical place to start.

Or, if you'd rather talk it through, reach out to the NEXTDC team. We’re always happy to explore what’s possible, and how interconnection can help you get there faster.