Connect, transform, prosper

Nov 17, 2019

By Adam Gardner, Head of Products, NEXTDC

There is an age-old saying that there are only 24 hours in a day. Like most people, I often find myself wondering how to squeeze a little more time out of mine. The debilitating impact of latency is a fact of life for most of us, yet it is often underestimated when it comes to optimising a digital platform. But has the solution to reclaiming more time in our day been under our noses the whole time?

Digital transformation combined with a multi-cloud architecture is supposed to be a journey that brings us closer to customers. Get the strategy right and it is a powerful value proposition, enabling your business to execute quickly and cost-effectively, and to scale as it grows.

Multi-cloud deployments and distributed hybrid IT architectures introduce several complexities, and many organisations are now prioritising the optimisation of their technology platforms to improve customer satisfaction and, ultimately, profitability.

As digital change reshapes the economy, technology platforms form a key component of the innovation engine behind every successful organisation. Yet teams building new platforms designed to deliver customer delight (because ultimately that is what it is all about) often underestimate the underlying connectivity architecture required for long-term success.

Strategic planning and ongoing management are particularly important when building a hybrid environment, because organisational and customer needs change with increasing frequency and add complexity as they do. Connectivity to carriers, service providers, cloud platforms and customers themselves all forms part of a complex but critical ecosystem for delivering successful outcomes. The spiderweb of interconnectivity grows larger by the day.

Compounding complexity and growing interconnectivity demands can manifest as latency, both in network and system terms, and ultimately as poor customer experience. Poor performance and data transfer delays can mean that investments in platforms built to increase agility and flexibility, improve productivity and reduce cost never reach their full potential.

Basic physics

It’s easy to get confused between latency and bandwidth. Although linked, they each have a different impact on the performance of applications. Latency refers to the time taken for data to travel from one point to another. Bandwidth is the amount of data that can be transferred within a fixed period.

A great way to think about latency versus bandwidth is to imagine vehicles travelling along a highway. The highway has a fixed speed limit of 100 km/h. In network terms, the speed limit is the speed of light. In our example we are law-abiding citizens, so we travel at exactly 100 km/h, much like in networks, where the speed of light is a law that cannot be broken.

Latency is usually measured as round-trip time (RTT): the time taken for data to get from point A to point B and back again. Thus, if the highway were 100 km long, the round trip would take two hours. With the constraints mentioned, we cannot travel this path any faster.

If we want to get more cars from point A to point B in the same period, we can add more lanes to the road. This, in effect, increases the road's bandwidth. We can now have two, three or more cars travelling from A to B at the same time thanks to the larger road (bandwidth), but even with more lanes available, the time taken to get from A to B (latency) remains the same.
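For the more technically minded, here is a tiny Python sketch of that analogy, purely illustrative and using the same 100 km road and 100 km/h limit as above: adding lanes increases how many cars arrive per hour, but never shortens the trip itself.

```python
# Illustrative only: models the highway analogy above.
# Adding lanes (bandwidth) increases cars delivered per hour,
# but the trip time (latency) is fixed by distance and speed.

DISTANCE_KM = 100   # length of the highway (point A to point B)
SPEED_KMH = 100     # the "speed limit" - analogous to the speed of light

def trip_time_hours(distance_km: float, speed_kmh: float) -> float:
    """One-way travel time: it does not depend on the number of lanes."""
    return distance_km / speed_kmh

def cars_per_hour(lanes: int, cars_per_lane_per_hour: int = 1) -> int:
    """Throughput: more lanes means more cars arriving per hour."""
    return lanes * cars_per_lane_per_hour

for lanes in (1, 2, 4):
    print(f"{lanes} lane(s): trip takes {trip_time_hours(DISTANCE_KM, SPEED_KMH):.1f} h, "
          f"{cars_per_hour(lanes)} car(s) arrive per hour")
```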

In network terms, connection latency is a function of the speed of light, the distance travelled and any network infrastructure the data must pass through. The further your data has to travel, and the more network infrastructure it passes through, the higher the latency of the transmission between point A and point B.
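As a rough back-of-the-envelope sketch of that relationship (the fibre speed factor, per-device delay and the Sydney-to-Melbourne distance below are illustrative assumptions, not measured values):

```python
# Back-of-the-envelope latency estimate. The constants are assumptions
# for illustration: light travels at roughly two-thirds of its vacuum
# speed in optical fibre, and each network device adds a small delay.

SPEED_OF_LIGHT_KM_S = 300_000   # approximate speed of light in a vacuum
FIBRE_FACTOR = 0.67             # assumed: light is slower in fibre
PER_HOP_DELAY_MS = 0.1          # assumed processing delay per device

def round_trip_ms(distance_km: float, hops: int) -> float:
    """Approximate RTT: propagation delay over fibre plus per-hop delays, both ways."""
    propagation_ms = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR) * 1000
    return 2 * (propagation_ms + hops * PER_HOP_DELAY_MS)

# Example: assume roughly 900 km of fibre path and 8 devices along the way.
print(f"~{round_trip_ms(900, hops=8):.1f} ms RTT")
```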

Depending on where you are connecting to, the difference may be only milliseconds, but this compounds and becomes significant. Going back to our analogy, if we found a way to shorten the road to only 70 km (reducing latency), we would complete more trips in the same period. That reduction in latency (distance) directly improves your user experience.

We may only be talking about milliseconds, but when it comes to digital platforms that extra latency has a very real impact on network performance, eroding the customer experience irrespective of how much bandwidth you throw at the problem.

Imagine a shopping cart experience on your website. The e-commerce engine sits in AWS, your customer database sits in Microsoft Azure, your financials are in Oracle Cloud and your inventory and ERP systems are on-premises. All of these systems need to talk to each other in real time to process a shopping cart transaction. If the round-trip time from your on-premises data centre to the cloud is 20ms, for example, it could easily take as much as 200ms to complete a transaction, because multiple reads and writes are required in each system. If there are bandwidth constraints due to higher levels of traffic to your website, it could take even longer.
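Here is a minimal sketch of the arithmetic behind that shopping-cart example; the number of sequential reads and writes per system is an assumption chosen purely for illustration.

```python
# Illustrative only: estimates total processing time for one checkout when
# each system call is a sequential round trip to a different platform.
# The call counts below are assumptions, not measurements.

RTT_MS = 20  # round-trip time from the on-premises data centre to each cloud

sequential_calls = {
    "AWS e-commerce engine": 3,       # assumed reads/writes per transaction
    "Azure customer database": 2,
    "Oracle Cloud financials": 2,
    "On-premises inventory/ERP": 3,
}

total_calls = sum(sequential_calls.values())
total_ms = total_calls * RTT_MS
print(f"{total_calls} sequential calls x {RTT_MS} ms RTT = ~{total_ms} ms per transaction")
```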

Today's multi-cloud world delivers significant advantages, but it does require you to rethink your network strategy to remove as much latency as possible. With Cisco estimating that cloud traffic accounts for 83% of data centre traffic, placing your in-house IT as close as possible to the clouds greatly reduces multi-cloud latency and delivers a quicker, more enjoyable customer experience.

Perhaps another example will illustrate the real and tangible impact latency can have. Imagine connecting to an application via the internet and seeing a 60ms round-trip time. Over a working day, there could be 100,000 data transactions occurring across various applications. In this example, the total time is therefore 100,000 x 60ms = 6,000 seconds.

Now let's say we found a way to shave 5ms off the round-trip time, in this case by connecting directly and privately to the cloud rather than over the public internet. Using the same example, the total time becomes 100,000 x 55ms = 5,500 seconds.

500 seconds difference. 8.3 minutes… All that maths and explanation for just 8.3 minutes?

Telling your CEO you can save 8.3 minutes probably wouldn't get you invited back to many more discussions. However, what if we consider a broader perspective? If we look at this over a year with ~240 working days, that equates to about 33.3 hours, or roughly 4.4 full working days, of time saved per employee!
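The arithmetic behind those figures is simple enough to check. In the sketch below, the 7.5-hour working day used to convert hours saved into working days is an assumption; the other numbers are the ones from the example above.

```python
# Reproduces the worked example above. The 7.5-hour working day is an
# assumption used only to convert hours saved into working days.

TRANSACTIONS_PER_DAY = 100_000
RTT_BEFORE_MS = 60            # via the public internet
RTT_AFTER_MS = 55             # via a direct private connection to the cloud
WORKING_DAYS_PER_YEAR = 240
HOURS_PER_WORKING_DAY = 7.5   # assumed

saved_s_per_day = TRANSACTIONS_PER_DAY * (RTT_BEFORE_MS - RTT_AFTER_MS) / 1000
saved_h_per_year = saved_s_per_day * WORKING_DAYS_PER_YEAR / 3600

print(f"{saved_s_per_day:.0f} s saved per day")        # 500 s (~8.3 minutes)
print(f"{saved_h_per_year:.1f} h saved per year")      # ~33.3 h
print(f"~{saved_h_per_year / HOURS_PER_WORKING_DAY:.1f} working days saved per year")  # ~4.4
```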

Importantly, the example above doesn't account for internet congestion; it assumes the internet is operating in optimal conditions. When it isn't, the potential time savings from a direct connection are even greater.

I'll leave that thought with you to digest, but join me next week, when I'll delve a little deeper into the compounding value of a strategic connectivity architecture as data volumes scale up.

In the meantime, if you're interested in exploring your options for rearchitecting your connectivity strategy so it's purpose-built to drive performance up and latency down, contact one of our specialists.
