3 ways to reach the cloud and keep loss and latency low
Adoption of public cloud IaaS platforms like AWS and Azure, as well as PaaS and SaaS solutions, has been driven in part by the simplicity of consuming the services: connect securely over the public internet and start spinning up resources. But when it comes to communicating privately with those resources, there are challenges to address and choices to be made.
The simplest option is to use the internet—preferably an internet VPN—to connect to the enterprise’s virtual private clouds (VPCs) or their equivalent from company data centers, branches, or other clouds.
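As an illustration of how simple that option is to consume, here is a minimal sketch, assuming AWS and boto3, of standing up an IPsec site-to-site VPN from an on-premises router to a VPC. The VPC ID, ASN, and public IP are hypothetical placeholders, not values from this article.

```python
import boto3

# Minimal sketch of the "just use the internet" option: an IPsec
# site-to-site VPN from an on-premises router to an AWS VPC.
# All IDs, the ASN, and the public IP are hypothetical placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Represent the on-premises VPN endpoint (customer gateway).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,                 # on-prem router's BGP ASN (placeholder)
    PublicIp="203.0.113.10",      # on-prem router's public IP (documentation range)
    Type="ipsec.1",
)["CustomerGateway"]

# Attach a virtual private gateway to an existing VPC (ID is a placeholder).
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"],
                       VpcId="vpc-0123456789abcdef0")

# Create the VPN connection itself; its traffic rides the public internet.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```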
However, using the internet can create problems for modern applications that depend on lots of network communications among different services and microservices. Or rather, the people using those applications can run into problems with performance, thanks to latency and packet loss.
Two different aspects of latency and loss create these problems: their magnitude, and their variability. Both loss and latency will be orders of magnitude higher across internet links than across internal networks. Loss results in more retransmits for TCP applications or artifacts due to missing packets for UDP applications. Latency results in slower response to requests.
Every service or microservice call across the network is another opportunity for loss and latency to hurt performance. Values that might be acceptable when back-and-forths are few can become unbearable when there are ten or a hundred times as many thanks to modern application architectures.
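A back-of-the-envelope calculation makes the point. The per-call latency, loss rate, and retransmission penalty below are illustrative assumptions, not measurements:

```python
# How per-call latency and loss compound when a user request fans out
# into many sequential service calls. Figures are illustrative only.

def request_time_ms(calls: int, rtt_ms: float, loss_rate: float,
                    retransmit_penalty_ms: float) -> float:
    """Expected time for one request that makes `calls` sequential round
    trips, each paying rtt_ms plus loss_rate * retransmit_penalty_ms."""
    per_call = rtt_ms + loss_rate * retransmit_penalty_ms
    return calls * per_call

for calls in (3, 30, 300):
    internal = request_time_ms(calls, rtt_ms=0.5, loss_rate=0.0001,
                               retransmit_penalty_ms=200)
    internet = request_time_ms(calls, rtt_ms=40, loss_rate=0.01,
                               retransmit_penalty_ms=200)
    print(f"{calls:4d} calls: internal ~ {internal:7.1f} ms, "
          f"internet ~ {internet:7.1f} ms")
```

With a handful of calls the difference is tolerable; with hundreds of calls per request, the internet path's response time grows into whole seconds.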
The greater variability of latency (jitter) and of packet loss on internet connections increases the chance that any given user gets a widely varying application experience that swings unpredictably from great to terrible. That unpredictability is sometimes as big an issue for users as the slow responses or glitchy video or audio.
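A small simulation, with invented jitter distributions, shows how per-call variability compounds into a widely varying experience from one request to the next:

```python
import random
import statistics

# Two links with the same minimum latency but very different jitter.
# Each request makes 50 sequential calls; distributions are invented.
random.seed(1)

def request_time_ms(calls: int, base_ms: float, jitter_ms: float) -> float:
    # Each call's latency is the base plus a random jitter component.
    return sum(base_ms + random.uniform(0, jitter_ms) for _ in range(calls))

def percentiles(samples):
    samples = sorted(samples)
    return samples[len(samples) // 2], samples[int(len(samples) * 0.99)]

stable  = [request_time_ms(50, base_ms=40, jitter_ms=2)  for _ in range(10_000)]
jittery = [request_time_ms(50, base_ms=40, jitter_ms=60) for _ in range(10_000)]

for name, data in (("stable link", stable), ("jittery link", jittery)):
    p50, p99 = percentiles(data)
    print(f"{name}: p50 ~ {p50:6.0f} ms, p99 ~ {p99:6.0f} ms")
```

On the stable link, the median and worst-case requests feel about the same; on the jittery link, the same user sees fast responses one moment and multi-second stalls the next.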
Faced with these problems, the market has brought forth solutions to improve communications with cloud-based resources: direct connection, exchanges, and cloud networking.
Dedicated connections to the cloud
Direct connection is what it sounds like: directly connecting a customer’s private network to the cloud provider’s network. This typically means putting a customer switch or router in a meet-me facility where the cloud service provider also has network-edge infrastructure, then connecting them with a cable so packets can travel directly from the client network to the cloud network without traversing the internet.
Direct connects typically have data-center-like loss and jitter—effectively none. As long as WAN latency to the meet-me is acceptable, performance gets as close as possible to an inside-to-inside connection. On the downside, direct connects are pricey compared to simple internet connectivity, and come in large-denomination bandwidths only, typically 1Gbps and higher.
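As a concrete sketch, assuming AWS Direct Connect and boto3, requesting a dedicated 1Gbps port at a meet-me facility looks roughly like this; the location code and connection name are placeholders:

```python
import boto3

# Sketch of requesting a dedicated Direct Connect port. The port lives
# at a meet-me facility where AWS has edge equipment, and dedicated
# ports come only in large denominations (1, 10, or 100 Gbps).
dx = boto3.client("directconnect", region_name="us-east-1")

# List the meet-me facilities (Direct Connect locations) for this region.
for loc in dx.describe_locations()["locations"]:
    print(loc["locationCode"], "-", loc["locationName"])

# Request a 1 Gbps dedicated connection at a chosen facility
# (location code and connection name are hypothetical placeholders).
conn = dx.create_connection(
    location="EqDC2",
    bandwidth="1Gbps",
    connectionName="dc1-to-aws-primary",
)
print(conn["connectionId"], conn["connectionState"])
```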
Exchanges to reach multiple CSPs
An exchange simplifies the process of connecting to multiple cloud providers, or of connecting more flexibly to any one provider. The exchange connects to the major cloud service providers (CSPs) with big pipes, but carves those big physical connections into smaller virtual connections in a broad range of bandwidths, including below 100Mbps. The enterprise customer makes a single physical connection to the exchange and provisions virtual direct connections over it to reach multiple CSPs. Enterprises get a simpler experience, maintaining only a single physical connection for multiple cloud destinations. They can also better fit capacity to demand; they don’t have to provision a 1Gbps connection for each cloud no matter how little traffic needs to cross it.
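Some quick capacity arithmetic shows the appeal; the per-cloud bandwidth figures are made up for illustration:

```python
# Illustrative capacity math for an exchange: one physical port carved
# into right-sized virtual circuits per cloud, versus a dedicated
# 1 Gbps port per cloud. Demand figures are invented for illustration.
port_mbps = 1000                       # single physical connection to the exchange

virtual_circuits_mbps = {
    "aws": 300,
    "azure": 200,
    "gcp": 50,
    "oracle": 50,
}

used = sum(virtual_circuits_mbps.values())
print(f"virtual circuits: {used} Mbps of a {port_mbps} Mbps exchange port "
      f"({port_mbps - used} Mbps headroom)")

dedicated = 1000 * len(virtual_circuits_mbps)
print(f"equivalent dedicated 1 Gbps ports: {dedicated} Mbps provisioned "
      f"for {used} Mbps of demand")
```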
Internet access to an exchange
As an intermediate solution, there are also internet-based exchanges that maintain direct connects to CSPs, but customers connect to the exchange over the internet. The provider typically has a private middle mile of its own among its meet-me locations, and a wide network of points of presence at its edge, so that customer traffic takes as few hops as possible across the internet before stepping off into the private network, with its lower and more stable latency and loss.
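The latency reasoning behind that design can be sketched in a few lines; all figures are illustrative assumptions:

```python
# Illustrative comparison: an all-internet path versus a short internet
# hop to a nearby provider PoP followed by a private middle mile to the
# CSP. Latency and jitter figures are assumptions, not measurements.

def path_latency_ms(segments):
    """Sum (typical_ms, jitter_ms) segments into (typical, worst-case)."""
    typical = sum(t for t, _ in segments)
    worst = sum(t + j for t, j in segments)
    return typical, worst

all_internet = [(45, 40)]            # one long, highly variable internet path
via_exchange = [(5, 5),              # short hop to the nearest PoP
                (30, 2)]             # private middle mile to the CSP

for name, segs in (("all internet", all_internet),
                   ("via exchange PoP", via_exchange)):
    typical, worst = path_latency_ms(segs)
    print(f"{name}: typical ~ {typical} ms, worst case ~ {worst} ms")
```

The typical latencies are similar; what the short internet hop buys is a much tighter worst case, because most of the path's variability is confined to the first few miles.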
Cloud networking and NaaS
Cloud networks and network-as-a-service (NaaS) providers can also step into the fray, addressing different aspects of the challenge. Cloud networks act like exchanges but came into existence specifically to interconnect resources in different CSPs. NaaS providers can, like internet-based exchanges, work to get traffic off the public internet as quickly as possible and onto shared points of presence with CSPs. To the enterprise it looks like internet traffic, but it touches the public internet only between the enterprise and the nearest NaaS-provider PoP at a meet-me facility.
Most enterprises use cloud providers, usually more than one, and are adding more all the time. Most are not 100% migrated to the cloud, and may never be. So, closing the gap between on-premises and cloud resources, and among cloud resources, is going to remain a challenge. Luckily, the array of options for addressing it continues to evolve and improve.
Copyright © 2022 IDG Communications, Inc.