Achieving Low Latency in Different Types of Wireless Networks Requires an End-to-End Focus
Low latency is a moving target: acceptable numbers are higher for applications with low-throughput requirements than for applications with higher-throughput requirements. In general, latency is a function of proper network design. Radio latency must be considered alongside end-to-end IP latency and the round-trip delay. One major factor that contributes to latency is shared by 5G, Wi-Fi, and Cisco Ultra-Reliable Wireless Backhaul (URWB) technologies: the closer applications are located to where data is being processed, whether in data centers, clouds, or at the network edge, the lower the possible latency.
Low Latency Requirements Vary by Application
For VoIP, 150 milliseconds of latency in one direction is not noticeable to users and is therefore perfectly acceptable. Collaboration applications like WebEx or Microsoft Teams and augmented and virtual reality (AR/VR) require sub-50 millisecond bi-directional response times. If you’re using wireless connections to run an Autonomous Mobile Robot (AMR) or an Automated Guided Vehicle (AGV) in a factory, sub-20 ms response times in a high-throughput network are necessary, while some closed-loop process control traffic requires 10 ms or less end-to-end latency. These budgets can be captured as a simple lookup during network design, as in the sketch below.
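As a rough illustration, here is a minimal Python sketch that records the budgets quoted above and checks a measured latency against them. The dictionary keys, values, and helper function are hypothetical planning aids drawn only from the examples in this article, not from any standard.

```python
# Illustrative latency budgets (ms), taken from the examples above.
# Names and structure are hypothetical, for design-time planning only.
LATENCY_BUDGETS_MS = {
    "voip_one_way": 150,          # one-way, not noticeable to users
    "collab_ar_vr_rtt": 50,       # bi-directional (WebEx/Teams, AR/VR)
    "amr_agv_rtt": 20,            # AMR/AGV control in a factory
    "closed_loop_control_e2e": 10 # some closed-loop process control
}

def meets_budget(application: str, measured_ms: float) -> bool:
    """Return True if a measured latency fits the application's budget."""
    return measured_ms <= LATENCY_BUDGETS_MS[application]

if __name__ == "__main__":
    print(meets_budget("collab_ar_vr_rtt", 42.5))       # True
    print(meets_budget("closed_loop_control_e2e", 18.0)) # False
```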
How Latency is Usually Calculated
End-to-end IP latency is usually calculated in one direction: from the wireless device across the wireless network and IP transport network to the application server. Round-trip time (RTT) is the measure of bi-directional latency (e.g., the time reported by a network ping). Achieving a lower RTT is easier when applications are hosted closer to the wireless devices they serve.
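One quick way to estimate RTT from a client, much like a network ping, is to time a round trip to the application endpoint. The sketch below uses a TCP handshake as a rough proxy; the endpoint is a placeholder to be replaced with the application server under test.

```python
# Rough RTT probe: time a TCP handshake to an application endpoint.
# A TCP connect is only a proxy for a full request/response round trip.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the approximate round-trip time, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    # Placeholder endpoint; substitute the real application server.
    print(f"RTT ~ {tcp_rtt_ms('example.com'):.1f} ms")
```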
In calculating end-to-end IP latency, it’s important to consider the typical round-trip time (RTT) between an end user and a cloud provider or content distribution network (CDN) provider. In a network design aiming for 150 ms of RTT latency, the time split between each network segment or building block, from the local device to its application, must be estimated. A device attaches to a local wireless network with its over-the-air latency; the data then transits public and private IP infrastructure, including the switches, routers, and firewalls in the round-trip path, often incurring unpredictable Internet latency before reaching the application. In addition, the processing time required before a response is sent back must be included in the overall latency calculation.
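To make the budgeting concrete, here is a minimal sketch that splits a 150 ms RTT target across the building blocks described above. The segment names and values are assumptions chosen for illustration, not measurements from any real deployment.

```python
# Minimal RTT budget sketch for a 150 ms design target.
# Per-segment values are illustrative assumptions, not measurements.
RTT_TARGET_MS = 150.0

# One-way latency estimates per building block (ms).
one_way_segments_ms = {
    "over_the_air": 10.0,         # device <-> local wireless network
    "private_ip_transport": 5.0,  # switches, routers, firewalls
    "internet": 40.0,             # often the least predictable segment
}
server_processing_ms = 20.0       # time before a response is sent back

# RTT = forward path + return path + processing at the application.
rtt_estimate_ms = 2 * sum(one_way_segments_ms.values()) + server_processing_ms

verdict = "within" if rtt_estimate_ms <= RTT_TARGET_MS else "over"
print(f"Estimated RTT: {rtt_estimate_ms:.0f} ms "
      f"({verdict} the {RTT_TARGET_MS:.0f} ms target)")
```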
How Different Wireless Technologies Handle Latency
Advanced types of 5G service ― like 5G Enhanced Mobile Broadband (eMBB) and 5G Ultra-Reliable Low Latency Communications (URLLC) ― include optimizations at every step of the radio hardware and the uplink and downlink transmission processes. New radio features address low-latency communications by allowing a variable transmission time interval (TTI), which can scale from 1 ms down to roughly 140 microseconds, depending on whether spectral efficiency (eMBB) or low latency (URLLC) is the main goal. In a 5G network, the User Plane Function (UPF) is the interconnect point between the mobile infrastructure and the data network, and it receives IP packets from the radio over a tunnel.
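For a sense of scale, 5G NR derives the TTI from the OFDM symbol duration, which shrinks as the subcarrier spacing grows: a 14-symbol slot lasts 1 ms divided by 2^µ, and shorter "mini-slots" use fewer symbols. The sketch below computes approximate TTIs under that simplification (cyclic-prefix details are ignored), showing how a 2-symbol mini-slot lands near the ~140 microsecond figure mentioned above.

```python
# Approximate 5G NR TTI durations (cyclic-prefix details ignored).
# A slot carries 14 OFDM symbols and lasts 1 ms / 2**mu, where the
# subcarrier spacing (SCS) is 15 * 2**mu kHz.
def tti_us(scs_khz: int, symbols: int = 14) -> float:
    """Approximate TTI in microseconds for a given subcarrier spacing
    and number of OFDM symbols (14 = full slot, fewer = mini-slot)."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3}[scs_khz]
    slot_us = 1000.0 / (2 ** mu)
    return slot_us * symbols / 14

print(f"Full slot, 15 kHz SCS:        {tti_us(15):.0f} us")     # ~1000 us (1 ms)
print(f"2-symbol mini-slot, 15 kHz:   {tti_us(15, 2):.0f} us")  # ~143 us (~140 us)
print(f"Full slot, 120 kHz SCS:       {tti_us(120):.0f} us")    # ~125 us
```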
Wi-Fi, even though it operates in unlicensed bands, is still strictly regulated by individual countries. Local regulations define maximum power levels for access points to avoid interference between users, which in turn determines range, coverage, penetration, and signal strength. The next generation of the Wi-Fi protocol is expected to enhance its determinism, allowing better latency control in a network design.
Although Wi-Fi and 5G use different types of encapsulation, IP packets in a Wi-Fi network similarly move from access points through the wireless radio network and through tunnels to a wireless LAN controller (WLC). If an application requires low latency, the path from the Wi-Fi WLC to the application server should be designed to be as short as possible.
Cisco Ultra-Reliable Wireless Backhaul (Cisco URWB), a wireless WAN backhaul technology derived from Wi-Fi and designed to serve mobile network environments, provides low-latency, highly reliable, long-range, high-bandwidth connections that can handle endpoints moving at high speeds (like vehicles, trains, or subways) with zero-delay handoffs. Because it operates in unlicensed frequencies, the Cisco URWB segment requires an appropriate design to control latency and achieve handover in less than 5 ms, while the end-to-end IP infrastructure beginning at the Cisco URWB gateway is similar to Wi-Fi and 5G topologies.
Recent enhancements deliver uninterrupted connectivity to fast-moving devices by sending high-priority packets over redundant paths. Cisco’s patented Multipath Operations (MPO) technology can duplicate protected traffic up to 8x while avoiding common paths, and it works alongside hardware redundancy to deliver lower latency and higher availability, limiting the impact of interference and hardware failures.
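The duplicate-and-deduplicate idea can be illustrated with a toy sketch: the sender copies each high-priority packet onto several paths, and the receiver keeps only the first copy of each sequence number to arrive. This is a conceptual illustration of the general technique under simulated loss and latency, not Cisco’s MPO implementation.

```python
# Toy illustration of duplicate-and-deduplicate over redundant paths.
# A conceptual sketch only; this is not Cisco's MPO implementation.
import random

def send_duplicated(seq: int, payload: bytes, paths: int = 2):
    """Emit one copy of the packet per redundant path, with simulated
    per-path latency; a lost copy simply never appears."""
    copies = []
    for path_id in range(paths):
        latency_ms = random.uniform(2, 15)
        if random.random() > 0.1:  # 10% simulated loss per path
            copies.append((latency_ms, path_id, seq, payload))
    return copies

class Deduplicator:
    """Receiver side: deliver only the first copy of each sequence number."""
    def __init__(self):
        self.seen = set()

    def receive(self, seq: int, payload: bytes) -> bool:
        if seq in self.seen:
            return False   # later duplicate, drop silently
        self.seen.add(seq)
        return True        # first arrival, deliver to the application

if __name__ == "__main__":
    rx = Deduplicator()
    for seq in range(5):
        # Copies arrive ordered by their simulated path latency.
        for _, path_id, s, data in sorted(send_duplicated(seq, b"ctrl")):
            if rx.receive(s, data):
                print(f"seq {s} delivered via path {path_id}")
```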
Low latency means different things in different applications and requires different solutions. The right network design can reduce latency to desired targets based on your company’s strategy and use cases.
Radio latency must be estimated in the context of end-to-end IP latency and the round-trip delay. Many different factors contribute to lowering latency, but with 5G, Wi-Fi, and Cisco URWB alike, the closer applications are to where data is being processed, whether in a data center, the cloud, or at the network edge, the lower the probable service latency.