New L4S standard is poised to speed up your internet and reduce latency
I have AT&T Gigabit Fiber Internet in my house. But there are still times when I’m watching a movie and the video starts to stutter and stop. What’s going on? It’s not that AT&T is failing to deliver the speed. No, it’s because the latency — the delay in data transmission between devices and servers — is bad.
Now, a new Internet Engineering Task Force (IETF) standard, Request for Comments (RFC) 9331, The Explicit Congestion Notification (ECN) Protocol for Low Latency, Low Loss, and Scalable Throughput (L4S), is being deployed. And the good news is that L4S aims to get you out of this internet speed trap.
Also: 10 ways to speed up your internet connection today
So, here’s why your speed can suffer. The speed you get from your internet connection depends on several factors. First, no matter how fast your connection, it will only go as fast as the slowest link between your machine and the internet. So, if you have a 1Gbps cable-based internet connection to your router, but your Wi-Fi access point (AP) dates back to the turn of the century and only supports 802.11b, your speed is still going to top out at 11 megabits per second (Mbps).
Second, the speed is affected by the actual rated throughput of your connection and how it works. For example, a fiber connection should give you close to its full rated speed. So, while my connection is rated at 1 gigabit per second (Gbps), I tend to get 940 Mbps downstream and 895 Mbps upstream, which is good enough.
Also: Switching to better internet? Do these 5 things before you change ISPs
But if I use my Spectrum/Charter 1 Gbps cable connection, I usually see 850 Mbps downstream and 24 Mbps upstream. I see this difference because cable is optimized for downstream speeds, not upstream. Fiber, meanwhile, provides high bandwidth in both directions.
However, cable connections are also shared connections. When many people are online — say they’re all watching a new episode of The Gilded Age on Sunday night — everyone’s bandwidth rate will drop.
The final factor that affects internet speed is the one that L4S addresses. Whenever you make an internet connection to a website, streaming service, or an online game, your data is sent back and forth in data packets over a complex network of routers, switches, wires, and fibers.
Also: The best Wi-Fi routers you can buy
Every time your connection goes from one point to another, it’s prone to bottlenecks that can slow down data transmission. One significant contributor to this problem is ‘buffer bloat’. That issue might sound like the feeling you get from eating too much food, but it’s when networking equipment keeps too many packets in a buffer before sending them to their destination.
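To put rough numbers on that, here's a back-of-the-envelope sketch. The buffer and link sizes are made up for illustration, not figures from any real gateway, but the arithmetic shows how an over-filled buffer adds delay all by itself.

```python
# Illustrative sketch (made-up numbers): the queueing delay added by a full buffer.

def queueing_delay_ms(buffer_bytes: int, link_rate_bps: float) -> float:
    """Time (ms) for the last packet in the buffer to reach the front of the queue."""
    return buffer_bytes * 8 / link_rate_bps * 1000

# A bloated 5 MB buffer draining onto a 1 Gbps link adds roughly 40 ms of delay.
print(queueing_delay_ms(5_000_000, 1e9))    # ~40.0 ms

# The same buffer on a 100 Mbps Wi-Fi hop adds roughly 400 ms -- enough to make video stutter.
print(queueing_delay_ms(5_000_000, 100e6))  # ~400.0 ms
```

The takeaway: a buffer that is harmless on a fast link turns into hundreds of milliseconds of waiting on a slower hop.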
This bloat, along with the ordinary delays of equipment relaying signals back and forth, adds up to what's called latency: the time it takes for a data packet to travel from one place to another. Latency is measured in milliseconds using a variety of network tools, such as ping, traceroute, and One-Way Active Measurement Protocol (OWAMP). You can also get an idea of how much latency you're dealing with by using the Ookla internet speed test, which shows your overall, download, and upload latency as ping times in milliseconds.
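If you'd rather measure it yourself, here's a minimal sketch that times a TCP handshake to approximate round-trip latency. It's a stand-in for ping, not a replacement for it or for Ookla's test, and "example.com" is just a placeholder host.

```python
# Rough latency check: time a TCP handshake and report the round trip in milliseconds.

import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median round-trip time of a TCP connect, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; the handshake is what we wanted to time
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

print(f"~{tcp_rtt_ms('example.com'):.1f} ms round trip")  # substitute a server you care about
```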
The more latency your connection has, the slower it becomes. A good network latency measurement is less than 100 milliseconds, which would mean it takes less than a tenth of a second for data packets to travel from one point to another on the network.
You can think of latency as a packet traffic jam. Yes, in theory, you can drive your car at 70mph on the interstate, but when there's a lot of traffic on the road or some bumps on the highway, you'll be lucky to move at 35mph. It's the same with internet connections. L4S doesn't provide an express lane; instead, it lets the network flag congestion as it happens so senders can throttle back before queues build up, and less congestion means less latency.
Also: Modem vs router: What’s the difference?
Now, ‘good’ is a relative term. For people who play massively multiplayer online role-playing games (MMORPG), such as World of Warcraft, Final Fantasy XIV Online, and Guild Wars 2, high latency can be the difference between winning and losing. Similarly, high latency in video-conferencing systems, such as Discord, Google Meet, or Zoom, can be annoying.
L4S introduces a more efficient approach to managing internet traffic. Under the usual internet rules, senders only find out about congestion after it has already caused delays or dropped packets. L4S adds scalable congestion control, which provides much more frequent control signals from the network: packets that run into congestion are marked, making it possible to adjust the traffic immediately and prevent further delays.
That marking lets traffic be tuned with fine control in less than a millisecond, which dramatically shortens the latency feedback loop. Devices can adjust their data transmission rates almost as soon as congestion appears, and the result is smoother data flow and lower latency.
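To make the mechanism concrete, here's a toy sketch of the general idea behind scalable congestion control, loosely in the spirit of the DCTCP/TCP Prague family that L4S builds on rather than the actual algorithm any vendor ships. The sender trims its rate in proportion to the fraction of packets the network marked, instead of waiting for packet loss.

```python
# Toy sketch: react to ECN congestion marks proportionally instead of waiting for drops.
# The function names, gains, and rates are illustrative, not the real L4S/Prague parameters.

def adjust_rate(current_rate_mbps: float, marked: int, total: int,
                gain: float = 0.5, growth_mbps: float = 1.0) -> float:
    """Return the next sending rate given ECN feedback for the last round trip."""
    if total == 0:
        return current_rate_mbps
    mark_fraction = marked / total
    if mark_fraction > 0:
        # Back off in proportion to how much of the traffic hit congestion.
        return current_rate_mbps * (1 - gain * mark_fraction)
    # No marks: probe gently for more bandwidth.
    return current_rate_mbps + growth_mbps

rate = 500.0  # Mbps, illustrative starting point
for marked, total in [(0, 100), (0, 100), (10, 100), (40, 100), (0, 100)]:
    rate = adjust_rate(rate, marked, total)
    print(f"{marked}/{total} packets marked -> send at {rate:.1f} Mbps")
```

Because the feedback arrives every round trip rather than after a drop, the rate changes are small and frequent, which is what keeps the queues (and therefore the latency) short.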
So, how big a difference will L4S make? When the technology started to be rolled out, Jason Livingood, Comcast’s vice president of technology policy, tweeted: “Latency (delay) went in the worst case (99.9th percentile) from several seconds on today’s Internet to roughly 10 milliseconds. In the downstream case, the 99th percentile was an astoundingly low one millisecond. Wait, what!? That is jaw-dropping… especially when you consider something like a web page where you need 10 or 20 round trips to load the page, which means 10x – 20x the latency. This technology means the wait is over — quite literally.”
Also: How to update your router’s firmware (and why you should be doing it regularly)
Well, the wait isn’t over yet. In the summer of 2023, Comcast, in partnership with Apple, NVIDIA, and Valve, started testing L4S. Xfinity customers with the latest Xfinity 10G Gateway XB7 and XB8 models, or who own an Arris S33 or Netgear CM1000v2, will be the first users to get L4S, either this year or early next.
Comcast has also released documentation for other ISPs and developers to start deploying L4S. In the meantime, Apple has been a prominent advocate for L4S, incorporating beta support for the standard in iOS 16 and MacOS Ventura. The company is gradually rolling out L4S in iOS 17 and MacOS Sonoma.
CableLabs, the non-profit company behind cable modem technology, is also working on deploying L4S as part of its 10G initiative. The company sees L4S as ideal for applications that need high data rates, consistent ultra-low latency, and near-zero packet loss, such as cloud gaming, virtual reality and augmented reality, and high-quality video conferencing. L4S will also benefit other latency-bound applications, including web browsing.
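For the curious, the L4S signal itself is tiny: two ECN bits in each packet's IP header, with L4S traffic identified by the ECT(1) codepoint. Your operating system's TCP or QUIC stack normally manages this for you; the Linux-only sketch below just shows where the bits live by marking outgoing UDP datagrams from an application, with a placeholder address and port.

```python
# Minimal sketch: mark a UDP socket's packets with the ECT(1) ECN codepoint (Linux).

import socket

ECT_1 = 0b01  # ECN codepoint that L4S-aware networks treat as scalable traffic

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS covers the whole DSCP+ECN byte; here only the two ECN bits are set.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)
sock.sendto(b"hello", ("192.0.2.1", 9))  # placeholder TEST-NET address and port
```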
While L4S marks a significant advancement, it is not a panacea for all internet speed issues. Physical limitations will always impose constraints. However, L4S can minimize additional delays, making it a big step toward a more efficient internet. I expect we’ll see L4S in use on most high-speed internet connections, and newer operating systems and devices, by this time next year.
Personally, I’m ready to use it today.