Today’s high-speed optical links would fail if not for this technology (Part 2)
Written by Vasanta Rao, Technical Marketing Engineer, Transceiver Modules Group (Cisco Optics)
Understanding FEC Part 2: The Trade-Offs of Using FEC
In part two of this two-part blog post, we discuss the trade-offs and performance impact of forward error correction (FEC) on the network. Part one reviewed the basics of this essential technology.
Applications like social media, streaming video, the Internet of Things, and the pervasiveness of cloud computing are all contributing to skyrocketing bandwidth demand. Meeting that demand requires ever-greater network speeds. The problem is that increasing the data rate also raises the likelihood of transmission errors. Forward error correction (FEC) can address this issue by detecting and correcting a certain number of errored bits in each data frame. More isn’t necessarily better, however. Read on to gain a better understanding of the trade-offs and what’s involved in implementing FEC in your network (hint: very little).
FEC 101
To correct errors in a bitstream, FEC uses a combination of sophisticated algorithms and redundant bits appended to the message block prior to transmission. At the receiver, the FEC algorithm checks for errored bits and, if necessary, uses the redundant bits to reconstruct the original message.
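To make the principle concrete, here is a minimal Python sketch using a toy Hamming(7,4) code: three parity bits are appended to four data bits, and the receiver uses those parity bits to locate and flip any single errored bit. Real optical links rely on far stronger Reed-Solomon codes, but the underlying mechanism is the same.

```python
# Toy Hamming(7,4) code: 4 data bits plus 3 parity bits per codeword,
# able to correct any single flipped bit. Illustrative only; production
# optics use much stronger codes such as Reed-Solomon.

def encode(data):
    """data: list of 4 bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(codeword):
    """codeword: 7 bits, possibly with one flipped bit -> corrected 4 data bits."""
    p1, p2, d1, p3, d2, d3, d4 = codeword
    # Re-check each parity equation; the three results together point to
    # the position of a single-bit error (0 means no error detected).
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    error_pos = s1 + 2 * s2 + 4 * s3   # 1-based position of the errored bit
    corrected = codeword[:]
    if error_pos:
        corrected[error_pos - 1] ^= 1  # flip the errored bit back
    return [corrected[2], corrected[4], corrected[5], corrected[6]]

message = [1, 0, 1, 1]
received = encode(message)
received[4] ^= 1                       # corrupt one bit "in transit"
assert decode(received) == message     # the receiver recovers the original data
```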
FEC is based on n-symbol codewords made up of a data block that is k symbols long and a parity block (the redundant check bits) that is n-k symbols long (see figure 1). We denote an FEC code with the ordered pair (n,k). The type and maximum number of corrupted bits that can be identified and corrected vary from one error-correcting code (ECC) to another.
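As a concrete example, the short sketch below works out the codeword anatomy for the two Reed-Solomon codes discussed later in this post. The ability of an RS(n,k) code to correct up to (n-k)/2 errored symbols per codeword is a standard property of the code family.

```python
# Codeword anatomy for two standard Reed-Solomon FEC codes used with
# Ethernet optics. An RS(n,k) code appends n-k parity symbols and can
# correct up to (n-k)//2 errored symbols per codeword.
for name, n, k in [("RS(528,514)", 528, 514), ("RS(544,514)", 544, 514)]:
    parity_symbols = n - k              # redundant symbols added to k data symbols
    correctable = parity_symbols // 2   # maximum errored symbols the decoder can fix
    overhead_pct = parity_symbols / k * 100
    print(f"{name}: {k} data + {parity_symbols} parity symbols, "
          f"corrects up to {correctable} symbols, ~{overhead_pct:.1f}% overhead")
```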
FEC involves trade-offs
The coding techniques used in FEC reduce the signal-to-noise ratio (SNR) necessary for a link to operate at a specified bit-error ratio (BER). In effect, the ECC enables the link to perform as though it had a much higher SNR than it actually does. Thus, the figure of merit for an ECC is known as coding gain. The higher the coding gain, the greater the number of errored bits an ECC can correct.
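As a rough, illustrative sketch (the SNR figures below are assumed placeholders; real values depend on the modulation format, target BER, and chosen code), coding gain can be estimated as the difference between the SNR a link would need without FEC and the SNR it needs with FEC. Subtracting the rate penalty of the added parity symbols gives the net coding gain.

```python
import math

# Illustrative (not measured) coding-gain arithmetic. The SNR values are
# assumed placeholders; real figures depend on modulation, target BER,
# and the specific code.
snr_needed_without_fec_db = 15.6    # assumed SNR to reach the target BER with no FEC
snr_needed_with_fec_db = 10.2       # assumed SNR to reach the same BER with FEC
n, k = 544, 514                     # RS(544,514) codeword and data lengths

gross_gain_db = snr_needed_without_fec_db - snr_needed_with_fec_db
rate_penalty_db = 10 * math.log10(n / k)       # cost of transmitting the parity symbols
net_gain_db = gross_gain_db - rate_penalty_db
print(f"gross coding gain {gross_gain_db:.1f} dB, "
      f"net coding gain {net_gain_db:.2f} dB after a {rate_penalty_db:.2f} dB rate penalty")
```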
There are limits, however. The improvement comes from adding overhead in the form of parity bits, and increased overhead reduces the bandwidth available for message blocks. The larger codeword does more than consume bandwidth: FEC decoders need to receive the full codeword before they can act on it. Stronger FEC algorithms may offer higher coding gains, but they require larger codewords, and larger codewords increase latency.
This relationship has two implications. First, FEC reaches a point of diminishing returns, particularly because there is an absolute limit to the number of errored bits a given FEC scheme can correct. RS(528,514) can correct only seven symbols per codeword, for example, while RS(544,514) can correct 15. Second, the latency increase is limited and is a non-issue for all but the most time-sensitive applications, such as high-speed online trading. In most cases, the effect does not impact performance and is far outweighed by the benefits of FEC.
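To put the latency point in perspective, here is a back-of-the-envelope sketch of how long a receiver waits simply to collect one full codeword at an assumed 100 Gb/s line rate. Actual decoder latency adds implementation-dependent processing time on top, but the scale, tens of nanoseconds, is why the increase is a non-issue for most applications.

```python
# Rough buffering delay from waiting for a complete codeword before decoding.
# Assumes 10-bit symbols (as in the RS codes above) and an example 100 Gb/s
# line rate; real decoder latency adds implementation-dependent processing time.
SYMBOL_BITS = 10

def codeword_wait_ns(n_symbols, line_rate_gbps):
    codeword_bits = n_symbols * SYMBOL_BITS
    return codeword_bits / (line_rate_gbps * 1e9) * 1e9   # seconds -> nanoseconds

for name, n in [("RS(528,514)", 528), ("RS(544,514)", 544)]:
    print(f"{name}: ~{codeword_wait_ns(n, 100):.0f} ns to receive one codeword at 100 Gb/s")
```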
FEC is designed for ease of use
FEC may sound complicated, but the optical communications ecosystem has worked hard to make it easy for integrators and network operators to use. The choice of FEC code for a given link is determined by standards or by multi-source agreements (MSAs). The working groups crafting the standards and MSAs for the most part take responsibility for specifying the exact type of FEC for each class of optics. Manufacturers have developed smart components that automatically recognize and enable the correct FEC code. Unless the user requires some specific customization, reaping the benefits of FEC is as easy as plugging in an optic.
For more details on FEC, see our new white paper, “Understanding FEC and Its Implementation in Cisco Optics.”