How Can an SD-WAN Solution Improve Network Performance?

The 2016 State of the WAN report contained the results of a survey in which respondents were asked to indicate the factors having the biggest impact on their WAN. Given that, unlike the LAN, the WAN exhibits performance-limiting characteristics such as high levels of packet loss and latency, it isn't surprising that two of the top five factors the respondents cited were performance-related. Since we are going through a fundamental shift from traditional WANs to SD-WANs, this is an important time to look at the various ways that an SD-WAN solution can improve network performance.

Add Bandwidth

Because of the way that service providers charge for enterprise WAN links like MPLS, the typical WAN runs at megabit speed while the typical LAN runs at gigabit speed. One of the advantages of an SD-WAN is that it enables network organizations to close that gap a bit by more fully leveraging relatively inexpensive Internet bandwidth. There is no doubt that in some instances adding bandwidth improves performance. However, the performance improvement is seldom, if ever, linear. To understand what I mean by that, consider the example of sending a large file over a WAN link from Boston to San Francisco. If the size of the WAN link is doubled, the amount of time it takes to send the file might well be reduced, but it is highly unlikely to be cut in half. There are many reasons for this. One reason is the TCP window size, which is the amount of data that can be sent before the transmitting device stops to wait for an acknowledgement from the receiving device. Sticking with the example of the file transfer, it is highly likely that due to the impact of the TCP window size, the file transfer will not be able to take full advantage of the added bandwidth and might only see a negligible improvement in performance.
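The window-size bottleneck is easy to see with a back-of-the-envelope sketch. The 64 KB window (a common default when TCP window scaling is not in play) and the ~70 ms coast-to-coast round-trip time below are illustrative assumptions, not figures from the report:

```python
# Sketch of the TCP window bottleneck: a session can have at most one
# window of unacknowledged data in flight, so its throughput is capped
# at window / RTT no matter how fast the underlying link is.

def max_tcp_throughput_bps(window_bytes, rtt_s):
    """Window-limited upper bound on single-session TCP throughput."""
    return window_bytes * 8 / rtt_s

window = 64 * 1024   # assumed 64 KB window (no window scaling)
rtt = 0.070          # assumed ~70 ms Boston-to-San Francisco round trip

cap = max_tcp_throughput_bps(window, rtt)
print(f"Window-limited cap: {cap / 1e6:.1f} Mbit/s")  # ~7.5 Mbit/s

# Doubling a 10 Mbit/s link to 20 Mbit/s barely helps this session,
# because the window, not the link, is the constraint:
for link_mbps in (10, 20):
    effective = min(link_mbps, cap / 1e6)
    print(f"{link_mbps} Mbit/s link -> ~{effective:.1f} Mbit/s effective")
```

With these assumptions the single session tops out around 7.5 Mbit/s on either link, which is exactly the "negligible improvement" described above.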

Implement Forward Error Correction and Packet Order Correction

A factor that can have a significant impact on WAN throughput is packet loss. The effect of packet loss on TCP throughput has been widely analyzed[1]. Mathis, et al. provide a simple formula that offers insight into the maximum TCP throughput on a single session when there is packet loss. That formula is:

Throughput ≤ MSS / (RTT × √p)

Figure 1: Factors that Impact Throughput

where MSS is the maximum segment size, RTT is the round-trip time, and p is the packet loss rate.

The equation in Figure 1 shows that throughput decreases as either the round-trip time or the packet loss rate increases. To illustrate the impact of packet loss, assume that MSS is 1,420 bytes, RTT is 100 ms and p is 0.01%. Based on the formula, the maximum throughput is 1,420 Kbytes/second. If, however, the loss increases to 0.1%, the maximum throughput drops by 68% to 449 Kbytes/second.
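The formula is simple enough to check numerically; this short sketch reproduces both of the data points above:

```python
import math

def mathis_throughput_bytes_per_s(mss_bytes, rtt_s, loss_rate):
    """Simplified Mathis et al. bound: throughput <= MSS / (RTT * sqrt(p))."""
    return mss_bytes / (rtt_s * math.sqrt(loss_rate))

# MSS = 1,420 bytes, RTT = 100 ms, loss rate 0.01% vs. 0.1%
low_loss = mathis_throughput_bytes_per_s(1420, 0.100, 0.0001)
high_loss = mathis_throughput_bytes_per_s(1420, 0.100, 0.001)

print(round(low_loss / 1000), "KB/s at 0.01% loss")   # 1420 KB/s
print(round(high_loss / 1000), "KB/s at 0.1% loss")   # 449 KB/s
```

Because loss appears under a square root, a tenfold increase in loss cuts the bound by a factor of √10 ≈ 3.16, which is the 68% drop cited above.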

One way to eliminate the negative impact of packet loss on WAN throughput is to implement an SD-WAN solution that features packet-level forward error correction (FEC). Such solutions transmit a small number of extra or parity packets. These parity packets can be used at the receiving end to reconstitute any lost packets and hence eliminate the throughput-limiting impact of packet loss.
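The mechanics of packet-level FEC can be sketched with the simplest possible scheme: one XOR parity packet per block, which can rebuild any single lost packet in that block. This is a minimal illustration of the idea, not any vendor's actual coding scheme:

```python
from functools import reduce

def xor_parity(packets):
    """Byte-wise XOR of a block of equal-length packets -> one parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_lost(survivors, parity):
    """Rebuild the single missing packet from the survivors plus the parity."""
    return xor_parity(survivors + [parity])

# The sender transmits four data packets plus one parity packet (25% overhead)
block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(block)

# pkt2 is lost in transit; the receiver reconstitutes it locally instead of
# waiting for a TCP retransmission, so throughput never takes the hit
survivors = [block[0], block[1], block[3]]
print(recover_lost(survivors, parity))  # b'pkt2'
```

Real SD-WAN implementations use stronger codes and tune the parity ratio to the observed loss rate, but the trade-off is the same: a small amount of extra bandwidth in exchange for loss-free delivery as seen by TCP.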

Even if packets are not lost but are delivered out of order to the receiving device, throughput will be reduced. The reason is that in most cases out-of-order delivery causes TCP to behave as if packets had been lost and to force the transmitting device to retransmit them. The way to eliminate this phenomenon is to implement an SD-WAN solution that features Packet Order Correction (POC). POC dynamically re-sequences out-of-order packets on the receiving end of the wide area network.
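The idea behind POC can be sketched as a small sequence-number reorder buffer on the receiving end: hold packets that arrive early, and release everything to the endpoint only once it is in order. This is a minimal illustration, not any vendor's implementation:

```python
class ReorderBuffer:
    """Minimal sketch of packet order correction: briefly hold out-of-order
    packets and release them to the endpoint strictly in sequence."""

    def __init__(self):
        self.next_seq = 0   # next sequence number the endpoint expects
        self.pending = {}   # packets that arrived ahead of sequence

    def receive(self, seq, payload):
        """Accept one packet; return any payloads now deliverable in order."""
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = ReorderBuffer()
print(buf.receive(1, "b"))  # []           -- packet 0 not seen yet, hold it
print(buf.receive(0, "a"))  # ['a', 'b']   -- both released in order
print(buf.receive(2, "c"))  # ['c']
```

Because the endpoint's TCP stack only ever sees in-order delivery, it never generates the duplicate acknowledgements that would trigger spurious retransmissions.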

Leverage WAN Optimization Functionality

In a previous blog entry I mentioned a couple of use cases in which an SD-WAN solution that integrates WAN optimization functionality can significantly improve application performance. One of those use cases is disaster recovery (DR). DR requires that large files be transmitted between a primary and a secondary data center that are purposely located far apart. Due to the previously discussed impact of the TCP window size on WAN throughput, the DR application may not be able to fully utilize the WAN bandwidth and hence may not be able to transmit all the data needed to support the company's DR plan. For many companies a better solution is to implement the WAN optimization functionality referred to as de-duplication. De-duplication keeps the primary and secondary data centers in sync while sending only a minimum amount of data over the WAN link.
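A minimal sketch of chunk-level de-duplication shows why the second and subsequent replication passes send so little data. The `dedup_send` helper below is hypothetical: it splits the data into fixed-size chunks and sends only a 32-byte hash reference for any chunk the remote site already holds:

```python
import hashlib

def dedup_send(data, chunk_size, remote_index):
    """Return the number of bytes that must cross the WAN when chunks the
    remote site already holds are replaced by a 32-byte hash reference."""
    bytes_on_wire = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in remote_index:
            bytes_on_wire += 32              # send only the reference
        else:
            bytes_on_wire += len(chunk)      # send the full chunk once
            remote_index.add(digest)         # the remote now holds this chunk
    return bytes_on_wire

# A DR replication scenario: version 2 of a file differs only in its second half
remote = set()
first_pass = dedup_send(b"A" * 4096 + b"B" * 4096, 1024, remote)
second_pass = dedup_send(b"A" * 4096 + b"C" * 4096, 1024, remote)
print(first_pass, second_pass)  # the second sync sends far fewer bytes
```

Production de-duplication engines use variable-size chunking and much richer indexes, but the principle is the same: redundant data is referenced, not resent.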

The other use case mentioned in the previous blog was supporting a chatty protocol such as the Common Internet File System (CIFS). A chatty protocol requires hundreds of round trips to complete a transaction. For the sake of example, assume that a business transaction requires 250 round trips. If that transaction takes place over a WAN with 60 ms of round-trip delay, the chatty nature of the transaction adds 15 seconds of delay, which would cause users to complain bitterly. Adding bandwidth will add cost, but it won't improve performance. The WAN optimization functionality referred to as spoofing includes several techniques that overcome the impact of chatty protocols and reduce the overall transaction time.
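The arithmetic behind that example is worth making explicit, because it shows why the delay is immune to added bandwidth (the figure of 5 surviving round trips after spoofing is an illustrative assumption):

```python
def protocol_delay_ms(round_trips, rtt_ms):
    """Delay contributed purely by protocol chattiness; bandwidth plays no part."""
    return round_trips * rtt_ms

# The article's example: 250 round trips over a WAN with 60 ms RTT
print(protocol_delay_ms(250, 60) / 1000, "seconds")  # 15.0 seconds

# If spoofing answers most of those round trips locally and (assume) only
# 5 still cross the WAN, the added delay all but disappears
print(protocol_delay_ms(5, 60) / 1000, "seconds")    # 0.3 seconds
```

Every one of those 250 round trips costs a full RTT regardless of link speed, which is why only reducing the number of WAN round trips, not widening the pipe, fixes a chatty protocol.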


Network organizations can take comfort in knowing that virtually any SD-WAN solution they implement is likely to make at least a modest improvement in the performance of some use cases. However, network organizations that are thinking of adopting an SD-WAN solution and that want to make a more significant improvement across a broad range of use cases should closely analyze the potential solutions. They should look toward SD-WAN solutions that support advanced functionality such as FEC and POC and that also offer integrated WAN optimization functionality such as de-duplication and spoofing on an as-needed basis.

[1] M. Mathis, J. Semke, J. Mahdavi & T. Ott, "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm," Computer Communication Review, 27(3), July 1997

About the author
Jim Metzler
Jim has a broad background in the IT industry. This includes serving as a software engineer, an engineering manager for high-speed data services for a major network service provider, a product manager for network hardware, a network manager at two Fortune 500 companies, and the principal of a consulting organization. In addition, Jim has created software tools for designing customer networks for a major network service provider and directed and performed market research at a major industry analyst firm. Jim’s current interests include both cloud networking and application and service delivery. Jim has a Ph.D. in Mathematics from Boston University.