Data Acceleration “Worst” Practices

I am constantly getting emails with “Best Practice” advice — for everything from traveling to IT. Well, it got me thinking: why doesn’t anyone provide “Worst Practice” advice? Can’t we all learn from other people’s mistakes? So here goes: my top 5 list of misconceptions and things to avoid in the data acceleration space:

Misconception #1: More bandwidth = better application performance

When it comes to the WAN, people love to talk about bandwidth. Given its cost, I can see why. But bandwidth is only one small piece of the “WAN performance” puzzle. Latency due to distance and packet loss due to congestion are just as critical to determining “effective throughput” over a WAN, which is a key measurement for the performance of many applications. Here is a throughput calculator that shows what I mean.
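
To illustrate the same point, here is a minimal back-of-the-envelope sketch using the widely cited Mathis approximation for a single TCP flow. The MSS, RTT, and loss figures are hypothetical examples, not numbers from the linked calculator:

```python
# Rough single-flow TCP throughput cap using the Mathis approximation:
#   throughput ~= MSS / (RTT * sqrt(loss))
# This deliberately ignores window scaling, queuing, and other real-world
# effects; it only shows how latency and loss cap throughput regardless
# of how big the pipe is.
import math

def tcp_throughput_mbps(mss_bytes=1460, rtt_ms=80, loss_rate=0.001):
    """Approximate ceiling for one TCP flow, in Mbps."""
    rtt_s = rtt_ms / 1000.0
    bits_per_sec = (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))
    return bits_per_sec / 1_000_000

# A coast-to-coast link with 80 ms RTT and 0.1% loss tops out around
# 4.6 Mbps per flow, whether the circuit is 10 Mbps or 1 Gbps.
print(f"{tcp_throughput_mbps():.1f} Mbps")
```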

As WANs grow larger, bandwidth becomes less and less important. It is no longer always the “short pole in the tent,” i.e. the throughput bottleneck. This is especially true as MPLS and the public Internet become more prevalent, because these shared networks introduce a whole host of new packet delivery issues that can really muck with the performance of your critical applications.

Ask yourself, “What am I trying to achieve with data acceleration?” If the primary goal is to lower ongoing telco costs, then bandwidth reduction should be your first and foremost concern. But if your goal is to improve the performance of certain applications and/or ensure the success of certain projects, don’t assume that more bandwidth is always the answer.

Misconception #2: Building a business case is all about telco savings

Oftentimes, when people try to calculate the business case for data acceleration, they gravitate toward bandwidth savings. This is an easy calculation to make, and in some cases it offers a viable metric. If you can avoid or delay an upgrade that would cost X dollars per month from a service provider, and data acceleration costs Y dollars as an upfront investment, calculating a Return on Investment (ROI) is pretty easy: Y / X = months to pay back the investment.
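
That payback math is simple enough to sketch in a few lines. The dollar figures below are hypothetical placeholders:

```python
# Payback period for a bandwidth-avoidance business case.
# X = monthly telco spend avoided, Y = upfront cost of data acceleration.
def payback_months(upfront_cost, monthly_savings):
    """Months until the upfront investment pays for itself."""
    return upfront_cost / monthly_savings

# e.g. a $60,000 upfront purchase that avoids a $5,000/month
# circuit upgrade pays back in 12 months.
print(payback_months(60_000, 5_000))  # -> 12.0
```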

But bandwidth keeps getting cheaper, which makes the above ROI less and less attractive. And as pointed out in #1 above, bandwidth is not the only thing that impacts application performance. WAN quality, network congestion, and distance play just as important a role in application performance as WAN capacity.

So, to calculate a business case you must look at the big picture. What strategic project are you working on that requires a high-performing WAN? Remote data replication? Data center consolidation? Server/storage centralization? Cloud? VDI? What is the cost of any of these projects failing? That is the true benefit of data acceleration — it ensures the success of strategic projects. How do you put a price on that? Stay tuned — we have some business case tools coming that will help.

Misconception #3: I have a 0% SLA, so I don’t have packet loss

Even when the physical layer of a WAN is error-free, some technologies and provisioning practices still lead to packet loss at the network layer. In fact, it is not unusual to see network packet loss rates as high as 8% in some networks. When this type of loss is coupled with high latency and the retransmission and congestion avoidance behavior inherent to the Transmission Control Protocol (TCP), it is not surprising that application performance suffers across a WAN.

Some telcos will offer a 0% Service Level Agreement (SLA) to allay concerns over packet loss. The problem, though, is that they base this SLA on a monthly average. You can have measurable loss for a full business week (or two) and still maintain a monthly average of 0%. For example, in a 30-day month you can have 35 hours of 0.1% loss and still meet a 0% SLA.
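
The arithmetic behind that example is worth spelling out. Assuming the provider averages loss over the whole month and reports it rounded to one decimal place (a hypothetical reporting policy), the sketch looks like this:

```python
# How 35 hours of 0.1% loss can still "meet" a 0% monthly-average SLA.
hours_in_month = 30 * 24          # 720 hours
lossy_hours = 35
loss_during_event = 0.1           # percent loss while the problem lasts

monthly_avg = loss_during_event * lossy_hours / hours_in_month
print(f"true monthly average: {monthly_avg:.4f}%")        # ~0.0049%
print(f"reported (1 decimal): {round(monthly_avg, 1)}%")  # 0.0% -- SLA met
```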

The first step to understanding (and fixing) packet loss is to accurately measure it. Many enterprises do this using ICMP pings, sending one ping per second between hosts and counting how many requests are lost. Unfortunately, the “ping method” of analyzing loss is prone to error because of statistical limitations. For example, let’s assume we are dealing with a WAN with 0.1% average packet loss, which is common in MPLS networks. This network is expected to lose 1 in every 1,000 packets, which equates to one dropped packet roughly every 17 minutes at one ping per second. Since a good statistical measurement requires at least 40 data points to be valid, a ping test would have to run for roughly 12 hours, non-stop, to produce an accurate measurement of loss. Further complicating matters, fluctuations in the loss level over that period will skew the results and invalidate the test. In other words, ping measurements are only valid when performed over a long period during which packet loss is constant, which rarely occurs.
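
A quick simulation makes the point: against a link with a true loss rate of 0.1%, short ping runs produce wildly different answers. This is only an illustrative sketch, assuming one ping per second:

```python
# Why short ping tests mis-measure low loss rates: a toy simulation.
# Assumes one ping per second against a link whose true loss rate is 0.1%.
import random

TRUE_LOSS = 0.001  # 0.1%

def ping_test(duration_seconds):
    """Measured loss rate from a single ping run of the given length."""
    drops = sum(1 for _ in range(duration_seconds) if random.random() < TRUE_LOSS)
    return drops / duration_seconds

random.seed(1)  # fixed seed so the sketch is repeatable
for minutes in (5, 30, 60):
    runs = [ping_test(minutes * 60) for _ in range(5)]
    print(f"{minutes:>3}-minute runs:", [f"{r * 100:.2f}%" for r in runs])

# Short runs typically report either 0% or a figure several times the true
# rate; only very long runs (with loss held constant) converge on 0.1%.
```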

The best way to accurately detect packet loss is to monitor and track every packet going over a WAN in real-time. Then, of course, you need tools in place to fix the loss when it occurs. To learn more, here is a great whitepaper on how to accurately detect and correct packet loss.

Misconception #4: Application plugins — fix or feature?

Some data acceleration vendors tout a “long list” of application plugins as a competitive advantage. The fact of the matter, though, is that only a handful of these plugins actually optimize the application itself. The rest were created to overcome deficiencies in those vendors’ own architectures. In other words, without plugins these solutions are forced to bypass some traffic entirely; with the plugins, they can at least apply their basic optimization techniques to that traffic.

It’s like Boeing coming out with “special software” to cool lithium ion batteries. Is that a fix or a feature?

In contrast, Silver Peak does not require any special plugins to optimize traffic. Anything that runs over IP is eligible for our full suite of network optimization techniques. This includes deduplication and TCP acceleration, as well as QoS, traffic shaping, and packet loss/order correction. No plugins are required, which enables seamless deployment and operations, and future-proofs your WAN for the best ROI.

Misconception #5: Virtual cannot scale

A couple of years ago this was true — virtual WAN optimization software could not scale as high as physical devices. That is why most vendors only came out with virtual WAN optimization solutions for small locations, i.e. 45 Mbps and below.

But this is no longer the case. The increased use of virtualization and the presence of underutilized computing resources within the network have enabled companies like Silver Peak to introduce very high capacity virtual offerings. In fact, there are large companies, like this entertainment provider, that are using virtual WAN optimization on 10 Gbps WAN backbones.

Also, there was a time when VMware was the only hypervisor capable of supporting these high-capacity environments. But even that has changed. Check out this cool video that shows multi-Gbps WAN acceleration in a Microsoft Hyper-V environment.

Virtual isn’t for everyone. In some situations physical devices make perfect sense. But these days, it is increasingly rare to see performance as the deciding factor for choosing one over the other.

Image credit: pasukaru76 (flickr)