Concurrent Connections: The Secret to Buying Scalable IT Gear

IT folk love numbers, so it’s somewhat understandable that many buyers may be swept away by the gigabits of WAN capacity a device can saturate and forget about another equally important measurement – connection counts. The number of concurrent connections has grown and continues to grow due to the shift towards Web applications, coupled with the characteristics of mainstream business applications. As such, it’s even more important that equipment connecting to the WAN can accommodate that growth.

Back in the ‘90s, peer-to-peer applications were the first to strike fear in the hearts of IT pros by spawning dozens of concurrent connections to mitigate the problems of the WAN. At the time, most applications were still using a single connection, whose maximum throughput was limited by the interaction of packet loss, bandwidth, and the WAN’s latency. A connection across a 10Mbps LAN, for example, might be able to consume 9 Mbps or more, but on a coast-to-coast link (about 100ms of latency) throughput drops to under 1.65 Mbps when packet loss reaches just 0.5 percent, very typical for an MPLS or IP VPN network. Increasing the number of connections is one of the best ways to avoid this problem (albeit with potential ramifications for the server and other applications sharing the network).
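The single-connection ceiling in that example can be reproduced with the well-known Mathis et al. approximation for TCP throughput, rate ≈ MSS / (RTT × √loss). A quick sketch (assuming a standard 1460-byte MSS; the function name is mine, not from any particular tool):

```python
import math

def max_tcp_throughput_mbps(mss_bytes=1460, rtt_s=0.100, loss=0.005):
    """Approximate upper bound on a single TCP flow (Mathis approximation), in Mbps."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss)) / 1e6

# Coast-to-coast link: ~100 ms round-trip latency, 0.5% packet loss
print(round(max_tcp_throughput_mbps(), 2))  # ~1.65 Mbps
```

Note that raising link bandwidth does not appear in the formula at all: once latency and loss dominate, a single flow cannot go faster, which is exactly why applications spawn more connections.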

Today, the physics of the WAN haven’t changed. Single WAN connections are still throughput-limited, and to deal with the problem even mainstream business applications are spawning multiple connections, albeit perhaps on a smaller scale than P2P applications. (See this excellent paper from Peter Sevcik and Rebecca Wetzel at Netforecast explaining how to calculate WAN bandwidth.)

Two years ago, a study into Internet usage found that on average, users generated 10 to 15 concurrent flows. Today, you can only hope that your users max out at 15 flows per client. In a branch office, where users access Exchange servers in the data centers while running a few Web applications, you can expect to see over 20 connections per user. And keep in mind that a connection consists of at least two flows.

The default settings on Microsoft Exchange 2010 alone account for 16 connections per user, says Shamil Fernando, Silver Peak’s systems engineering director of Asia-Pacific and Japan. And as we move to the cloud, Web-based applications become even more important. They can generate at least two concurrent connections per page and up to six per host process, Damon Ennis, Silver Peak’s vice president of product management, told me the other day. As I write this, I have 23 pages open; I guess that makes me an IT nightmare.
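As a rough illustration of how those per-application figures compound (the number of open web apps and the office sizes below are assumptions for the sake of the arithmetic, not measurements):

```python
# Back-of-the-envelope per-user connection budget using the
# figures quoted above.
exchange_per_user = 16   # Exchange 2010 default settings
web_apps_open = 3        # assumed: a few web apps per user
conns_per_web_app = 2    # "at least two concurrent connections per page"

per_user = exchange_per_user + web_apps_open * conns_per_web_app
print(per_user)  # 22 connections per user

# Scale up to plausible office sizes
for users in (20, 50, 100):
    print(f"{users} users -> {users * per_user} concurrent connections")
```

Even with conservative assumptions, a 20-person branch lands in the hundreds of connections before counting background traffic.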

The thing is, equipment suppliers can get skittish around connection counts. They know that supporting the additional connections often requires more memory, disk and processing, adding to equipment costs. So some vendors play games, particularly in the branch, by winning customers with less expensive, undersized equipment. Enterprises are then forced to upgrade to more expensive appliances.

Today, connection counts are growing so rapidly that enterprises can no longer afford to play the innocent victim of these bait-and-switch games. Doing so will invariably lock organizations into costly hardware upgrades, downtime and dissatisfied users until IT upgrades its WAN optimization solutions, firewalls, and Network Address Translators (NATs).

You can fight this trend by getting a more precise picture of an office’s connection requirements and then buying equipment to scale. Run netstat on a Windows command line to see the number of concurrent connections. I know my system typically runs around 40 connections. And for many appliances, it’s not just active connections that matter. Even connections carrying no traffic can count against the appliance’s connection limit. And as one user found out here, it pays to be conservative. Often connection growth exceeds even vendor expectations.
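On Windows, `netstat -an | find /c "ESTABLISHED"` gives a quick count of established TCP connections. For a Linux host, a rough sketch can read the same information straight from the kernel (parsing /proc/net/tcp, where state 01 means ESTABLISHED; this is an illustration for spot checks, not a monitoring tool):

```python
# Count established TCP connections by parsing /proc/net/tcp and
# /proc/net/tcp6 (Linux only; returns 0 elsewhere). The fourth
# whitespace-separated field is the socket state; 01 = ESTABLISHED.
from pathlib import Path

def established_connections():
    count = 0
    for table in ("/proc/net/tcp", "/proc/net/tcp6"):
        p = Path(table)
        if not p.exists():
            continue
        for line in p.read_text().splitlines()[1:]:  # skip header row
            fields = line.split()
            if len(fields) > 3 and fields[3] == "01":
                count += 1
    return count

print(established_connections())
```

Run it a few times during the busiest part of the day, multiply by your headcount, and you have a defensible floor for the connection capacity to demand from a vendor.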

In an office of 20 users, hundreds or even thousands of concurrent connections are entirely realistic. Network appliances must be able to scale appropriately to accommodate this capacity today – and growth for tomorrow. Failure to do so will invariably result in spending more on infrastructure in the long run.

This article was originally published by Network World.