Silver Peak network efficiency

Optimizing the Data Center

At a time when most data centers are bursting at the seams – short on space, capacity, performance and power – more is being demanded of them. With the gap between budgets and resources on one side and existing and new demands on the other steadily widening, organizations have to do more with less. One way to do so is to increase the efficiency of the network.

According to two new studies from InformationWeek, “2012 State of the Data Center” and “2012 IT Spending Priorities Survey,” constrained budgets are the number one concern, and most budgets will decrease or stay flat in 2012. Only 5% of respondents expect demand for data center resources to decrease, while over half (58%) expect demand to increase somewhat and 15% expect it to increase significantly, by more than 25%. The top requirement for applications was reliability and availability, at 74%, followed by security and data protection (57%) and flexibility to meet new business needs (40%).

On the spending side, upgrading the networking infrastructure came in only fourth among the top five objectives, behind improving security, virtualization and upgrading storage, and ahead of building or enhancing a big data/business intelligence/decision-support system. Of the 39% who expect budgets to increase, 55% anticipate growth of up to 9%, 28% expect 10-14%, and 17% expect spending to jump 15% or more.

Unlike 2012, which saw data center equipment upgrades surge 59%, 2011 growth was a more modest 9%, reports Infonetics Research. However, the company is forecasting double-digit revenue increases for the overall data center equipment market this year and next, driven primarily by increased investment in data center and cloud service offerings. Data center network equipment — data center Ethernet switches, application delivery controllers (ADCs) and WAN optimization appliances — grew 9% sequentially in 4Q11, to $2.36 billion, and was up 15% year-over-year.

In a recent blog post entitled “Data Center To Data Center WAN Traffic Optimization,” Storage Switzerland senior analyst Eric Slack noted that large data center-to-data center WAN traffic, or “Big Traffic,” is creating a problem for branch WAN optimization tools, and that it may be time for a different approach.

Slack notes that Big Traffic, i.e., remote backups, replication and machine-to-machine transfers that are often ‘bursty’ in nature, involves a different data profile, one that typically has fewer connections but much higher bandwidth per connection. Because the time required to complete a data transfer is the key success metric, latency is the problem that needs to be solved, said Slack.
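
To see why latency rather than raw link speed tends to govern completion time for these high-bandwidth-per-connection flows, recall that a single TCP connection can move at most one window of data per round trip. The sketch below is a back-of-the-envelope illustration of that limit only, not a model of any vendor's appliance; the link speed, window size, RTT values and dataset size are all assumptions chosen for the example.

```python
# Illustrative only: how round-trip latency caps a single TCP connection,
# no matter how fat the pipe is. All numbers below are assumptions.

def tcp_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Upper bound on one TCP flow: one window of data per round trip."""
    return (window_bytes * 8) / rtt_s

LINK_BPS = 1_000_000_000        # assumed 1 Gbps data-center-to-data-center link
WINDOW = 64 * 1024              # assumed 64 KB TCP window (no window scaling)
DATASET_BYTES = 500 * 1024**3   # assumed 500 GB replication job

for rtt_ms in (1, 20, 80):      # LAN-like vs. regional vs. cross-country RTT
    bound = tcp_throughput_bps(WINDOW, rtt_ms / 1000)
    effective = min(bound, LINK_BPS)
    hours = DATASET_BYTES * 8 / effective / 3600
    print(f"RTT {rtt_ms:3d} ms -> ~{effective / 1e6:8.1f} Mbps "
          f"-> ~{hours:6.1f} h to move 500 GB")
```

On these assumed numbers, the same 1 Gbps link delivers hundreds of megabits per second at 1 ms of latency but only a few dozen at 20-80 ms, which is why transfer time, not bandwidth, becomes the metric that matters.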

Effectively handling Big Traffic requires a WAN optimization solution designed to reduce latency. Not only do many WAN optimization devices fail to address this issue, but the data reduction techniques they rely on to save bandwidth can actually increase latency. Scalability is another issue: because Big Traffic involves much higher bandwidth per connection than branch WAN traffic, deployments typically end up with multiple WAN optimization devices connected through load balancers, or with complex upstream policies to channel data to each box.
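
The scaling problem is easy to see if you consider how a typical flow-hashing load balancer spreads traffic across a pool of appliances: thousands of small branch flows distribute evenly, but each large replication flow hashes to exactly one box, so adding appliances does little for those flows. The sketch below is a simplified, hypothetical illustration of 5-tuple hashing; the addresses, ports and appliance count are assumptions, not any vendor's implementation.

```python
# Simplified illustration: 5-tuple hashing spreads many small flows evenly,
# but each large flow still lands on a single appliance. All values assumed.
import hashlib

APPLIANCES = 4  # assumed pool of WAN optimization appliances

def pick_appliance(src, sport, dst, dport, proto="tcp"):
    """Hash the flow's 5-tuple to one appliance, as a flow-aware LB would."""
    key = f"{src}:{sport}-{dst}:{dport}/{proto}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % APPLIANCES

# Thousands of small branch flows spread out nicely across the pool...
small = [pick_appliance("10.1.0.5", 40000 + i, "10.2.0.9", 443) for i in range(2000)]
print("small flows per appliance:", [small.count(a) for a in range(APPLIANCES)])

# ...but one big replication flow is pinned to a single appliance,
# no matter how many boxes sit behind the load balancer.
big = pick_appliance("10.1.0.10", 51000, "10.2.0.20", 3260)
print("big replication flow pinned to appliance", big)
```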

One approach to optimizing the network is WAN virtualization, blogged Andy Gottlieb. “Where WAN Optimization is good at reducing bandwidth consumption, speeding up multiple transfers of the same data and in particular speeding up remote file access, WAN Virtualization…provides multipath multiplexing for both aggregating bandwidth and delivering reliability, with some implementations offering sub-second reaction, dynamically engineering around network trouble – outright link failures, and also high packet loss or excess latency – as it occurs.”
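
As a rough sketch of the idea Gottlieb describes, a WAN virtualization layer continuously measures loss and latency on each underlying link and steers traffic onto the healthy paths, reacting within a fraction of a second when a path degrades. The code below is a conceptual illustration only; the link names, probe results and thresholds are invented for the example and do not reflect any specific product.

```python
# Conceptual sketch of WAN virtualization path selection: score each link by
# measured latency and loss, and engineer around degraded paths quickly.
# Link names, measurements and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class LinkStats:
    name: str
    latency_ms: float   # latest measured round-trip latency
    loss_pct: float     # latest measured packet loss
    up: bool = True

MAX_LOSS_PCT = 2.0      # assumed threshold: treat lossier links as unusable
MAX_LATENCY_MS = 150.0  # assumed threshold: treat slower links as unusable

def usable(link: LinkStats) -> bool:
    return link.up and link.loss_pct <= MAX_LOSS_PCT and link.latency_ms <= MAX_LATENCY_MS

def choose_paths(links: list[LinkStats]) -> list[LinkStats]:
    """Return all healthy links so traffic can be multiplexed across them."""
    healthy = [l for l in links if usable(l)]
    # Prefer lower latency first, then lower loss.
    return sorted(healthy, key=lambda l: (l.latency_ms, l.loss_pct))

links = [
    LinkStats("mpls-1", latency_ms=35.0, loss_pct=0.1),
    LinkStats("inet-1", latency_ms=55.0, loss_pct=0.4),
    LinkStats("inet-2", latency_ms=60.0, loss_pct=6.5),   # brown-out: high loss
]

active = choose_paths(links)
print("sending across:", [l.name for l in active])  # inet-2 is routed around
```

Run on a sub-second probe cycle, this kind of loop is what lets traffic be aggregated across multiple paths while outright failures, high loss or excess latency are detected and avoided as they occur.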

Regardless of which path you choose, network optimization can pay big dividends when you need more from your data center.

About the author
Steve Wexler
Steve is a proficient IT journalist, editor, publisher, and marketing communications professional. For the past two-plus decades, he has worked for the world’s leading high-technology publishers. Currently a contributor to Network Computing, Steve has served as editor and reporter for the Canadian affiliates of IDG and CMP, as well as Ziff Davis and UBM in the U.S. His strong knowledge of computers and networking technology complements his understanding of what’s important to the builders, sellers and buyers of IT products and services.