Everyone today is talking about “the cloud,” but most of the discussion centers on putting a bunch of machines in one place to achieve elastic compute capacity. What hasn’t received as much attention are the “networking” aspects of cloud deployments. And when networking does get attention, it tends to be about networking within the data center. People forget that getting data into and out of the data center is just as important.
How Much Data is Flowing into and Out of the Data Center?
I recently saw an interesting white paper from Cisco that suggests that of the zettabytes (yes!) of networking traffic associated with clouds, about 76% of the data is LAN traffic within the data center and 24% is WAN traffic to and from the data center. Given the cost differential between LAN bandwidth and WAN bandwidth, it’s clear that the latter will dominate cloud networking costs.
The study further splits that 24% into two categories: datacenter-to-user (about 17% of total data center traffic) and datacenter-to-datacenter (7% of total data center traffic). The datacenter-to-user component is important because it dominates the end-user experience. Where quality and consistency of the end-user experience are key — in VDI initiatives, for instance — WAN optimization solutions can be used to provide private-line-like quality of service (QoS) over standard internet access links.
Data Replication Drives Datacenter-to-Datacenter Traffic
The datacenter-to-datacenter component is often overlooked, but it’s one where Silver Peak is getting a lot of traction. I’m somewhat surprised that this isn’t higher up on the enterprise radar because, as I see it, it’s one of the most critical factors in cloud adoption. From an enterprise point of view, if you take all of your enterprise data and upload it into some third-party cloud, you become completely vulnerable to that particular cloud provider suffering an outage. We have already witnessed this first-hand with Amazon and others, and it ultimately shows us that none of the cloud providers are perfect.
With this in mind, it is essential to replicate your data out of the cloud to some other cloud — or to a second site that you own. Even if that second site doesn’t provide full operational capabilities, at least it holds a full replica of your data. And if something terrible happened to your service provider and it went offline for two days, you would still have access to your own data. The cloud providers might dispute this and claim they offer some form of resiliency inside their offering. While that might be acceptable for a small business, it is not acceptable for an enterprise to have all its eggs in one basket.
It is this requirement for data replication that drives the datacenter-to-datacenter component. This is a WAN optimization use case that Silver Peak pioneered, and one where the ROI is a no-brainer. Our technology not only saves money by reducing the amount of data that needs to be transferred — using deduplication and compression — it also allows customers to use cheaper, lower-quality bandwidth, compensating with TCP enhancements and forward error correction techniques.
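To make the deduplication-plus-compression idea concrete, here is a minimal sketch (not Silver Peak’s actual implementation, which uses proprietary techniques and variable-size chunking). It splits a replication stream into fixed-size blocks, sends a short fingerprint reference for any block the far side has already seen, and compresses blocks that are genuinely new. The block size, fingerprint length, and sample payload are all illustrative assumptions:

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # illustrative; real WAN optimizers typically chunk adaptively

def dedupe_and_compress(data: bytes, seen: dict) -> int:
    """Return the number of bytes that would actually cross the WAN.

    Blocks already recorded in `seen` are replaced by a 16-byte
    fingerprint reference; new blocks are compressed before transfer.
    """
    wire_bytes = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()[:16]
        if digest in seen:
            wire_bytes += len(digest)                # send only the reference
        else:
            seen[digest] = True
            wire_bytes += len(zlib.compress(block))  # send compressed payload
    return wire_bytes

# A replication stream with heavy repetition, e.g. a nightly sync of
# mostly-unchanged data: the second pass costs a fraction of the first.
seen = {}
payload = b"customer-record-" * 100000        # ~1.6 MB of repetitive data
first = dedupe_and_compress(payload, seen)
second = dedupe_and_compress(payload, seen)   # every block already seen
print(f"raw: {len(payload)}  first pass: {first}  second pass: {second}")
```

In this toy run, both passes move only a few kilobytes instead of ~1.6 MB, because repeated blocks collapse to fingerprints and the remaining blocks compress well. That bytes-on-the-wire reduction is what makes replicating between data centers over ordinary links economical.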