Nov 28, 2014
More data and applications are being moved into the cloud than ever before. If you believe the marketing hype, the cloud will solve all of your problems, for 1/10th the cost, in half the time, and make you a cup of coffee in the process. But what happens when you move data protection (backup, snapshots, and disaster recovery) to the cloud?
Moving data protection to the cloud can help reduce costs and recovery times, but there are three things you need to pay attention to. Get any of these three wrong and you are going to have a bad time.
BACKUP – The first backup is always a challenge, even when it is performed locally on-site to tape or disk. When the first backup is going to a cloud service provider, the limitations of the network become painfully obvious. Instead of near-zero latency and a reliable, lossless network, you face the Internet. Fluctuating bandwidth and packet loss are a way of life on the Internet. It is one thing to have your streaming movie interrupted; it is another to have your backup or replication fail because of it. The first backup, replication, or snapshot is the foundation that your data protection is built upon, so it is imperative that it is reliable.
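To put the problem in perspective, here is a back-of-the-envelope sketch of how long a first (seed) backup takes over a WAN link. The data size, link speed, and efficiency factor below are illustrative assumptions, not figures from any particular provider:

```python
# Rough estimate of how long a first (seed) backup takes over a WAN link.
# All numbers are illustrative assumptions, not measurements.

def seed_backup_hours(data_tb, link_mbps, efficiency=0.7):
    """Hours to push data_tb terabytes over a link_mbps line, assuming
    only `efficiency` of the nominal bandwidth is actually usable
    (protocol overhead, packet loss, competing traffic)."""
    bits = data_tb * 1e12 * 8              # decimal terabytes -> bits
    usable_bps = link_mbps * 1e6 * efficiency
    return bits / usable_bps / 3600

# Example: 10 TB over a 100 Mbps line at 70% efficiency
print(round(seed_backup_hours(10, 100), 1))  # → 317.5 hours, roughly two weeks
```

Two weeks of continuous, uninterrupted transfer is a long time to hope the Internet behaves, which is exactly why the reliability of that first backup matters so much.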
ACCESS – Once your data is in the cloud, you need to be able to use it if you have a disaster. Keep in mind that a disaster doesn’t mean a smoking crater or floodwaters. A disaster can be as simple as a server or storage array that stops functioning. Being able to recover servers and applications in the cloud is a key value proposition of these services. Having everything recoverable in the cloud cuts down on restore time, and it also reduces the amount of hardware that has to sit idle waiting for a disaster to strike. The problem is that once everything is up and running again in the cloud, your users will have to access their data and applications over the Internet. Yes, it is better than having no access to data and applications at all, but the experience is going to be very frustrating and productivity will suffer. Users are now fighting for bandwidth with cat videos and Uncle Larry’s vacation photos. Providing some access isn’t enough in our always-on, rapid-response world. Users need to be able to work at the same speed regardless of where the data or application is located.
If your users lose productivity for more than 72 hours, did you actually survive the disaster or just prolong it?
RESTORE – We have covered the first backup to the cloud and recovery in the cloud; now we have to move data back to the primary data center from the service provider once everything is repaired. Like the first backup, a restore from the cloud is going to be painful if you have a lot of data to move. Even more challenging, most companies today have few, if any, downtime or maintenance windows to work with. Running in the cloud can become expensive if you also have a full data center waiting for data to be restored. Plus, most providers charge a premium while recovered systems are running in the cloud, increasing costs at a time when most businesses are just trying to survive.
Bandwidth is the first thing that most companies look to when trying to solve these problems, but more bandwidth is seldom the solution. The new WAN is a different animal, and the old solutions don’t hold up anymore.