
How Cloud is Changing the Face of Disaster Recovery

The advent of cloud infrastructure, both public and private, has completely changed the way the data center operates today. It has increased data center agility, reliability and scalability, all while lowering costs. There is one area, however, where the cloud can play a vital role that almost no one is taking advantage of today: availability.


Guest article by Sash Sunkara, CEO, RackWare Inc.


Data center workloads are typically divided into two camps: (1) critical workloads and (2) non-critical workloads. Critical workloads, the ones that can’t tolerate even a few minutes of downtime, are usually protected with real-time replication solutions. These solutions require a duplicate system that acts as a recovery instance should the production workload experience an outage. Non-critical workloads can tolerate a wide range of outage times and are typically unprotected, or backed up with image or tape archive solutions. Cloud technology has introduced the possibility of an intermediate solution, where non-critical workloads get the benefits of high-availability features such as failover without the cost and complexity of traditional replication setups.

There are four ways that cloud infrastructure can be used to improve your data center’s availability today:

  1. Prevent Downtime by Reducing Resource Contention

While unplanned downtime occurs for many reasons, one common cause is processes fighting over resources (resource contention). As businesses grow, demand for resources usually grows proportionally. Data center workloads are typically not architected to handle variable demand, and outages can occur when systems cannot handle peak loads. That’s where cloud scaling and cloud bursting come into play. The cloud gives data center managers a way to accommodate drastically changing demand by allowing additional workload instances to be created in the cloud easily and automatically, without changing or customizing the applications. This is especially true of public clouds, since data centers can “burst” or extend their infrastructure into a public cloud when needed. Automatically scaling out to the cloud alleviates resource contention, ensures that resources are available to absorb spikes in demand, prevents downtime, and increases overall availability.
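To make the bursting idea concrete, here is a minimal sketch of a threshold-based burst controller, not any particular vendor’s implementation. The provision_burst_instance and release_burst_instance functions are hypothetical placeholders for whatever provider API a given data center uses, and the local load-average reading stands in for real cluster-wide monitoring:

```python
import os
import time

SCALE_OUT_THRESHOLD = 0.80   # burst into the cloud above 80% sustained load
SCALE_IN_THRESHOLD = 0.40    # release burst capacity below 40%
SUSTAINED_INTERVALS = 3      # require N consecutive readings before acting

def read_utilization() -> float:
    """1-minute load average normalized by core count (Unix only).
    A real deployment would aggregate metrics across the cluster."""
    return os.getloadavg()[0] / os.cpu_count()

def provision_burst_instance() -> str:
    """Placeholder for a provider API call that clones the workload
    image into the public cloud and returns an instance ID."""
    raise NotImplementedError

def release_burst_instance(instance_id: str) -> None:
    """Placeholder for parking or terminating a burst instance."""
    raise NotImplementedError

def burst_controller(poll_seconds: int = 60) -> None:
    burst_instances: list[str] = []
    high = low = 0
    while True:
        load = read_utilization()
        high = high + 1 if load > SCALE_OUT_THRESHOLD else 0
        low = low + 1 if load < SCALE_IN_THRESHOLD else 0
        if high >= SUSTAINED_INTERVALS:          # sustained pressure: scale out
            burst_instances.append(provision_burst_instance())
            high = 0
        elif low >= SUSTAINED_INTERVALS and burst_instances:
            release_burst_instance(burst_instances.pop())  # sustained idle: scale in
            low = 0
        time.sleep(poll_seconds)
```

Requiring several consecutive readings before acting adds hysteresis, so a brief spike does not trigger a burst and a brief lull does not immediately tear one down.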

  2. Replicate Workloads into the Cloud to Create Asymmetric “Hot Backups”

Cloud infrastructure has created the ability to clone the complete workload stack (OS, applications, data). When combined with a technology that can decouple the workload stack from the underlying infrastructure, this “portable” workload can be imported into public or private clouds. If the production workload goes down, sessions can reconnect to the cloud instance, where processing can resume, even if the production and recovery workloads run on different infrastructures. The cloud lets data centers move beyond traditional ‘cold’ backups, where only data is protected and the OS and applications must be restored manually before data can be restored. The asymmetric “hot backup” is made possible by the cloud because every workload stored as an image can be booted into a live, functioning virtual machine that takes over while the production server is being repaired. This differs from traditional replication solutions, which require a duplicate set of hardware to take over should the production workload fail. Changes to the production instance are replicated to the recovery instance periodically to keep it up to date. The cloud also adds flexibility that saves on costs: the hot backup can be “parked” when not in use, or a smaller instance can be provisioned.
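As an illustration of the periodic replication step, the following sketch pushes file-level deltas from a production host to a parked cloud recovery instance using rsync over SSH. The host name and paths are hypothetical, and the recovery VM is assumed to be pre-staged with the same OS and application stack:

```python
import subprocess
import time

RECOVERY_HOST = "recovery.example.com"   # hypothetical cloud recovery instance
SYNC_PATHS = ["/etc", "/var/lib/app"]    # illustrative config and data paths
SYNC_INTERVAL = 15 * 60                  # replicate deltas every 15 minutes

def replicate_once() -> None:
    for path in SYNC_PATHS:
        src = path.rstrip("/") + "/"     # trailing slash: sync contents, not the dir
        # --archive preserves permissions and ownership; --delete keeps the
        # recovery copy an exact mirror; --compress reduces WAN traffic.
        subprocess.run(
            ["rsync", "--archive", "--delete", "--compress",
             src, f"{RECOVERY_HOST}:{path}"],
            check=True,
        )

if __name__ == "__main__":
    while True:
        replicate_once()
        time.sleep(SYNC_INTERVAL)
```

Because only changed blocks cross the wire after the first pass, the recovery instance stays close to current at a fraction of the cost of continuous, real-time replication.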

  3. Introduce “Failover” and “Failback,” Typically Reserved for Critical Workloads

Software replication technology has existed for decades, protecting and replicating in real time between production and recovery workloads. These setups are typically extremely expensive from a software and services perspective, as they often require an identical duplicate recovery setup, doubling infrastructure and maintenance costs. Meanwhile, the rest of the workloads in the data center are under-protected, typically covered only by slow-to-restore image and tape schemes that take days to restore and demand significant manual effort. By automating the switching of users or processes from production to recovery instances, downtime can be reduced by up to 80 percent for the majority of under-protected workloads in the data center.
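A minimal version of that automated switch might look like the sketch below: a TCP health probe watches the production endpoint, and after several consecutive failures traffic is redirected to the recovery instance. The switch_traffic_to function is a placeholder for the actual cutover mechanism, whether a DNS update, a load-balancer change, or a virtual-IP move, and the endpoints are illustrative:

```python
import socket
import time

PRODUCTION = ("prod.example.com", 443)       # hypothetical production endpoint
RECOVERY = ("recovery.example.com", 443)     # hypothetical cloud recovery endpoint
FAILURES_BEFORE_FAILOVER = 3                 # tolerate transient blips

def is_healthy(host: str, port: int, timeout: float = 5.0) -> bool:
    """Probe the endpoint with a plain TCP connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def switch_traffic_to(host: str) -> None:
    """Placeholder: update DNS or a load balancer to point at `host`."""
    raise NotImplementedError

def watch(poll_seconds: int = 30) -> None:
    failures = 0
    while True:
        if is_healthy(*PRODUCTION):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                switch_traffic_to(RECOVERY[0])   # failover to the cloud instance
                break                            # failback runs the same cutover in reverse
        time.sleep(poll_seconds)
```

Failback, once the production server is repaired, is the same cutover in the opposite direction, after replicating any changes accumulated on the recovery instance back to production.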

  4. Use Dissimilar Infrastructure for “Off-Premises” Redundancy

For added protection, data centers should also consider dissimilar cloud infrastructure as part of their disaster recovery strategy. Cloud infrastructure can itself fail, and data centers that require an extra level of protection should replicate workloads off-site to different cloud providers. Physical-to-physical, physical-to-cloud, or cloud-to-cloud replication can offer protection robust enough to survive site-wide denial-of-service attacks, hacking, or natural disasters.
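As a sketch of what cloud-to-cloud redundancy can look like, the snippet below pushes the same workload image to two different providers and records a checksum so each copy can be verified independently. The upload functions are hypothetical stand-ins for two providers’ image-import or object-storage APIs:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum the image in 1 MiB chunks so large files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def upload_to_provider_a(path: Path) -> None:
    """Placeholder: provider A's image-import API."""
    raise NotImplementedError

def upload_to_provider_b(path: Path) -> None:
    """Placeholder: provider B's (dissimilar) image-import API."""
    raise NotImplementedError

def replicate_off_site(image: Path) -> str:
    checksum = sha256_of(image)      # verify each remote copy against this
    upload_to_provider_a(image)
    upload_to_provider_b(image)      # second, independent provider
    return checksum
```

Keeping the copies on independent providers means an outage, compromise, or regional disaster at one cloud does not take the recovery copy down with it.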

For more on the costs of downtime, read the Aberdeen report Preventing Virtual Application Downtime.





Sash Sunkara is chief executive officer and co-founder of RackWare. She is a seasoned technology executive with deep expertise in solutions for data centers in both enterprise and hosting environments that leverage commodity server solutions, server virtualization, and leading-edge storage architectures. Prior to founding RackWare in 2009, Sunkara was vice president of marketing for QLogic’s Network Solutions Division with responsibility for the switch product lines. Previously, she was co-founder and chief business officer at 3Leaf Systems, a venture-backed server virtualization company, where her role spanned marketing, business development, support, and operations. Earlier in her career, Sunkara served as vice president of program management at Brocade Communications, where she was responsible for the execution of the company’s overall roadmap, including both product and strategic initiatives. Sunkara started her career at HP developing networking switches and routers. She holds a BSEE degree from California State University, Sacramento, where she graduated with honors.
