
Jan 21 2013

Why Every Business Needs to Plan for Disaster Recovery

New tools help IT leaders and managers sidestep the risk of doing nothing when it comes to disaster recovery.

Although capital expenditure budgets may be stagnant or growing slowly, enterprise data volumes remain on a sharp upward trajectory.

The latest IDC Digital Universe Study estimates that 1.8 zettabytes, or 1.8 trillion gigabytes, were created last year alone. The study further predicts that data volumes will expand 50 times by 2020.

These numbers represent ever-growing business continuity (BC) and disaster recovery (DR) challenges for IT departments. Should an outage occur, IT managers need access to tested DR technologies, policies and procedures to restore vital production systems. Each gigabyte of data that comes under an organization’s control adds to the complexity of establishing a viable strategy that can be initiated when, not if, it is needed.

Some striking figures suggest that many IT leaders and managers, overwhelmed by the prospect of preparing their systems environments for the direst of inevitabilities, have done little or nothing to prepare.

A full third of the business technology professionals responding to InformationWeek’s 2011 Business Continuity/Disaster Recovery Survey, conducted late last year, reported that their organizations had no formal BC or DR plans in place. Moreover, 10 percent said they weren’t even close to addressing the topic. Why? Continuity projects are seen as too complex by 39 percent of respondents, while 20 percent cite cost.

Yet, BC and DR have become more critical today than even five years ago, says Matt Kimball, a product manager for microprocessor maker AMD. “It’s not just the usual suspects, such as healthcare and financial services, that require BC and DR anymore.”

Fortunately, IT managers now have a wide range of technology options to help tame the daunting complexities of keeping data and systems highly available without busting budgets.

“Having dedicated equipment that matches everything in the production environment but sits unused 90 percent of the time is an expensive proposition,” acknowledges George Ferguson, HP’s marketing manager of continuity services. But now there are new options. “The stars have aligned around virtualization, the cloud model and lower-priced disk storage.”

Define Your Goals

Before any organization can evaluate technology solutions for BC and DR, it must define its high-availability needs.

The Data Center Institute, an IT think tank, defines continuity planning as a strategy that describes the steps to keep operations running in the event of an unforeseen occurrence. DR plans offer a roadmap for returning the organization to partial or normal operation. The institute notes that organizations may need multiple plans to tailor responses to the needs of individual departments, user groups or systems.

Identifying the Right Tools

What technologies should be in the arsenal of a new or updated continuity and DR plan? Industry experts list five important tools:

  1. Cloud-based services
  2. Server virtualization
  3. Uptime and storage
  4. Remote backup
  5. Tape backups

Here’s a detailed look at each of these options:

1. Cloud Coverage

Backup services from third-party cloud providers promise a wide range of cost- and complexity-lowering benefits. Such services provide a quick way to set up offsite resources for replicating data, especially for systems with relatively low volumes of information that can efficiently travel over WAN connections.
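To make that concrete, here is a minimal sketch of pushing a nightly backup archive to an S3-compatible object store using the boto3 client. The bucket, file names and credentials are hypothetical, and boto3 is just one of many client libraries that can do this job.

    # Minimal sketch: replicate a local backup archive to an
    # S3-compatible object store. Bucket and paths are hypothetical;
    # credentials are read from the environment by boto3.
    import boto3

    s3 = boto3.client("s3")

    def upload_backup(archive_path: str, bucket: str, key: str) -> None:
        """Send one backup archive offsite over the WAN."""
        s3.upload_file(archive_path, bucket, key)

    upload_backup("/backups/db-2013-01-21.tar.gz",
                  "dr-offsite-backups",
                  "db/db-2013-01-21.tar.gz")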

Cost savings represent another potential benefit.

“The economics become a no-brainer,” Kimball says. “With the cloud, organizations can pay pennies on the dollar compared to buying and maintaining on-premises recovery solutions.”

2. Server Virtualization

Separating operating systems and applications from physical hardware is at the heart of server virtualization, and this strategy pays off for continuity efforts.

For example, IT managers can create procedures for automatically moving virtual machines (VMs) from any physical devices encountering problems to other servers in the virtual pool that possess enough excess capacity to pick up the slack.
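The placement logic behind such a failover is straightforward: find pool members with enough spare capacity and move the affected VMs to them. The Python sketch below is a toy model, assuming simple Host and VM records of my own invention; in production, the migration step would be a call to the hypervisor’s API.

    # Illustrative failover sketch: move VMs off an unhealthy host to
    # the pool member with the most spare capacity. Reassigning the VM
    # here stands in for a real live-migration API call.
    from dataclasses import dataclass, field

    @dataclass
    class VM:
        name: str
        memory_gb: int

    @dataclass
    class Host:
        name: str
        free_memory_gb: int
        healthy: bool = True
        vms: list = field(default_factory=list)

    def evacuate(failing_host: Host, pool: list[Host]) -> None:
        for vm in list(failing_host.vms):
            # Candidates: healthy hosts with room for this VM.
            candidates = [h for h in pool if h is not failing_host
                          and h.healthy and h.free_memory_gb >= vm.memory_gb]
            if not candidates:
                raise RuntimeError(f"no capacity in pool for {vm.name}")
            target = max(candidates, key=lambda h: h.free_memory_gb)
            failing_host.vms.remove(vm)      # stand-in for live migration
            target.vms.append(vm)
            target.free_memory_gb -= vm.memory_gb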

In addition, systems administrators can schedule virtual environments for real-time or preset backups to recovery sites. Besides setting the desired frequency of backups, organizations can also establish specific backup procedures based on the criticality of the virtualized system.

For example, all of the data associated with a VM may be fully backed up — the traditional choice with physical machines. Or IT managers may save only the changes and updates that occurred since the last backup, which sends far less traffic over the network. A third option? Saving periodic “snapshots” of VMs and sending them to offsite servers.
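One way to picture the trade-off is as a policy table that maps each system’s criticality to a backup mode and frequency. The tiers and intervals in this sketch are illustrative assumptions, not recommendations for any particular product:

    # Illustrative backup policy: map VM criticality to a backup mode
    # and interval. Tier names and intervals are assumptions.
    BACKUP_POLICY = {
        "critical": {"mode": "full",        "every_hours": 4},
        "standard": {"mode": "incremental", "every_hours": 24},   # changes only
        "low":      {"mode": "snapshot",    "every_hours": 168},  # weekly image
    }

    def backup_plan(vm_name: str, criticality: str) -> str:
        policy = BACKUP_POLICY[criticality]
        return (f"{vm_name}: {policy['mode']} backup "
                f"every {policy['every_hours']} hours")

    print(backup_plan("erp-db-01", "critical"))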

Server virtualization can also help avoid the inefficiency of resources sitting unused or underutilized during problem-free times. Administrators can instead designate physical servers for test and development duties but quickly shift VMs over to them should a crisis arise.

3. Uptime and Storage

Storage virtualization aids operations continuity and disaster recovery efforts by creating large pools of storage capacity across storage area networks (SANs) and network-attached storage (NAS) units. Resources can be centrally managed and reallocated as needed to maintain uptime.

Thin provisioning enhances storage virtualization by allowing IT managers to dynamically allocate any available storage capacity within the pool.
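Conceptually, a thin-provisioned volume advertises its full logical size up front but draws physical blocks from the shared pool only as data is actually written. The following toy model, in which all class and parameter names are hypothetical, shows that behavior:

    # Toy model of thin provisioning: a volume reports its full logical
    # size, but physical blocks come out of a shared pool only on write.
    class ThinPool:
        def __init__(self, physical_blocks: int):
            self.free = physical_blocks          # real capacity remaining

        def allocate(self, n: int) -> None:
            if n > self.free:
                raise RuntimeError("pool exhausted: add physical storage")
            self.free -= n

    class ThinVolume:
        def __init__(self, pool: ThinPool, logical_blocks: int):
            self.pool, self.logical_blocks = pool, logical_blocks
            self.mapped = set()                  # blocks actually backed

        def write(self, block: int) -> None:
            if block >= self.logical_blocks:
                raise IndexError("write past end of volume")
            if block not in self.mapped:         # first touch claims a block
                self.pool.allocate(1)
                self.mapped.add(block)

    pool = ThinPool(physical_blocks=100)
    vol = ThinVolume(pool, logical_blocks=500)   # oversubscribed on purpose
    vol.write(0)
    vol.write(1)
    print(pool.free)  # 98: only written blocks consume real capacity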

The result is that organizations can create backup layers using arrays of economical iSCSI and SATA hard drives. Data deduplication further optimizes storage and can help meet sometimes tight after-hours backup windows.
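Deduplication itself rests on a simple idea: hash each block of data and store a given block only once. This toy version uses fixed-size chunks and an in-memory index; production systems use variable-size chunking and persistent indexes:

    # Toy deduplication: split a stream into fixed-size chunks, hash
    # each one, and store only chunks not already seen.
    import hashlib

    CHUNK_SIZE = 4096
    store = {}  # chunk hash -> chunk bytes

    def dedupe_write(data: bytes) -> list[str]:
        """Store unique chunks; return the list of hashes (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # skip chunks we already hold
            recipe.append(digest)
        return recipe

    recipe = dedupe_write(b"A" * 8192 + b"B" * 4096)
    print(len(store), "unique chunks for", len(recipe), "logical chunks")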

When configuring storage resources, IT managers should plan for both local and remote replication.

4. Remote Backup

To guard against larger disasters, such as a hurricane that shutters an entire data center, organizations must establish a DR site in a distant geographical region.

For the requisite remote backups, IT shops may choose either synchronous or asynchronous data replication.

The synchronous approach sends information from the primary site to the backup site in a series of transmissions, with each transfer confirmed as successful before the next one begins. This approach helps ensure that information isn’t lost to short network outages or other problems.

Asynchronous replication continuously sends information without waiting for confirmation that the data arrived intact. While faster, this method doesn’t immediately alert IT managers to transmission problems.
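Boiled down, the difference is whether the primary waits for the replica’s acknowledgment before accepting the next write. A minimal sketch, assuming a caller-supplied send_to_replica() transport call (hypothetical) that returns True when the remote site confirms the write:

    # Minimal sketch of the two replication modes. send_to_replica()
    # is a hypothetical transport call supplied by the caller.
    import queue
    import threading

    def replicate_sync(writes, send_to_replica):
        # Synchronous: block until the replica confirms each write
        # before sending the next; no acknowledged write can be lost.
        for w in writes:
            if not send_to_replica(w):
                raise IOError(f"replica did not confirm write {w!r}")

    def replicate_async(writes, send_to_replica):
        # Asynchronous: queue writes and return immediately; a
        # background thread drains the queue, so in-flight writes
        # can be lost without the caller noticing.
        q = queue.Queue()
        def drain():
            while True:
                send_to_replica(q.get())
                q.task_done()
        threading.Thread(target=drain, daemon=True).start()
        for w in writes:
            q.put(w)   # caller does not wait for confirmation
        return q       # caller may q.join() later to flush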

5. Tape Backups

A number of newer technologies may offer performance and cost advantages for continuity and DR efforts. But in the face of skyrocketing data volumes, backing up to tape remains a reliable and economical way to store data and transport it easily to geographically dispersed locations.

At a cost of just pennies per gigabyte, tape offers a cost-effective alternative to copying large quantities of data over WAN links should a full-scale restore of a primary facility become necessary.

People Power Counts

Advanced technological capabilities are not silver bullets when it comes to continuity plans. Organizations must also train employees to use those technology resources to maintain uptime as well as to react appropriately and quickly when problems occur.

In addition, someone, or a team of employees, must regularly review the plans to ensure they remain relevant and up to date. Some organizations dutifully create policy and procedure manuals only to ignore them until disaster strikes. When a crisis hits, the chance for confusion rises.

“Organizations should test their plans regularly — preferably twice a year — and subject them to rigorous change management standards,” Ferguson recommends. “An untested plan is barely worth the paper it’s printed on.”
