To assess its backup requirements and arrive at a viable solution, an organization must analyze its internal information landscape and make fundamental choices about what type of cloud service will accommodate its needs.
First, it must determine how much storage it requires, then how best to procure that space in a cloud.
The most basic question to answer is how much data the company needs to back up. Start with how much data is currently stored, taking into account storage systems and devices as well as local data on servers, desktops, notebooks and even mobile devices.
There isn’t a one-to-one proportional relationship between actual storage and required backup capacity. This fact introduces additional questions: How often does the data change? How often does it need to be backed up? How many versions must be held? How should backup data be retained?
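These questions can be turned into a back-of-the-envelope capacity estimate. The sketch below is illustrative only; the change rate, version count and full-copy count are hypothetical planning inputs, not figures from any provider:

```python
def estimate_backup_capacity(primary_gb: float,
                             daily_change_rate: float,
                             retained_versions: int,
                             full_copies: int = 1) -> float:
    """Rough backup-capacity estimate in GB.

    primary_gb        -- data currently stored (servers, desktops, mobile)
    daily_change_rate -- fraction of data that changes per backup cycle
    retained_versions -- number of incremental versions kept
    full_copies       -- number of full backup copies retained
    """
    full_backups = primary_gb * full_copies
    # Each retained version adds roughly the changed portion of the data set
    incrementals = primary_gb * daily_change_rate * retained_versions
    return full_backups + incrementals

# Example: 1 TB of primary data, 5% daily change, 30 daily versions kept
needed = estimate_backup_capacity(1000, 0.05, 30)
```

With these illustrative inputs, the required backup capacity (2,500 GB) is two and a half times the primary storage, which makes the point that backup needs are not a one-to-one match for stored data.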
It is risky to base a backup solution on today’s requirements alone. Organizations typically commit to using backup solutions for three to five years, so it is important to forecast data requirements carefully. One standard benchmark estimates that the amount of data an organization stores, on average, triples every three years. But if a company has a higher retention level (the length of time it keeps data) or dynamic data, then its growth rate may be significantly higher.
Most remote-backup providers price their services per gigabyte and offer them in capacity tiers. Given expected data growth, it may be advisable to overestimate usage to secure the best price for the long term.
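The growth benchmark and tiered pricing can be combined into a simple forecast. In this sketch, the "triples every three years" rule from above is converted into an annual growth factor; the tier sizes are hypothetical, not any provider's actual price list:

```python
def project_storage(current_gb: float, years: float,
                    three_year_multiplier: float = 3.0) -> float:
    """Project storage need, assuming data triples every three years."""
    annual_factor = three_year_multiplier ** (1 / 3)  # ~1.44x per year
    return current_gb * annual_factor ** years

def choose_tier(needed_gb: float,
                tiers=(500, 1000, 2000, 5000, 10000)):
    """Pick the smallest tier (GB) that covers the projected need.

    The tier sizes are illustrative assumptions. Returns None if the
    projected need exceeds the largest tier.
    """
    for tier in sorted(tiers):
        if tier >= needed_gb:
            return tier
    return None

# Example: 1 TB today, sized for a typical three-to-five-year commitment
projected = project_storage(1000, 5)
tier = choose_tier(projected)
```

Sizing the tier against the end of the commitment period rather than today's footprint is what "overestimating usage" means in practice: the company pays for headroom now to avoid repricing or migrating mid-contract.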
Cloud computing is often categorized by three models — infrastructure as a service (IaaS), platform as a service (PaaS) and SaaS. PaaS targets primarily application developers and software vendors and is of limited interest for data backup. IaaS and SaaS, on the other hand, can both be used for backup solutions, albeit with very different approaches.
IaaS is the most flexible option. It can accommodate almost any application that can run on a physical computer. In fact, cloud-based block storage can be used by legacy applications, including backup programs, with relative ease. However, this approach leverages few of the benefits of an Internet-based, virtualized, utility-priced delivery model.
In general, if there is a standard-offering SaaS solution available that meets an organization’s requirements, is priced appropriately and doesn’t pose significant security concerns, it is usually the most compelling choice among remote-backup options. It’s important to understand, however, that there also may be off-the-shelf solutions an organization can purchase or license to run on a cloud platform or infrastructure. SaaS doesn’t always equal cloud and vice versa.
The key question regarding a SaaS offering is whether it will require significant customization. The more necessary a custom backup solution is for an organization, the less attractive SaaS may be. In that case, companies would want to consider an off-the-shelf commercial offering that either runs on a cloud platform or can be virtualized and run on cloud infrastructure.
An environment that the consumer owns and controls allows greater flexibility for customization. Keep in mind, however, that this flexibility is likely to come at additional cost.
What is unique about data backup is that it entails both infrastructure (where data is stored) and software (the interface for data backup). Some solutions run both components in the cloud, while others do not.
In addition to considering the cloud delivery mode, organizations need to consider the delivery source. In its earliest definition, cloud computing referred to solutions in which resources were dynamically provisioned over the Internet from an offsite, third-party provider who shared resources and billed on a fine-grained, utility computing basis. Known today as the public cloud model, this approach offers many advantages in terms of cost and flexibility, but it has governance and security drawbacks.
Many companies have looked at ways to leverage some of the benefits of cloud computing while minimizing the drawbacks. Their efforts have led to a more restrictive private cloud model.
Typically, a private cloud is hosted on-premises, scales “only” into the hundreds (or perhaps thousands) of nodes, and runs over private network links instead of the public Internet. Along those same lines, a community cloud caters to a group of organizations with a common set of requirements or objectives.
The most prominent examples are government clouds open to federal and municipal agencies. Similarly, major industries may have incentive to work together to leverage common resources, including remote-backup storage.
The categorization of cloud services into public, private and community clouds is a simplification. Not only is there no clear boundary between the three delivery models, but customers also aren’t likely to confine themselves to any one approach. Instead, companies can expect to see a variety of hybrid constellations and consider tailoring their backup solutions to any combination of them.
For more on remote backup in the cloud, read this CDW white paper.