A growing reliance on Big Data analytics and the Internet of Things (IoT) is driving a massive increase in the amount (and importance) of data for enterprises in every sector. Organizations are gradually responding to this trend, but many still don't know exactly how, or where, to store all of that information safely, economically and accessibly.
Simply throwing more conventional storage at the problem won't help, at least not over the long term. While traditional disk-based storage solutions are cheap and plentiful, they're simply not capable of handling the massive amounts of data that most data centers are facing, nor can they provide the quick access required for real-time data analytics.
Big Data and IoT are game-changing technologies that demand entirely new data storage approaches. “While many companies do not have a plan yet for Big Data or IoT, this technology is growing at an unprecedented pace,” says Prabu Rambadran, director of product marketing for Nutanix. “Choosing the correct infrastructure that can handle this level of data and scale with the information is a critical decision and should not be taken lightly.”
Shifting Enterprise Needs
Not so long ago, storage area networks (SANs) were viewed as the best approach to coping with soaring storage needs. When SANs first arrived, the technology appeared to offer an almost perfect solution to a potentially crippling data storage challenge. Rather than purchasing and managing isolated puddles of storage, organizations could tap into a highly efficient shared pool of almost boundless capacity.
But enterprise needs have changed. Many shared external storage networks simply weren't designed to work within today's virtualized infrastructures. SAN management saps IT resources and time by requiring staff to provision individual logical unit numbers (LUNs) to map storage volumes to virtual machines. Further, such configurations have to be constantly updated to ensure acceptable levels of availability and performance. A number of critical hardware issues also must be addressed when using a SAN with virtualized servers.
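The administrative drag described above can be sketched in a few lines. The model below is purely illustrative (the names `provision_lun` and `san_map` are hypothetical, not a real SAN API): each new virtual machine needs its own hand-provisioned LUN, so the bookkeeping grows linearly with the VM count.

```python
# Hypothetical sketch of per-LUN bookkeeping in a traditional SAN:
# every new virtual machine gets its own manually provisioned LUN,
# and the mapping table must be revisited whenever the estate changes.
# Names here are illustrative, not a real SAN management API.

san_map: dict[str, str] = {}   # vm_name -> LUN id
_next_lun = 0

def provision_lun(vm_name: str) -> str:
    """Manually carve out a LUN and map it to a single VM."""
    global _next_lun
    lun_id = f"LUN-{_next_lun:03d}"
    _next_lun += 1
    san_map[vm_name] = lun_id
    return lun_id

# Adding ten VMs means ten separate provisioning steps -- the
# administrative cost scales with the number of machines.
for i in range(10):
    provision_lun(f"vm-{i}")

print(len(san_map))  # 10 mappings that must be kept current
```

Contrast this with the pooled approaches discussed below, where capacity is presented as one shared resource rather than a per-machine mapping exercise.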
To deal with today's growing data glut, organizations should think about incorporating multiple approaches, including hyperconverged storage solutions, software-defined storage and cloud storage, as well as all-flash arrays. “IT departments need to work with their business departments to plan out the requirements for data volume, velocity and value to feed the data pools for Big Data and IoT projects,” says Wayne M. Adams, chairman emeritus and a current board member of the Storage Networking Industry Association.
The Benefits of Hyperconvergence
Hyperconverged storage is an important new data storage management approach that blends storage, computing, networking and virtualization technologies into a single physical unit that's managed as one system.
Hyperconvergence returns storage to servers, where it is managed as a shared resource pool across multiple hyperconverged infrastructure nodes. These nodes may be located within a single data center or distributed across multiple facilities. Either way, the approach makes storage more flexible, while cutting costs.
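A minimal model makes the pooling idea concrete. In the sketch below (class and method names are illustrative assumptions, not any vendor's API), each node contributes its local capacity, and the cluster exposes a single shared pool that grows transparently as nodes are added:

```python
# Toy model of hyperconverged storage: each node contributes local
# capacity, and the cluster presents one shared pool spanning them all.
# Names are illustrative, not a vendor API.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity_tb: float
    used_tb: float = 0.0

class HyperconvergedPool:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    @property
    def total_tb(self) -> float:
        return sum(n.capacity_tb for n in self.nodes)

    @property
    def free_tb(self) -> float:
        return sum(n.capacity_tb - n.used_tb for n in self.nodes)

    def add_node(self, node: Node) -> None:
        """Scaling out is just adding a node; the pool grows in place."""
        self.nodes.append(node)

pool = HyperconvergedPool([Node("node-1", 20.0), Node("node-2", 20.0)])
pool.add_node(Node("node-3", 20.0))  # e.g. a node in a second facility
print(pool.total_tb)  # 60.0
```

The key design point is that consumers see one pool, whether the nodes sit in a single data center or across multiple facilities.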
“In a hyperconverged solution, both computing and storage functions are delivered from the hypervisor on a common x86 platform,” says Skip Bacon, vice president of products, storage and availability for VMware.
Hyperconverged storage can help organizations create a greatly simplified — and overall cheaper — IT infrastructure, says Mike Matchett, a senior analyst with Taneja Group, a consulting firm focused on storage, server, networking and other infrastructure technologies, adding that IT staff are then freed to focus on business needs.
The Promise of Software-Defined Storage
While still an evolving concept, software-defined storage (SDS) offers a fresh path to storage by decoupling the programming that controls storage-related tasks from physical storage hardware. This approach provides centralized and simplified control over storage resources via a consistent interface. SDS adopters also benefit from enhanced scalability and cost savings. By making a single, logical software investment to manage all storage hardware, an organization can reduce expenses. “In common use, the term software-defined storage typically refers to storage products deployed on dedicated servers — whether atop bare metal or in a hypervisor — that aren't also providing compute functions,” Bacon says.
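The decoupling at the heart of SDS can be sketched as a software control plane that presents one consistent interface while delegating to heterogeneous hardware underneath. The backends and placement policy below are hypothetical, chosen only to illustrate the pattern:

```python
# Sketch of the SDS idea: one software control plane, many dissimilar
# hardware backends behind it. Backend classes and the trivial hash-based
# placement policy are illustrative assumptions, not a real product.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def read(self, key: str) -> bytes: ...

class LegacyArrayBackend(StorageBackend):
    """Stands in for an existing, repurposed storage array."""
    def __init__(self): self._blocks = {}
    def write(self, key, data): self._blocks[key] = data
    def read(self, key): return self._blocks[key]

class CommodityServerBackend(StorageBackend):
    """Stands in for newly added commodity-server capacity."""
    def __init__(self): self._files = {}
    def write(self, key, data): self._files[key] = data
    def read(self, key): return self._files[key]

class ControlPlane:
    """One management interface over a mixed pool of hardware."""
    def __init__(self, backends): self.backends = backends
    def _pick(self, key):
        return self.backends[hash(key) % len(self.backends)]
    def write(self, key, data): self._pick(key).write(key, data)
    def read(self, key): return self._pick(key).read(key)

plane = ControlPlane([LegacyArrayBackend(), CommodityServerBackend()])
plane.write("sensor-42", b"telemetry")
print(plane.read("sensor-42"))  # b'telemetry'
```

This mirrors Adams's repurposing point: existing equipment and new equipment sit behind the same interface, so capacity can be grown for a Big Data or IoT project without changing how storage is consumed.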
“SDS can be a cost-effective way to manage a heterogeneous pool of storage or storage systems without the built-in reliability, availability and serviceability/data protection/data lifecycle management in a physical storage system,” Adams observes. “For example, an IT department can repurpose existing equipment and augment it with additional equipment to increase the amount of data for a Big Data or IoT project.”
“By combining SDS with a hyperconverged design, companies can realize all the benefits of the public cloud without sacrificing the predictability, security, cost and control of on-premises offerings,” Nutanix’s Rambadran says.
The Utility of the Cloud
Cloud storage typically refers to a hosted object storage service, but the term has expanded over time to cover other service types as well, such as block storage, in which data is stored in volumes, as in a SAN environment. “Cloud storage is an elastic, burstable, utility-cost model that offers high protection and is geo-distributable,” Matchett says.
“With the cloud, you can scale as big as your checkbook supports,” says Greg Schulz, founder and senior adviser with Server StorageIO, a storage technology advisory and consulting firm. “With the cloud, you're not spending capital upfront.”
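The utility-cost point can be shown with back-of-envelope arithmetic. All prices below are made-up assumptions for illustration: pay-as-you-go billing charges only for capacity actually used each month, while an on-premises purchase must be sized for peak demand upfront.

```python
# Back-of-envelope comparison of the cloud's utility-cost model with an
# upfront capital purchase. All prices are illustrative assumptions.

def cloud_cost(tb_per_month, price_per_tb_month=20.0):
    """Utility model: pay only for capacity used in each month."""
    return sum(tb * price_per_tb_month for tb in tb_per_month)

def capex_cost(peak_tb, price_per_tb=600.0):
    """On-premises model: buy enough hardware for peak demand upfront."""
    return peak_tb * price_per_tb

# A bursty year: 10 TB most months, one 100 TB spike for an analytics job.
usage = [10] * 11 + [100]
print(cloud_cost(usage))  # 4200.0 -- the burst is paid for only when it happens
print(capex_cost(100))    # 60000.0 -- capital sized for the peak all year
```

The numbers are invented, but the shape of the comparison is the point Schulz is making: elasticity trades capital expense for an operating expense that tracks actual, bursty demand.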
All-Flash Arrays Deliver Data Fast
A solution of growing importance in many storage strategies is all-flash array technology. Packaged as a storage system that uses multiple flash drives rather than conventional spinning hard disks, all-flash arrays deliver faster data transfer rates and more efficient use of data center resources in selected applications.
Fast access to massive amounts of data is becoming increasingly important. “Some Big Data projects need to reach into historical data for trending. And when the use case is real-time decision-making, having enough storage resources with enough bandwidth and low-latency interconnects is a critical part of the storage architecture,” Adams says. “With the advent of Tier 0 storage and all-flash arrays, the low-latency challenges are being resolved.”
All-flash arrays “are the hottest technology this year, and flash adoption will continue to accelerate as prices continue to fall,” according to a recent report from 451 Research, a technology research firm. “Disk is far from dead, but its use with high-performance workloads is waning,” the report adds.
“Flash storage should replace disk for performance, energy, space efficiency and ultimately for lower cost and greater reliability,” adds Bernard Spang, vice president of software-defined infrastructure at IBM.
“If you have the budget, buy as much solid-state flash as you can get,” Schulz advises. “If you don't have an unlimited budget, and you need to boost productivity, buy as much solid-state flash as you can and supplement it with disk. Use the two in hybrid ways.”
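A toy tiering policy in the spirit of that advice might rank data by access frequency, keep the hottest items on a limited number of flash slots and spill the rest to disk. The function name, thresholds and data set below are illustrative assumptions:

```python
# Toy hybrid-tiering policy: the most frequently accessed ("hot") data
# lands on limited flash capacity; everything else spills to disk.
# Names and numbers are illustrative assumptions only.

def place_on_tiers(access_counts, flash_slots):
    """Return (flash, disk) sets, with the hottest items on flash."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    flash = set(ranked[:flash_slots])
    disk = set(ranked[flash_slots:])
    return flash, disk

counts = {"orders.db": 900, "logs-2023": 12, "sensor-feed": 750, "archive": 3}
flash, disk = place_on_tiers(counts, flash_slots=2)
print(flash)  # {'orders.db', 'sensor-feed'}
print(disk)   # {'logs-2023', 'archive'}
```

Real tiering engines use far more sophisticated heuristics, but the design choice is the same: spend scarce flash where the access pattern rewards it.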
Detangling the Debacle
Organizations that recognize the threat posed by the rapidly approaching data tsunami should begin planning their storage approaches now, before existing storage resources become overwhelmed.
Adams notes that a partner can help organizations determine which type of storage solution best fits their needs. “Typically, a systems integrator has experience with a range of solutions and can tap into other clientele experiences to arrive at a solution that can be leading edge, proven and repeatable,” he says. “An integrator can bring knowledge to all facets of an IT project: requirements, planning, procurement, piloting, roll-out, retirement and replacement.”