How the Software-Defined Data Center Works
The software-defined data center comprises three operational silos: server/compute, storage and networking. Layered atop these systems is an abstraction layer that presents the discrete underlying hardware as a unified pool of available resources, enabling programmatic access to the capabilities within. Each silo is powered by a foundational technology, and each of those technologies is a variation of virtualization.
Server Virtualization
Server virtualization is a mature and widely adopted technology served by multiple proven hypervisor products, including those from leading vendors such as Citrix, Microsoft and VMware.
A 2013 ZK Research survey found that server virtualization had become the dominant enterprise computing model: about 52 percent of respondents reported that more than half of their workloads were virtualized, up from just 18 percent in 2008.
Storage Virtualization
Storage virtualization, or software-defined storage (SDS), applies the same hardware abstraction concepts that drive server virtualization to the arena of distributed storage. The goal is to enable unified, software-based control and management of disparate storage hardware, while enabling key functions such as snapshot, cloning, data recovery, backup and deduplication.
By abstracting storage silos based on storage area networks (SANs), network-attached storage (NAS) and other technologies, SDS presents a pooled storage infrastructure that administrators and applications access via API calls.
Importantly, these APIs will enable applications to specify provisioning requirements for performance, data protection and other characteristics, allowing the SDS infrastructure to assign the appropriate blend of storage hardware to each task. Key SDS players include leading storage vendors such as EMC, Hitachi, HP, IBM and NetApp, many of which offer virtualization and management software for their hardware solutions.
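To make that concrete, the sketch below shows how an application might request storage through OpenStack's Cinder block storage API, one common SDS-style interface. The cloud name and the "gold" volume type are illustrative assumptions; in practice an operator defines volume types that map named service tiers to underlying hardware capabilities.

```python
# A minimal sketch of API-driven storage provisioning using the
# openstacksdk client for OpenStack Cinder. The cloud name "mycloud"
# and the "gold" volume type are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry

# Request 100 GB of storage by naming a service tier rather than a
# specific array; the SDS layer chooses the backing hardware.
volume = conn.block_storage.create_volume(
    name="app-data",
    size=100,                # size in gigabytes
    volume_type="gold",      # assumed tier encoding performance/protection
)
print(volume.id, volume.status)
```

The application never names a SAN or NAS device; it declares what it needs, and the storage layer decides where the volume lives.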
Network Virtualization
Network virtualization, also called software-defined networking (SDN), is arguably the least advanced of the three foundational components. However, a 2013 MarketsandMarkets report projects that the SDN market will grow from just $198 million in 2012 to $2.1 billion in 2017.
Industry standards promise to improve interoperability and enable software-based control and orchestration of network infrastructures. The open OpenFlow protocol provides an agreed-upon way to build a centralized, programmable network, with APIs for managing network traffic and enabling network-aware applications.
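As a simple illustration of that model, the sketch below uses the open-source Ryu framework (one of several OpenFlow controllers) to install a default OpenFlow 1.3 rule that sends unmatched packets to the controller, the basic pattern by which centralized software takes over forwarding decisions.

```python
# A minimal sketch of a centralized OpenFlow controller using the
# open-source Ryu framework; this is the standard table-miss pattern,
# not a production forwarding application.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissController(app_manager.RyuApp):
    """Installs a table-miss rule so unmatched packets reach the controller."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # Match everything at the lowest priority; punt unmatched packets
        # to the controller, where software decides how to forward them.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```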
Network overlay standards, such as Virtual Extensible LAN (VXLAN), Network Virtualization using Generic Routing Encapsulation (NVGRE) and Stateless Transport Tunneling (STT), enable transmission of packets over virtual networks laid atop the physical network hardware. Industry experts caution that progress toward a fully abstracted network layer may be slowed by overlapping standards efforts and the vested interests of leading hardware incumbents.
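The encapsulation itself is straightforward. The sketch below uses the scapy packet library to wrap a tenant's Ethernet frame in the outer IP/UDP/VXLAN headers that cross the physical underlay; all addresses and the VNI value are illustrative assumptions.

```python
# A minimal sketch of VXLAN encapsulation with scapy; addresses and
# the VNI are illustrative assumptions.
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

# Inner frame: the tenant's original Layer 2 traffic.
inner = Ether(src="00:00:00:aa:bb:01", dst="00:00:00:aa:bb:02") / \
        IP(src="10.0.0.1", dst="10.0.0.2")

# Outer headers: the physical underlay sees only a UDP packet between
# two tunnel endpoints (port 4789 is the IANA-assigned VXLAN port);
# VNI 5001 identifies the virtual network. Real endpoints typically
# derive the source port from a hash of the inner flow.
frame = Ether() / IP(src="192.168.1.10", dst="192.168.1.20") / \
        UDP(sport=49152, dport=4789) / VXLAN(vni=5001) / inner

frame.show()  # Prints the layered headers of the encapsulated packet.
```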
Critical to SDN development is the open-source OpenStack platform, most notably its Neutron (formerly Quantum) project, an API-based software system that provides extended control over software-defined networks and enables SDN controllers to interact with higher-level orchestration systems. OpenStack Neutron enjoys broad industry support.
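In practice, applications and orchestration tools drive Neutron through its API. The sketch below uses the official openstacksdk Python client to create a tenant network and subnet; the cloud name, network names and address range are assumptions for illustration.

```python
# A minimal sketch of programmatic network creation through the
# OpenStack Neutron API via openstacksdk; names and the CIDR are
# illustrative assumptions.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry

# Create a virtual network and attach an IPv4 subnet to it; Neutron
# realizes both on whatever backend plugin the operator has configured.
net = conn.network.create_network(name="tenant-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="tenant-subnet",
    ip_version=4,
    cidr="192.168.10.0/24",
)
print(net.id, subnet.cidr)
```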
Sophisticated management software and intervening layers of logic present this pooled, virtualized data center hardware as a cohesive, unified resource with orchestration and automation. The OpenStack project provides open APIs for compute, networking, storage, security and management components, enabling orchestration of diverse hardware environments and delivering infrastructure as a service (IaaS) to organizations building private clouds.
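Pulling the pieces together, the sketch below uses the same openstacksdk client to orchestrate image, compute and network services into a running server through one set of open APIs; the cloud, image, flavor and network names are assumptions for illustration.

```python
# A minimal IaaS orchestration sketch using openstacksdk: image,
# compute and network services are driven through unified open APIs.
# All names (cloud, image, flavor, network) are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.image.find_image("ubuntu-server")     # assumed image name
flavor = conn.compute.find_flavor("m1.small")      # assumed flavor name
network = conn.network.find_network("tenant-net")  # created earlier

server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
# Block until the scheduler places the VM and it reaches ACTIVE state.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```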
Want to learn more? Check out CDW’s white paper, “Defining Moment: The Software-Defined Data Center.”