Future-Proofing Your Virtualization Investment
We reached a major milestone in 2012: Virtualization levels in industry-standard x86 servers hit 51 percent, meaning virtualized servers now outnumber nonvirtualized servers in production.
That might not be surprising, because virtualization rates have grown steadily over the last decade, but it underscores the continued payoff of Moore’s Law, commonly paraphrased as processing power doubling roughly every 18 months. The performance capabilities delivered by the newest CPUs, such as Intel’s Sandy Bridge, give each physical server more headroom for consolidation, driving companies to capitalize on virtualization benefits to maximize data-center performance and cost efficiency.
Companies in the process of virtualizing their environments can take several steps to future-proof their systems for a fully virtual tomorrow. Because many companies will be virtualizing at least one aspect of their businesses in the coming years, making sure today’s choices will continue to pay dividends down the road merits a close evaluation.
Choosing Your Caching Solution
Caching solutions have become widely used in data centers to accelerate application performance and reduce latency. In nonvirtualized infrastructures, caching can be used to store frequently accessed hot data. This extends the life of an existing system by offloading the hottest workloads from the back-end storage system. By adding affordable caching solutions, many companies can delay system upgrades and prolong the life of existing hardware.
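To make the mechanism concrete, here is a minimal sketch of a hot-data read cache with least-recently-used (LRU) eviction, in Python. The capacity figure and the backend_read hook are hypothetical stand-ins for a real fast-media tier and back-end storage system.

    from collections import OrderedDict

    class HotDataCache:
        """Minimal LRU read cache: keeps recently accessed 'hot' blocks
        in fast media so back-end storage sees fewer reads."""

        def __init__(self, capacity_blocks, backend_read):
            self.capacity = capacity_blocks
            self.backend_read = backend_read  # hypothetical back-end reader
            self.blocks = OrderedDict()       # block_id -> data, in LRU order

        def read(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)  # mark most recently used
                return self.blocks[block_id]       # hit: no back-end I/O
            data = self.backend_read(block_id)     # miss: fetch from storage
            self.blocks[block_id] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)    # evict the coldest block
            return data

Every hit served from the cache is a read the back-end storage system never sees, which is exactly how caching offloads the hottest workloads and defers upgrades.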
In virtualized infrastructures, caches help companies overcome many of the performance limitations of legacy storage architectures. In virtualized systems, when workloads mix, even sequential writes become random, increasing the challenge of providing performance through traditional storage systems. Using a cache to accelerate read operations while writing through to shared storage eliminates this headache, speeding up all virtual machines (VMs). It also allows companies to increase VM density, running more VMs per host server.
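Extending the hypothetical sketch above, a write-through policy answers reads from the cache whenever possible while pushing every write down to shared storage before acknowledging it, so no acknowledged write lives only in volatile cache:

    class WriteThroughCache(HotDataCache):
        """Write-through variant: reads hit the cache when possible;
        every write lands on shared storage before returning."""

        def __init__(self, capacity_blocks, backend_read, backend_write):
            super().__init__(capacity_blocks, backend_read)
            self.backend_write = backend_write  # hypothetical back-end writer

        def write(self, block_id, data):
            self.backend_write(block_id, data)  # write through to shared storage
            self.blocks[block_id] = data        # keep cache fresh for reads
            self.blocks.move_to_end(block_id)
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)

Because shared storage always holds the authoritative copy, the randomized write patterns of mixed workloads no longer dictate read performance.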
Many caching solutions work in either nonvirtualized or virtualized infrastructures, but not both. Choosing a single caching solution that spans both environments lets companies future-proof their data centers, allowing easy migration from physical to virtual workloads when it makes sense.
Caching up the Stack
Deciding where to cache in your virtualized architecture presents multiple performance opportunities, as each caching location throughout the stack provides different benefits. Choosing a caching solution that can be used at any layer of the stack ensures you can easily adapt your caching architecture without incurring additional costs.
For virtualized or physical environments, caching solutions begin by implementing a fast medium, such as DRAM or flash memory, on a bare-metal architecture. This is a simple architecture that provides an immediate application performance boost and a reduction in SAN workload. Delivering maximum performance in virtual environments, however, requires additional deployment steps further up the stack.
Caching at the hypervisor is the next stop up the stack. It maintains seamless interoperability with value-added virtualization-software features, such as vMotion, that are not available with bare-metal caching.
The last step is virtualization-aware caching on the VM itself. Caching data with intelligence in the VM keeps data closest to the application, which delivers maximum performance along with an understanding of application requirements. For example, you might want to be sure that database tables are cached, while log files are not. Caching at the guest VM is the newest breakthrough in virtualization, although it’s not possible for all architectures. As the industry continues to evolve methods to maximize application performance in virtualization, guest VM caching stands to become the most efficient way to accelerate data-intensive applications.
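As an illustration of that application awareness, the sketch below reuses the HotDataCache from earlier and applies a simple admission policy inside the guest: blocks belonging to database table files are cached, while blocks from log files bypass the cache. The file extensions and path-based classification are hypothetical simplifications of what a virtualization-aware caching driver would do.

    CACHE_SUFFIXES = ('.ibd', '.mdf')  # hypothetical database table files
    SKIP_SUFFIXES = ('.log', '.ldf')   # hypothetical log files

    def should_cache(file_path):
        """Guest-level admission policy: cache table data; skip logs,
        which are written sequentially and rarely re-read."""
        if file_path.endswith(SKIP_SUFFIXES):
            return False
        return file_path.endswith(CACHE_SUFFIXES)

    def guest_read(cache, file_path, block_id):
        if should_cache(file_path):
            return cache.read(block_id)      # hot path: guest cache
        return cache.backend_read(block_id)  # cold path: straight to storage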
Finding a unified caching solution that can cache on all of these levels means your system will be ready whenever you decide to take the next step in the evolution of virtualization.
Finding the Right Hardware
Choosing the right hardware solution to power your virtual infrastructure can save you money and headaches down the road. Not too long ago, DRAM was the only option for caching, aside from the small L1 and L2 caches built into the CPU. Today, there are many more options for higher capacity and efficiency in nonvolatile memory media, ensuring data integrity in the event of an unplanned power loss.
DRAM is fast, but it’s also power hungry, expensive and volatile. Since it’s not persistent, everything in the cache will be lost and will need to be reloaded from shared storage if your system suffers an outage. However, it is easy to implement in any system, and some of today’s x86 servers have DRAM capacities that approach 4TB, if you have the budget to spend on that much capacity.
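The operational cost of that volatility is easy to estimate. A sketch of the warm-up math after an outage, using hypothetical figures:

    # Hypothetical figures: a 2TB DRAM cache repopulated from shared storage
    cache_size_gb = 2048         # cache capacity to reload
    storage_read_mb_s = 500      # sustained read rate from shared storage

    warmup_seconds = cache_size_gb * 1024 / storage_read_mb_s
    print(f"Cache warm-up: ~{warmup_seconds / 3600:.1f} hours")  # ~1.2 hours

Until the cache is warm again, reads fall through to shared storage at full force, so plan for a degraded performance window after any unplanned outage.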
Flash memory platforms are growing in popularity for server-side virtualization caching. Implemented as a true memory tier, flash can approach DRAM speeds and deliver more than 10 times the capacity at a lower cost. This is an affordable alternative to adding massive amounts of DRAM to your system, and because of the higher capacities, flash caches can dramatically reduce the cache misses that cause latency spikes.
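The effect of capacity on latency can be sketched with a simple expected-value calculation; the latency and hit-rate figures below are hypothetical but directionally reasonable for flash caching in front of disk-backed shared storage.

    def effective_latency_us(hit_rate, cache_us=100, storage_us=5000):
        """Average read latency, blending cache hits and misses."""
        return hit_rate * cache_us + (1 - hit_rate) * storage_us

    # A larger flash cache captures more of the working set, raising hit rate.
    for hit_rate in (0.80, 0.95, 0.99):
        print(f"hit rate {hit_rate:.0%}: ~{effective_latency_us(hit_rate):.0f} us")
    # hit rate 80%: ~1080 us; 95%: ~345 us; 99%: ~149 us

Moving the hit rate from 80 to 99 percent cuts average read latency by roughly a factor of seven, which is why extra cache capacity often matters more than raw media speed.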
Building a solution that delivers high capacity for caching can help you achieve a higher density of VMs per host. Whether you decide on flash or DRAM, choose a solution that will deliver enough headroom to scale with expected growth.
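A back-of-the-envelope sizing sketch, using hypothetical per-VM working-set and growth figures, shows how that headroom translates into sustained VM density:

    # Hypothetical sizing inputs
    working_set_gb_per_vm = 20   # hot data each VM actually touches
    vms_today = 40
    annual_growth = 0.30         # expected yearly growth in VM count
    horizon_years = 3

    vms_future = vms_today * (1 + annual_growth) ** horizon_years
    cache_needed_gb = vms_future * working_set_gb_per_vm
    print(f"Plan for ~{vms_future:.0f} VMs -> ~{cache_needed_gb:.0f} GB of cache")
    # Plan for ~88 VMs -> ~1758 GB of cache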
Building the Right Ecosystem for the Future
Choosing a flexible caching solution will help companies ensure they are prepared for a virtualized future. While there are many more factors to weigh when choosing the right solution for your enterprise, these considerations will get you off the ground and ensure you are prepared for the future.