VMware has done a smashing job of making its vSphere hypervisor, ESXi, incredibly easy to install and configure. It’s a simple out-of-the-box experience, even for tech teams with close to zero knowledge about how ESXi works.
Unfortunately, many systems administrators start slinging virtual machines onto a hypervisor without spending a few minutes of extra effort to properly tune the device to handle serious virtual workloads. That’s a mistake that can lead to serious performance lags later on.
Follow these simple tweaks to fine-tune your vSphere installation and eliminate problems before they crop up down the road.
ESXi allows for a variety of power management modes: high performance, balanced (the default), low power or custom. These are worthless unless the physical ESXi server’s BIOS is set to let vSphere take control on the fly.
To reach the BIOS, restart the server and press the vendor-defined setup key (such as F2 or DEL). Once in the BIOS, go to the power settings and choose “OS controlled” or a similar option (the exact terminology varies by hardware maker); vSphere will then manage power consumption based on virtual machine needs. (Extra tip: Use “balanced” mode to lower power usage during idle times by way of processor C-states, and use “high performance” to maximize performance during heavy processing loads.)
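Once the BIOS hands control to the hypervisor, the active policy can also be changed from the ESXi shell rather than the vSphere client. A minimal sketch, assuming SSH access to the host; it uses the /Power/CpuPolicy advanced option, which takes the policy name as a string (option names can vary between ESXi releases):

```shell
# Show the current power policy (the /Power/CpuPolicy advanced option).
esxcli system settings advanced list -o /Power/CpuPolicy

# Switch to "Balanced" to save power on idle-heavy hosts...
esxcli system settings advanced set -o /Power/CpuPolicy -s "Balanced"

# ...or to "High Performance" ahead of heavy processing loads.
esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"
```

The same setting appears in the vSphere client under the host’s Hardware > Power Management section.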
VMware regularly serves up essential security patches and bug fixes. Updating is especially easy before the ESXi host is running any VMs, as you can patch without impacting production. For simple update management once the server takes on VMs, stand up a VMware Update Manager server at installation time. This software, included free on the vCenter Server DVD, eases the pushing of patches and updates.
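For a host that is not yet carrying VMs, patching straight from the ESXi shell is a quick alternative to Update Manager. A sketch, assuming the offline patch bundle has already been copied to a datastore; the datastore and bundle names below are placeholders:

```shell
# Enter maintenance mode (safe while the host has no running VMs).
esxcli system maintenanceMode set --enable true

# Apply all updated VIBs from an offline bundle (path and filename are placeholders).
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi-patch-bundle.zip

# Reboot first if the update output reports "Reboot Required: true",
# then take the host out of maintenance mode.
esxcli system maintenanceMode set --enable false
```

Note that `vib update` only refreshes packages already on the host; `vib install` would add new ones.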
Although the vSwitch technology included with ESXi is virtual, it still requires maintenance and troubleshooting from time to time. By default, both the Standard and the Distributed versions only listen for network neighbor announcements: the Standard vSwitch supports the Cisco Discovery Protocol (CDP), while the Distributed vSwitch supports both CDP and the Link Layer Discovery Protocol (LLDP).
This means that while vSphere administrators can see the physical network devices the ESXi host connects to, the network team cannot see the host. System admins can do the network team a favor by switching the CDP or LLDP setting on the vSwitch from its default to “Both,” so the host listens for and advertises neighbor information on its network interfaces.
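On a Standard vSwitch, this change can be made from the ESXi shell. A sketch assuming the switch is named vSwitch0 (on a Distributed Switch, the discovery protocol mode is set through the vSphere client or PowerCLI instead):

```shell
# Check the current CDP status of the standard vSwitch (the default is "listen").
esxcli network vswitch standard list -v vSwitch0

# Set CDP to both listen for and advertise neighbor information.
esxcli network vswitch standard set -v vSwitch0 --cdp-status=both
```

After the change, the upstream Cisco switch should list the ESXi host’s uplinks in its own CDP neighbor table.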
System admins typically create new clusters of ESXi hosts using identical hardware — servers with the exact same configuration — to make clusters as homogenous as possible and therefore easier to manage.
But it’s also wise to future-proof clusters by enabling Enhanced vMotion Compatibility (EVC) mode, so that servers with faster, more powerful processors can join the cluster later.
Each generation of CPU introduces new instruction sets that let special types of workloads offload complex calculations directly to the processor. Most VMs never touch these instruction sets because they aren’t needed, but mixing processors with different instruction sets breaks compatibility between ESXi hosts in a cluster and can degrade performance.
EVC masks off those newer features as newer servers are added to a cluster, allowing system admins to mix newer CPUs with older CPUs without negatively affecting performance.
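Before picking an EVC baseline, it helps to confirm exactly which CPU generations the cluster mixes. A rough sketch, assuming SSH access and hypothetical host names; vim-cmd’s host summary includes the CPU model string:

```shell
# Hypothetical host names; report the CPU model of each ESXi host in the cluster.
for host in esx01 esx02 esx03; do
  echo "== $host =="
  ssh root@"$host" "vim-cmd hostsvc/hostsummary | grep cpuModel"
done
```

The oldest CPU generation reported determines the highest EVC baseline the cluster can use.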