The initial concept behind x86 server virtualization was clearly intended to benefit the server and infrastructure teams responsible for the deployment, configuration and maintenance of servers that host business applications.
In many IT environments, virtualization was relegated to workload tiers earmarked for development and testing, while more traditional physical deployment methods were reserved for production and revenue-generating applications.
This bottom-up approach to virtualization had many positive results, such as thorough testing before implementation and valuable consolidation of resources in lower tiers, but it also left a lot of the other technology food groups — such as networking and storage — in the hands of the server team.
That’s fine for most traditional IT settings because virtualization hypervisors such as VMware’s vSphere are incredibly adept at simplifying (and, in some cases, completely hiding) network topology. Each vSphere server, called a host, acts as an edge networking device with the virtual workloads consuming virtual switch ports. For the first phase of virtualization, which is aimed at simple consolidation ratios, this edge networking model requires minimal networking knowledge beyond plugging in a network adapter and assigning virtual local area network (VLAN) IDs.
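How little networking knowledge that first phase demands is easy to see from the ESXi command line. As a hypothetical sketch (the port-group name and VLAN ID below are illustrative, not from any real environment), creating a port group on a standard vSwitch and tagging it with a VLAN might look like:

```
# Create a port group on an existing standard vSwitch
# ("App-Tier" and vSwitch0 are placeholder names)
esxcli network vswitch standard portgroup add \
    --portgroup-name "App-Tier" --vswitch-name vSwitch0

# Tag the port group with VLAN 100; VMs attached to it
# now send and receive 802.1Q-tagged traffic on that VLAN
esxcli network vswitch standard portgroup set \
    --portgroup-name "App-Tier" --vlan-id 100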
Many server admins neither know nor care how the bits arrive at their destination, as long as they arrive within a tolerance that keeps users happy. But what happens when the network portion of the model begins to change?
The Shifting Tectonics of Network Infrastructures
A great example of this change is the journey toward a private-cloud model for enterprise data centers. The idea behind a private cloud is to abstract and pool all of the technology resources, including networking, into policy-driven containers. A container can be filled with one or more virtual workloads, which are then subject to the policies attached to them.
With VMware’s vCloud Director, for example, many of the networking devices that provide load balancing, firewall rules and Network Address Translation (NAT) are both virtual and deployed alongside a container. As a result, the server team is often left holding the bag for container creation, having to make sure that all of the network components are configured and functioning properly.
However, as many of the students who have watched my vCloud Director courses will attest, the networking portion is the hardest part for virtualization specialists to grasp, digest and master, since networking hasn't always been a key component of their toolkit. It is also the most crucial piece of a successful implementation.
This gap in their knowledge base needs to be filled. Here are a few tips to quickly get them started:
- Get involved with the organization’s network architecture and design. Many of the folks who have picked networking as a career enjoy teaching what they’ve learned, and love showing off their topology to someone who is truly interested.
- Dive into routing and switching knowledge and certification. Cisco Systems has a library of books focused on its associate-level certification, the CCNA, which covers many of the fundamentals necessary to understand networking. For those into Juniper, shoot for the JNCIA.
- Grab some gear. Many vendors have learning labs available to partners and customers free of charge, while others offer “rack rentals” that provide access to an entire rack of gear on the cheap. Additionally, many businesses have older networking gear sitting around in a work lab. If all else fails, simulators such as Graphical Network Simulator (GNS3) or home lab kits for Cisco or Juniper are handy and often affordable enough for any budget.
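The fundamentals those CCNA materials and lab gear teach can be practiced with something as simple as defining a VLAN and trunking it to a neighboring switch — exactly the kind of exercise GNS3 is good for. A minimal IOS-style sketch (interface numbers and the VLAN ID are illustrative):

```
! Define VLAN 100 and give it a descriptive name
vlan 100
 name App-Tier

! Put an access port into that VLAN
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 100

! Carry tagged VLANs to a neighboring switch over an 802.1Q trunk
interface GigabitEthernet0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

Working through even a small config like this by hand makes the VLAN tagging that vSphere hides behind a port-group setting far less mysterious.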
Start taking these steps now, and they will swiftly pay off in a better understanding of the shifting networking tectonics of today’s virtualization technologies.