Speed, reliability and efficiency are the key traits that most IT workers dream of when it comes to running a data center. With the rise of virtualization and cloud computing, there is a lot to consider when planning for data center optimization.
Here are a few key areas in your data center that warrant a closer look.
Today’s traffic speeds of 10, 40 and even 100 gigabits per second may have many network managers wondering why they need all that bandwidth. The answer: virtualization.
In the days before virtualization, a rack might hold three or four servers. Top-of-rack switches would have been a waste of money, and pulling patch cables from an in-rack panel back to a central switch was the strategy of the day. If a server saturated a 100 megabit-per-second connection for a few minutes, that was a busy day.
In the virtualization era, it’s common to see 10 (or more) guest servers on a virtual host and 40 of those 1U virtual hosts in a single rack. That’s not just 10 times the density; it’s 100 times the density.
Not every rack will have 400 virtual servers (the equivalent of 40Gbps if each one saturated the old 100 megabit-per-second connection), of course, and not every server is going to saturate its connection at the same time. But it's clear that a 1Gbps connection isn't enough. Running multiple 10Gbps connections from each full rack in the data center is sound network engineering, driven more by reliability than by performance.
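The back-of-the-envelope arithmetic behind those figures can be sketched in a few lines. The per-server 100Mbps figure is the pre-virtualization number from above, reused here as a worst-case assumption; the variable names are ours:

```python
# Hypothetical rack-density figures from the discussion above.
GUESTS_PER_HOST = 10   # guest servers per 1U virtual host
HOSTS_PER_RACK = 40    # 1U virtual hosts in a full rack
NIC_MBPS = 100         # the old per-server 100Mbps connection

# 10 guests x 40 hosts = 400 virtual servers per rack,
# versus roughly 4 physical servers in the pre-virtualization rack.
servers_per_rack = GUESTS_PER_HOST * HOSTS_PER_RACK
density_increase = servers_per_rack / 4

# Worst case: every guest saturates 100Mbps at once.
peak_gbps = servers_per_rack * NIC_MBPS / 1000

print(servers_per_rack, density_increase, peak_gbps)  # 400 100.0 40.0
```

Even discounted heavily for servers that never peak simultaneously, the result lands well above what a single 1Gbps uplink can carry.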
Some network managers have chosen an equipment strategy based on a primary and secondary data center, putting new gear into the primary and moving last year's model into the secondary. While this approach is economical, it sends the wrong message about business continuity: it encourages staff and application managers to focus on one data center and treat the other as a second-tier afterthought. Then, when a service interruption occurs, the secondary data center isn't up to date with the configurations, performance or capacity to handle the needs of the business.
IT staff who have lived through data center problems, which are rarely fires or floods and more often UPS failures, human errors, software problems and temporary failures, know that the best strategy is to treat data centers as peers. Each data center is constantly operating and constantly connected, and no one should care which one is currently hosting a particular application. By treating the networks in both data centers as equals, the network manager supports the requirements for business continuity and simplifies overall management and software version control.
Multiprotocol Label Switching (MPLS) is a carrier-focused technology used to build very-high-speed networks, combining some of the best features of connectionless packet switching and connection-oriented telecommunications in a single network. The successor to asynchronous transfer mode, frame relay and the X.25 suite of protocols, MPLS is the current mainstream method for delivering carrier data communications services.
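The idea of connection-oriented forwarding over a packet network can be made concrete with a toy sketch of label swapping along a pre-established label-switched path. The router names, label values and table layout below are invented for illustration and have nothing to do with real MPLS signaling:

```python
# Toy model: each label-switch router (LSR) keeps a table mapping an
# incoming label to (next hop, outgoing label). Names and labels are
# hypothetical, chosen only to show the label-swap mechanic.
LFIB = {
    "PE1": {18: ("P1", 24)},    # ingress provider-edge router
    "P1":  {24: ("PE2", 31)},   # core LSR swaps the label in transit
    "PE2": {31: (None, None)},  # egress PE pops the label, routes by IP
}

def forward(router, label, path=None):
    """Follow the pre-signaled path hop by hop, swapping labels."""
    path = (path or []) + [router]
    next_hop, out_label = LFIB[router][label]
    if next_hop is None:        # label popped; packet exits the MPLS domain
        return path
    return forward(next_hop, out_label, path)

print(forward("PE1", 18))  # ['PE1', 'P1', 'PE2']
```

The point of the sketch is that the path is decided once, when the tables are built, and each packet then follows it with simple per-hop label lookups rather than full IP routing decisions.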
When dealing with carriers utilizing MPLS, network managers will encounter two MPLS-specific terms worth understanding: customer edge (CE) and provider edge (PE). Both CE and PE define routers, one at the customer’s site and the other within the carrier’s network.
Despite its name, the CE router in a managed service is normally owned and installed by the carrier and isn't managed or controlled by the customer. The CE forms the demarcation point between the customer's network and the carrier's network. It will normally have an Ethernet interface to communicate with the customer's equipment and some type of WAN interface to connect to the carrier's network, specifically to the PE device.
To learn more best practices, insights and strategies on routing and switching, read our "Ultimate Guide to Routing and Switching."