Jul 23, 2014
Networking

What It Takes to Build Next-Generation Data Centers

Advanced hardware is placing new and greater demands on data center networks.

For most network architects, virtualization was the warning shot across the bow: Organizations shouldn’t build data center networks today the way they built them even five years ago, because the fundamental building blocks of enterprise applications have changed.

But virtualization is only one of the changes persuading data center managers to abandon the traditional core-distribution-edge architecture in favor of flatter and faster models. Some of the other trends pushing new requirements on data center architects are:

  • Jumps in server speed and density, virtualized or not, requiring burst speeds faster than 1 gigabit per second
  • The heavy level of interserver traffic caused by changes in software application design (“east-west” traffic flows)
  • The increase in high-availability configurations with heavy replication requirements
  • The slow decline of Fibre Channel storage systems shifting traffic over to Ethernet networks

Few network managers are in a position to rip and replace their data center network. But the installation of new storage and virtualization equipment does offer the opportunity to rethink data center design rather than bolt new equipment onto old structures.

The Four Requirements for Rethinking Data Center Network Support

Data center networks are being rearchitected as part of the transition to next-generation data centers, a shift that reimagines how applications and the facilities that host them are built. The change extends from power and cooling to servers and storage, as well as the network itself. The push to rethink how networks support data centers is being driven by four key requirements:

  1. Nonblocking (and high speed): As devices and storage systems generate microbursts of up to 40Gbps, a nonblocking switching architecture becomes critical to predictable application behavior and user satisfaction. The average speed of servers may still sit below 1Gbps in most data centers, but engineering for averages penalizes application performance whenever those bursts hit. Server network connections are broadly moving to 10 Gigabit Ethernet over the next few years. (A rough sizing sketch follows this list.)
  2. Lower latency: The movement away from edge-distribution-core toward spine-and-leaf architectures is the most significant change in current designs, cutting the number of hops (and thus the latency) between any two servers. The terminology has been around for a decade or more, but the technology is only now becoming widely available.
  3. Layer 2 flattening: Virtualization and virtual machine migration within and between data centers require Layer 2 extension so that workloads keep their IP addresses as they move. Traditional architectures that carve up subnets around optimized routing need to be rebuilt to support the requirements brought on by virtualization and high-speed data center interconnects.
  4. High availability: Network managers are becoming serious about end-to-end high-availability designs, from dual-rail power to redundant network connections at every point in the data center. At the same time, the need for failover times measured in milliseconds, not seconds, is driving new protocols and approaches to high-availability switching and routing.
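To make the first two requirements concrete, here is a minimal sizing sketch for a hypothetical two-tier leaf-spine fabric. The port counts and link speeds are assumptions chosen for illustration, not a reference design; the point is how oversubscription and hop counts are reasoned about.

    # Rough sizing sketch for a hypothetical leaf-spine fabric.
    # All figures (port counts, link speeds) are assumptions for illustration.

    def oversubscription(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
        """Ratio of server-facing capacity to fabric-facing capacity on one leaf.
        A value of 1.0 (1:1) is a nonblocking leaf; higher values mean server
        bursts can exceed what the uplinks can carry."""
        return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

    # Example leaf: 48 x 10GbE server ports, 4 x 40GbE uplinks to the spine.
    ratio = oversubscription(48, 10, 4, 40)
    print(f"Leaf oversubscription: {ratio:.1f}:1")   # 3.0:1 -- not nonblocking

    # Switch hops between servers attached to different switches:
    #   leaf-spine:  server -> leaf -> spine -> leaf -> server        (3 switches)
    #   three-tier:  server -> access -> distribution -> core
    #                -> distribution -> access -> server              (5 switches)
    print("Switch hops, leaf-spine:", 3)
    print("Switch hops, three-tier:", 5)

The shorter, uniform path between any two leaves is what makes latency in these fabrics predictable, and driving the leaf ratio toward 1:1 is what “nonblocking” means in practice.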

Requirements for stronger security and more distributed management and control have added to the challenge. Network management and configuration control, long decoupled from daily operations, are being pushed away from dedicated network teams and toward server administrators. The shift stems from virtual switching platforms and from aggressive development and operations (DevOps) teams trying to keep network complications out of the way of their applications.

At the same time, administrators are reconsidering security. Traditional data center approaches that treat internal systems as inherently trusted are being upended. The daily news of breach after breach of “secure” applications makes the point: The cost of building in security is far lower than the cost of insecurity.

Collectively, the move to next-generation data center networks is being driven by technical requirements, equipment replacement, virtualization, security and changing views of network management.

Want to learn more? Check out CDW’s Tech Insights, “Networking: Connecting the Dots.”

