Understanding the Different Layers of Routing and Switching
Networks empower people to do their jobs, to communicate and collaborate, to teach and learn. At the heart of the network lie the workhorses — the often underappreciated switches and routers that connect users to data centers. While every network is different and has to be crafted to the needs and scale of the enterprise, some helpful patterns have emerged that both network managers and network equipment manufacturers follow. When network designs follow these patterns, it’s easy to find a diverse mix of good products and, using those products, build reliable and economical networks.
One of the most time-tested (though somewhat paradoxical) rules for network design is this: Switch where you can, route where you must. In essence, networks should be deployed using switching technology wherever possible, interconnected by routers only where required. Of course, the definition of “required” can vary a great deal and change over time as different network fads (often driven by the products that manufacturers and resellers are marketing most heavily) come in and out of fashion.
A good start is to break up networks into different zones, separating user access networks, whether staff or guest, from data centers. These separations are logical points for routers and, in some cases, firewalls. Although many users spread through a building may look similar to many servers in a data center, the networks have very different performance and reliability requirements. By creating a clear separation, network managers can focus on designing networks that are fit for purpose rather than over- or under-engineering.
User Access Networks
User access networks connect end-user devices, such as desktop and notebook computers, printers and Voice-over-IP handsets, to enterprise networks. Generally, the user access network consists of the wiring plant running between offices and per-floor wiring closets, the switches in each wiring closet, and the interconnection between the wiring closets and building data centers.
In large buildings, these networks are built in two or even three layers in a tree topology. In user access networks, the edge of the network (the branches of the tree) connects directly to end-user devices, while the core (the trunk of the tree) connects the entire user access network to the rest of the enterprise.
It’s important to remember that the physical tree topology of the network doesn’t necessarily match the IP subnet architecture of the network. When designing user access networks, the first constraint is always physical: the wiring closets, the fan-out from closets to user workspaces, the distance between devices, and the number of floors and buildings. Once a solid physical layer is in place, the IP architecture optimizes flows within the network for performance, management and security purposes.
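As a minimal sketch of how an IP plan can be layered over a fixed physical tree (the addresses and closet names below are hypothetical, chosen only for illustration), Python’s ipaddress module can carve a building’s block into per-closet subnets:

```python
import ipaddress

# Hypothetical addressing plan: one /16 for the building, carved into
# a /24 per wiring closet. The physical tree (closets, risers, patch
# panels) stays fixed; the subnet plan layered on top can be reworked
# for performance, management or security without recabling.
building = ipaddress.ip_network("10.20.0.0/16")
closets = ["floor1-east", "floor1-west", "floor2-east", "floor2-west"]

plan = dict(zip(closets, building.subnets(new_prefix=24)))

for closet, subnet in plan.items():
    print(f"{closet}: {subnet} ({subnet.num_addresses - 2} usable hosts)")
```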
The Edge Layer
The switches that connect directly to end-user devices are called “edge” or “access” switches. The edge switch makes the first connection between the relatively unreliable patch and internal wiring out to each user’s workstation and the more reliable backbone network within the building. Each user may have only a single connection to a single edge switch, but everything above the edge switch should be designed with redundancy in mind. The edge switch is usually chosen based on two key requirements: high port density and low per-port costs.
Low port costs are desirable because of the cost of patching and repatching devices in end-user workspaces. If ports are expensive, and only a few ports are available, then each time a user moves a workstation, printer or phone, someone has to go into a wiring closet and repatch to their network port — a cost that quickly overwhelms the savings of buying fewer ports. Since the primary purpose of an edge switch is to move around Ethernet packets, there’s no reason to buy expensive “feature-full” switches for most buildings.
High port density is desirable because of the costs associated with managing switches. Each switch is a manageable element, so more switches lead to greater management complexity, associated costs and potential network downtime due to human error.
Network managers achieve density in different ways, depending on the size of their building and the number of devices that must connect to each wiring closet. Chassis devices, which hold blades (typically with 48 ports each), are popular and can scale up to a large number of users. Switch stacking, which treats a cluster of individual switches as a single distributed chassis with a high-speed interconnect, is a very popular and economical alternative.
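The density math itself is simple. Here is a rough sketch, assuming 48-port blades or stack members and a 25 percent growth headroom (the headroom figure is an assumption for illustration, not a recommendation from this article):

```python
import math

def closet_switch_count(devices: int, ports_per_switch: int = 48,
                        growth_headroom: float = 0.25) -> int:
    """Estimate how many blades or stack members a closet needs,
    reserving spare ports so moves and adds don't force repatching."""
    ports_needed = math.ceil(devices * (1 + growth_headroom))
    return math.ceil(ports_needed / ports_per_switch)

# 150 devices with 25% headroom -> 188 ports -> four 48-port members
print(closet_switch_count(150))  # 4
```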
The Distribution and Aggregation Layer
In small networks of a few hundred users, edge switches can be connected redundantly directly to core switch/router devices. However, for larger networks, an additional layer of switching, called the distribution layer, aggregates the edge switches. The main goals of the distribution layer are simply cabling reduction and easier network management: it takes the many uplinks from edge switches and aggregates them into higher speed links.
If edge switches are chosen so that each wiring closet has only a single redundant uplink, then the distribution layer is usually placed next to the network core, with a minimum of two devices (one for each half of the redundant uplink) connecting to each wiring closet.
However, if the edge switch topology creates multiple redundant uplinks (for example, if nonstacked switches are selected or if there are an enormous number of connections in each wiring closet), then an aggregation layer (really just another distribution layer) can be placed in each wiring closet. The aggregation layer connects to the uplinks of the edge switches and is uplinked to the distribution layer toward the network core.
Generally, network managers are moving away from in-closet aggregation layers where practical. The performance benefit of an aggregation layer in the user access network is minimal, and each additional layer increases complexity and correspondingly decreases reliability. However, some network and building layouts operate best with an aggregation layer.
One additional factor in designing user access networks is performance — specifically, anticipated bandwidth requirements and network growth. For a typical office network, speeds toward the end user of 1Gbps with an uplink speed of 10Gbps are satisfactory and will handle heavy network applications, such as network backup and system imaging.
In small networks, uplink speeds of 1Gbps might even be satisfactory, depending on the total number of user devices served by each wiring closet and the application demands on the network. Wiring closet uplink speeds of greater than 10Gbps are required only in very unusual cases.
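One way to sanity-check a closet design is to compute its oversubscription ratio. The sketch below compares against a roughly 20:1 access-layer rule of thumb that is common in the industry but not cited in this article:

```python
def oversubscription(edge_ports: int, edge_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of maximum offered edge bandwidth to uplink capacity."""
    return (edge_ports * edge_gbps) / (uplinks * uplink_gbps)

# A closet of 192 1Gbps user ports with a redundant pair of 10Gbps
# uplinks runs at 9.6:1, comfortably inside the ~20:1 rule of thumb.
ratio = oversubscription(edge_ports=192, edge_gbps=1,
                         uplinks=2, uplink_gbps=10)
print(f"{ratio:.1f}:1")  # 9.6:1
```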
Aggregation and distribution layer switches are usually selected for greater reliability and a larger feature set than edge switches. In addition to being deployed redundantly, devices at this layer should offer nonstop-service features, such as in-service upgrades (software upgrades that don’t require reboots or significant traffic interruption) and hot-swap fan and power supply modules.
Aggregation and distribution layer switches also have more stringent performance requirements, including lower latency and larger MAC address table sizes. This is because they may be aggregating traffic from thousands of users rather than the hundreds that one would find in a single wiring closet.
The Core (or Backbone) Layer
For many network managers, a pair of core switches represents the top of their network tree, the network backbone across which all traffic will pass. Although LANs such as Ethernet are inherently peer to peer, most enterprise networks sink and source traffic from a data center (either local or in the WAN cloud) and, to a lesser extent, from the Internet.
This makes a large core switch a logical way to handle traffic passing between the user access network and everything else. The advantage of a core switch is backplane switching — the ability to pass traffic across the core without the 1Gbps or even 10Gbps limits of individual links, achieving maximum performance.
Generally, the backbone of the network is where switching ends and routing begins, with core switches serving as both switching and routing engines. In many cases, core switches also have internal firewall capability as part of their routing feature set, helping network managers segment and control traffic as it moves from one part of the network to another.
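As a toy illustration of this kind of segmentation (the zones and policy below are hypothetical), the core’s routing and firewall layer is essentially evaluating a table of which zones may reach which:

```python
# Hypothetical zone policy of the kind a routing core or in-chassis
# firewall enforces: which source zones may initiate traffic to which
# destination zones. Anything not listed is denied.
ALLOWED = {
    ("user-access", "data-center"),
    ("user-access", "internet"),
    ("data-center", "internet"),
    ("guest", "internet"),  # guests reach the Internet, nothing else
}

def permitted(src: str, dst: str) -> bool:
    """Return True if traffic from src zone may reach dst zone."""
    return src == dst or (src, dst) in ALLOWED

print(permitted("user-access", "data-center"))  # True
print(permitted("guest", "data-center"))        # False
```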
Data Center Networks
Data center networks often use the same multitier architecture as do user access networks. However, the higher bandwidth requirements and peer-to-peer aspect of data centers call for a more careful design, with greater attention to performance engineering and very high reliability.
It’s often helpful to think about the data center (or centers) as a “building” unto itself that minimizes the overlap between user and external networks. This lets the data center design reflect very high performance requirements without expensive spillover into other areas of the network where latency and throughput are less demanding.
Top-of-Rack Switches
The “edge” of the data center is the rack, populated by servers, sometimes 40 to a rack. Top-of-rack switches are designed slightly differently from user edge switches, often incorporating two or four 10Gbps uplink ports, 48 1Gbps server ports, multiple power supplies, and a higher speed interswitch stacking system than what is normally found in user edge switches.
Generally, top-of-rack switches are installed in stacked pairs, enabling any one switch to be swapped out or rebooted, with each server in the rack redundantly connected to both switches. In highly virtualized data centers, top-of-rack switches may even be 10Gbps across all ports — an expensive high-performance option.
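Plugging the port counts above into the same oversubscription arithmetic shows why the uplink count matters at the rack (the calculation is illustrative; real servers rarely drive every port at line rate):

```python
# 48 1Gbps server ports against two or four 10Gbps uplinks, using the
# figures above; a fully 10Gbps rack would need proportionally more.
server_gbps = 48 * 1
for uplinks in (2, 4):
    ratio = server_gbps / (uplinks * 10)
    print(f"{uplinks} x 10Gbps uplinks -> {ratio:.1f}:1 oversubscription")
# 2 x 10Gbps uplinks -> 2.4:1 oversubscription
# 4 x 10Gbps uplinks -> 1.2:1 oversubscription
```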
Data Center Distribution Layer
In many data centers, the distribution layer is phased out in favor of direct connections to core switches. This “collapsed” (or “two-tier”) network ensures the highest possible performance when two data center devices communicate, possibly across Layer 3 subnets.
When very high performance is required, network managers design for a nonblocking fabric, meaning there are no bottlenecks (such as multiple ports aggregating into a single connection) between any two devices in the network. Nonblocking fabrics are an extreme performance requirement (applicable more to large data centers than the typical enterprise). If a standard Layer 2 distribution layer is present, then all traffic between subnets has to move back to the core switch, which can overload the 10Gbps links between the distribution and core layers.
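A minimal way to express the nonblocking condition, assuming a simple two-tier layout in which each switch tier’s uplink capacity must match the capacity of its downstream-facing ports:

```python
def is_nonblocking(down_ports: int, down_gbps: float,
                   up_ports: int, up_gbps: float) -> bool:
    """A tier is nonblocking when its uplink capacity is at least
    equal to the capacity of its downstream-facing ports."""
    return up_ports * up_gbps >= down_ports * down_gbps

# 48 10Gbps server-facing ports need 480Gbps of uplink to avoid
# becoming a bottleneck between any two devices:
print(is_nonblocking(48, 10, 4, 40))   # False: 160Gbps up, blocking
print(is_nonblocking(48, 10, 12, 40))  # True: 480Gbps up, 1:1
```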
Although experienced network managers avoid pushing any routing into the user distribution layer, it’s occasionally appropriate in a data center distribution layer, effectively turning this layer into a “mini core” by itself.
For networks in which a distribution layer is in place, network managers should consider higher speed connectivity between the distribution layer and the core, such as the emerging 40Gbps and 100Gbps links, or at least use multiple 10Gbps links to ensure satisfactory performance.
Data Center Core Switch/Router Devices
Depending on the size of the building, number of users and amount of traffic sent offsite, some network managers may build their data center network with a completely separate core switch pair, connected to user access and external networks via firewalls. Choosing a separate core comes with both capital and operational expenses and should be avoided unless absolutely necessary.
As with user access core layers, data center cores generally also include a Layer 3 component, routing between subnets and providing choke points for in-chassis or external firewalls to work their security magic.
To learn more best practices, insights and strategies on routing and switching, read our "Ultimate Guide to Routing and Switching."