The network of today is nothing like the network of five years ago, and nothing like the network of five years from now. Today’s typical network is no longer simply an interconnected set of technologies that moves data from Point A to Point B. Instead, it is pressed into service in ways that network managers once couldn’t imagine.
Organizations today have adopted many technologies and services that make operations more efficient and effective, but that put tremendous pressure on network performance, bandwidth, capacity and security. What’s more, it’s happening at a time when IT budgets and resources are stagnant. (No significant increases in capital budgets or headcount are forecast for 2013, per the research firm Computer Electronics.)
The use of cloud computing has complicated the issue further. Servers and storage once sat side by side, for example; today, pieces of a single application may sit in different physical locations.
To add to the complexity, some of the infrastructure isn’t even owned or managed by the organization delivering the app to users. That can increase network latency and make WAN-related monitoring, such as response-time monitoring, more critical than ever before.
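To make the idea concrete, a minimal response-time probe can simply time how long it takes to open a TCP connection to the server hosting the app. This is a hypothetical sketch, not how any particular product works; real monitoring tools time full application transactions, but even connect time exposes WAN latency between the user and the app.

```python
import socket
import time

def response_time_ms(host, port=443, timeout=3.0):
    """Time a TCP connection to a remote service, in milliseconds.

    A toy stand-in for response-time monitoring: production tools
    measure complete request/response transactions, not just the
    connection handshake.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection succeeded; we only care how long it took
    return (time.perf_counter() - start) * 1000.0
```

Run periodically against each remote application endpoint, a probe like this makes latency trends visible even when the underlying infrastructure belongs to someone else.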
Fortunately, a range of tools and strategies can keep organizations ahead of this growing list of network changes and demands. In addition to being technology enablers, they can also serve as catalysts to improve service quality, reduce cost and enhance security.
Network Complexity at an All-Time High
The fundamental nature of networking is shifting. Today, bandwidth is stretched by the impact of unified communications (UC) and multimedia-enabled applications (including bandwidth-hungry video), while mobility adds security and compliance complexities.
Virtualization, while extremely beneficial in many ways, puts a great burden on the network by increasing the bandwidth input/output needed for each physical server. The growth of e-commerce requires always-on, low-latency networks. And the network may even be asked to handle physical security.
“You can test an application you have developed by putting it on a server and connecting that server to a 10G switch in a lab. Put a PC on the other side of that 10G switch and sure enough it works perfectly,” says Doug Roberts, director of product strategy at Visual Network Systems (a part of Fluke Networks). “But if you take that same application and deploy it across an eclectic mix of never-before-thought-of types and kinds of bottlenecks and forms of interconnectivity and distances, all of a sudden the app doesn’t work.”
The crux of the issue, he says, is how to deal with new interdependencies of the network, applications and servers.
“How those pieces work together is what dictates the data sources you need to leverage, where you need to collect data from, how you need to collect the data and most importantly, how you need to present data to key stakeholders so they can resolve problems quickly,” Roberts says.
At the same time, network traffic is increasing, and in a big way. According to the Cisco Visual Networking Index: Forecast and Methodology, 2011-2016, annual global IP traffic will reach 1.3 zettabytes (one zettabyte equals one sextillion bytes) by 2016.
The forecast also predicts the following:
- By 2016, 1.2 million video minutes (equal to 833 days) will travel the Internet every second.
- Average global IP traffic will reach 150 petabytes per hour.
- IP video conferencing will grow more than twice as fast as overall IP traffic.
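The forecast’s headline figures hang together arithmetically; a quick back-of-the-envelope check confirms that 1.3 zettabytes per year works out to roughly the quoted hourly rate, and that 1.2 million minutes is indeed about 833 days:

```python
ZETTABYTE = 10 ** 21  # one zettabyte = one sextillion bytes

annual_traffic_bytes = 1.3 * ZETTABYTE  # forecast annual IP traffic for 2016
hours_per_year = 365 * 24

# Convert the annual figure to petabytes per hour
per_hour_petabytes = annual_traffic_bytes / hours_per_year / 10 ** 15
print(round(per_hour_petabytes))  # ~148, in line with the ~150 PB/hour figure

# 1.2 million minutes of video, expressed in days
video_minutes_per_second = 1.2e6
print(round(video_minutes_per_second / (60 * 24)))  # 833 days per second
```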
“Each of us increasingly connects to the network via multiple devices in our always-on connected lifestyles,” says Suraj Shetty, vice president of products and solutions marketing at Cisco. “The sum of our actions not only increases demand for zettabytes of bandwidth, but also dramatically changes the network requirements needed to deliver on the expectations of this ‘new normal’.”
It’s critical to provide the bandwidth for the growing amount of network traffic to travel, but it is equally important to monitor and prioritize that traffic. Without the right tools and processes in place, organizations will experience higher costs, lower productivity and reduced security.
“As capacity requirements increase, service levels will often degrade if demand is not monitored and managed,” says Bob Tarzey, an analyst and director at Quocirca Ltd., in a May 2012 report. “Standing still will, in effect, mean going backward.”
To manage these issues, most organizations are using some networking management and monitoring tools. Around-the-clock monitoring saves time, supports administrators in planning resources, and helps optimize the network.
“To maximize the user experience, constant network monitoring is needed to ensure that all network ports are used to the full extent and that every last drop of available bandwidth is consumed before more capacity is purchased,” Tarzey says. “Furthermore, when network traffic increases, upgrades can be planned rather than implemented in a hurry while fire-fighting.”
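In practice, using ports “to the full extent” means tracking utilization against link capacity over each polling interval. The sketch below is a simplified, hypothetical example with a made-up 1 Gb/s capacity and a placeholder counter reader; real monitors typically read SNMP interface counters such as ifHCInOctets from each device.

```python
import time

LINK_CAPACITY_BPS = 1_000_000_000  # assumed 1 Gb/s port for illustration

def utilization_pct(bytes_before, bytes_after, interval_s):
    """Percent of link capacity consumed over the polling interval."""
    bits = (bytes_after - bytes_before) * 8
    return 100.0 * bits / (LINK_CAPACITY_BPS * interval_s)

def poll_once(read_octets, interval_s=60.0):
    """Sample a byte counter twice and report utilization.

    read_octets is a hypothetical callable; a real implementation would
    query a device's interface counter (e.g. via SNMP).
    """
    before = read_octets()
    time.sleep(interval_s)
    return utilization_pct(before, read_octets(), interval_s)
```

Sustained utilization readings over time are what turn capacity upgrades into planned work rather than fire-fighting.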
For today’s organizations, the message is clear: the network must be managed as a mission-critical asset. As Tarzey succinctly puts it, “A well-managed, high-availability, high-performance and secure network can be a distinct competitive advantage, a poorly managed one is a fundamental risk.”
Modern Network Management Tools
Network management today means being able to diagnose and resolve bandwidth, latency and performance problems before they impact productivity; gain visibility into load-balanced environments; and identify security threats such as zero-day threats, malware, insider breaches and policy violations.
Most network management tools have the basic ability to monitor connections, CPU and memory utilization, bandwidth, latency, and server uptime and downtime. All offer analysis and reporting capabilities, and most include a dashboard-style user interface that allows remote management via the web, a desktop client or a mobile device.
Beyond that, different types of network management tools have different levels of capabilities, from the basic (device discovery) to the complex (real-time monitoring and tracking; root cause analysis and event correlation; real-time behavior analysis, rule-based threat classification; location tracking and visualization; and the ability to see application traffic as it is traversing the network).
The option an organization chooses can rest on many factors. Budget is one, with options available for both big and small budgets; specific points of pain are another.
Network management at its basic level mainly means knowing which devices are connected to the network at any point in time. One example is Cisco’s FindIT Network Discovery Utility, which allows organizations to discover most Cisco products and display information on status, serial number, IP address and version. HP’s Enterprise Discovery goes a step further, discovering and taking inventory of all devices and software on a network, up to 50,000 devices per server and 500,000 devices through multiple distributed servers.
The Peregrine application suite displays where each device and piece of software is located, and provides metrics on utilization. Microsoft takes yet another approach with its Network Discovery tool, which searches the network for IP-enabled resources by querying Microsoft Dynamic Host Configuration Protocol (DHCP) servers, Address Resolution Protocol (ARP) caches on routers and Simple Network Management Protocol (SNMP)-enabled devices. It can also search Active Directory domains and IP subnets.
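These discovery tools differ in approach, but the underlying idea, probing address ranges and recording what answers, can be sketched in a few lines. The hypothetical example below sweeps a subnet for hosts that answer on a given TCP port; real products use much richer probes, querying SNMP agents, router ARP caches and DHCP servers as described above.

```python
import ipaddress
import socket

def discover(subnet, port=22, timeout=0.2):
    """Report hosts in a subnet that accept a TCP connection on a port.

    A toy stand-in for network discovery: production tools also pull
    serial numbers, versions and inventory data from each device.
    """
    alive = []
    for host in ipaddress.ip_network(subnet).hosts():
        try:
            with socket.create_connection((str(host), port), timeout=timeout):
                alive.append(str(host))
        except OSError:
            pass  # no answer on this address; move on
    return alive
```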
Using these types of tools is often a good first step for organizations that need more information on what is accessing the network, and when and how. That information is critical to the next step: knowing how traffic flows and where demand is heaviest. This, in turn, helps organizations determine their biggest problems and how to address them.
That’s where point solutions come in. If the network has specific bottlenecks or pain points, it can make sense to add a tool that monitors that particular aspect of the network. If the network is experiencing frequent faults and availability issues, for example, a network node manager might be a good choice.
If traffic routinely exceeds capacity thresholds, a network traffic analysis and management tool is a good bet. The same is true for application performance problems. For load-balancing issues, consider a server load-balancing or link load-balancing tool, depending on the issue. Other valuable point solutions include sniffer analysis, packet flow visibility and event managers, which can detect network performance issues and send the relevant information to the service desk for resolution.
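At its core, a threshold-driven event manager of the kind described here boils down to a rule table plus a classifier. The sketch below uses made-up warning and alert levels for a utilization metric; real products ship with configurable rule sets and direct service-desk integrations.

```python
from dataclasses import dataclass

# Hypothetical thresholds, in percent of capacity
WARN_AT = 70.0
ALERT_AT = 90.0

@dataclass
class Event:
    metric: str
    value: float
    severity: str  # what the service desk would see on the ticket

def classify(metric, value):
    """Turn one metric sample into a service-desk event, or None if healthy."""
    if value >= ALERT_AT:
        return Event(metric, value, "alert")
    if value >= WARN_AT:
        return Event(metric, value, "warning")
    return None
```

Feeding utilization samples through a classifier like this is what lets performance issues reach the service desk with enough context to be resolved quickly.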