Apr 01 2015

They May Grab Fewer Headlines, But Power and Cooling Remain Essential

The transformation of the data center is changing the way companies have to manage power, cooling and physical infrastructure.

With all of the attention being paid to the cloud, Big Data, mobility and other hot-button IT issues, it’s easy to forget about seemingly mundane concerns such as data center power and cooling.

But IT decision-makers do so at their own peril, because the transformations taking place in the data center are making it essential for IT departments to change the way they manage power and cooling. And any failure to make those changes can leave organizations unacceptably vulnerable to power outages, overheated equipment and excessive utility bills.

The Impact of the Cloud

One common misconception is that increased use of public cloud resources makes power protection in the data center less of an issue for IT staff. After all, if a company’s applications and data reside in a service provider’s facility, then reliable and efficient power should be the provider’s concern — not the IT department’s.

The reality, however, is that cloud adoption simply makes protection of the network equipment that connects an organization to the cloud even more important.

“There was a time when IT wasn’t so concerned with power protection for routers and switches, because people could still get work done on their desktops,” notes Brandon Zimmerman, a senior solution architect for power and cooling at CDW. “Now, the network wiring closet is your lifeline to the cloud, so you have to completely rethink how you’re going to protect that lifeline from a blackout or brownout.”

David Cappuccio, chief of research for the infrastructure team at Gartner, agrees. “As more IT resources are shifted out of the on-premises data center, there is much more emphasis on network reliability,” he observes. “And appropriate power and cooling is fundamental to ensuring that reliability.”

IT managers should consider several factors when it comes to protecting network equipment in the wiring closet. One factor is time tolerance. Every organization has a different tolerance for network outages, depending on the true economic impact of lost connectivity. These tolerances can even vary between an organization’s locations and departments.

Once time tolerances are understood, appropriate power protection can be put in place. In some cases, it may only be necessary to sustain power for a few minutes, so that users can complete their tasks before an orderly shutdown. In others, it may be necessary to keep the network up and running with a backup generator. Alternatively, battery power can be put in place to extend the organization’s ability to continue operations in the event of a power outage.
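To make that sizing exercise concrete, a back-of-the-envelope runtime estimate can translate a time tolerance into a battery specification. The following sketch is a minimal illustration; the load, battery capacity and efficiency figures are assumptions for the example, not vendor specifications.

```python
# Rough UPS runtime estimate: translate a time tolerance into a battery spec.
# All figures below are illustrative assumptions, not vendor specifications.

def estimate_runtime_minutes(load_watts: float,
                             battery_wh: float,
                             inverter_efficiency: float = 0.9,
                             usable_fraction: float = 0.8) -> float:
    """Approximate minutes of runtime a UPS battery can sustain a given load.

    usable_fraction accounts for the portion of rated capacity that can be
    drawn without excessively shortening battery life.
    """
    usable_wh = battery_wh * usable_fraction * inverter_efficiency
    return usable_wh / load_watts * 60

# Example: a wiring closet drawing 1,200 W on a UPS with a 2,000 Wh battery.
runtime = estimate_runtime_minutes(load_watts=1200, battery_wh=2000)
print(f"Estimated runtime: {runtime:.0f} minutes")  # ~72 minutes
```

If the estimated runtime falls short of the organization's time tolerance, that gap is the signal to add battery capacity or step up to generator-backed power.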

Given the increased importance of an organization’s network connection to the outside world, IT departments are paying more attention to the physical quality of their wiring closets as well. Historically, these closets often received scant attention in terms of both cooling and physical organization. To put it more plainly, they were often a bit of a mess. By moving to higher-quality enclosures, better cable management and appropriate cooling, IT departments can avoid the kind of self-inflicted outages that can be disruptive to an organization — and embarrassing for technical staff — while also making it much easier to service and troubleshoot equipment.

The Impact of Virtualization

While many organizations are just starting to offload infrastructure to the cloud, most have already moved ahead very aggressively with server virtualization. By doing so, they have been able to reduce hardware costs and more flexibly allocate computing power to their application workloads.

Because virtualization allows enterprises to get more work done with fewer physical machines, it has eased data center power and cooling requirements to some degree. However, it has also dramatically increased utilization of existing hardware. Where 20 to 30 percent utilization was once the norm, some machines are now running at 80 percent or more. This greater utilization causes hardware to consume more power and generate more heat. The result: Data center hotspots get even hotter, and per-rack power consumption significantly exceeds historic levels.
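As a rough illustration of that effect, a common first-order approximation treats server power draw as linear between idle and peak. The sketch below applies that model with assumed wattages to show why consolidation concentrates heat even while reducing total consumption.

```python
# First-order linear model of server power draw versus CPU utilization:
#   P(u) = P_idle + (P_max - P_idle) * u
# The wattages below are illustrative assumptions for a hypothetical 1U server.

P_IDLE, P_MAX = 120.0, 300.0  # watts

def server_watts(utilization: float) -> float:
    return P_IDLE + (P_MAX - P_IDLE) * utilization

# Before virtualization: 10 servers idling along at 25% utilization.
before = 10 * server_watts(0.25)
# After consolidation: 4 servers running at 80% utilization.
after = 4 * server_watts(0.80)

print(f"Before: {before:.0f} W total, {before / 10:.0f} W per server")
print(f"After:  {after:.0f} W total, {after / 4:.0f} W per server")
# Total draw falls, but each remaining server runs hotter,
# concentrating heat into fewer racks.
```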

“The trend toward extreme density in the data center is presenting IT with new challenges when it comes to managing physical conditions in their facilities,” says Tom Karabinos, director of partner channels at Emerson Network Power. “Those new challenges are driving demand for new solutions and adoption of new practices.”

One such practice is the segregation of hot equipment. In the past, data center managers typically tried to mix equipment running at different temperatures in order to spread the cooling load across their facilities. But as virtualization generates more extreme differences in heat, the trend now is to segregate the hottest equipment in order to more efficiently focus cooling.

“It no longer makes sense to spend energy cooling an entire floor, when only a percentage of your equipment is throwing off most of the heat,” explains Cappuccio. “That’s why we’re seeing data centers using Mylar dividers or even building walls to create special contained regions where they can more efficiently deploy cooling systems for their hottest-running hardware.”

The consolidation of multiple application workloads onto a smaller number of machines through virtualization is also changing the rules of the game when it comes to power distribution. Racks that previously demanded only 3 to 5 kilowatts of power are now demanding 8 to 12 kilowatts. This intensified power demand is leading to broader use of three-phase power distribution units (PDUs), which can meet higher power requirements without the complexity and extra cabling of multiple single-phase units.
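The capacity gain is easy to quantify: apparent power on a three-phase circuit is √3 times the line-to-line voltage times the current. The sketch below compares single- and three-phase circuits at the same amperage, applying the common 80 percent continuous-load derating; the specific voltages and amperage are illustrative assumptions.

```python
import math

# Apparent power available from single- and three-phase circuits,
# with the common 80% continuous-load derating applied.
# Voltage and amperage values are illustrative North American examples.

DERATE = 0.8

def single_phase_kva(volts: float, amps: float) -> float:
    return volts * amps * DERATE / 1000

def three_phase_kva(volts_line_to_line: float, amps: float) -> float:
    return math.sqrt(3) * volts_line_to_line * amps * DERATE / 1000

print(f"Single-phase 208 V / 30 A: {single_phase_kva(208, 30):.1f} kVA")  # ~5.0
print(f"Three-phase 208 V / 30 A:  {three_phase_kva(208, 30):.1f} kVA")   # ~8.6
# A single three-phase feed approaches the 8-12 kW range that would
# otherwise require multiple single-phase circuits to a rack.
```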

The advantages that IT organizations have derived from server virtualization are leading many to embrace storage virtualization as well, which offers similar benefits in terms of more efficient use of hardware and more adaptive allocation of available capacity. This facet of virtualization will have similar consequences in terms of power and cooling. “As companies generate more data and deploy more advanced storage technologies, many are finding that power and cooling for that storage is even more resource-intensive than it was for their servers,” says Cappuccio.

The Rise of DCIM

As virtualization and the cloud combine to transform the data center into an increasingly dense, power- and cooling-hungry environment where the health of every server, storage and networking device is essential to business continuity, data center managers are under more pressure to be smart about not only how they add capacity — but also where. For example, isolation of the hottest equipment into dedicated cooling zones makes sense economically and operationally. It also makes sense to avoid overloading any individual PDUs or uninterruptible power supplies (UPSs).

As organizations address the need for efficient power and cooling, the adoption of data center infrastructure management (DCIM) is on the rise. With DCIM solutions, data center managers can more easily track the equipment in their data centers and how it is behaving. For example, before adding more virtual workloads to a “cool” server that might push it to become a “hot” server — and therefore require it to be moved to a special cooling zone — a data center manager can look for available capacity in a machine that’s already running hot.

Similarly, when it’s time to add more disk capacity to a pooled storage environment, a data center manager can use a DCIM solution to choose the rack with the most power to spare.
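Conceptually, that kind of placement decision is a headroom search. The following sketch is a hypothetical illustration of the idea, not any particular DCIM product's interface; the rack figures and safety margin are invented for the example.

```python
# Hypothetical illustration of DCIM-style placement: pick the rack with the
# most power headroom that can absorb a new device's estimated draw.
# Rack data and thresholds are invented for this example.

from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    pdu_capacity_kw: float
    measured_draw_kw: float

    @property
    def headroom_kw(self) -> float:
        return self.pdu_capacity_kw - self.measured_draw_kw

def best_rack(racks: list[Rack], new_device_kw: float,
              safety_margin_kw: float = 1.0) -> Rack | None:
    """Return the rack with the most spare power after placement, or None."""
    candidates = [r for r in racks
                  if r.headroom_kw - new_device_kw >= safety_margin_kw]
    return max(candidates, key=lambda r: r.headroom_kw, default=None)

racks = [Rack("A01", 12.0, 9.5), Rack("A02", 12.0, 6.0), Rack("B01", 8.0, 7.2)]
target = best_rack(racks, new_device_kw=2.5)
print(f"Place new shelf in {target.name}" if target else "No rack has headroom")
```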

“As IT infrastructure has become more critical to the business, data center managers have become more concerned about how closely they can monitor their environments,” says Emerson’s Karabinos. “DCIM addresses those concerns by giving them direct insight into temperature, power consumption, humidity and device telemetry from a single graphical interface that they can log into from anywhere at any time.”

The Importance of Controlling Costs

Perhaps the most powerful force affecting physical management of the data center is the specter of tight budgets. Despite the growing reliance of organizations on technology, IT budgets generally have remained flat in recent years. Considering the effects of inflation, data center managers are under pressure to do much more with much less.

One way to save money is to use tools that allow remote management of data center facilities. Another is to take advantage of a new generation of prefabricated, all-inclusive enclosures that combine physical management, power distribution, power protection and cooling in a single unit. By using these enclosures, IT staff can avoid time-consuming assembly work.

Srdan Mutabdzija, global data center solution manager at Schneider Electric, points out that standardizing on a common enclosure architecture across all locations drives down ownership costs. “When you use different equipment configured differently at different locations, you waste a lot of time at the beginning of every task first just figuring out what’s there before you can get started on what it is you want to accomplish,” he says. “It’s far more efficient for your technicians to know what they’re walking into, whether they’re at your data center, a remote location or a colocation facility.”

Mutabdzija emphasizes the importance of a modular design approach. “Fixed-capacity designs force you either to over-build, which means you’re spending money in advance unnecessarily, or to swap out your enclosures and lose your investment when you need more capacity,” he says. “With modular designs, you can instead start with right-sized infrastructure and simply add capacity granularly where and when you need it.”

David Slotten, vice president of product marketing at Tripp Lite, suggests another way IT departments can stretch their budgets. To protect devices against power supply issues, many data center managers have deployed two power supplies per machine. But a new generation of PDUs can detect problems and switch power in milliseconds. “If you think about what a midsize enterprise spends on redundant power supplies, switch-enabled PDUs can save you thousands,” he says.
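The savings claim is simple arithmetic. The sketch below compares the two approaches using entirely hypothetical prices and counts; actual costs vary widely by vendor and configuration.

```python
# Back-of-the-envelope comparison of two redundancy strategies for a rack of
# single-corded gear. Every price and count below is a hypothetical placeholder.

servers = 20
second_psu_cost = 250.0  # assumed cost of adding a redundant PSU per server
ats_pdu_cost = 1500.0    # assumed cost of one automatic-transfer-switch PDU

dual_psu_total = servers * second_psu_cost
print(f"Redundant PSUs: ${dual_psu_total:,.0f}")  # $5,000
print(f"ATS PDU:        ${ats_pdu_cost:,.0f}")    # $1,500
# At these assumed prices, one switch-enabled PDU protects the whole rack
# for less than a third of the cost of per-server redundant supplies.
```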

Given the uniqueness of every company’s power and cooling needs — and given the many ways power and cooling solutions can be tailored to meet those needs — many organizations can find great value in bringing in a partner with the expertise to propose a complete data center plan.

“You definitely don’t want to skimp on the protection of your IT infrastructure, but there’s absolutely no reason to waste money on an inefficient environment either,” says CDW’s Zimmerman. “That’s why, with the changes taking place in today’s data centers, it’s a great time to re-evaluate your approach to power and cooling to find out if you’re either leaving yourself exposed to unidentified risks or missing out on opportunities for substantial savings.”


The Importance of Going Green

IT infrastructure can account for 20 percent or more of an office building’s energy consumption, and in developed nations it has been estimated to contribute 5 to 6 percent of total power consumption. Those numbers will likely rise with the continued growth of technology.

Energy consumption by IT is therefore coming under greater scrutiny, in terms of both cost and corporate policies related to climate change.

IT departments can take many measures to reduce their power consumption, including the use of more energy-efficient equipment, higher utilization of hardware capacity and automated sleep modes during periods when equipment is not in use. Measures directly related to power and cooling include rightsizing UPS units (since running oversized units at low utilization is inefficient) and choosing devices that meet the specifications of the federal ENERGY STAR program.
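The rightsizing point can be made concrete with a simple loss model: UPS efficiency typically falls at low load fractions because fixed losses dominate. In the sketch below, the efficiency figures, load and electricity rate are assumptions for illustration.

```python
# Why rightsizing a UPS matters: efficiency typically drops at low load
# because fixed losses dominate. All figures here are assumed for illustration.

HOURS_PER_YEAR = 8760
KWH_RATE = 0.10  # assumed $/kWh

def annual_loss_cost(load_kw: float, efficiency: float) -> float:
    """Cost of energy lost in the UPS itself over a year."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * HOURS_PER_YEAR * KWH_RATE

load_kw = 8.0
oversized = annual_loss_cost(load_kw, efficiency=0.85)   # large UPS at ~20% load
rightsized = annual_loss_cost(load_kw, efficiency=0.94)  # unit at ~70% load

print(f"Oversized UPS losses:  ${oversized:,.0f}/yr")   # ~$1,237
print(f"Rightsized UPS losses: ${rightsized:,.0f}/yr")  # ~$447
```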
