May 13, 2014

5 Power and Cooling Myths You Should Stop Believing

Following these common but mistaken notions could be costing your organization money.

Conventional wisdom isn’t necessarily true, and this especially applies to power and cooling in the data center. Many popular IT beliefs are in fact myths, and clinging to these notions can impede energy efficiency or cost savings.

1. Evaporative Cooling Won’t Work in Humid Climates

Relative humidity does affect the efficiency of an evaporative cooler, but a number of manufacturers offer indirect evaporative cooling systems, which can deliver substantial cooling at a lower cost than standard air conditioning. The technology pairs a heat exchanger with evaporative cooling, so the interior air isn't humidified.

Evaporative cooling can be used in conjunction with air conditioning in a hybrid system as well. Some manufacturers claim savings of up to 95 percent for indirect evaporative cooling over standard air conditioning. Other types of cooling systems achieve high efficiency through a heat pump that circulates water through underground pipes to dissipate heat.

2. PUE Is an Ineffective Measure of Data Center Efficiency

Power usage effectiveness (PUE) is simple, but not ineffective. Most objections to PUE revolve around the fact that it can be difficult to measure. Unless a data center is specifically equipped to measure power consumed in other aspects of operation, it can be hard to determine what portion of the total energy used is from IT equipment.

Because it's hard to measure, many data centers monitor only a few points (typically the servers and UPSs), and most don't capture the power consumed by cooling or lost in distribution. The lower the PUE, the better; the ideal score approaches 1.0. A PUE of 2.9, for example, means the data center uses 2.9 times as much power as its servers consume.
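
To make the ratio concrete, here is a minimal sketch using hypothetical meter readings; the kilowatt figures are assumptions for illustration, not measurements from any real facility:

```python
# Hypothetical meter readings, in kilowatts, for illustration only.
it_load_kw = 500.0            # servers, storage and network gear
cooling_kw = 350.0            # CRAC/CRAH units, chillers, pumps
distribution_loss_kw = 100.0  # UPS and power-distribution losses
lighting_kw = 50.0            # lighting and other facility overhead

total_facility_kw = it_load_kw + cooling_kw + distribution_loss_kw + lighting_kw

# PUE = total facility power / IT equipment power; the ideal value is 1.0.
pue = total_facility_kw / it_load_kw
print(f"PUE = {pue:.2f}")  # 2.00 here: the facility draws twice what the IT gear uses
```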

3. Running Equipment in Your Own Computer Room Saves Money

By design, large data centers can consume much less energy per CPU than a small installation. Hosting or cloud providers with dedicated large-scale cooling systems achieve high efficiencies. Factor in economies of scale, and these facilities can provide computing power at a far lower cost than an in-house data center or server room, particularly one that’s part of a general-use building not specifically engineered for optimum energy efficiency.

4. The Colder, the Better

Some folks maintain that keeping servers and other IT equipment cold will extend their useful life. That's partly true: running a system hotter than recommended can cause premature failure. However, lowering the operating temperature below the recommended range doesn't extend equipment life.

Many data centers run ambient temperatures 20 or 30 degrees cooler than necessary to ensure that all systems are under the maximum operating temperature. Optimizing air flow, using hot and cold aisles and measuring temperatures throughout a data center can enable a higher average ambient temperature at a much lower cost than keeping the entire data center icy cold.
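
To see why a warmer setpoint pays off, here is a rough sketch; the roughly 4 percent cooling-energy savings per degree Fahrenheit is a commonly cited rule of thumb, and the baseline load is an assumption for illustration:

```python
# Rule-of-thumb estimate; the ~4% savings per degree Fahrenheit and the
# baseline cooling load are illustrative assumptions, not measured data.
baseline_cooling_kw = 350.0   # assumed cooling load at a very cold setpoint
savings_per_degree = 0.04     # ~4% of cooling energy per degree F raised
degrees_raised = 10           # e.g., raising the setpoint from 65F to 75F

estimated_savings_kw = baseline_cooling_kw * savings_per_degree * degrees_raised
print(f"Estimated cooling savings: about {estimated_savings_kw:.0f} kW")
# Roughly 140 kW on a 350 kW cooling load; a compounding model gives a
# somewhat smaller figure, but the direction and scale are the same.
```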

5. Solid-State Disks Will Reduce Power Consumption

Solid-state disks (SSDs) do consume less power than traditional hard disks, especially high-performance 10,000 and 15,000 rpm drives. However, even a 15,000 rpm drive tops out at around 10 watts, which isn't a huge amount compared with the roughly 4 watts an SSD draws.
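
A quick back-of-the-envelope calculation shows how small the gap is; the wattages, 24x7 duty cycle and electricity price below are assumptions for illustration:

```python
# Back-of-the-envelope comparison; the wattages, 24x7 duty cycle and
# electricity price are illustrative assumptions.
hdd_watts = 10.0          # 15,000 rpm drive near its maximum draw
ssd_watts = 4.0           # typical SSD draw
hours_per_year = 24 * 365
price_per_kwh = 0.10      # assumed electricity cost in dollars

kwh_saved = (hdd_watts - ssd_watts) * hours_per_year / 1000.0
dollars_saved = kwh_saved * price_per_kwh
print(f"Savings per drive: {kwh_saved:.0f} kWh/year, about ${dollars_saved:.2f}")
# Roughly 53 kWh, or about $5 per drive per year, before the SSD's
# purchase-price premium is even considered.
```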

Given that SSDs cost an average of 10 times more per gigabyte than hard disks, IT managers are unlikely to ever save enough on power to justify the expense. They can save more by using more efficient CPUs and power supplies.

