Everyone’s talking about green IT, and with good reason: It takes a tremendous amount of energy to power a data center. Yet going green is easier said than done. With that in mind, here are five areas where a few small process tweaks can pay big dividends by reducing your data center’s overall power consumption.
Consider increasing the thermostat setting to cut costs. Traditionally, data centers are cold because evidence suggests that hardware runs better when it’s cooler. But making a data center too cold can result in a big energy bill.
Working closely with IT equipment manufacturers, the American Society of Heating, Refrigerating and Air-Conditioning Engineers determined that data center equipment can withstand higher temperatures and wider humidity ranges than previously thought. Five years ago, ASHRAE recommended an environmental range of 68 to 77 degrees, with relative humidity between 40 and 55 percent. In 2008, the organization widened the recommended temperature range to 64 to 81 degrees and the relative humidity range to 35 to 60 percent.
Most data centers operate at between 65 and 70 degrees, while some run as low as 60 degrees to guard against emergencies, such as failure of the cooling systems. The strategy is to make the data center as warm as possible without putting equipment at risk of overheating, says Bill Kosik, Hewlett-Packard’s energy and sustainability director.
The trade-off: Any rise in temperature reduces the time you have to respond in an emergency, such as a cooling-system failure. A typical rule of thumb: For every 1 to 2 degrees you raise your set point temperature, you will save 2 to 4 percent on cooling, Kosik says.
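Kosik’s rule of thumb is easy to turn into a back-of-the-envelope estimate. Here’s a minimal sketch in Python; the function name and the figures in it are illustrative, not from any vendor tool:

```python
def cooling_savings(annual_cooling_kwh, degrees_raised,
                    savings_per_degree=0.02):
    """Estimate annual cooling energy saved by raising the set point.

    Applies the rule of thumb of roughly 2 to 4 percent savings per
    1 to 2 degrees, treated here as a flat 2 percent per degree.
    A linear rate is only a rough approximation.
    """
    fraction_saved = degrees_raised * savings_per_degree
    return annual_cooling_kwh * fraction_saved

# Raising the set point 5 degrees in a facility that uses
# 1,000,000 kWh a year for cooling, at 2 percent per degree:
print(cooling_savings(1_000_000, 5))  # 100000.0 kWh saved
```

The real savings curve is not linear, so treat anything like this as a first-pass estimate, not an engineering calculation.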
A few design changes can help improve the air flow, thereby reducing your cooling costs. One option is rearranging the perforated floor tiles to implement a hot-aisle/cold-aisle configuration. You can also install energy-efficient lighting and retrofit cooling systems with variable speed motors so they generate less heat and consume less power.
Using contained cabinets that take air from the floor and vent it directly out the top will dramatically improve air flow, Kosik says.
American Power Conversion and numerous other manufacturers offer products in this area, including highly efficient in-row fan and chilled-water solutions to cool hot spots at the rack level. APC also sells a cabinet, called a Rack Air Removal Unit, that prevents hot air from mixing with cold air.
With power management tools and remote systems management software, IT administrators can enforce a power-off policy from the data center, shutting down, hibernating or putting to sleep any computers that sit idle at night.
Energy savings from shutting down machines at night could net $15 to $20 per computer annually, says Avocent Chief Technology Officer Ben Grimes. Avocent’s LANDesk Management Suite and tools like it let IT administrators remotely manage the power settings on every computer on the network and automatically shut down, hibernate or suspend PCs at night. The software lets IT customize wattage settings for specific groups of users and shows estimated power savings before policies take effect. It also provides reports on the power saved, in both kilowatt-hours and dollars.
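The per-PC savings figure is straightforward to sanity-check. A minimal sketch of the arithmetic, with all wattage and rate figures illustrative rather than taken from any product:

```python
def fleet_savings(num_pcs, watts_awake, watts_asleep,
                  off_hours_per_day=12, dollars_per_kwh=0.08):
    """Estimate annual dollars saved by sleeping idle PCs overnight.

    These figures are assumptions for illustration; real tools such
    as LANDesk measure per-group wattage and report actual
    kilowatt-hours saved.
    """
    watts_saved = watts_awake - watts_asleep
    kwh_per_year = num_pcs * watts_saved * off_hours_per_day * 365 / 1000
    return kwh_per_year * dollars_per_kwh

# 1,000 PCs drawing 50 W idle vs. 3 W asleep, parked 12 hours a night:
print(fleet_savings(1000, 50, 3))  # roughly $16,000 a year, ~$16 per PC
```

With these assumed numbers the result lands in the same $15-to-$20-per-machine range Grimes cites; different idle wattages or electricity rates shift it accordingly.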
Lenovo offers LANDesk with its desktop and notebook PCs, while Hewlett-Packard offers similar remote power management software, called Verdiem Surveyor. The Verdiem product lets system administrators centrally control desktop power settings.
IT organizations can install equipment and sensors to measure everything from the amount of energy that servers, storage, networking and cooling equipment use to the temperature and humidity in front of server racks and in every corner of the data center. That provides the baseline data IT administrators need to determine how to make their data centers more efficient, which in turn saves energy and money.
“Without measuring, you have no basis for trying to optimize the data center,” says Herman Chan, manager of Raritan’s power management business unit. “If you don’t measure, how do you know if you are overcooling or if you have hot spots in certain rows? How do you know if you’re just running 10 percent of the nameplate power in any one rack or about to trip a circuit breaker because you are consuming 80 to 90 percent of the load?”
Such data lets IT pinpoint hot spots that need attention, raise the temperature of the data center and then analyze the effects. You might also consider changing the voltage settings on equipment: Rather than converting power to 120 volts, run equipment at higher voltages, such as 208 to 480 volts, so the power supplies can run more efficiently.
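The load check Chan describes — comparing a rack’s measured draw against its nameplate rating — boils down to simple arithmetic. A sketch, with the function name and thresholds as illustrative assumptions:

```python
def rack_load_status(measured_kw, nameplate_kw, low=0.10, high=0.80):
    """Classify a rack's measured power draw against its nameplate.

    Flags racks running under 10 percent of nameplate (stranded
    capacity) or over 80 percent (at risk of tripping a breaker),
    echoing the figures Chan mentions. Thresholds are illustrative.
    """
    fraction = measured_kw / nameplate_kw
    if fraction < low:
        return "underutilized"
    if fraction > high:
        return "near breaker limit"
    return "ok"

print(rack_load_status(0.8, 10))  # underutilized (8 percent of nameplate)
print(rack_load_status(9.0, 10))  # near breaker limit (90 percent)
```

In practice, rack-level power distribution units report these numbers continuously; the point is that without the measurement, neither condition is visible.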
Most new uninterruptible power supplies maintain at least 97 percent efficiency, which means only 3 percent of incoming power leaks out as heat. Older UPS systems operate at 70 to 80 percent efficiency, which means 20 to 30 percent of power is lost.
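The gap between those efficiency figures is worth working through, because every lost watt also becomes heat the cooling plant must remove. A minimal sketch of the arithmetic, with the 100kW load figure assumed for illustration:

```python
def ups_heat_loss_kw(load_kw, efficiency):
    """Power lost as heat by a UPS serving a given load.

    Input power = load / efficiency; the difference between input
    and load is dissipated as heat (which then has to be cooled,
    compounding the cost).
    """
    input_kw = load_kw / efficiency
    return input_kw - load_kw

# A 100 kW load on an old 75-percent-efficient UPS vs. a new
# 97-percent-efficient one (efficiency figures from the ranges above):
print(round(ups_heat_loss_kw(100, 0.75), 1))  # 33.3 kW lost as heat
print(round(ups_heat_loss_kw(100, 0.97), 1))  # 3.1 kW lost as heat
```

Roughly a tenfold difference in waste heat at the same load, before counting the extra cooling energy the older unit forces you to spend.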
Buying new modular UPS systems, such as APC’s Symmetra PX, can also save energy. A 500kW system, for example, can be made up of twenty 25kW power modules. In the past, IT departments traditionally ran two large UPSes side by side for redundancy, with each UPS operating at a 50 percent load (or less, if the data center anticipated growth).
Today, IT departments no longer need to buy two large UPS systems for redundancy, says Bill Bockoven, vice president of sales at APC. With modular UPS systems, if one module goes down, another can pick up the workload. The result: IT departments can buy one modular UPS, operate it at 90 percent load and still get the redundancy they need.