Nov 12 2009
Hardware

Review: The HP DL160 G6, a Solid Leap for Servers

The HP DL160 G6 increases performance through a new CPU and motherboard architecture.

With the DL160 G6, HP maintains its reputation as a provider of servers that are rugged and easy to install in the rack. The device ships with the standard HP deployment and management tools that make HP one of the most desirable server brands to have in the data center. The new setup and deployment tools (Version 1.0) replace the HP SmartStart tools and make for painless deployment of Microsoft, Novell and Linux operating systems to the new hardware.

In addition, the DL160 G6 incorporates a new CPU and motherboard architecture that increases performance markedly for the Xeon processors. These improvements allow for much greater RAM capacity, which makes the DL160 an outstanding choice for an application or virtualization server.

Advantages

A new chipset, a new bus architecture and CPU improvements make the HP DL160 G6 a solid leap forward in server technology. These technologies are not unique to HP, but the company's advantages in deployment, support and maintenance combine to maintain the brand's edge over its competitors.

The architecture improvement comes down to the new Intel 5520 chipset, which replaces the traditional front-side bus (FSB) architecture for the Xeon with what Intel has dubbed the QuickPath Interconnect. For those not steeped in the arcana of CPU and chipset design, here's a brief explanation of why these changes matter.

Computer speed has traditionally been measured by the speed of the central processor. For years, a CPU was a single entity: one chip equaled one core and its associated communications circuitry. But as Moore's Law began to butt up against the boundaries of physics, power consumption and heat dissipation, the design strategy shifted.

Taking a page from supercomputer design, chip manufacturers shifted to a strategy of putting multiple, lower-power CPU cores on a single die to leverage parallel processing. Thus was born the multiple-core architecture we see in use on desktops such as Intel's Core 2 and AMD's Athlon/Phenom, and in servers such as Intel's Xeon and AMD's Opteron.

But as processing power grows with the addition of more cores, so too does the need for communication between the CPU and the rest of the computer's components. That need is met, in part, by several levels of cache memory that queue data and instructions for the cores. These caches are built onto the CPU die and are named for their closeness to the cores as Level 1, Level 2 and Level 3. Level 1 cache is closest to the cores and is assigned to individual cores; Level 3 cache is usually the largest, is shared by all the cores and sits at the edge between the CPU and the rest of the machine.
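To make the role of these cache levels concrete, here is a minimal C sketch of our own (an illustration, not part of HP's or Intel's tooling): it performs the same number of reads over working sets of increasing size, and once the data no longer fits in the on-die caches, the time per read typically climbs because accesses must travel out to main memory.

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Working sets from 16 KB (fits in L1) up to 64 MB (well past a typical L3). */
    size_t sizes[] = { 16u << 10, 256u << 10, 4u << 20, 64u << 20 };
    const long total_reads = 100L * 1000 * 1000;  /* same amount of work at every size */
    volatile unsigned long sum = 0;               /* keeps the compiler from skipping the loop */

    for (int i = 0; i < 4; i++) {
        size_t n = sizes[i];
        unsigned char *buf = calloc(n, 1);
        if (!buf)
            return 1;

        size_t idx = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* Step through the buffer one 64-byte cache line at a time, wrapping around. */
        for (long r = 0; r < total_reads; r++) {
            sum += buf[idx];
            idx += 64;
            if (idx >= n)
                idx = 0;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("%8zu KB working set: %8.1f ms for %ld reads\n", n >> 10, ms, total_reads);
        free(buf);
    }
    return (int)(sum & 1);
}

Built with a standard C compiler (for example, cc -O2 cache_sweep.c, where the file name is our own), the program prints one line per working-set size; the jump in elapsed time marks where the data spills out of cache.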

Last, there is the controller chip on the motherboard itself, which is the traffic cop for information coming in from the RAM, video and PCI buses (including PCI-connected drives). This is called the Northbridge. The speed at which the Northbridge can communicate with the CPU is the all-important FSB speed.

Recent multiple-core CPUs (such as the quad-core Intel Xeon and AMD Opteron) have increased in speed and complexity to the extent that it isn't really accurate to call them CPUs anymore; they are more like systems on a chip. And as the CPUs become faster, they become starved for data because the shared FSB can't deliver it quickly enough.

Intel has overcome this disadvantage with what it brands the QuickPath Interconnect. QuickPath eliminates the Northbridge altogether: the memory controller moves onto the CPU itself, so the processor accesses system RAM directly, while a dedicated QuickPath link connects the CPU to a revised I/O hub that handles video, disks and other peripherals. Theoretically, this should double or triple the throughput between the CPU and the rest of the unit, including memory, video and disks.
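Raw CPU-to-memory throughput is the easiest way to see what that wider path buys. The short C sketch below is our own crude memory-copy micro-benchmark (not a vendor benchmark and not part of this review's formal testing); running it on an FSB-based server and then on a QuickPath-based one is one rough way to compare the two designs.

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const size_t bytes = 256u << 20;   /* 256 MB source and destination buffers */
    const int passes = 10;
    char *src = malloc(bytes);
    char *dst = malloc(bytes);
    if (!src || !dst)
        return 1;
    memset(src, 1, bytes);             /* touch every page before timing */
    memset(dst, 0, bytes);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < passes; i++)
        memcpy(dst, src, bytes);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* Each pass reads and writes the full buffer, so count the bytes twice. */
    double gib = (2.0 * bytes * passes) / (1024.0 * 1024.0 * 1024.0);
    printf("moved %.1f GiB in %.2f s (%.2f GiB/s)\n", gib, secs, gib / secs);

    free(src);
    free(dst);
    return 0;
}

The absolute number depends heavily on the compiler's memcpy and on how many memory channels are populated, so treat the output as a relative comparison between machines, not an official specification.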

The advantage should be obvious: a solid leap forward in system performance over the server's predecessors.

For this review, I put the DL160 G6 through its paces in our lab against an HP DL360. The test environment was a VMware virtual machine consisting of a 40GB drive image of Windows Server 2003 R2, running Apache and a MySQL database-driven web application. Even though the DL160 G6 was running only the 32-bit OS with mirrored drives, it booted the image about 10 seconds faster than the DL360.

I used Apache JMeter to test the virtual machine's websites and found that the DL160 was about 20 percent faster in page load speeds, averaging about 1,000 milliseconds per page at 20 requests per second, versus 1,280ms per page for the DL360. This can logically be attributed to the architectural improvements of the DL160 G6.
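For readers who want to reproduce a much simpler version of this measurement without JMeter, the hypothetical probe below (our sketch, not the test plan used for the numbers above) fetches a placeholder URL repeatedly with libcurl and reports the average page load time. Unlike the JMeter run, it issues requests one at a time rather than at a sustained 20 requests per second.

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>
#include <curl/curl.h>

/* Discard the response body; only the timing matters here. */
static size_t sink(char *data, size_t size, size_t nmemb, void *userdata)
{
    (void)data; (void)userdata;
    return size * nmemb;
}

int main(void)
{
    const char *url = "http://testserver.example/app/";  /* placeholder, not the lab URL */
    const int requests = 100;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, sink);

    double total_ms = 0.0;
    for (int i = 0; i < requests; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (curl_easy_perform(curl) != CURLE_OK)
            return 1;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total_ms += (t1.tv_sec - t0.tv_sec) * 1000.0 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }
    printf("average load time: %.0f ms over %d requests\n", total_ms / requests, requests);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}

It builds with cc -O2 probe.c -lcurl on a system with the libcurl development headers installed.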

Disadvantages

The DL160 G6, like most 1U servers, packs a lot into a tight space by necessity. But there are good ways and bad ways to do this. On the DL160 series, the half-height PCI expansion card for the RAID controller is locked away under a heat shield that is bolted to the case, so replacing the card essentially requires complete disassembly of the unit. Technically, it's possible to do this with the unit slid out on its rack rails, but in our lab we had to remove the unit altogether to reach screws in the back. It's a bit of a design flaw.

When purchasing the new DL160 Series, it’s critical to know what your needs are and to match the memory technology to those needs. Then carefully examine the specifications of pre-configured base models.

Connor Anderson is vice president of Riverfront Technology in Clinton, Iowa.