At the end of the day, the microprocessor is only one part of the computing equation, but it is an incredibly important part. Increasingly, the processor is taking on more tasks than before, particularly as IT departments turn to server virtualization to reduce server sprawl and get the most juice out of their processing resources.
BizTech editor-in-chief Lee Copeland spoke recently with Kevin Knox, vice president of commercial business at Advanced Micro Devices, about the chip-maker’s Barcelona quad-core server line, built on AMD’s Family 10h architecture, and about how AMD tweaked the processor to better support the horsepower needed for virtualization. The chips became available to the channel in late March, and AMD officials say they should start making their way into IT shops in early May.
BizTech: First off, can you tell us a little bit more about where AMD is with the Barcelona launch, and why you think the quad-core architecture makes sense for small to medium-size businesses?
Kevin Knox: It’s really important to highlight that the release of Barcelona is really about the introduction of our next-generation processor architecture, more than just the quad-core chips. Yes, we’re introducing quad-core, but more important, we’ve made pretty significant enhancements to the processor and the core architecture that we think will have an impact across all different market segments and all different application workloads.
As an example, we’ve added some things around virtualization that we think will have a profound impact on virtualization performance. So, it’s really first and foremost about the new architecture. And then, obviously, we’re introducing quad-core, which we think certain types of applications — specifically those written to take advantage of multithreading — will be able to exploit with the multiple cores inside of Barcelona.
BizTech: In terms of applications that can take advantage of multithreading, virtualization obviously is one of those, and I know that there’s support for both virtualization and power management. So, let’s talk about both of those and start with virtualization: How is this chip set going to help better enable virtualization?
Knox: Well, there are really two ways that it does that. First is the core architecture itself and some of the things we’ve done with HyperTransport and integrating the memory controller. We continue to see tremendous, tremendous performance levels, particularly in some of the higher-end systems. So, if you look at AMD’s four-way market share — for example, in the United States, we have market share in the high 50 percent range — a lot of that has to do with the architecture and the fact that it is extremely good on memory-intensive applications like virtualization.
The other thing that we’ve done with Barcelona, as we’ve done in the past, is we’ve actually added an instruction set which allows virtualization software to run much more effectively. Our goal, and I think the goal of the industry, is to make sure there is no performance penalty when you’re running in a virtual session versus a native session. So, at the processor level, by working very, very closely with the key software vendors, we’ve made sure that we are eliminating any potential performance impact by running virtualization.
BizTech: Can you give me an example of how performance may be affected and what you’ve done at the chip level to deal with that?
Knox: At a very cursory level, we have essentially taken things that traditionally had been done with more standard instructions through software and put those instructions into the hardware. So, obviously, the fact that we’re doing things natively to enhance virtualization has a major impact on that. This is not AMD trying to replace VMware or Xen or anybody else; it’s really us working closely with them to see what functions within virtualization we can off-load and handle in silicon.
BizTech: And is there a way that you’re increasing how much memory can be handled at the chip level, as opposed to having that happen at the software level?
Knox: Yes. We’re always working from a memory perspective, and there are two ways to look at virtualization. The first is, how quickly can a virtual session run? The second way to look at it, and frankly the one we’re seeing more people drawn to, is, how many virtual sessions can I get on a specific server?
And if you look at the latter one, that has to do with memory and the way that information is passed between the separate virtual sessions on top of a specific server. And when you look at some of the technology architectures that we have, [such as] HyperTransport technology, and the fact that we don’t have a front-side bus, which in a lot of cases has been the bottleneck in systems architectures in the past — we think we have eliminated that, and now we’re further enhancing it with specific virtualization instructions.
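Knox’s second question — how many virtual sessions fit on one server — often comes down to memory, as he notes. A rough sizing sketch, assuming memory is the binding constraint and all VMs are identically sized (the function name and figures below are illustrative assumptions, not AMD guidance):

```python
# Hedged sketch: estimate VM density on a host when memory is the
# limiting resource. All numbers here are illustrative only.

def max_vm_sessions(host_ram_gb, hypervisor_overhead_gb, vm_ram_gb):
    """How many identically sized VMs fit in the host's remaining RAM."""
    usable = host_ram_gb - hypervisor_overhead_gb
    return max(0, int(usable // vm_ram_gb))

# Example: 32 GB host, 2 GB reserved for the hypervisor, 4 GB per VM.
print(max_vm_sessions(32, 2, 4))  # -> 7
```

In practice, CPU, I/O bandwidth, and memory-page sharing between guests all shift this number, which is why the memory-controller and HyperTransport work Knox describes matters beyond a simple capacity count.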
BizTech: A key focus of the new chip set has been power. What kind of things has AMD done to improve power consumption?
Knox: Well, you know, power consumption is an ongoing issue. And before I answer your specific question, I think it’s important to talk about power in general. Because when we talk to a lot of people, specifically a lot in the small and medium business and the government and local space, power can mean a lot of things.
We look at power in four separate ways. First is the actual power: “How much is my electric bill to run this technology?” And, certainly, it’s getting a lot of attention with, you know, oil hitting over $100 a barrel. So, the dollars and cents of having high power requirements — of having to have that electricity — is a big issue.
The second one is actually the opposite, which is air conditioning: “How much air conditioning do I need to cool these servers?” There was actually a study done not too long ago that basically determined that many organizations were spending as much, if not more, on cooling the servers than on the electricity to run them. So, the cooling is a major aspect.
The third is density. And this is just basically built off of the first two. We need to make sure that we’re getting better utilization out of our assets, whether it’s a data center or whether it’s a wiring closet. But the fact is, a lot of people are not getting full utilization because (a) they either can’t cool it, or (b) they can’t get enough electricity to run it. And building new data centers is a pretty expensive endeavor.
And, the final one is really the green aspect. We need to look at the impact that some of the technology is having as far as emissions and that kind of stuff; so, there’s a green aspect to this as well. ... In the end, the processor is only one part of the equation. We really need to look at system-level power requirements, and we work very closely with our key partners to make sure that we’re developing systems that have fantastic thermal characteristics and allow IT decision-makers to make a decision and continue to benefit from it, because the power requirements stay stable.
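Knox’s first two cost factors — the electric bill and the air conditioning — can be put on the back of an envelope. A minimal sketch, where the cooling multiplier stands in for the study’s finding that cooling can cost as much as the electricity itself (all figures are illustrative assumptions, not AMD data):

```python
# Hedged sketch: rough annual cost of powering and cooling one server.
# cooling_multiplier=1.0 means cooling costs as much as the electricity,
# per the study Knox mentions. All numbers are illustrative only.

def annual_power_cost(watts, dollars_per_kwh, cooling_multiplier=1.0):
    """Electricity cost per year, plus cooling as a multiple of it."""
    kwh_per_year = watts / 1000 * 24 * 365
    electricity = kwh_per_year * dollars_per_kwh
    return electricity * (1 + cooling_multiplier)

# Example: a 400 W server at $0.10/kWh, with cooling matching the power bill.
print(round(annual_power_cost(400, 0.10), 2))  # -> 700.8
```

Multiplied across a rack, this is the arithmetic behind Knox’s density point: every server you consolidate away through virtualization removes both its power draw and its share of the cooling load.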