May 23 2012
Data Center

NVIDIA’s Shared GPU Could Be What Virtual Desktops Need to Break Through the Glass Ceiling

The limited GPUs of today’s thin clients could soon be a distant memory.

Most IT workers would probably love to roll out thin clients or virtual desktops in their organization today. Having the power and the flexibility to centrally manage and secure the entire fleet of enterprise devices is a significant convenience.

But the challenge with virtual desktops is that applications that make heavy use of the graphics processing unit (GPU), especially video-intensive ones, can suffer in performance. This in turn causes some users to shy away from virtual desktops.

NVIDIA, however, is working on the Virtual GPU Experience (VGX), a GPU shared across multiple virtual machines that it believes will lessen the performance disparity between thick clients and thin clients. And Gunnar Berger, a research director at Gartner, is a firm believer in NVIDIA’s new product.

Berger got a sneak peek at the technology, and he believes the shared GPU has the potential to make virtual desktops mainstream. He shared a few good reasons why in his post on the Gartner blog:

Application Compatibility: [For] medical imaging or high-end engineering workloads, there are some applications out there that rely on a GPU (and that won’t work with a virtual GPU). Having a real GPU makes it possible for these applications to run on a virtual desktop.

Reduce CAPEX: I know some environments spend anywhere from $2,000 to $10,000 per engineering desktop, and these desktops have none of the advantages of virtual desktops. So not only are they very expensive to purchase (CAPEX), but they are also expensive to maintain (OPEX). If these same environments move to a virtual desktop model, there is potential to save on CAPEX.

Better User Experience: The user experience is the big win for virtual desktops; people genuinely like it (or genuinely have no idea that they are running on a virtual desktop). Having a GPU makes it possible for IT departments to deliver a better user experience for things like Aero or transparent windows.

CPU Offload: One thing that I find particularly interesting about this technology is that there is potential to offload the rendering of the video from the x86 architecture. The shared GPU technology is cross-platform, [which means] Citrix, VMware [and] Microsoft will benefit from this technology. This removes some of the stickiness from a hardware choice but accomplishes the same task: reducing the amount of CPU required to render a video.

User Density: The next obvious step from CPU offload: if you can decrease the amount of CPU required per user, you can potentially get more VMs per core (more users per core). This is the standard pitch for increased user density.

Protocol: I’ve asked a lot of questions about this, and if I’m understanding this correctly, the shared GPU can basically send a video stream down a different channel using H.264 and has the potential to improve as the codecs improve. This means it could work outside the standard protocol and potentially open the floodgates for the future of protocols in the virtual desktop market.
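Berger’s CAPEX and user-density points are easy to put into rough numbers. The sketch below is a back-of-envelope calculation in Python, not vendor data: the per-seat desktop cost falls within the $2,000 to $10,000 range Berger cites, while the server cost, consolidation ratio and per-user CPU figures are purely illustrative assumptions.

    # Back-of-envelope sketch of the CAPEX and user-density arguments.
    # All figures below are illustrative assumptions, not NVIDIA or Gartner data.

    # --- CAPEX: physical engineering desktops vs. shared virtual-desktop hosts ---
    engineers = 50                    # hypothetical team size
    physical_desktop_cost = 5_000     # per seat, within the $2,000-$10,000 range cited
    server_cost = 40_000              # assumed cost of one GPU-equipped host
    users_per_server = 25             # assumed consolidation ratio

    physical_capex = engineers * physical_desktop_cost
    servers_needed = -(-engineers // users_per_server)   # ceiling division
    virtual_capex = servers_needed * server_cost

    print(f"Physical desktops: ${physical_capex:,}")
    print(f"Virtual desktops:  ${virtual_capex:,} ({servers_needed} servers)")

    # --- User density: offloading video rendering from the CPU to the GPU ---
    cores_per_server = 32
    cpu_share_per_user = 2.0          # assumed vCPU demand with software rendering
    cpu_share_offloaded = 1.2         # assumed demand once the GPU handles rendering

    density_before = cores_per_server / cpu_share_per_user
    density_after = cores_per_server / cpu_share_offloaded
    print(f"Users per server: {density_before:.0f} -> {density_after:.0f} after GPU offload")

Under these assumed numbers, the virtual-desktop hosts cost a fraction of the physical fleet and support noticeably more users per server once rendering moves off the CPU; the real savings would depend entirely on the workload and consolidation ratio an organization actually achieves.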

If the GPU were as powerful as a thick client’s, would that be enough to sway you toward deploying a virtualized desktop environment?
