Sep 24 2024
Data Center

What Is a Data Processing Unit, and How Does It Help with Computing?

DPUs allow enterprises to offload networking capabilities from CPUs and GPUs, making other processors more efficient.

Graphics processing units have become much more prominent in recent years because of their ability to perform the complex calculations on large data sets that underpin modern artificial intelligence applications.

And while GPUs are certainly having their moment, they are not the only important component in an enterprise’s toolbox of processing capabilities. 

Alongside GPUs and central processing units are data processing units, or DPUs. “The CPU is for general-purpose computing, the GPU is for accelerated computing, and the DPU, which moves data around the data center, does data processing,” according to NVIDIA CEO Jensen Huang.

By leveraging DPUs, IT leaders can optimize their infrastructure, improve performance and alleviate CPU workloads, making them a critical component in modern data centers and cloud environments.


What Is a DPU, or Data Processing Unit, and How Does It Work?

The best way to think about a DPU, says Jim McGregor, principal analyst at TIRIAS Research, is as “an internal or overhead workload accelerator.”

It’s “designed to handle the overhead tasks normally required by the CPU, such as security, network processing or compression,” he says, adding that network processing often refers only to data transfer. 

While a DPU could be a completely programmable unit, such as a field-programmable gate array, most are essentially a “system on a chip” (SoC), with core blocks optimized for specific functions, according to McGregor.

Importantly, he says, “they are each unique and not interchangeable. A DPU from one vendor may have different functions and features than a DPU from another vendor.”
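To make that non-interchangeability concrete, here is a toy sketch; the vendor names and engine names below are invented for illustration and do not reflect any real product’s feature list.

```python
# Toy model of a DPU as a system-on-a-chip exposing fixed-function
# offload engines. Feature sets vary by vendor and are not interchangeable.

from dataclasses import dataclass, field

@dataclass
class DPU:
    vendor: str
    offload_engines: set[str] = field(default_factory=set)

    def can_offload(self, task: str) -> bool:
        return task in self.offload_engines

# Two hypothetical vendors with different, non-interchangeable engines:
dpu_a = DPU("VendorA", {"crypto", "packet_processing", "compression"})
dpu_b = DPU("VendorB", {"crypto", "nvme_over_fabrics", "telemetry"})

print(dpu_a.can_offload("compression"))  # True
print(dpu_b.can_offload("compression"))  # False: different feature set
```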

DPUs are primarily used in data centers (and in data center servers specifically), says Brandon Hoff, an analyst who leads networking and communications infrastructure within IDC’s Enabling Technologies team. DPUs are widely deployed by hyperscale cloud providers such as Amazon Web Services, Microsoft Azure and Google Cloud.

Hoff notes that DPUs are critical for enterprises that want to build and use “AI factories,” standardized environments used to develop, deploy and manage AI applications at scale.

“AI factories need a scheduled fabric for the GPUs to talk to each other, and DPUs can do that,” he says. “That’s how a lot of the hyperscalers actually started doing DPUs, to build out scheduled fabrics.”

RELATED: What’s the difference between CPUs and GPUs?

What Are Some DPU Capabilities?

Security, networking, storage and AI analytics are the most common functions being addressed by DPUs, according to McGregor.

A key function of DPUs is to network GPUs together and allow them to transfer data more efficiently. There are only so many transistors that can fit on a piece of silicon in a GPU, Hoff notes. 

“And so you connect these together in a scale-up network and a scale-out network, and DPUs are using the scale-out network,” he says. GPUs will process information, then stop and share that information, Hoff notes, and the time spent on that networking and data transfer can represent up to 50 percent of total compute time, meaning the GPUs run only half as efficiently as they could.

DPUs offload that capability from GPUs, allowing them to be more efficient. “So that’s why networking is important, and having a scheduled fabric where you have guaranteed delivery and guaranteed throughput is important,” Hoff says. “And DPUs help do that with standard, off-the-shelf switching.”
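A back-of-the-envelope sketch shows why that 50 percent figure matters. The numbers and the overlap parameter below are illustrative assumptions, not measured results: if GPUs spend a fraction of each step waiting on data transfer, their effective utilization drops proportionally, and overlapping that transfer with compute (the kind of work a DPU-managed fabric offloads) recovers much of the lost time.

```python
# Illustrative model: effective GPU utilization when part of each step
# is spent on networking/data transfer rather than compute.

def effective_utilization(comm_fraction: float, overlap: float = 0.0) -> float:
    """Fraction of wall-clock time spent computing.

    comm_fraction: share of a step spent moving data (0.0-1.0)
    overlap: share of that transfer hidden behind compute (0.0-1.0)
    """
    exposed_comm = comm_fraction * (1.0 - overlap)
    return (1.0 - comm_fraction) / ((1.0 - comm_fraction) + exposed_comm)

# 50% of step time spent communicating, nothing overlapped: GPUs compute
# only half the time, matching the figure Hoff cites.
print(effective_utilization(0.5))               # 0.5
# If offloading lets 80% of that transfer overlap with compute:
print(effective_utilization(0.5, overlap=0.8))  # ~0.83
```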

Pairing a DPU with standard switching via Ethernet allows enterprises to create a “high-performance fabric that’s nearly lossless,” Hoff says, meaning a network that can transfer data with almost no packet loss.
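A simple retransmission model makes the “nearly lossless” requirement concrete. The sketch below assumes a go-back-N-style transport, in which one dropped packet forces the sender to resend its entire in-flight window; the window size and loss rates are hypothetical, not measurements of any fabric.

```python
# Rough approximation of link efficiency under go-back-N retransmission:
# each loss wastes roughly a window's worth of packet transmissions.

def gbn_efficiency(loss_rate: float, window: int) -> float:
    """Approximate throughput efficiency given independent packet loss."""
    p = loss_rate
    return (1.0 - p) / (1.0 + p * (window - 1))

for p in (0.0, 0.0001, 0.001, 0.01):
    print(f"loss rate {p:.4f}: {gbn_efficiency(p, window=64):.0%} efficiency")
# loss rate 0.0000: 100% efficiency
# loss rate 0.0001: 99% efficiency
# loss rate 0.0010: 94% efficiency
# loss rate 0.0100: 61% efficiency
```

Even a 1 percent loss rate cuts this model’s throughput by more than a third, which is why a fabric with guaranteed delivery matters for keeping GPUs fed.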

LEARN MORE: How can you use serverless computing to build and modernize applications?

DPUs vs. GPUs vs. CPUs: What’s the Difference?

DPUs are “completely different from CPUs and GPUs because they are also known as system-on-a-chip, application-specific integrated circuits, designed for a specific set of functions,” McGregor says. 

CPUs are designed around a specific instruction set and are completely programmable for a wide variety of functions, McGregor says. They are the backbone of modern computing and handle a wide range of computing tasks for operating systems and applications.

“GPUs were designed for graphics and multimedia processing but increasingly have AI-specific cores because their massive parallelization makes them good for AI processing,” McGregor says. 

GPUs efficiently perform the kind of calculations needed for inference in generative AI large language models, Hoff notes, saving organizations time and energy costs.

“The DPU is on the infrastructure side, where I can then accelerate the network” for scheduled fabrics, security, encryption and storage connectivity, Hoff says.

DIVE DEEPER: How automation can help IT leaders alleviate DevOps challenges.

What Are the Benefits of DPUs?

DPUs offload networking and storage functions from CPUs, allowing them to run more applications. “They can run the applications more seamlessly and without having to worry that something’s going to get tripped up in sending the data,” Hoff says.
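As a hypothetical illustration of that offload (the core counts are assumptions for the sketch, not benchmark data), here is the arithmetic on how many host cores a server gets back when infrastructure work moves to a DPU:

```python
# Toy accounting of host CPU cores: without a DPU, some cores are consumed
# by packet processing, encryption and storage I/O; with a DPU, those
# tasks run on the DPU and the host keeps all of its cores for applications.

def cores_for_apps(total_cores: int, infra_cores: int, has_dpu: bool) -> int:
    """Host cores left over for application workloads."""
    return total_cores if has_dpu else total_cores - infra_cores

# A hypothetical 64-core server spending 12 cores on infrastructure tasks:
print(cores_for_apps(64, 12, has_dpu=False))  # 52 cores for applications
print(cores_for_apps(64, 12, has_dpu=True))   # 64 cores for applications
```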

The primary benefit of DPUs is on the infrastructure side, specifically in data center servers. DPUs can also simplify data center architectures, Hoff says.

“For hybrid cloud, it makes it so that my application is on a server at bare-metal or near bare-metal speed,” he says. “So I don’t have to worry about that hypervisor anymore; I’ve offloaded a lot of hypervisor services.” 

“Now, I can start really improving my hybrid cloud deployments, either on Amazon or inside my data center or wherever I want to put those workloads,” Hoff adds.
