Jun 23 2022
Hardware

CPUs vs. GPUs: What Makes the Most Sense for Your Processing Needs?

The graphical processing unit is used for a lot more than just graphics these days — and it may have as much of a home in your cloud infrastructure as it does on your desktop computer.

When it comes to the computing needs of the average business, the use cases are vast, covering the needs of both the customers who interact with your business and the employees who help keep the trains running.

One question that might come to mind is where the graphical processing unit (GPU) comes into play in the computing experience, and why it matters just as much as the central processing unit (CPU), if not more so.

Simply put, the GPU’s name may be a misnomer.

What Is a Central Processing Unit, and What Is It Used For?

A CPU is the most common kind of microprocessor used in computers. It is fundamental to the modern computing experience. 

First developed for commercial use in the early 1970s by Intel, these processing units have evolved significantly since then, for many years being pulled forward by Moore’s Law, an observation by Intel co-founder Gordon Moore that transistor counts double in integrated circuits every two years.
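As a back-of-the-envelope illustration of that doubling, a few lines of Python can project how quickly a transistor budget compounds. The 1971 starting point of roughly 2,300 transistors (the Intel 4004) is a commonly cited figure, and the projection is illustrative rather than a history of any real product line.

```python
# Rough illustration of Moore's Law: transistor counts doubling every two years.
# The ~2,300-transistor 1971 baseline (Intel 4004) is a commonly cited figure;
# the output is an illustration, not a record of any actual chip family.

def projected_transistors(base_count: int, base_year: int, target_year: int) -> int:
    """Project a transistor count assuming one doubling every two years."""
    doublings = (target_year - base_year) / 2
    return int(base_count * 2 ** doublings)

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        print(year, f"{projected_transistors(2_300, 1971, year):,}")
```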

While the CPU is not the only type of chip fundamental to modern computing, it has increasingly come to take on a larger number of tasks in recent years, with the chip’s capabilities rising significantly because of competition in the processor space.

CPUs continue to function as the primary nervous system of any computing platform, with their strength being the ability to quickly chew through a single processing task.


What Is a Graphics Processing Unit, and What Is It Used For?

A graphics processing unit (GPU), on the other hand, is a specialized type of microprocessor that was originally developed with three-dimensional (3D) graphics in mind. However, it has evolved over the past two decades to also become a fundamental element of modern computing.

Building from roots in high-end commercial graphics workstations, such as the Pixar Image Computer and the Silicon Graphics Onyx, 3D graphics chips moved into more mainstream settings starting in the late 1990s, thanks in part to video game consoles and home computers. One notable early player in the industry was 3Dfx, which sold some of the first dedicated 3D graphics cards. Those cards helped to create a large enthusiast market for highly capable GPUs that continues among PC users to this day.

While 3D graphics chipsets were available before the first GPU was sold, NVIDIA is widely credited with producing the first device marketed as a GPU with its GeForce 256, beginning in 1999. By 2002, NVIDIA had acquired 3Dfx and gradually evolved into a dominant player in the 3D graphics space.

Paresh Kharya, NVIDIA’s senior director of product management and marketing for accelerated computing, noted that GPUs have traditionally taken a different route to increased performance than Moore’s Law, which relies on continual improvements in processor technology.

With Moore’s Law, Kharya said, “We've reached the limits of physics in some cases. So what GPU-accelerated computing does is, it constantly evolves the architecture to suit the needs of the applications in order to deliver those benefits of the architecture to the end applications.”

GPU add-in boards have traditionally relied on new types of memory, the increased throughput of ever-faster PCIe standards, higher power consumption and active cooling to help improve the GPU’s overall functionality. Kharya noted that evolution was happening under the hood of the chip as well, with the addition of cores that can handle tensor calculations, for example, helping to expand GPU capabilities over time.
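To get a feel for why each new PCIe generation matters to an add-in board, a quick calculation of theoretical x16 link bandwidth helps. The per-lane transfer rates and 128b/130b line encoding used below are the commonly published figures for those generations; the results are approximations that ignore higher-level protocol overhead.

```python
# Back-of-the-envelope PCIe x16 bandwidth in one direction, accounting only for
# line encoding. Per-lane rates and 128b/130b encoding are the commonly
# published figures for PCIe 3.0 through 5.0; treat the results as approximate.

ENCODING = 128 / 130  # 128b/130b line encoding used by PCIe 3.0-5.0

def pcie_x16_gb_per_s(transfer_rate_gt_per_s: float, lanes: int = 16) -> float:
    """Approximate usable bandwidth of an x16 link in gigabytes per second."""
    bits_per_s = transfer_rate_gt_per_s * 1e9 * ENCODING * lanes
    return bits_per_s / 8 / 1e9

for gen, rate in {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0, "PCIe 5.0": 32.0}.items():
    print(f"{gen} x16: ~{pcie_x16_gb_per_s(rate):.1f} GB/s each way")
```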


What Are Some Notable GPU Use Cases?

While GPUs are most famously used for 3D rendering, video production and video games, the technology has expanded into non-graphical areas that nonetheless benefit from similar types of processing, such as machine learning and cryptography. These use cases have extended the value of the GPU to several industries.

The GPU’s non-graphical use cases, oddly enough, have roots in the video game console. Jon Peddie, a well-known researcher in the GPU industry, explained that researchers at high-profile universities, such as Princeton and Stanford, took advantage of the graphical capabilities of the PlayStation 2 for non-graphical use cases. The PS2, which supported Linux, could run operations across clusters of consoles, making it a very capable (if unusual) platform for parallel processing.

“Constantly on the hunt for inexpensive processors, they had tried tying a bunch of PS2s together, which worked reasonably well and provided a bunch of cheap FLOPS [floating point operations per second],” Peddie said.

Soon, companies that specialized in dedicated GPUs, such as NVIDIA and ATI (later acquired by AMD), began looking for ways to offer up this kind of parallel computing capability through the hardware they were already producing, with programming libraries such as OpenGL (Open Graphics Library), OpenCL (Open Computing Language) and Vulkan helping to give developers increased access to the GPU’s capabilities.

A key element of non-graphical GPU use cases is the CUDA compute platform, NVIDIA’s proprietary application programming interface for its GPUs, which has helped to make the parallel processing capabilities of GPUs available to a wider array of industries.
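CUDA itself is usually driven from C or C++, but higher-level wrappers expose the same idea with far less ceremony. As a minimal sketch, the NumPy-like CuPy library (assuming an NVIDIA GPU and the cupy package are available; CuPy is not named in this article) shows the pattern of handing an entire array’s worth of work to the device at once:

```python
# Minimal sketch of GPU-accelerated array math via CuPy, a NumPy-like library
# that runs on NVIDIA's CUDA platform. Assumes an NVIDIA GPU and the cupy
# package are installed; the workload is illustrative only.
import cupy as cp

# Allocate 10 million values directly in GPU memory.
x = cp.random.random(10_000_000).astype(cp.float32)

# Each elementwise operation below runs across thousands of GPU threads in
# parallel rather than one element at a time.
y = cp.sqrt(x) * 2.0 + 1.0

# Reductions such as the mean are also computed on the device.
print(float(y.mean()))
```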

This has allowed for notable and novel use cases for NVIDIA GPUs, such as a recent effort at Stanford University to sequence the human genome in just five hours, opening up new possibilities for patients. Key to Stanford’s innovation was another important element of the modern GPU stack: cloud compute capabilities, offered in this case through Google Cloud.


What Are the Differences Between CPUs and GPUs?

Fundamentally, CPUs and GPUs excel at different things in the context of a system. CPUs are best used to throw a lot of processing power at a single task, while GPUs tend to excel at parallel workloads, where calculations on large data sets need to happen all at once.

Peddie pointed to the evolution in threading as a key differentiator between the CPU and GPU processor types, one that has emerged slowly over the past two decades, as programs began to move away from what Peddie described as “a serial processing fashion — first this, then that, and repeat.”

But while companies like Intel attempted to solve the problem of serial processing (increasing the number of processor cores on chips such as the Core i7), the already-in-use GPU proved to be particularly effective at working through complex computational tasks in certain use cases.

“Companies began rewriting some of their programs to take advantage of multithreading, and most new programs were written in that fashion,” Peddie said. “GPUs are parallel processors and run multithreaded programs very efficiently.”
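The shift Peddie describes is easiest to see side by side. The sketch below contrasts an element-at-a-time loop with a vectorized NumPy expression on the CPU; GPU frameworks expose the same write-it-once-for-every-element style, but spread the work across thousands of cores. The workload itself is illustrative.

```python
# Serial vs. data-parallel style, illustrated on the CPU with NumPy. GPU
# frameworks expose the same "describe the work for the whole array at once"
# pattern, but distribute it across thousands of execution units.
import numpy as np

values = np.random.random(1_000_000)

# Serial style: first this, then that, and repeat.
serial_result = []
for v in values:
    serial_result.append(v * v + 1.0)

# Data-parallel style: one expression covers every element, leaving the
# runtime free to spread the work across many execution units.
parallel_result = values * values + 1.0

assert np.allclose(serial_result, parallel_result)
```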

To take advantage of these differences in processing, GPUs often utilize a different kind of memory from CPUs, called Graphics Double Data Rate (GDDR) RAM, which is designed for higher-bandwidth use cases than traditional RAM types, which emphasize lower latency.
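That bandwidth emphasis shows up directly in the arithmetic: peak memory bandwidth is roughly the per-pin data rate multiplied by the bus width. The parts and speeds below are typical published figures, used only to illustrate the gap between a GDDR6-equipped graphics card and standard dual-channel system memory.

```python
# Peak theoretical memory bandwidth: per-pin data rate x bus width / 8.
# The parts and speeds below are typical published figures, used only to
# illustrate why GPUs pair with high-bandwidth GDDR memory.

def peak_bandwidth_gb_s(data_rate_gbit_per_pin: float, bus_width_bits: int) -> float:
    return data_rate_gbit_per_pin * bus_width_bits / 8

# A 16 Gb/s GDDR6 configuration on a 256-bit bus, common on midrange GPUs.
print("GDDR6, 256-bit:", peak_bandwidth_gb_s(16, 256), "GB/s")    # 512.0

# Dual-channel DDR4-3200 (3.2 Gb/s per pin, two 64-bit channels).
print("DDR4-3200 x2:  ", peak_bandwidth_gb_s(3.2, 128), "GB/s")   # 51.2
```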

What Are Integrated Graphics?

Integrated graphics are an effective option for end users who may not need additional graphics capabilities on their specific machines.

The idea of CPU-bound integrated graphics dates back to at least the 1990s, with the Intel i740, an add-on card that shared memory and resources with the CPU. But the current phenomenon began in earnest in the mid-2000s, when AMD began experimenting with its accelerated processing unit (APU), a chip that integrated CPU and GPU functions onto a single processor. Intel also started working on CPUs with integrated graphics functionality around this time, leading to the release of Intel HD Graphics in 2010. (While not using AMD’s APU nomenclature, Intel’s chips serve a similar purpose.)

Many ARM-based processors, including the Apple Silicon line of chips and those included in many smartphones, combine dedicated CPU and GPU capabilities onto a single chip, allowing the capabilities of both processor types. (Apple Silicon processors also integrate memory, allowing all major processing functions to happen within a single chip.)

From a performance standpoint, integrated graphics generally can’t match the capabilities of a dedicated GPU, but allow users to access many advanced graphical capabilities while limiting power consumption along the way.

Laptop users have options for utilizing a dedicated GPU, including opting for a model with a discrete GPU chip or using an external GPU, an enclosure that makes it possible to connect an add-in card through a technology such as Thunderbolt.


Is a CPU Alone Powerful Enough?

One challenge facing those interested in dedicated GPUs is that the latest models have been hard to obtain in recent years, in part because market demand has outstripped supply and driven up prices.

As Peddie explained, cost is generally not a concern at the high end, where the price of a board is minuscule compared with the problems it is being used to solve.

“They don’t waste time bickering about an add-in board’s price if it is $5,000 or $7,000, or even $10,000,” he said. “That’s noise compared with the critical nature of the problem they are trying to solve.”

For consumer-level (or even some corporate-level) use cases, however, that has raised the question of whether integrated graphics can do the job.

While dedicated GPUs still very much have their place, GPU performance on integrated chipsets has improved greatly in recent years. Processors such as Apple’s M1 and M2 chipsets, AMD Ryzen APUs with Radeon graphics and Intel processors with built-in Xe graphics have helped to fill the gap in many end-user cases, such as watching high-resolution video and editing images.

But improved processor performance isn’t the only reason why many might find integrated GPUs a good fit. In part, end users are benefiting from increasing GPU-bound computing capabilities in the cloud. AI technologies such as natural language processing rely on GPU-bound processing cycles, and those cycles may not be happening on the machine itself.

And those use cases sneak into areas that may not seem obvious to the end users, but have a deep effect on their experience.

“American Express uses a new AI to detect fraud in credit card transactions in real time. Microsoft Teams uses NVIDIA AI to transcribe and caption meetings,” Kharya said. “The AI is everywhere.”

As a result, “Is a CPU alone powerful enough?” may not be the right question for many businesses. A better question is, “How much GPU do you need, and where do you need it?”

If that question has you flummoxed, it could be one to discuss with an IT partner, like CDW Amplified™ services, which can help you make sense of your computing stack.

