Aug 01 2023
Hardware

What Is FLOPS and How Does It Help with Supercomputer Performance?

As the supercomputer market continues to grow along with workloads, it’s important to understand the performance capability that makes supercomputing possible in the first place.

Originally used in applications related to national security and now frequently used for data-intensive and computation-heavy scientific and engineering purposes across industries, supercomputers aren’t exactly new.

They also aren’t simply a flashy technology designed to wow. Their purpose is actually quite practical: With their high storage capacity and real-time processing power, supercomputers are cost-efficient, blazing fast and capable of spurring innovation.

In 2020, the White House greenlit the COVID-19 High Performance Computing Consortium “to accelerate understanding and the pace of scientific discovery in the fight to stop COVID-19,” according to a 2022 article. But beyond scientific research, supercomputers are being used in large corporations for data mining, climate predictions and even animation for films. They are used in the automobile industry, in military and defense, and in intelligence agencies. With all of these applications, the worldwide supercomputer market has grown by billions of dollars in recent years and is expected to reach nearly $22 billion by 2030.

As the supercomputer market continues to grow, and as the technology becomes more widely used, it’s important to understand the performance capability that makes supercomputing possible in the first place: FLOPS.


What Does FLOPS Mean and How Is It Used?

FLOPS stands for floating-point operations per second. Floating point is a method of encoding real numbers within the limits of precision available on computers. Computation with floating-point numbers is often required in scientific and real-time processing applications, including financial analysis and 3D graphics rendering. FLOPS is a common measure of performance for any computer that runs these applications.
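To see what floating point means in practice, here is a minimal Python sketch (the language and values are chosen purely for illustration) showing how a real number is stored as a 64-bit IEEE 754 approximation and how small rounding errors creep into ordinary arithmetic:

```python
import struct

# IEEE 754 double precision uses a sign bit, an 11-bit exponent and a
# 52-bit fraction, so most real numbers are stored as close approximations.
value = 0.1
bits = struct.unpack(">Q", struct.pack(">d", value))[0]
print(f"{value} is stored as the bit pattern 0x{bits:016x}")
print(f"{value:.20f}")             # 0.10000000000000000555...

# The approximation shows up in ordinary arithmetic:
print(0.1 + 0.2 == 0.3)            # False
print(abs((0.1 + 0.2) - 0.3))      # roughly 5.6e-17
```

Floating point trades exactness for an enormous range of representable values, which is why scientific and engineering workloads rely on it so heavily.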

FLOPS is also not the plural form of FLOP (floating-point operation), as it is commonly misinterpreted to be. FLOPS is a rate: the number of floating-point operations performed per second. HP calls FLOPS “the unit of measurement that calculates the performance capability of a supercomputer.” The key word there is capability. If supercomputers were sprinters, FLOPS would be the measurement used to express their top speed, not their average speed.

How Do You Measure Supercomputer Performance?

Whether a supercomputer is running Linux — as virtually all of the top 500 supercomputers now do — or another operating system, its performance capabilities can be measured in FLOPS. Regardless of whether a supercomputer is being used for operations in the corporate, science or intelligence space, FLOPS is the best way to gauge the device’s processing speed: the number of floating-point operations it conducts each second.
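In practice, the TOP500 ranking times each machine on the LINPACK benchmark, a large dense linear-algebra workload, and reports the sustained FLOPS it achieves. The toy Python sketch below applies the same idea at laptop scale; it assumes NumPy is available, and the function name and matrix size are illustrative choices rather than part of any standard benchmark:

```python
import time
import numpy as np  # assumed to be installed

def estimate_gflops(n: int = 2048) -> float:
    """Time one dense matrix multiply and return a rough GFLOPS estimate.

    An n x n matrix multiply performs roughly 2 * n**3 floating-point
    operations (one multiply and one add per inner-loop step).
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    start = time.perf_counter()
    _ = a @ b
    elapsed = time.perf_counter() - start
    return (2 * n**3 / elapsed) / 1e9  # operations per second, in billions

print(f"About {estimate_gflops():.1f} GFLOPS on this machine")
```

A modern laptop typically lands somewhere in the tens to low hundreds of GFLOPS on a test like this, which is a useful yardstick for the tiers described below.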

To truly understand these measurements, it’s helpful to know the different levels of FLOPS, also known as computing performance.

What Are the Different Tiers of FLOPS?

In computer performance, orders of magnitude are expressed using standard metric prefixes. Here are the tiers of FLOPS in use today, from base level to most advanced:

  • Gigaflops: The base level of FLOPS, 1 GFLOPS is equal to 1 billion (10⁹) floating-point operations per second. For perspective, a 1 GFLOPS computer system can do in one second what it would take a human almost 32 years to do, performing one calculation every second (the arithmetic behind these comparisons is sketched after this list). And a 1 GFLOPS system isn’t even anything to brag about: In 1993, Fujitsu’s Numerical Wind Tunnel became the first supercomputer to surpass 100 GFLOPS; nowadays, the fastest consumer processors surpass this with ease.
  • Teraflops: The next level of FLOPS, 1 TFLOPS is equal to 1 trillion (10¹²) floating-point operations per second, which would take a human performing one calculation each second nearly 32,000 years to complete. In 1997, using thousands of Intel microprocessors, ASCI Red became the first TFLOPS machine in history. Still, it was only a matter of time before this processing speed was surpassed.
  • Petaflops: The next tier of FLOPS, 1 PFLOPS is equal to 1 quadrillion (10¹⁵) floating-point operations per second. Put another way, a 1 PFLOPS computer system can do in one second what it would take a human performing a calculation per second almost 32,000,000 years to accomplish. Developed by IBM and NVIDIA, Summit is currently one of the most powerful supercomputers in the world, with a processing power of 200 PFLOPS, and is used for research in fields including physics, energy, climate and healthcare.
  • Exaflops: The next and final currently achievable tier of FLOPS, 1 EFLOPS is equal to 1 quintillion (10¹⁸) floating-point operations per second. Operating at a speed of one calculation per second, it would take a human roughly 32 billion years to do what a 1 EFLOPS computer system can do in a second. Optimized for high-performance computing (HPC) and artificial intelligence (AI) with AMD Instinct MI250X accelerators and the Slingshot interconnect, Frontier is the only supercomputer in the world that exceeds 1 EFLOPS, maxing out at 1.194 EFLOPS.
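The “human years” comparisons above are simple arithmetic: divide each tier’s operations per second by the number of seconds in a year. A short Python sketch reproduces the figures cited in this list:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

tiers = {
    "GFLOPS": 1e9,   # gigaflops
    "TFLOPS": 1e12,  # teraflops
    "PFLOPS": 1e15,  # petaflops
    "EFLOPS": 1e18,  # exaflops
}

for name, ops_per_second in tiers.items():
    # A human doing one calculation per second would need this many years
    # to match what a machine of this tier does in a single second.
    years = ops_per_second / SECONDS_PER_YEAR
    print(f"1 {name}: about {years:,.0f} human-years of work per machine-second")
```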

The next and most sophisticated level of FLOPS would be zettaflops. But a ZFLOPS supercomputer is still in the development phase, and it could be years until one is formally introduced. To give an idea of the scale involved, a 1 ZFLOPS supercomputer would consume an estimated 21 gigawatts, roughly the combined power output of 21 nuclear power plants.

DISCOVER: How CDW can help successfully deploy high-performance computing.

What Are the Storage Capacities Needed for Different Tiers of FLOPS?

With countless calculations being performed at super speeds, supercomputers need smart storage to match. As with computer performance, there are various levels of storage capacity, expressed via standard metric prefixes. Here’s what you need to know:

  • Gigabyte: A single GB is equal to 1 billion bytes, not to be confused with bits: While download speeds for electronic data are commonly expressed in bits, the smallest unit of digital information in computing, file sizes are commonly expressed in bytes, and the 8-bit byte is the international standard (a quick conversion sketch follows this list). Whether they use hard disk drives or solid-state drives, it’s quite common for personal laptops and even cellphones and tablets to have storage capacities of hundreds of GBs. On average, supercomputers require much more storage.
  • Terabyte: A single TB is equal to 1 trillion bytes, or 1,000 GB. It’s not uncommon for personal devices to have a storage capacity in the 1TB to 2TB range. For comparison, ASCI Red, which performed just over 1 TFLOPS, required 12.5TB of disk storage.
  • Petabyte: A single PB is equal to 1 quadrillion bytes, or 1,000TB. This is the magnitude of storage capacity needed by today’s top supercomputers. Performing at 200 PFLOPS, Summit has a storage capacity of 250PB. And performing at roughly 1.2 EFLOPS, Frontier has a storage capacity of 700PB.
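The prefixes above are decimal (powers of 10), and the bits-versus-bytes distinction matters whenever storage figures meet network speeds. A small Python sketch, using the capacities cited above, makes the conversions concrete:

```python
# Decimal (SI) storage prefixes, as used in the list above.
GB = 10**9   # gigabyte
TB = 10**12  # terabyte
PB = 10**15  # petabyte

# File sizes are quoted in bytes, while network speeds are usually quoted in
# bits; one byte is eight bits, so a 1-gigabit-per-second link moves at most
# 125 million bytes each second.
print(f"1 Gbps moves at most {1e9 / 8 / 1e6:.0f} MB per second")

# Capacities cited in this article, expressed in plain bytes.
print(f"Summit   (250 PB): {250 * PB:,} bytes")
print(f"Frontier (700 PB): {700 * PB:,} bytes")
```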

These are the storage capacities that correspond to the levels of computing power in use today. But as the technology continues to evolve, supercomputers soon will have exabytes and even zettabytes of storage capacity.

FIND OUT: Why improving the speed and performance of data storage is critical. 

What Does the Future of FLOPS Hold?

As with other technologies, FLOPS capabilities are only set to increase moving forward. Some predict that supercomputers with tens and even hundreds of EFLOPS will emerge through the rest of this decade and into the 2030s, and that supercomputers could meet the ZFLOPS milestone as soon as 2036.

After that, it would be a race to build a yottaflops supercomputer — a 1 YFLOPS supercomputer would be 1,000 times faster than a 1 ZFLOPS supercomputer — estimated to arrive by the early 2050s.

To reach that point, supercomputers will increasingly rely on AI. Intel, for example, already builds AI capabilities into its HPC offerings. Companies such as NVIDIA are using their AI-powered supercomputers to advance nuclear fusion research. And IBM’s Summit and Sierra supercomputers were developed with big data and AI workloads in mind. As more companies discover new ways to incorporate AI into HPC, remember that FLOPS will remain the yardstick by which all of it is measured.
