High-performance computing (HPC) systems, also known as supercomputers, give scientists the power to solve extremely complex or data-intensive problems by concentrating the processing power of many processors working in parallel. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than millions of instructions per second (MIPS), the measurement used in conventional computing. The technology has a plethora of applications—including quantum mechanics, climate research, oil and gas exploration, chemistry, aerospace and automotive technologies, and much more.
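To make the FLOPS measure concrete, here is a minimal sketch (not a real benchmark such as LINPACK) that times a naive dot product in pure Python and estimates a sustained FLOPS rate; the vector size and timing approach are illustrative choices:

```python
import time

# A dot product of two length-n vectors performs n multiplies and n adds,
# i.e. 2n floating-point operations.
n = 1_000_000
a = [1.0] * n
b = [2.0] * n

start = time.perf_counter()
total = sum(x * y for x, y in zip(a, b))
elapsed = time.perf_counter() - start

flops = 2 * n / elapsed  # floating-point operations per second
print(f"~{flops:.2e} FLOPS")
```

Pure Python runs in the megaflops-to-gigaflops range; a petaflop-class supercomputer performs on the order of a billion times more operations per second.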

During July, R&D Magazine explored advancements and applications of supercomputers as part of our R&D Special Focus series. We also touched on a few advancements in quantum computing, an emerging type of computing based on the principles of quantum mechanics.

We kicked off our coverage highlighting exciting news in the supercomputer industry in our article “First ARM-Based Supercomputer to be Rolled Out This Summer.” The U.S. Department of Energy’s (DOE) Sandia National Laboratories is expected to deploy the first-ever Advanced RISC (Reduced Instruction Set Computing) Machine (ARM) supercomputer prototype. The ARM architecture requires fewer transistors than complex instruction set computing designs, ultimately lowering costs while improving power consumption and heat dissipation. The new machine, known as Astra, represents the first of a potential series of advanced-architecture prototype platforms to be deployed as part of the Vanguard program.

The applications for supercomputers are rapidly growing. In our article, “Researchers Use Supercomputers to Assess Earthquake Risk,” we highlighted a joint research team from the DOE’s Lawrence Berkeley National Laboratory and Lawrence Livermore National Laboratory that is working as part of the DOE’s Exascale Computing Project to demonstrate how different seismic wave frequencies of ground motion affect structures of different sizes. HPC gives researchers a much more accurate and detailed way to assess how likely it is that a major earthquake will occur at a given location.

In another article, “A Little Mixing Goes a Long Way: Breakthroughs in Simulating Tides’ Impacts on Global Ocean Circulation,” we featured researchers using SciNet’s Niagara, Canada’s most powerful research supercomputer, to show how small-scale processes affect the global circulation of our oceans. This circulation in turn affects global temperatures and weather, as well as the oceanic health and biology that billions of people around the planet depend on for food.

Supercomputers can also be used to gather data more efficiently. Currently the U.S. Department of Agriculture (USDA) releases data for national corn and soybean distribution about four to six months after the harvest, causing a lag for policy and economic decisions. In our article, “Supercomputers Expedite Crop Data From Satellites,” we highlighted researchers from the University of Illinois at Urbana-Champaign who have developed a new method to distinguish between the two major crops from space with 95 percent accuracy using NASA satellite data, a new algorithm, and the processing power of supercomputers. This allows scientists to collect crop data for each field by the end of July of the same growing season, rather than waiting for the USDA’s report the following spring.

In addition to environmental applications, supercomputers are also key to many up-and-coming technologies, including autonomous vehicles. In our article, “Using Deep Learning, AI Supercomputing, NVIDIA Works to Make Fully Self-Driving Cars a Reality,” we highlighted Xavier, a complete system-on-chip (SoC) that integrates a new graphics processing unit (GPU) architecture called Volta, a custom eight-core CPU architecture, and a new computer vision accelerator. It features 9 billion transistors and a processor that delivers 30 trillion operations per second (TOPS) of performance while consuming only 30 watts of power. This technology is the most complex SoC ever created and is a key part of the NVIDIA DRIVE Pegasus AI computing platform, the world’s first AI car supercomputer designed for fully autonomous Level 5 robotaxis.

Physicists are also benefiting from advances in HPC. At CERN, the European Organization for Nuclear Research, physicists and engineers are probing the fundamental laws of the universe using the Large Hadron Collider (LHC), the world’s largest and most powerful particle accelerator. The LHC collides protons, and researchers measure the aftermath to test the predictions of the Standard Model, which has been the leading theory of particles and their interactions for the past 60 years. Particle collision experiments produce large amounts of data: research at the LHC will generate around 50 petabytes (50 million gigabytes) of data this year that must be processed and analyzed to aid in the facility’s search for new physics discoveries. To do this, researchers have turned to Theta, a massively parallel, 11.69-petaflop system based on Intel Xeon Phi processors. Theta features high-speed interconnects, a new memory architecture, and a 10-petabyte Lustre-based parallel file system, all integrated by Cray’s HPC software stack. We’ve outlined this research in our article “Speeding CERN LHC Research with HPC Systems and ALCF Workflow Optimizations.”

Considerations for supercomputing

Much of the focus in the supercomputing world is on improving performance, but energy efficiency is also a critical consideration in HPC. As part of our special focus on supercomputers, we spoke with Natalie Bates, one of the leaders of the Energy Efficient High Performance Computing Working Group (EE HPC WG). Our interview with her is featured in the article, “Green Supercomputing: Why Energy Efficiency is Just as Key as Performance in HPC.”

In the article, Bates explained that energy costs, both operational and capital, are an increasing portion of HPC total cost of ownership. This has driven a focus on power usage effectiveness (PUE), the ratio of a facility’s total energy consumption to the energy consumed by its IT equipment alone. Any HPC center with a PUE over 1.1 or 1.2 has potential to improve energy efficiency, and in turn, save money, said Bates.
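The PUE ratio is simple to compute; the sketch below uses made-up figures purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by the power
    delivered to the IT equipment itself. A PUE of 1.0 would mean every
    watt goes to computing; values above ~1.1-1.2 signal room to improve."""
    return total_facility_kw / it_equipment_kw

# Hypothetical data center: 1,300 kW total draw, 1,000 kW reaching the racks.
ratio = pue(total_facility_kw=1300.0, it_equipment_kw=1000.0)
print(f"PUE = {ratio:.2f}")  # PUE = 1.30: overhead (cooling, power delivery) is 30% of IT load
```

In this hypothetical case, every watt of computing costs an extra 0.3 watts of overhead, which is exactly the kind of gap Bates says centers can close.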

Quantum computing

There have also been several recent advancements in the world of quantum computing. In our article, “Microscopic Trampoline a Step Toward Linked Quantum Computers,” we highlighted the work of a joint research team from the University of Colorado Boulder and the National Institute of Standards and Technology (NIST). The team developed a method to convert microwave signals, like those produced by quantum chips, into light beams that travel through fiber optic cables, using a device built around a small plate that absorbs microwave energy and bounces it into laser light. The device, dubbed a microscopic trampoline, is closing in on a 50 percent success rate for efficiently converting microwaves into light, a crucial threshold needed for quantum computers to become everyday tools.

Because quantum computing is a field that is still in its infancy, accessibility is essential. In our article, “Open-Source Software Framework Makes Quantum Computing More Accessible,” we highlighted ProjectQ, a free, open-source software framework for quantum computing that allows users to implement their quantum programs in the high-level programming language Python using a powerful and intuitive syntax. ProjectQ can then translate these programs to any type of back-end, either a simulator run on a classical computer or an actual quantum chip.

Next month’s special focus

In August, our special focus turns to bioelectronics, a field at the intersection of electronics and biological processes. This is one of the most rapidly growing areas of research in engineering and healthcare. Its success requires a significantly multidisciplinary approach, bringing together engineers and technical experts in fields such as electronics, manufacturing, and software with researchers studying the human body and disease. Be sure to check back for more!