Data centers are increasingly exploring ways to build supercomputers that are more energy efficient, not just faster. Nvidia has been addressing this challenge on several fronts, ranging from more efficient processors and improved CPU-GPU coordination to new networking technologies and more efficient libraries.
Dion Harris, Nvidia's lead product manager of accelerated computing, said that performance is key in scientific computing, but doing that work as efficiently as possible is becoming more pressing. So Nvidia has been exploring ways to extract the most performance from the smallest data center footprint and the smallest carbon footprint.
Here is an overview of the new developments:
- An Nvidia H100 GPU supercomputer demonstrates almost twice the energy efficiency of A100 implementations.
- A combination of Grace and Grace Hopper Superchips demonstrates a 1.8-times improvement for a 1-megawatt data center for accelerated computing.
- The BlueField DPU demonstrates a 30% energy improvement per server.
- The Nvidia Collective Communications Library demonstrates a 3-times improvement for simulations.
- Updates to the cuFFT library demonstrate a 5-times improvement in large-scale FFT execution.
More efficient supercomputers
Nvidia has been working with Lenovo on the first submission of a supercomputer built on the Nvidia H100 chip to the Green500 list of most efficient supercomputers. That is a milestone in and of itself. But early findings suggest that this may become one of the top contenders for the most efficient supercomputer.
In addition, this particular configuration is an air-cooled system, so it did not require the special piping or rack configurations that high-performance, energy-efficient systems sometimes need.
Harris said, “This will allow this type of configuration to be deployed anywhere in any classic data center.”
Improving data center efficiency
Nvidia has previously reported on how combining Grace and Grace Hopper Superchips can improve core CPU computing. New research suggests that the combination can also drive more efficient accelerated computing architectures.
Nvidia found that a standard 1-megawatt data center can achieve a 1.8-times performance improvement over traditional x86 approaches, with about 20% of the power budget allocated to CPU partitions and about 80% allocated to accelerated partitions.
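The power-budget split above can be expressed as a toy model. The 1-megawatt budget and 20%/80% allocation come from the article; the performance-per-watt figures below are hypothetical placeholders chosen so the model reproduces the reported 1.8-times speedup, not measurements from Nvidia.

```python
# Toy model: split a fixed data-center power budget between CPU and
# accelerated partitions. Budget and split ratios are from the article;
# the perf-per-watt values are illustrative placeholders, back-solved
# to match the article's 1.8x figure.

BUDGET_W = 1_000_000               # 1-megawatt data center
CPU_SHARE, ACCEL_SHARE = 0.20, 0.80

# Hypothetical relative performance per watt (baseline x86 CPU = 1.0).
PERF_PER_W = {"x86_cpu": 1.0, "grace_cpu": 1.5, "grace_hopper": 1.875}

def throughput(cpu_ppw: float, accel_ppw: float) -> float:
    """Aggregate throughput (arbitrary units) for a split power budget."""
    return (BUDGET_W * CPU_SHARE * cpu_ppw
            + BUDGET_W * ACCEL_SHARE * accel_ppw)

baseline = BUDGET_W * PERF_PER_W["x86_cpu"]  # all-x86 data center
mixed = throughput(PERF_PER_W["grace_cpu"], PERF_PER_W["grace_hopper"])
print(f"speedup vs. x86: {mixed / baseline:.2f}x")  # prints "speedup vs. x86: 1.80x"
```

The point of the model is only that shifting most of a fixed power budget to partitions with better performance per watt raises aggregate throughput without raising total power.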
Network offloading improvements
Nvidia has also released new research quantifying the benefits of offloading data management and networking tasks to the BlueField DPU. The smart network interface controller combines traditional network functionality with accelerated networking, security, storage and control plane functions. The company found that it could reduce overall power usage by about 30% per server. In a large data center with about 10,000 servers, this could save roughly $5 million in energy costs over a three-year lifespan.
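A back-of-the-envelope check shows how the $5 million figure can arise from a 30% per-server reduction. The server count, reduction and three-year horizon are from the article; the average per-server draw (700 W) and electricity price ($0.09/kWh) are assumptions for illustration, not Nvidia's figures.

```python
# Sanity-check the article's claim: 30% per-server power savings across
# ~10,000 servers -> roughly $5M over three years. Average draw and
# electricity price below are assumed values, not from the article.

SERVERS = 10_000
AVG_DRAW_W = 700          # assumed average per-server power draw, watts
REDUCTION = 0.30          # 30% power saving from DPU offload (article)
PRICE_PER_KWH = 0.09      # assumed electricity price, USD per kWh
HOURS_3Y = 3 * 365 * 24   # hours in a three-year lifespan

saved_kwh = SERVERS * AVG_DRAW_W * REDUCTION * HOURS_3Y / 1000
savings_usd = saved_kwh * PRICE_PER_KWH
print(f"~${savings_usd / 1e6:.1f}M saved over three years")  # ~$5.0M
```

With these assumed inputs the model lands within a few percent of the article's $5 million estimate; other plausible draw/price combinations give similar orders of magnitude.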
More efficient libraries
“Accelerating computing is a full-stack problem,” Harris explained. So Nvidia has been optimizing the underlying libraries that help popular scientific computing tools work across multiple GPUs, systems and locations.
An update to the Nvidia Collective Communications Library (NCCL) drove a threefold performance improvement for VASP (the Vienna Ab initio Simulation Package), a popular application for atomic-scale materials modeling, without any hardware changes.
Improvements in Nvidia CUDA Fast Fourier Transform (cuFFT) enabled a fivefold improvement on GROMACS, a simulation package for biomolecular systems. The new update also makes it easier to efficiently run FFT calculations across a much larger number of systems in parallel.
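Batched FFT execution, the pattern behind running many transforms in parallel, can be sketched on the CPU with NumPy standing in for cuFFT. The sketch shows the batching idea only; cuFFT itself executes the same pattern on the GPU.

```python
# Batched FFT sketch: one call transforms many independent signals at
# once. NumPy is used here as a CPU stand-in for cuFFT's batched API.
import numpy as np

rng = np.random.default_rng(0)
batch, n = 1024, 256                      # 1,024 independent signals
signals = rng.standard_normal((batch, n))

# One batched call transforms every row in a single operation...
batched = np.fft.fft(signals, axis=-1)

# ...which is equivalent to transforming the signals one at a time.
looped = np.array([np.fft.fft(s) for s in signals])
assert np.allclose(batched, looped)
```

Expressing the work as one large batch, rather than many small calls, is what lets a library schedule the transforms efficiently across parallel hardware.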
“This enables large FFTs at the full data-center scale,” Harris said.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.