MS045 - Energy-Aware High-Performance Computing
Keywords: Acceleration, Carbon footprint, Energy, GPUs, High-Performance Computing, Neuromorphic computing, Novel model design, SIMD, TPUs
With climate change accelerating, scientists, especially in high-performance computing (HPC), must address the environmental impact of large-scale simulations, whose energy demand causes substantial carbon dioxide emissions. Although supercomputing infrastructure has become significantly more energy-efficient in recent years, leveraging these advances typically requires major adaptations to algorithms and code bases. With JEDI, the first module of Europe's exascale computer, ranked first on the Green500 list, and similar systems entering operation, the use of graphics processing unit (GPU) accelerators is becoming essential to avoid wasting computational resources.
Many researchers have improved energy efficiency by exploiting single instruction, multiple data (SIMD) parallelism, for example through Advanced Vector Extensions (AVX), or by porting applications to GPUs. Fair comparisons with central processing unit (CPU) implementations remain essential to quantify the resulting energy savings. Domain-specific redesigns of models and algorithms also play a key role. For example, combining fine-grained agent-based models with coarse-grained metapopulation models has been shown to achieve up to 98% energy savings while still meeting the target outcomes. In general, combining algorithmic and hardware-aware strategies can preserve accuracy while reducing cost.
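As a purely illustrative sketch (not taken from any of the studies referred to above), the Python snippet below contrasts an element-wise loop with a vectorized formulation of the same kernel. In compiled HPC codes, this kind of data-parallel restructuring is what allows compilers to emit SIMD (e.g., AVX) instructions; here NumPy's precompiled kernels play that role. The function names are hypothetical.

import numpy as np

def saxpy_scalar(a, x, y):
    # Element-at-a-time loop: little opportunity for SIMD units
    out = np.empty_like(y)
    for i in range(x.size):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    # Whole-array formulation: the same arithmetic runs in precompiled
    # kernels that use SIMD lanes (e.g., AVX) where the hardware provides them
    return a * x + y

x = np.random.rand(1_000_000).astype(np.float32)
y = np.random.rand(1_000_000).astype(np.float32)
assert np.allclose(saxpy_scalar(2.0, x, y), saxpy_vectorized(2.0, x, y))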
The idea of algorithm–hardware co-design is particularly advanced in machine learning (ML), which is now one of the main drivers of global computing energy use. As ML models grow in scale and complexity, so does the demand for energy-efficient training and inference. This has led to the development of specialized hardware, such as neuromorphic computing chips and tensor processing units (TPUs), designed in close alignment with the algorithms they support. Other promising directions include field-programmable gate arrays (FPGAs) and mixed-precision computing on GPUs and other accelerators.
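As one hedged example of mixed-precision computing, the sketch below performs the expensive linear solve in single precision and recovers double-precision accuracy through iterative refinement. The plain NumPy setting and the function name are illustrative assumptions; on GPUs and other accelerators, the low-precision step is where most of the runtime and energy savings would come from, and in practice the single-precision factorization would be reused rather than recomputed.

import numpy as np

def solve_mixed_precision(A, b, refinements=3):
    # Expensive solve in single precision (cheaper, and typically more
    # energy-efficient on accelerators, than a full double-precision solve)
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(refinements):
        # Residual and correction in double precision restore accuracy
        r = b - A @ x
        x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    return x

A = np.random.rand(512, 512) + 512 * np.eye(512)  # well-conditioned test matrix
b = np.random.rand(512)
print(np.linalg.norm(A @ solve_mixed_precision(A, b) - b))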
This mini-symposium welcomes contributions on applications that enhance energy efficiency through novel hardware usage, algorithm–hardware co-design, or methodological advances. We also invite submissions on measuring and comparing energy consumption (e.g., CPU vs. GPU) and on techniques that reduce energy usage while maintaining results within acceptable precision bounds.
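For concreteness, a minimal measurement sketch along these lines is shown below. It assumes a Linux system exposing Intel RAPL counters under /sys/class/powercap and an NVIDIA GPU (Volta or newer) with the nvidia-ml-py (pynvml) bindings installed; it ignores counter wrap-around and multi-socket or multi-GPU configurations, and the workload passed in is a placeholder for the application kernel under study.

import time
import pynvml  # nvidia-ml-py; assumes an NVIDIA GPU whose driver exposes energy counters

RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"  # CPU package 0 only; an assumption

def cpu_energy_uj():
    # CPU package energy in microjoules (Intel RAPL via the Linux powercap interface)
    with open(RAPL_PATH) as f:
        return int(f.read())

def measure_energy(workload):
    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    cpu_start = cpu_energy_uj()
    gpu_start = pynvml.nvmlDeviceGetTotalEnergyConsumption(gpu)  # millijoules since driver load
    t_start = time.perf_counter()

    workload()  # hypothetical application kernel, e.g. a CPU or GPU variant of the same solver

    seconds = time.perf_counter() - t_start
    cpu_joules = (cpu_energy_uj() - cpu_start) / 1e6
    gpu_joules = (pynvml.nvmlDeviceGetTotalEnergyConsumption(gpu) - gpu_start) / 1e3
    pynvml.nvmlShutdown()
    return {"seconds": seconds, "cpu_joules": cpu_joules, "gpu_joules": gpu_joules}

Reporting runtime together with CPU and GPU energy for the same kernel, as in this sketch, is one way to keep such comparisons fair.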
