
Journal Article

Earth system modeling on modular supercomputing architecture: coupled atmosphere-ocean simulations with ICON 2.6.6-rc

MPS-Authors

Redler, René (/persons/resource/persons37303)
Computational Infrastructure and Model Development (CIMD), Scientific Computing Lab (ScLab), MPI for Meteorology, Max Planck Society

Haak, Helmut (/persons/resource/persons37167)
Director’s Research Group (CVR), Department Climate Variability, MPI for Meteorology, Max Planck Society

Klocke, Daniel (/persons/resource/persons37206)
Computational Infrastructure and Model Development (CIMD), Scientific Computing Lab (ScLab), MPI for Meteorology, Max Planck Society

Kornblueh, Luis (/persons/resource/persons37214)
Computational Infrastructure and Model Development (CIMD), Scientific Computing Lab (ScLab), MPI for Meteorology, Max Planck Society

Fulltext (public)

gmd-17-261-2024.pdf
(Publisher version), 871 KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Bishnoi, A., Stein, O., Meyer, C. I., Redler, R., Eicker, N., Haak, H., et al. (2024). Earth system modeling on modular supercomputing architecture: coupled atmosphere-ocean simulations with ICON 2.6.6-rc. Geoscientific Model Development, 17, 261-273. doi:10.5194/gmd-17-261-2024.


Cite as: https://hdl.handle.net/21.11116/0000-000D-C06B-2
Abstract
The confrontation of complex Earth system model (ESM) codes with novel supercomputing architectures poses challenges to efficient modeling and job submission strategies. The modular setup of these models naturally fits a modular supercomputing architecture (MSA), which tightly integrates heterogeneous hardware resources into a larger and more flexible high-performance computing (HPC) system. While parts of the ESM codes can easily take advantage of the increased parallelism and communication capabilities of modern GPUs, others lag behind due to long development cycles or are better suited to run on classical CPUs because of their communication and memory usage patterns. To better cope with these imbalances between the development of the model components, we performed benchmark campaigns on the Jülich Wizard for European Leadership Science (JUWELS) modular HPC system. We enabled the weather and climate model Icosahedral Nonhydrostatic (ICON) to run in a coupled atmosphere–ocean setup, in which the ocean and the model I/O run on the CPU Cluster while the atmosphere is simulated simultaneously on the GPUs of the JUWELS Booster (ICON-MSA). Both atmosphere and ocean run globally at a resolution of 5 km. In our test case, an optimal configuration in terms of model performance (core hours per simulation day) was found for the combination of 84 GPU nodes on the JUWELS Booster module to simulate the atmosphere and 80 CPU nodes on the JUWELS Cluster module, of which 63 nodes were used for the ocean simulation and the remaining 17 nodes were reserved for I/O. With this configuration, the waiting times of the coupler were minimized. Compared to a simulation performed on CPUs only, the MSA approach reduces energy consumption by 45 % with comparable runtimes. ICON-MSA is able to scale up to a significant portion of the JUWELS system, making best use of the available computing resources. A maximum throughput of 170 simulation days per day (SDPD) was achieved when running ICON on 335 JUWELS Booster nodes and 268 Cluster nodes.
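To put the reported throughput figures into perspective, the following minimal Python sketch derives wall-clock time and resource cost per simulated day from the numbers quoted in the abstract. The per-node hardware counts (48 CPU cores per JUWELS Cluster node, 4 GPUs per Booster node) are assumptions based on the publicly documented JUWELS configuration and are not taken from this record.

```python
# Back-of-the-envelope check of the throughput reported in the abstract.
# Node counts and the 170 SDPD figure come from the text; per-node core
# and GPU counts are ASSUMPTIONS based on published JUWELS specifications.

SECONDS_PER_DAY = 86_400

def throughput_stats(sdpd: float, booster_nodes: int, cluster_nodes: int,
                     cores_per_cluster_node: int = 48,  # assumption
                     gpus_per_booster_node: int = 4):   # assumption
    """Derive wall-clock and resource cost per simulated day from SDPD."""
    hours_per_sim_day = 24.0 / sdpd
    return {
        "wallclock_per_sim_day_s": round(SECONDS_PER_DAY / sdpd, 1),
        "node_hours_per_sim_day":
            round((booster_nodes + cluster_nodes) * hours_per_sim_day, 1),
        "gpu_hours_per_sim_day":
            round(booster_nodes * gpus_per_booster_node * hours_per_sim_day, 1),
        "cpu_core_hours_per_sim_day":
            round(cluster_nodes * cores_per_cluster_node * hours_per_sim_day, 1),
    }

# Largest configuration reported in the abstract:
# 335 Booster nodes + 268 Cluster nodes at 170 SDPD.
print(throughput_stats(sdpd=170, booster_nodes=335, cluster_nodes=268))
```

For this configuration the sketch yields roughly 508 s of wall clock and about 85 node-hours per simulated day. The same function could be applied to the optimal 84-Booster-plus-80-Cluster setup, but the abstract does not report an SDPD value for that case.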