  Scaling of the GROMACS Molecular Dynamics Code to 65k CPU Cores on an HPC Cluster

Kutzner, C., Miletić, V., Palacio Rodríguez, K., Rampp, M., Hummer, G., de Groot, B. L., & Grubmüller, H. (2025). Scaling of the GROMACS Molecular Dynamics Code to 65k CPU Cores on an HPC Cluster. Journal of Computational Chemistry, 46(5): e70059. doi:10.1002/jcc.70059.


Files

Name: J Comput Chem - 2025 - Kutzner - Scaling of the GROMACS Molecular Dynamics Code to 65k CPU Cores on an HPC Cluster.pdf (Any fulltext), 412 KB
Description: -
OA-Status: Not specified
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -

Creators

Kutzner, Carsten (1), Author
Miletić, Vedran (2), Author
Palacio Rodríguez, Karen (3), Author
Rampp, Markus (2), Author
Hummer, Gerhard (3), Author
de Groot, Bert L. (1), Author
Grubmüller, Helmut (1), Author
Affiliations:
(1) Theoretical and Computational Biophysics, Max Planck Institute for Multidisciplinary Sciences, Göttingen, Germany
(2) Max Planck Computing and Data Facility, Garching, Germany
(3) Department of Theoretical Biophysics, Max Planck Institute of Biophysics, Max Planck Society

Content

Free keywords: benchmark, GROMACS, high performance computing, molecular dynamics, MPI
 Abstract: We benchmarked the performance of the GROMACS 2024 molecular dynamics (MD) code on a modern high-performance computing (HPC) cluster with AMD CPUs on up to 65,536 CPU cores. We used five different MD systems, ranging in size from about 82,000 to 204 million atoms, and evaluated their performance using two different Message Passing Interface (MPI) libraries, Intel-MPI and Open-MPI. The largest system showed near-perfect strong scaling up to 512 nodes or 65,536 cores, maintaining a parallel efficiency above 0.9 even at the highest level of parallelization. Energy efficiency for a given number of nodes was generally equal to or slightly better than parallel efficiency. We achieved peak performances of 687 ns/d for the 82k atom system, 116 ns/d for the 53M atom system, and about 35 ns/d for the largest 204M atom system. These results demonstrate that highly optimized software running on a state-of-the-art HPC cluster provides sufficient computing power to simulate biomolecular systems at the mesoscale of viruses and organelles, and potentially small cells in the near future.
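The parallel-efficiency figures quoted in the abstract follow the usual strong-scaling definition: measured speedup divided by ideal speedup relative to a reference run. As an illustration only, the Python sketch below shows how such efficiencies are typically derived from measured throughput in ns/day; the node counts, cores per node, and performance values in it are placeholder assumptions, not benchmark data from the paper.

# Illustrative sketch: strong-scaling parallel efficiency from MD throughput.
# All numbers below are placeholders, not benchmark data from the paper.

def parallel_efficiency(perf, nodes, ref_perf, ref_nodes):
    """E = (P_N / P_ref) / (N / N_ref): measured speedup over ideal speedup."""
    return (perf / ref_perf) / (nodes / ref_nodes)

cores_per_node = 128  # assumed node size, used only to report core counts
runs = [(1, 2.5), (64, 150.0), (512, 1100.0)]  # (nodes, ns/day), hypothetical

ref_nodes, ref_perf = runs[0]
for nodes, perf in runs:
    eff = parallel_efficiency(perf, nodes, ref_perf, ref_nodes)
    print(f"{nodes:4d} nodes ({nodes * cores_per_node:6d} cores): "
          f"{perf:7.1f} ns/day, parallel efficiency {eff:.2f}")

The same ratio applied to energy consumption instead of wall-clock time gives the energy efficiency the abstract compares against parallel efficiency.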

Details

Language(s): eng - English
Dates: 2024-12-05, 2025-01-26, 2025-02-14
 Publication Status: Issued
 Pages: 5
 Publishing info: -
 Table of Contents: -
 Rev. Type: Peer
 Identifiers: DOI: 10.1002/jcc.70059
BibTex Citekey: kutzner_scaling_2025
 Degree: -

Source 1

Title: Journal of Computational Chemistry
Abbreviation: J. Comput. Chem.
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: New York : Wiley
Pages: -
Volume / Issue: 46 (5)
Sequence Number: e70059
Start / End Page: -
Identifier: ISSN: 0192-8651
CoNE: https://pure.mpg.de/cone/journals/resource/954925489848