  GPU acceleration of a petascale application for turbulent mixing at high Schmidt number using OpenMP 4.5

Clay, M. P., Buaria, D., Yeung, P. K., & Gotoh, T. (2018). GPU acceleration of a petascale application for turbulent mixing at high Schmidt number using OpenMP 4.5. Computer Physics Communications, 228, 100-114. doi:10.1016/j.cpc.2018.02.020.


Basic data

Genre: Journal article

External references


Creators

Creators:
Clay, M. P., Author
Buaria, Dhawal (1), Author
Yeung, P. K., Author
Gotoh, T., Author
Affiliations:
(1) Laboratory for Fluid Dynamics, Pattern Formation and Biocomplexity, Max Planck Institute for Dynamics and Self-Organization, Max Planck Society, ou_2063287

Content

Keywords: Turbulence; High Schmidt number; Compact finite differences; Asynchronous GPU computing; OpenMP 4.5; Titan (ORNL)
Abstract: This paper reports on the successful implementation of a massively parallel GPU-accelerated algorithm for the direct numerical simulation of turbulent mixing at high Schmidt number. The work stems from a recent development (Comput. Phys. Commun., vol. 219, 2017, 313-328), in which a low-communication algorithm was shown to attain high degrees of scalability on the Cray XE6 architecture when overlapping communication and computation via dedicated communication threads. An even higher level of performance has now been achieved using OpenMP 4.5 on the Cray XK7 architecture, where on each node the 16 integer cores of an AMD Interlagos processor share a single Nvidia K20X GPU accelerator. In the new algorithm, data movements are minimized by performing virtually all of the intensive scalar field computations in the form of combined compact finite difference (CCD) operations on the GPUs. A memory layout in departure from usual practices is found to provide much better performance for a specific kernel required to apply the CCD scheme. Asynchronous execution enabled by adding the OpenMP 4.5 NOWAIT clause to TARGET constructs improves scalability when used to overlap computation on the GPUs with computation and communication on the CPUs. On the 27-petaflops supercomputer Titan at Oak Ridge National Laboratory, USA, a GPU-to-CPU speedup factor of approximately 5 is consistently observed at the largest problem size of 8192^3 grid points for the scalar field computed with 8192 XK7 nodes.
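
A minimal illustrative sketch (not taken from the authors' code) of the asynchronous offload pattern described in the abstract: an OpenMP 4.5 TARGET region with the NOWAIT clause runs on the GPU while the host thread continues with independent CPU work, and a TASKWAIT synchronizes before the device results are used. The array names and the stand-in kernels are hypothetical.

#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)

int main(void)
{
    double *scalar   = malloc(N * sizeof *scalar);   /* field advanced on the GPU (hypothetical) */
    double *velocity = malloc(N * sizeof *velocity); /* field advanced on the CPU (hypothetical) */
    for (int i = 0; i < N; i++) { scalar[i] = 1.0; velocity[i] = 2.0; }

    /* Asynchronous offload: NOWAIT turns the TARGET region into a deferred
       task, so the host thread returns immediately instead of blocking. */
    #pragma omp target teams distribute parallel for nowait \
                map(tofrom: scalar[0:N])
    for (int i = 0; i < N; i++)
        scalar[i] = 0.5 * (scalar[i] + scalar[i] * scalar[i]); /* stand-in kernel */

    /* Independent host work proceeds while the device is busy (in the paper,
       this slot is used for CPU computation and MPI communication). */
    for (int i = 0; i < N; i++)
        velocity[i] *= 0.9;

    /* Synchronize with the deferred target task before using its results. */
    #pragma omp taskwait

    printf("scalar[0] = %g, velocity[0] = %g\n", scalar[0], velocity[0]);
    free(scalar);
    free(velocity);
    return 0;
}

Without NOWAIT, the host thread would block until the device kernel and its data transfers complete, and the overlap of GPU and CPU work that the paper exploits would be lost.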

Details

Language(s): eng - English
Date: 2018-03-07, 2018-07
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: Peer review
Identifiers: DOI: 10.1016/j.cpc.2018.02.020
Degree: -

Event


Legal case


Project information


Source 1

Title: Computer Physics Communications
Source genre: Journal
Creators:
Affiliations:
Place, publisher, edition: -
Pages: - Volume / Issue: 228 Article number: - Start / End page: 100 - 114 Identifier: -