  A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

Buaria, D., & Yeung, P. K. (2017). A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations. Computer Physics Communications, 221, 246-258. doi:10.1016/j.cpc.2017.08.022.

Creators:
Buaria, Dhawal¹, Author
Yeung, P. K., Author
Affiliations:
¹Laboratory for Fluid Dynamics, Pattern Formation and Biocomplexity, Max Planck Institute for Dynamics and Self-Organization, Max Planck Society

Content

Free keywords: Turbulence; Particle tracking; Parallel interpolation; Partitioned global address space (PGAS) programming; Co-Array Fortran; One-sided communication
Abstract: A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions, such that all of the interpolation information needed for each particle is available either locally on its host process or on neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as one-sided communication, using Co-Array Fortran (CAF) features that facilitate small data movements between different local partitions of a large global array. The cost of monitoring the transfer of particle properties between adjacent processes, for particles migrating across sub-domain boundaries, is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192³ simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster than a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking on the order of 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support for PGAS models in major compilers suggests that this algorithm will be of wide applicability on most upcoming supercomputers.
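The abstract's core idea can be illustrated with a minimal serial sketch (not the authors' code): under a 1-D domain decomposition, each particle is assigned to the process owning the sub-domain containing it, and an interpolation stencil near a boundary additionally needs spline coefficients held by the adjacent neighbor. All names and parameters below (domain length `L`, rank count `nproc`, stencil reach `halo`) are illustrative assumptions, not values from the paper.

```python
def owner(x, L, nproc):
    """Rank whose sub-domain [r*L/nproc, (r+1)*L/nproc) contains position x."""
    return min(int(x / (L / nproc)), nproc - 1)

def ranks_needed(x, L, nproc, halo):
    """Ranks whose spline coefficients an interpolation stencil of
    half-width `halo` around x would touch: the owner, plus an
    adjacent rank when x lies within `halo` of a sub-domain boundary."""
    r = owner(x, L, nproc)
    width = L / nproc
    needed = {r}
    if x - r * width < halo and r > 0:          # stencil reaches left neighbor
        needed.add(r - 1)
    if (r + 1) * width - x < halo and r < nproc - 1:  # reaches right neighbor
        needed.add(r + 1)
    return sorted(needed)

# Example: domain [0, 1024) split over 8 ranks (width 128),
# cubic-spline stencil reaching 2 grid points past a boundary.
print(owner(300.0, 1024.0, 8))                 # interior particle: one owner
print(ranks_needed(129.0, 1024.0, 8, 2.0))     # near a boundary: two ranks
```

In the paper's CAF implementation, each rank would fetch only the few needed coefficient values from the neighbor's partition of the global spline-coefficient co-array with a one-sided get, rather than exchanging full halo layers.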

Details

Language(s): eng - English
 Dates: 2017-08-31, 2017-12
 Publication Status: Issued
 Rev. Type: Peer
 Identifiers: DOI: 10.1016/j.cpc.2017.08.022

Source 1

Title: Computer Physics Communications
Source Genre: Journal
Volume / Issue: 221
Start / End Page: 246 - 258