

Conference Paper

Deployment of an HPC-Accelerated Research Data Management System: Exemplary Workflow in HeartAndBrain Study

MPS-Authors

Parlitz, Ulrich
Research Group Biomedical Physics, Max Planck Institute for Dynamics and Self-Organization, Max Planck Society;


Luther, Stefan
Research Group Biomedical Physics, Max Planck Institute for Dynamics and Self-Organization, Max Planck Society;

Fulltext (public)

WSBiosignals2024_Telezki.pdf
(Any fulltext), 416KB

Citation

Telezki, V., tom Wörden, H., Spreckelsen, F., Nolte, H., Kunkel, J., Parlitz, U., et al. (2024). Deployment of an HPC-Accelerated Research Data Management System: Exemplary Workflow in HeartAndBrain Study. In Proceedings of the Workshop Biosignal 2024. Göttingen: Göttingen Research Online Publications. doi:10.47952/gro-publ-204.


Cite as: https://hdl.handle.net/21.11116/0000-000F-271A-9
Abstract
We present our workflow and research data management (RDM) within the HeartAndBrain research project of the Department of Neurology at the University Medical Center Göttingen. Here, we aim to investigate waste clearance mechanisms in the human brain [1], [2]. To this end, we collect (longitudinal) data from multiple sources, in particular from magnetic resonance imaging (MRI), ECG, SpO2, a breathing belt, and laboratory analysis of blood and urine. Our RDM system (RDMS) allows us to integrate these inhomogeneous data sources in one database [3], where the data are accessible via structured queries through either an API or a GUI. Furthermore, we developed (semi-)automatic post-processing pipelines that take care of routinely used post-processing steps. Computationally demanding tasks are set up to utilize high-performance computing (HPC) infrastructure, with automatic job submission and re-integration of results into the database. Job submission can also be triggered via the GUI, which gives non-expert users access to advanced, computationally demanding post-processing tools.
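The pattern the abstract describes — heterogeneous records kept in one database, selected via structured queries, with heavy post-processing handed to an HPC scheduler — can be sketched as follows. This is a minimal, fully hypothetical illustration: the record fields, the `Store` class, the `postprocess` command, and the Slurm partition name are all assumptions for illustration, not the project's actual RDMS API.

```python
# Hypothetical sketch: one store for multiple modalities (MRI, ECG, SpO2, ...),
# a structured query over it, and generation of a Slurm batch script for a
# computationally demanding post-processing step.
from dataclasses import dataclass

@dataclass
class Record:
    subject: str          # pseudonymized subject ID
    modality: str         # e.g. "MRI", "ECG", "SpO2"
    path: str             # location of the raw data file
    processed: bool = False

class Store:
    """Toy stand-in for the RDMS: one database integrating all modalities."""
    def __init__(self):
        self.records: list[Record] = []

    def insert(self, rec: Record) -> None:
        self.records.append(rec)

    def query(self, **criteria) -> list[Record]:
        """Structured query: return records matching all field/value pairs."""
        return [r for r in self.records
                if all(getattr(r, k) == v for k, v in criteria.items())]

def slurm_script(rec: Record, partition: str = "medium") -> str:
    """Build a batch script for a demanding post-processing step;
    a real pipeline would pass it to `sbatch` and later re-integrate
    the results into the database."""
    return (
        "#!/bin/bash\n"
        f"#SBATCH --partition={partition}\n"
        f"#SBATCH --job-name=postproc-{rec.subject}\n"
        f"postprocess --input {rec.path} --modality {rec.modality}\n"
    )

store = Store()
store.insert(Record("sub-001", "MRI", "/data/sub-001/mri.nii.gz"))
store.insert(Record("sub-001", "ECG", "/data/sub-001/ecg.edf"))

# Select all unprocessed MRI scans and generate one job script per record.
pending = store.query(modality="MRI", processed=False)
scripts = [slurm_script(rec) for rec in pending]
```

In this sketch the GUI-triggered submission mentioned in the abstract would simply call the same `query` and `slurm_script` functions behind a form, so non-expert users never touch the scheduler directly.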