  Towards the interpretability of deep learning models for human neuroimaging

Hofmann, S., Beyer, F., Lapuschkin, S., Loeffler, M., Müller, K.-R., Villringer, A., et al. (2021). Towards the interpretability of deep learning models for human neuroimaging. bioRxiv. doi:10.1101/2021.06.25.449906.


Files

Hofmann_Beyer_pre.pdf (Preprint), 9MB
Name: Hofmann_Beyer_pre.pdf
Description: -
OA-Status: Green
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -


Creators

Creators:
Hofmann, Simon (1, 2, 3), Author
Beyer, Frauke (1, 2), Author
Lapuschkin, Sebastian (2, 4), Author
Loeffler, Markus (5), Author
Müller, Klaus-Robert (4, 6, 7, 8, 9), Author
Villringer, Arno (1, 3, 10, 11), Author
Samek, Wojciech (2, 4), Author
Witte, A. Veronica (1, 3), Author
Affiliations:
1. Department of Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_634549
2. Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany, ou_persistent22
3. Clinic for Cognitive Neurology, University of Leipzig Medical Center, Leipzig, Germany, ou_persistent22
4. BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany, ou_persistent22
5. IMISE, University of Leipzig, Leipzig, Germany, ou_persistent22
6. Machine Learning Group, Technical University Berlin, Berlin, Germany, ou_persistent22
7. Department of Artificial Intelligence, Korea University, Seoul, South Korea, ou_persistent22
8. Brain Team, Google Research, Berlin, Germany, ou_persistent22
9. Max Planck Institute for Informatics, Saarbrücken, Germany, ou_persistent22
10. MindBrainBody Institute, Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany, ou_persistent22
11. Center for Stroke Research, Charité – Universitätsmedizin Berlin, Berlin, Germany, ou_persistent22

Content

Free keywords: Brain-age; Structural MRI; Explainable A.I.; Aging; FLAIR; SWI; Diabetes
Abstract: Brain-age (BA) estimates based on deep learning are increasingly used as a neuroimaging biomarker for brain health; however, the underlying neural features have remained unclear. We combined ensembles of convolutional neural networks with Layer-wise Relevance Propagation (LRP) to detect which brain features contribute to BA. Trained on magnetic resonance imaging (MRI) data of a population-based study (n=2637, 18-82 years), our models estimated age accurately from single and multiple modalities, and from regionally restricted as well as whole-brain images (mean absolute errors 3.37-3.86 years). We found that BA estimates capture aging at both small and large scales, revealing gross enlargements of ventricles and subarachnoid spaces as well as lesions, iron accumulations, and atrophies that appear throughout the brain. Divergence from expected aging reflected cardiovascular risk factors, and accelerated aging was more pronounced in the frontal lobe. Applying LRP, our study demonstrates how superior deep learning models detect brain aging in healthy and at-risk individuals throughout adulthood.
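For readers unfamiliar with LRP, the following is a minimal sketch of the LRP-epsilon rule applied to a toy fully connected age regressor in Python/NumPy. It is not the authors' CNN ensemble or code; the network, layer sizes, and variable names are hypothetical and only illustrate how relevance is propagated from the age prediction back to the input features.

import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    # Tiny fully connected age regressor: ReLU on hidden layers, linear output.
    # Returns the input of every layer, which the LRP backward pass needs.
    acts = [x]
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:          # hidden layers only
            x = np.maximum(x, 0.0)
        acts.append(x)
    return acts

def lrp_epsilon(acts, layers, eps=1e-6):
    # Propagate relevance from the scalar age prediction back to the input
    # (LRP-epsilon rule); ReLUs are treated as pass-through.
    R = acts[-1].copy()                  # relevance starts as the predicted age
    for (W, b), a in zip(reversed(layers), reversed(acts[:-1])):
        z = W @ a + b                    # pre-activations of this layer
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabiliser avoids division by ~0
        s = R / z
        R = a * (W.T @ s)                # redistribute relevance to the layer's inputs
    return R

# Toy "flattened MRI" input and three hypothetical layers (all sizes made up).
x = rng.standard_normal(64)
layers = [(rng.standard_normal((32, 64)) / 8, np.zeros(32)),
          (rng.standard_normal((16, 32)) / 6, np.zeros(16)),
          (rng.standard_normal((1, 16)) / 4,  np.zeros(1))]

acts = forward(x, layers)
relevance = lrp_epsilon(acts, layers)
print("toy age prediction:", float(acts[-1][0]))
print("summed relevance (approx. equals prediction):", float(relevance.sum()))

In a setting like the paper's, the same backward pass runs through convolutional layers of 3D CNNs, yielding voxel-wise relevance maps that can be overlaid on the input MRI volume.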

Details

Language(s): eng - English
Dates: 2021-08-26
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1101/2021.06.25.449906
Degree: -

Source 1

Title: bioRxiv
Source Genre: Web Page
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -