
Released

Poster

Quantitative Assessment of ML-Based Feature Attribution Methods for ASD Biomarker Discovery in fMRI

MPS-Authors
Mahler, L
Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society

Scheffler, K
Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society

Lohmann, G
Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Mahler, L., Scheffler, K., & Lohmann, G. (2024). Quantitative Assessment of ML-Based Feature Attribution Methods for ASD Biomarker Discovery in fMRI. Poster presented at 30th Annual Meeting of the Organization for Human Brain Mapping (OHBM 2024), Seoul, South Korea.


Cite as: https://hdl.handle.net/21.11116/0000-000F-517B-C
Abstract
Introduction: Mental disorders are a pressing global health concern, underscoring the critical need for reliable biomarkers, particularly in conditions such as autism spectrum disorder (ASD). Although ASD affects roughly 1 in 160 children worldwide, the lack of objective biomarkers hinders timely and accurate diagnosis, which currently relies solely on behavioral observation. This study explores the convergence of deep learning, explainable AI, and ASD diagnostics. We evaluate the potential of extracting meaningful biomarkers from the learned representations of deep learning models applied to rs-fMRI data. In line with recent advances in explainable AI, our focus is on quantitatively comparing methods for interpreting machine learning predictions, with the goal of identifying the method that produces the highest-quality explanations and thereby improving our understanding of the model's decision-making process. Beyond prediction, this study undertakes a quantitative exploration of brain-imaging-based biomarkers: using the selected explainable AI method, we aim to uncover biomarkers that contribute to a deeper understanding of ASD, with implications for refining diagnostic strategies and advancing our understanding of psychiatric disorders.

Methods: We use METAFormer (Mahler et al., 2023), a multi-atlas transformer model for ASD classification, trained on the ABIDE-I dataset (Di Martino et al., 2013; 406 ASD, 476 TC subjects) preprocessed with the PCP DPARSF pipeline. METAFormer achieves 83.7% accuracy using functional connectomes from the AAL, CC200, and DOS160 atlases. To identify critical input ROIs, we apply common feature attribution methods: Integrated Gradients, DeepLIFT, Feature Ablation, Gradient SHAP, and DeepLIFT-SHAP (Lundberg & Lee, 2017). Each method provides insight into feature contributions.
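To illustrate the general idea behind path-based attribution methods such as Integrated Gradients, here is a minimal NumPy sketch on a toy quadratic scorer standing in for a trained classifier. The model, weights, and step count are illustrative assumptions, not taken from the poster; the actual study applies these methods to METAFormer via standard attribution libraries.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Approximate Integrated Gradients along the straight-line path
    from `baseline` to `x` with a midpoint Riemann sum over `steps`."""
    alphas = (np.arange(steps) + 0.5) / steps           # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)  # (steps, d) points
    grads = np.stack([grad_f(p) for p in path])         # gradients along path
    return (x - baseline) * grads.mean(axis=0)          # IG formula

# Toy quadratic "model" f(x) = sum_i w_i * x_i^2 (hypothetical, for illustration).
w = np.array([0.5, -1.0, 2.0])
f = lambda x: float(w @ (x ** 2))
grad_f = lambda x: 2.0 * w * x

x = np.array([1.0, 2.0, 0.5])
baseline = np.zeros(3)                  # zero baseline, a common default choice
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

The completeness check at the end is what makes such methods attractive for biomarker discovery: each input (here standing in for a connectome edge or ROI) receives a share of the prediction that sums to the model's total output change.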
For quantitative evaluation, we use two metrics (Yeh et al., 2019): 1) Infidelity, the mean squared error between the effect of input perturbations as predicted by the explanation and the corresponding change in the prediction function, which assesses the accuracy of explanations under significant perturbations; and 2) Sensitivity, which measures how strongly the attribution changes under insignificant perturbations of a test point. We randomly sample training, validation, and test sets for model training and attribution generation, computing infidelity and sensitivity for each data point and attribution method. Since many attribution methods require a manual choice of baseline, we also examine its effect on explanation quality by evaluating each method over a baseline range of [-1, 1].
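The two evaluation metrics can be sketched as follows. This is a minimal NumPy illustration of the Yeh et al. (2019) definitions on a toy linear model; all names, perturbation scales, and sample counts are illustrative assumptions, not the study's actual configuration. For a linear model, the gradient explanation is exact, so its infidelity and sensitivity are both (near) zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def infidelity(f, x, attr, n_pert=500, scale=0.5):
    """Infidelity: expected squared gap between the attribution-predicted
    effect of a perturbation I (i.e. I . attr) and the actual change in
    the model output, f(x) - f(x - I)."""
    errs = []
    for _ in range(n_pert):
        I = rng.normal(0.0, scale, size=x.shape)  # random perturbation
        predicted = I @ attr                      # explanation's prediction
        actual = f(x) - f(x - I)                  # true output change
        errs.append((predicted - actual) ** 2)
    return float(np.mean(errs))

def sensitivity_max(attr_fn, x, radius=0.05, n_samples=50):
    """Max-sensitivity: largest change in the attribution under small
    random perturbations within an L-inf ball of `radius` around x."""
    base = attr_fn(x)
    worst = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-radius, radius, size=x.shape)
        worst = max(worst, float(np.linalg.norm(attr_fn(x + delta) - base)))
    return worst

# Toy linear model (hypothetical): gradient attribution is exact here.
w = np.array([0.5, -1.0, 2.0])
f = lambda x: float(w @ x)
attr_fn = lambda x: w.copy()  # constant "gradient" explanation

x = np.array([1.0, 2.0, 0.5])
infid = infidelity(f, x, attr_fn(x))   # ~0: explanation predicts changes exactly
sens = sensitivity_max(attr_fn, x)     # 0.0: attribution does not move at all
print(infid, sens)
```

Lower is better for both metrics, which is what allows the study to rank the five attribution methods: a good explanation should track large perturbations faithfully (low infidelity) while remaining stable under negligible ones (low sensitivity).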