



Journal Article

Mind the gap: Performance metric evaluation in brain‐age prediction


Draganski, Bogdan
Laboratoire de Recherche en Neuroimagerie (LREN), Centre hospitalier universitaire vaudois, Lausanne, Switzerland;
Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society


de Lange, A. G., Anatürk, M., Rokicki, J., Han, L. K. M., Franke, K., Alnæs, D., et al. (2022). Mind the gap: Performance metric evaluation in brain‐age prediction. Human Brain Mapping, 43(10), 3113-3129. doi:10.1002/hbm.25837.

Cite as: https://hdl.handle.net/21.11116/0000-000A-50E1-D
Estimating age based on neuroimaging-derived data has become a popular approach to developing markers for brain integrity and health. While a variety of machine-learning algorithms can provide accurate predictions of age based on brain characteristics, there is significant variation in model accuracy reported across studies. We predicted age in two population-based datasets, and assessed the effects of age range, sample size, and age-bias correction on the model performance metrics Pearson's correlation coefficient (r), the coefficient of determination (R²), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). The results showed that these metrics vary considerably depending on cohort age range; r and R² values are lower when measured in samples with a narrower age range. RMSE and MAE are also lower in samples with a narrower age range due to smaller errors/brain-age delta values when predictions are closer to the mean age of the group. Across subsets with different age ranges, performance metrics improve with increasing sample size. Performance metrics further vary depending on prediction variance as well as the mean age difference between training and test sets, and age-bias corrected metrics indicate high accuracy, also for models showing poor initial performance. In conclusion, performance metrics used for evaluating age prediction models depend on cohort and study-specific data characteristics, and cannot be directly compared across different studies. Since age-bias corrected metrics generally indicate high accuracy, even for poorly performing models, inspection of uncorrected model results provides important information about underlying model attributes such as prediction variance.
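The four performance metrics discussed in the abstract, and the linear age-bias correction they contrast with, can be sketched as follows. This is an illustrative example on simulated data, not the authors' actual pipeline; the variable names, simulated age range, and noise level are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(40, 80, 500)            # hypothetical chronological ages
pred = age + rng.normal(0, 5, age.size)   # hypothetical brain-age predictions

def metrics(y, y_hat):
    """Return the four metrics evaluated in the abstract."""
    r = np.corrcoef(y, y_hat)[0, 1]                   # Pearson's correlation coefficient
    ss_res = np.sum((y - y_hat) ** 2)
    r2 = 1 - ss_res / np.sum((y - y.mean()) ** 2)     # coefficient of determination
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))         # Root Mean Squared Error
    mae = np.mean(np.abs(y - y_hat))                  # Mean Absolute Error
    return r, r2, rmse, mae

# One common linear age-bias correction: regress predictions on age,
# then remove the fitted systematic over-/under-estimation. Applied to
# the same data it was fit on, it can only improve R2 -- which is the
# abstract's point about corrected metrics looking accurate even for
# poorly performing models.
slope, intercept = np.polyfit(age, pred, 1)
pred_corrected = pred + (age - (slope * age + intercept))

print("uncorrected:", metrics(age, pred))
print("corrected:  ", metrics(age, pred_corrected))
```

Note that a narrower age range shrinks the variance of `age`, which deflates r and R² even when the error per subject is unchanged; subsetting `age` to, say, 55-65 before calling `metrics` reproduces that effect.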