Journal Article

Sufficient reliability of the behavioral and computational readouts of a probabilistic reversal learning task

MPS-Authors
Waltmann, Maria
Department of Psychiatry, Psychosomatics and Psychotherapy, University Hospital Würzburg, Germany;
Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

Schlagenhauf, Florian
Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Department of Psychiatry and Psychotherapy, Charité University Medicine Berlin, Germany;

Deserno, Lorenz
Department of Psychiatry, Psychosomatics and Psychotherapy, University Hospital Würzburg, Germany;
Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Neuroimaging Center, TU Dresden, Germany;

Fulltext (public)

Waltmann_2022.pdf
(Publisher version), 4MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Waltmann, M., Schlagenhauf, F., & Deserno, L. (2022). Sufficient reliability of the behavioral and computational readouts of a probabilistic reversal learning task. Behavior Research Methods, 54(6), 2993-3014. doi:10.3758/s13428-021-01739-7.


Cite as: https://hdl.handle.net/21.11116/0000-000A-0DD0-D
Abstract
Task-based measures that capture neurocognitive processes can help bridge the gap between brain and behavior. To transfer tasks to clinical application, reliability is a crucial benchmark because it imposes an upper bound on potential correlations with other variables (e.g., symptom or brain data). However, the reliability of many task readouts is low. In this study, we scrutinized the retest reliability of a probabilistic reversal learning task (PRLT) that is frequently used to characterize cognitive flexibility in psychiatric populations. We analyzed data from N = 40 healthy subjects, who completed the PRLT twice. We focused on how individual metrics are derived, i.e., whether data were partially pooled across participants and whether priors were used to inform estimates. We compared the reliability of the resulting indices across sessions, as well as the internal consistency of a selection of indices. We found good to excellent reliability for behavioral indices derived from mixed-effects models that included data from both sessions. The internal consistency was good to excellent. For indices derived from computational modeling, we found excellent reliability when using hierarchical estimation with empirical priors and including data from both sessions. Our results indicate that the PRLT is well equipped to measure individual differences in cognitive flexibility in reinforcement learning. However, this depends heavily on hierarchical modeling of the longitudinal data (whether sessions are modeled separately or jointly), on estimation methods, and on the combination of parameters included in computational models. We discuss implications for the applicability of PRLT indices in psychiatric research and as diagnostic tools.
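
The abstract's central methodological point is that per-subject parameter estimates become more reliable when they are informed by priors or by pooling across participants and sessions. As a rough illustration only, and not the authors' actual model space or estimation pipeline, the Python sketch below fits a minimal Q-learning model with a softmax choice rule to hypothetical single-subject PRLT data by maximum a posteriori estimation; a Gaussian prior on the transformed parameters stands in for the empirical priors discussed in the abstract, and all names, prior values, and the simulated data are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def negative_log_posterior(params, choices, rewards, prior_mean, prior_sd):
    """Negative log posterior of a two-parameter Q-learning model
    (learning rate alpha, inverse temperature beta) for a two-option
    probabilistic reversal learning task, with Gaussian priors on the
    unconstrained parameter scale (illustrative, not the paper's model)."""
    alpha = 1.0 / (1.0 + np.exp(-params[0]))   # sigmoid transform -> (0, 1)
    beta = np.exp(params[1])                   # exp transform -> (0, inf)

    q = np.zeros(2)                            # action values for the two options
    nll = 0.0
    for choice, reward in zip(choices, rewards):
        # softmax probability of the option that was actually chosen
        p_choice = np.exp(beta * q[choice]) / np.sum(np.exp(beta * q))
        nll -= np.log(p_choice + 1e-12)
        # Rescorla-Wagner update of the chosen option's value
        q[choice] += alpha * (reward - q[choice])

    # Gaussian prior (e.g., derived empirically from the group) regularises
    # the per-subject estimate, analogous in spirit to partial pooling
    log_prior = norm.logpdf(params, loc=prior_mean, scale=prior_sd).sum()
    return nll - log_prior

# Hypothetical single-subject data: binary choices and rewards over 160 trials
rng = np.random.default_rng(0)
choices = rng.integers(0, 2, size=160)
rewards = rng.integers(0, 2, size=160)

fit = minimize(
    negative_log_posterior,
    x0=np.zeros(2),
    args=(choices, rewards, np.array([0.0, 0.0]), np.array([1.0, 1.0])),
    method="L-BFGS-B",
)
alpha_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
beta_hat = np.exp(fit.x[1])
```

In the paper itself, this idea is taken further: parameters are estimated hierarchically across subjects and, crucially, across both sessions, which is the setting in which the reported reliability of the computational readouts reaches the excellent range.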