
Released

Poster

Tracking human skill learning with a hierarchical Bayesian sequence model

MPS-Authors

Elteto, N
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Dayan, P
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Elteto, N., Nemeth, D., Janacsek, K., & Dayan, P. (2022). Tracking human skill learning with a hierarchical Bayesian sequence model. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2022), Lisboa, Portugal.


Cite as: https://hdl.handle.net/21.11116/0000-000A-033E-E
Abstract
Perceptuo-motor sequences that underlie our everyday skills, from walking to language, have higher-order dependencies such that the statistics of one sequence element depend on a variably deep window of past elements. We used a non-parametric, hierarchical, forgetful, Bayesian sequence model to characterize the multi-day evolution of human participants’ implicit representation of a serial reaction time task sequence with higher-order dependencies. The model updates trial-by-trial, and seamlessly combines predictive information from shorter and longer windows onto past events, weighting the windows proportionally to their predictive power. We fitted the model to participants’ response times (RTs), assuming that faster responses reflected more certain predictions of the upcoming elements. Already in the first session, the model fit showed that participants had begun to rely on two previous elements (i.e., trigrams) for prediction, thereby successfully adapting to the higher-order task structure. However, at this early stage, local histories influenced their responses, an influence correctly captured by forgetting in the model. With training, forgetting of trigrams was reduced, so that RTs were more robust to local statistical fluctuations – evidence of skilled performance. However, error responses still reflected forgetting-induced volatility of the internal model. By the last training session, a subset of participants shifted their prior further to consider a context even deeper than just two previous elements. Our model was able to predict the degree to which individuals enriched their internal model to represent dependencies of increasing orders.
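The core mechanism described above — trial-by-trial updating, blending of shallow and deep context windows weighted by their evidence, and forgetting of counts — can be illustrated with a deliberately simplified sketch. The class below is not the authors' non-parametric hierarchical Bayesian model; it is a hypothetical fixed-depth context-mixing predictor with exponential forgetting, written only to make the abstract's description concrete. The class name, the evidence-based weighting rule, and the forgetting scheme are all assumptions of this sketch.

```python
from collections import defaultdict


class ForgetfulContextModel:
    """Simplified sketch (NOT the paper's model): blends predictions from
    context windows of length 0..max_depth, weighting deeper windows by
    their accumulated (exponentially decayed) evidence."""

    def __init__(self, n_symbols, max_depth=2, forget=0.99, prior=1.0):
        self.n = n_symbols
        self.max_depth = max_depth   # e.g. 2 -> trigram contexts
        self.forget = forget         # per-trial decay of counts
        self.prior = prior           # symmetric Dirichlet pseudo-count
        # counts[context_tuple][symbol] -> decayed count
        self.counts = defaultdict(lambda: [0.0] * n_symbols)

    def _ctx_predict(self, ctx):
        c = self.counts[ctx]
        total = sum(c) + self.n * self.prior
        probs = [(ci + self.prior) / total for ci in c]
        return probs, sum(c)  # prediction and evidence mass

    def predict(self, history):
        # Start from a uniform guess, then let progressively deeper
        # contexts pull the prediction toward their statistics, in
        # proportion to how much evidence each context has seen.
        probs = [1.0 / self.n] * self.n
        for d in range(self.max_depth + 1):
            if d > len(history):
                break
            ctx = tuple(history[len(history) - d:])
            p, mass = self._ctx_predict(ctx)
            w = mass / (mass + 1.0)  # assumed evidence-based weight
            probs = [(1 - w) * pb + w * pc for pb, pc in zip(probs, p)]
        s = sum(probs)
        return [p / s for p in probs]

    def update(self, history, symbol):
        # Forgetting: every stored count decays each trial, so recent
        # local history dominates early learning.
        for ctx_counts in self.counts.values():
            for i in range(self.n):
                ctx_counts[i] *= self.forget
        # Credit the observed symbol under all matching contexts.
        for d in range(self.max_depth + 1):
            if d > len(history):
                break
            ctx = tuple(history[len(history) - d:])
            self.counts[ctx][symbol] += 1.0


# Usage: a sequence with a second-order (trigram) dependency, where
# the element after the pair (0, 1) is always 2.
model = ForgetfulContextModel(n_symbols=4, max_depth=2, forget=0.99)
history = []
for s in [0, 1, 2, 3] * 50:
    model.update(history, s)
    history.append(s)
p = model.predict([0, 1])  # p[2] should dominate after training
```

Slow forgetting (`forget` close to 1) mimics the trained regime described in the abstract, where predictions become robust to local statistical fluctuations; a smaller `forget` would reproduce the early-session sensitivity to local histories.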