
Released

Poster

Exploring learning trajectories with dynamic infinite hidden Markov models

MPS-Authors
Bruijns, S
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Dayan, P
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Bruijns, S., & Dayan, P. (2021). Exploring learning trajectories with dynamic infinite hidden Markov models. Poster presented at 43rd Annual Conference of the Cognitive Science Society (CogSci 2021).


Cite as: https://hdl.handle.net/21.11116/0000-0008-E408-E
Abstract
Learning the contingencies of a complex experiment is hard, and animals likely revise their strategies multiple times
during the process. Individuals learn in an idiosyncratic manner and may even end up with different asymptotic strategies.
Modeling such long-run acquisition requires a flexible and extensible structure which can capture radically new
behaviours as well as slow changes in existing ones. To this end, we suggest a dynamic input-output infinite hidden
Markov model whose latent states capture behaviours. We fit this model to data collected from mice that learnt a contrast
detection task over tens of sessions and thousands of trials. Different stages of learning are quantified via the number
and psychometric nature of prevalent behavioural states. Our model indicates that initial learning proceeds via drastic
changes in behaviour (i.e. new states), whereas later learning consists of adaptations to existing states, even when the task structure changes notably at this time.
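The core idea above — latent behavioural states, each defined by a psychometric mapping from stimulus to choice, linked across trials by sticky transitions — can be illustrated with a minimal toy sketch. This is not the authors' dynamic infinite HMM (no Bayesian nonparametrics, no slow drift of state parameters); it is a fixed two-state input-output HMM with hypothetical parameter values, with filtered state posteriors computed by the standard forward algorithm.

```python
import numpy as np

# Toy input-output HMM: each latent state maps stimulus contrast (input)
# to P(rightward choice) via a logistic psychometric curve.
# All parameter values are illustrative, not fitted values from the poster.

rng = np.random.default_rng(0)

n_states = 2
slopes = np.array([0.5, 8.0])   # state 0: near-random; state 1: steep psychometric
biases = np.array([0.2, 0.0])

def p_right(state, contrast):
    """P(choose right | state, contrast) under a logistic psychometric curve."""
    return 1.0 / (1.0 + np.exp(-(slopes[state] * contrast + biases[state])))

# Sticky transition matrix: behavioural states persist across many trials.
A = np.array([[0.98, 0.02],
              [0.02, 0.98]])
pi = np.array([0.9, 0.1])  # start mostly in the naive state

# Simulate a session of trials, then recover state posteriors by filtering.
T = 200
contrasts = rng.choice([-0.5, -0.25, 0.25, 0.5], size=T)
states = np.zeros(T, dtype=int)
choices = np.zeros(T, dtype=int)
states[0] = rng.choice(n_states, p=pi)
choices[0] = rng.random() < p_right(states[0], contrasts[0])
for t in range(1, T):
    states[t] = rng.choice(n_states, p=A[states[t - 1]])
    choices[t] = rng.random() < p_right(states[t], contrasts[t])

def forward(contrasts, choices):
    """Filtered posteriors P(state_t | choices_{1..t}, contrasts_{1..t})."""
    alpha = np.zeros((len(choices), n_states))
    for t in range(len(choices)):
        pr = np.array([p_right(k, contrasts[t]) for k in range(n_states)])
        lik = np.where(choices[t] == 1, pr, 1.0 - pr)   # choice likelihood per state
        prior = pi if t == 0 else alpha[t - 1] @ A      # one-step prediction
        alpha[t] = prior * lik
        alpha[t] /= alpha[t].sum()                      # normalize to a posterior
    return alpha

alpha = forward(contrasts, choices)
print(alpha.shape)  # (200, 2)
```

The infinite-state version described in the abstract would additionally let the number of states grow with the data (e.g. via a hierarchical Dirichlet process prior) and let each state's psychometric parameters drift slowly across sessions; this sketch only shows the fixed-state inference step that such a model builds on.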