
  Understanding Learning Trajectories With Infinite Hidden Markov Models

Bruijns, S., Dayan, P., & The International Brain Laboratory (2023). Understanding Learning Trajectories With Infinite Hidden Markov Models. In 2023 Conference on Cognitive Computational Neuroscience (pp. 770-772). doi:10.32470/CCN.2023.1632-0.

Basic

Genre: Conference Paper

Files


Locators

Description:
-
OA-Status:
Not specified

Creators

Creators:
Bruijns, S1, Author
Dayan, P1, Author
The International Brain Laboratory, Author
Affiliations:
1Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3017468

Content

Free keywords: -
 Abstract: Learning the contingencies of a new experiment is not an easy task for animals. Individuals learn in an idiosyncratic manner, revising their strategies multiple times as they are shaped, or shape themselves. Long-run learning is therefore a tantalizing target for the sort of quantitatively individualized characterization that sophisticated modelling can provide. However, any such model requires a highly flexible and extensible structure which can capture radically new behaviours as well as slow adaptations in existing ones. Here, we suggest a dynamic input-output infinite hidden Markov model whose latent states are associated with specific, slowly-adapting, behavioural patterns. This model includes a countably infinite number of potential states and so can describe new behaviour by introducing additional states, while the dynamics in the model allow it to capture adaptations to existing behaviours. We fit this model to the choices of mice as they take around 10,000 trials each, across multiple sessions, to learn a contrast detection task. We identify three types of behavioural states which demarcate essential steps in the learning of our task for virtually all mice. Our approach provides in-depth insight into the process of animal learning and offers potentially valuable predictors for analyzing neural data.
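The abstract's key mechanism is a countably infinite state space that grows as new behaviours appear. As a loose, hypothetical illustration of how such a state space can expand with data (not the authors' model, which is a dynamic input-output infinite HMM fit to choice data), here is a minimal Chinese-restaurant-process sketch in which a new state is opened with probability proportional to a concentration parameter `alpha`:

```python
import random

def crp_assignments(n_obs, alpha, seed=0):
    """Sample state assignments from a Chinese restaurant process.

    At step t, an existing state k is chosen with probability
    proportional to its occupancy count, and a brand-new state with
    probability proportional to alpha -- so the number of states can
    grow without bound as observations arrive.
    """
    rng = random.Random(seed)
    counts = []       # occupancy of each state discovered so far
    assignments = []
    for _ in range(n_obs):
        total = sum(counts) + alpha
        r = rng.uniform(0, total)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                assignments.append(k)
                break
        else:
            counts.append(1)  # open a brand-new state
            assignments.append(len(counts) - 1)
    return assignments

states = crp_assignments(1000, alpha=2.0)
print(len(set(states)))  # number of distinct states discovered
```

In a hierarchical-Dirichlet-process HMM this prior governs transitions between states rather than a flat partition, but the growth mechanism, where probability mass is always reserved for unseen states, is the same idea.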

Details

Language(s):
 Dates: 2023-08
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.32470/CCN.2023.1632-0
 Degree: -

Event

Title: Conference on Cognitive Computational Neuroscience (CCN 2023)
Place of Event: Oxford, UK
Start-/End Date: 2023-08-24 - 2023-08-27

Source 1

Title: 2023 Conference on Cognitive Computational Neuroscience
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: P-2.97
Start / End Page: 770 - 772
Identifier: -