
Released

Talk

Generative and discriminative reinforcement learning as model-based and model-free control

MPS-Authors

Dayan, P
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Dayan, P. (2022). Generative and discriminative reinforcement learning as model-based and model-free control. Talk presented at Second International Conference on Error-Driven Learning in Language (EDLL 2022). Tübingen, Germany. 2022-08-01 - 2022-08-03.


Cite as: https://hdl.handle.net/21.11116/0000-000A-D612-0
Abstract
Substantial recent work has explored multiple mechanisms of decision-making in humans and other animals. Functionally and anatomically distinct modules have been identified, and their individual properties have been examined using intricate behavioural and neural tools. One critical distinction, which is related to many popular psychological dichotomies, is between model-based or goal-directed control, which is reflective and depends on prospective reasoning, and model-free or habitual control, which is reflexive and depends on retrospective learning. I will show how to see these two systems in generative and discriminative terms, respectively, and discuss their interaction and integration.
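The model-based/model-free distinction in the abstract can be illustrated concretely. The following is a minimal sketch (not the speaker's formulation): the same hypothetical two-state MDP is solved once by model-based control, which plans prospectively by iterating over an explicit transition and reward model, and once by model-free control, which learns action values retrospectively from sampled experience via Q-learning, never consulting the model after each transition is drawn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MDP: 2 states, 2 actions, deterministic transitions.
# P[s, a] gives the next state; R[s, a] the reward. Action 1 always
# moves to state 1 and pays 1; action 0 moves to state 0 and pays 0.
P = np.array([[0, 1],
              [0, 1]])
R = np.array([[0.0, 1.0],
              [0.0, 1.0]])
gamma = 0.9

# Model-based ("generative") control: value iteration over the model.
Q_mb = np.zeros((2, 2))
for _ in range(200):
    V = Q_mb.max(axis=1)       # state values under the greedy policy
    Q_mb = R + gamma * V[P]    # one Bellman optimality backup

# Model-free ("discriminative") control: Q-learning from experience,
# using only sampled (s, a, r, s') transitions, never P or R directly.
Q_mf = np.zeros((2, 2))
alpha, epsilon = 0.1, 0.2
s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    a = int(rng.integers(2)) if rng.random() < epsilon else int(Q_mf[s].argmax())
    s2, r = int(P[s, a]), R[s, a]
    # temporal-difference update toward the sampled one-step target
    Q_mf[s, a] += alpha * (r + gamma * Q_mf[s2].max() - Q_mf[s, a])
    s = s2

print(np.round(Q_mb, 2))
print(np.round(Q_mf, 2))
```

Both controllers converge on the same greedy policy (always take action 1, with optimal value 1/(1 − γ) = 10), but by different routes: the model-based values come from prospective computation over the model, while the model-free values are cached estimates distilled from past experience.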