
Released

Talk

Interactions between Model-free and Model-based Reinforcement Learning

MPS-Authors
There are no MPG authors available for this publication.
Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Dayan, P. (2011). Interactions between Model-free and Model-based Reinforcement Learning. Talk presented at 21st Annual Conference of the Japanese Neural Network Society (JNNS 2011). Okinawa, Japan. 2011-12-15 - 2011-12-17.


Cite as: https://hdl.handle.net/21.11116/0000-0007-4A4D-1
Abstract
Substantial recent work has explored multiple mechanisms of decision-making in humans and other animals. Functionally and anatomically distinct modules have been identified, and their individual properties have been examined using intricate behavioural and neural tools. I will discuss the background of these studies, and show fMRI results that suggest closer and more complex interactions between the mechanisms than originally conceived. In some circumstances, model-free methods seize control after much less experience than would seem normative; in others, temporal difference prediction errors, which are epiphenomenal for the model-based system, are nevertheless present and apparently effective. Finally, I will show that model-free and model-based methods on occasion both cower in the face of Pavlovian influences, and will try to reconcile this as a form of robust control.
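For context on the term used above: the temporal difference prediction error is the quantity that drives value learning in model-free accounts. A minimal statement in standard reinforcement-learning notation (the symbols r, gamma, V, and alpha are the conventional ones, not taken from this record) is

\[
\delta_t = r_{t+1} + \gamma\, V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\, \delta_t,
\]

where $r_{t+1}$ is the reward received on the transition, $\gamma$ the discount factor, $V$ the current state-value estimate, and $\alpha$ the learning rate. A model-based system instead computes values by planning over a learned transition and reward model, which is why prediction errors of this kind are, in principle, epiphenomenal for it.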