Talk

When good decisions go bad: Reinforcement learning and computational psychiatry

Citation

Dayan, P. (2011). When good decisions go bad: Reinforcement learning and computational psychiatry. Talk presented at Donders Institute for Brain, Cognition and Behaviour: Donders Lecture. Nijmegen, The Netherlands. 2011-09-22.


Cite as: https://hdl.handle.net/21.11116/0000-0004-DA06-0
Abstract
Substantial efforts across the fields of statistics, operations research, economics, computer science and control theory have provided us with a psychologically- and neurobiologically-grounded account of how humans and other animals learn to predict rewards and punishments, and choose actions to maximize the former and minimize the latter. It is then an obvious idea to try to relate disruptions of these models to the discontents of decision-making, as seen in neurological and psychiatric disease. I will describe the reinforcement learning model of neural decision making, together with our early attempts to look at aspects of depression through the lenses of: (a) an infelicitous prior distribution over decision-making environments which indicates their lack of controllability; and (b) the failure of a serotonergically-mediated crutch which normally inhibits potentially unfortunate choices. This is joint work with Quentin Huys.
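
As a rough illustration of the reinforcement-learning account of reward prediction and action choice referred to in the abstract, the minimal Python sketch below learns action values on a toy two-armed bandit from reward prediction errors, with a softmax choice rule. The environment, parameter values (alpha, beta, reward_probs) and function names are illustrative assumptions for this sketch, not the specific models discussed in the talk.

import math
import random

def softmax_choice(q_values, beta=3.0):
    """Choose an action with probability proportional to exp(beta * Q)."""
    weights = [math.exp(beta * q) for q in q_values]
    threshold = random.random() * sum(weights)
    cumulative = 0.0
    for action, weight in enumerate(weights):
        cumulative += weight
        if threshold <= cumulative:
            return action
    return len(weights) - 1

def run_bandit(reward_probs=(0.8, 0.3), alpha=0.1, n_trials=500):
    """Learn action values on a two-armed bandit from reward prediction errors."""
    q = [0.0] * len(reward_probs)
    for _ in range(n_trials):
        action = softmax_choice(q)
        reward = 1.0 if random.random() < reward_probs[action] else 0.0
        prediction_error = reward - q[action]   # delta = r - Q(a)
        q[action] += alpha * prediction_error   # Q(a) <- Q(a) + alpha * delta
    return q

if __name__ == "__main__":
    # Learned values should approach the true reward probabilities of the two arms.
    print(run_bandit())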