  Decoding future state representations during planning

Kurth-Nelson, Z., Penny, W., Huys, Q., Guitart-Masip, M., Jafarpour, A., Hassabis, D., Barnes, G., Dolan, R., & Dayan, P. (2013). Decoding future state representations during planning. Poster presented at the 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2013), Princeton, NJ, USA.

Creators

Kurth-Nelson, Z, Author
Penny, W, Author
Huys, Q, Author
Guitart-Masip, M, Author
Jafarpour, A, Author
Hassabis, D, Author
Barnes, G, Author
Dolan, R, Author
Dayan, P (1), Author
Affiliations:
(1) External Organizations

Content

Abstract: Planning enables humans and animals to use their knowledge of the structure of the world to anticipate the consequences of their actions, even when those consequences have never been experienced. Yet little is known about the algorithm the brain uses for planning. A possible neural basis for planning is hinted at by recordings in rodents that have revealed “preplay”, an explicit sequential neural representation of future states, at decision points. In humans, neuroimaging studies have also identified neural correlates of future state values, but a direct observation of a neural representation of future states during planning has remained elusive. Directly observing these representations would allow us to disambiguate different possible planning algorithms. In the present study, we asked subjects to perform a 5-step planning task in a complex maze. To prevent habit formation, one unavailable transition was cued to subjects at the beginning of each trial. Fitting computational models with different maximum search depths to the behavioral data suggested a wide range of depths across subjects, with some maximizing only immediate rewards and others taking deep future contingencies into account. We took advantage of the fast time resolution of magnetoencephalography (MEG), along with multivariate pattern classification, to study neural representations of future states during planning. The dimensionality of the MEG time-frequency data was reduced with principal components analysis, and a linear classifier was applied to the low-dimensional data. The classifier was trained on MEG activity recorded while stimuli were presented in random order before the task began. In leave-one-out cross-validation on the training data, this classifier performed significantly above chance for all subjects. We then applied this classifier to neural data acquired at choice points during the task; using this approach, we seek to identify the future states represented during planning.
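
The abstract describes fitting computational models with different maximum search depths to the choice data. A minimal sketch of how such a depth-limited planner could be scored against behavior is given below; the maze structure, reward layout, softmax choice rule, and all function and variable names here are illustrative assumptions, not details taken from the poster.

```python
import numpy as np

def depth_limited_value(state, depth, T, R, blocked):
    """Best achievable reward sum from `state` over the next `depth` steps.
    T[s] lists the successors of state s; `blocked` is the (s, s') transition
    cued as unavailable on this trial."""
    if depth == 0:
        return 0.0
    vals = [R[s2] + depth_limited_value(s2, depth - 1, T, R, blocked)
            for s2 in T[state] if (state, s2) != blocked]
    return max(vals) if vals else 0.0

def choice_loglik(choices, depth, beta, T, R, blocked_per_trial):
    """Log-likelihood of observed choices under a softmax (inverse
    temperature `beta`) over depth-limited action values."""
    ll = 0.0
    for (state, chosen), blocked in zip(choices, blocked_per_trial):
        options = [s2 for s2 in T[state] if (state, s2) != blocked]
        q = np.array([R[s2] + depth_limited_value(s2, depth - 1, T, R, blocked)
                      for s2 in options])
        p = np.exp(beta * q - np.max(beta * q))
        p /= p.sum()
        ll += np.log(p[options.index(chosen)])
    return ll
```

A per-subject search depth could then be estimated by maximizing this log-likelihood over depth in {1, ..., 5} (and over beta), with depth 1 corresponding to subjects who maximize only immediate rewards.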
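
The decoding pipeline in the abstract (PCA on MEG time-frequency features, then a linear classifier evaluated with leave-one-out cross-validation) could look roughly like the following scikit-learn sketch. The feature shapes, the 50-component cut-off, and the choice of logistic regression are assumptions for illustration; the poster specifies only that the classifier is linear.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# X: one row per pre-task stimulus presentation, columns are flattened
# MEG time-frequency features; y: which maze state was shown.
# (Random placeholders stand in for real MEG data here.)
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 2000))
y = rng.integers(0, 6, size=120)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=50),               # reduce dimensionality first
    LogisticRegression(max_iter=1000),  # one possible linear classifier
)

# Leave-one-out cross-validation on the pre-task training data.
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOO decoding accuracy: {acc:.2f} (chance = {1/6:.2f})")

# The classifier fit on all training data would then be applied to MEG
# epochs recorded at choice points to look for future-state representations.
clf.fit(X, y)
```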

Details

Dates: 2013-10
Publication Status: Published online

Event

Title: 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2013)
Place of Event: Princeton, NJ, USA
Start-/End Date: 2013-10-25 - 2013-10-27


Source 1

Title: 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2013)
Source Genre: Proceedings
Sequence Number: S16
Start / End Page: 54 - 55