
Poster

Decoding future state representations during planning

Citation

Kurth-Nelson, Z., Penny, W., Huys, Q., Guitart-Masip, M., Jafarpour, A., Hassabis, D., et al. (2013). Decoding future state representations during planning. Poster presented at 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2013), Princeton, NJ, USA.


Cite as: https://hdl.handle.net/21.11116/0000-0004-DAF5-2
Abstract
Planning enables humans and animals to use their knowledge of the structure of the world to anticipate the consequences of their actions, even when those consequences have never been experienced. Yet little is known about the algorithm the brain uses for planning. A possible neural basis for planning is hinted at by recordings in rodents, which have revealed “preplay”: explicit sequential neural representation of future states at decision points. In humans, neuroimaging studies have also identified neural correlates of future state values, but direct observation of a neural representation of future states during planning has remained elusive. Directly observing these representations would allow us to disambiguate different possible planning algorithms. In the present study, we asked subjects to perform a 5-step planning task in a complex maze. To prevent habit formation, one unavailable transition was cued to subjects at the beginning of each trial. Fitting computational models with different maximum search depths to the behavioral data suggested a wide range of depths across subjects, with some maximizing only immediate rewards and others taking deep future contingencies into account. We took advantage of the fast time resolution of magnetoencephalography (MEG), together with multivariate pattern classification, to study neural representations of future states during planning. The dimensionality of the MEG time-frequency data was reduced with principal components analysis, and a linear classifier was applied to the low-dimensional data. The classifier was trained on MEG activity recorded while stimuli were presented in random order before the task began. In leave-one-out cross-validation on the training data, this classifier performed significantly above chance for all subjects. We then applied the classifier to neural data acquired at choice points during the task, seeking to identify the future states represented during planning.
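The decoding pipeline described in the abstract (PCA for dimensionality reduction, a linear classifier, and leave-one-out cross-validation) can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the data are synthetic stand-ins for MEG time-frequency features, the number of components is arbitrary, and a nearest-centroid rule is used as one simple choice of linear classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MEG time-frequency features:
# n_trials x n_features, with a class-dependent mean shift
# (hypothetical data; the study used real sensor recordings).
n_trials, n_features, n_states = 60, 200, 4
labels = np.repeat(np.arange(n_states), n_trials // n_states)
X = rng.normal(size=(n_trials, n_features))
X += labels[:, None] * 0.5  # inject a separable class signal

def pca_fit(X_train, n_components):
    """Return the training mean and top principal axes (via SVD)."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:n_components]

def classify(z, centroids):
    """Nearest-centroid linear classifier in PC space."""
    return np.argmin(((centroids - z) ** 2).sum(axis=1))

# Leave-one-out cross-validation, as in the abstract:
# hold out one trial, fit PCA and class centroids on the rest,
# then classify the held-out trial in the low-dimensional space.
correct = 0
for i in range(n_trials):
    train = np.delete(np.arange(n_trials), i)
    mu, W = pca_fit(X[train], n_components=10)
    Z = (X[train] - mu) @ W.T
    centroids = np.array([Z[labels[train] == c].mean(axis=0)
                          for c in range(n_states)])
    z_test = (X[i] - mu) @ W.T
    correct += classify(z_test, centroids) == labels[i]

accuracy = correct / n_trials
print(f"LOO accuracy: {accuracy:.2f} (chance = {1 / n_states:.2f})")
```

With a clearly separable synthetic signal the cross-validated accuracy lands well above the 1/4 chance level; on real MEG data, performing significantly above chance for every subject is the analogous criterion the abstract reports before applying the classifier to choice-point data.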