Abstract:
Planning enables humans and animals to use their knowledge of the structure of the world to
anticipate the consequences of their actions, even when these consequences have never been experienced.
Yet little is known about the algorithm used by the brain for planning. A possible neural basis for planning
is hinted at in recordings in rodents that have revealed “preplay”, or explicit sequential neural representation
of future states, at decision points. In humans, neuroimaging studies have also identified neural correlates of
future state values, but a direct observation of a neural representation of future states during planning has re-
mained elusive. Directly observing these representations would allow us to disambiguate different possible
planning algorithms. In the present study, we asked subjects to perform a 5-step planning task in a complex
maze. To discourage habit formation, one transition was cued to subjects as unavailable at the beginning of each trial.
Fitting computational models with different maximum search depths to behavioral data suggested a wide range of search depths across subjects, with some maximizing only immediate rewards and others taking into account deep future contingencies. We took advantage of the high temporal resolution of magnetoencephalography
(MEG), along with multivariate pattern classification, to study neural representations of future states during
planning. Dimensionality of MEG time-frequency data was reduced with principal components analysis,
and a linear classifier was applied to the low-dimensional data. The classifier was trained by recording MEG
activity while presenting stimuli in random order before the task began. In leave-one-out cross-validation
on the training data, this classifier performed significantly above chance for all subjects. We then applied
this classifier to neural data acquired at choice points during the task, seeking to identify the future states represented during planning.
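The decoding pipeline described above (PCA for dimensionality reduction, a linear classifier, and leave-one-out cross-validation against chance) can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the authors' analysis code: the array shapes, the number of principal components, the choice of logistic regression as the linear classifier, and the injected class signal are all assumptions for the sake of a runnable example.

```python
# Sketch of a PCA + linear-classifier decoding pipeline with leave-one-out
# cross-validation. Synthetic data stand in for MEG time-frequency features;
# shapes, component count, and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_features, n_states = 60, 200, 4

# Simulated training set: one stimulus label per trial, as if stimuli had
# been presented in random order before the task began.
y = rng.integers(0, n_states, size=n_trials)
X = rng.standard_normal((n_trials, n_features))
X[np.arange(n_trials), y] += 3.0  # inject a class-dependent signal

# Reduce dimensionality with PCA, then fit a linear classifier.
clf = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))

# Leave-one-out cross-validation: each trial is held out once.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
accuracy = scores.mean()
chance = 1.0 / n_states
print(f"LOO accuracy: {accuracy:.2f} (chance = {chance:.2f})")
```

A fitted pipeline of this form could then be applied to held-out task data (e.g. `clf.fit(X, y); clf.predict_proba(X_task)`) to estimate which states are represented at choice points.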