
Record


Released

Talk

Replay

MPG Authors
/persons/resource/persons217460

Dayan, P.
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Dayan, P. (2020). Replay. Talk presented at the 43rd Annual Meeting of the Japan Neuroscience Society. Kobe, Japan. 2020-07-29 - 2020-08-01.


Citation link: https://hdl.handle.net/21.11116/0000-0007-4A00-6
Abstract
Animals and humans replay neural patterns encoding trajectories through their environment, both whilst they solve decision-making tasks and during rest. There is also evidence that activity in sensory cortices is regenerated during periods of time without behaviour in a way that resembles its form when animals are actively engaged in perception. Under the common assumption that we build models of the world and recognize and plan actions using those models, such intrinsically generated patterns are ideal for various forms of model inversion, giving us access to fast and effective methods for sensory processing and decision-making. I will discuss our recent investigations using magnetoencephalography to detect replay in human subjects as they perform decision-making tasks. In a simple choice task, we found evidence for various forms of replay, which differed between subjects who flexibly adjusted their choices to changes in temporal, spatial, and reward structure and those who were slower to adapt to change. The former group predominantly replayed comparatively poorer trajectories during task performance, and subsequently avoided these inefficient choices. The latter replayed comparatively preferred, but suboptimal, trajectories during rest periods between task epochs. We suggest that online and offline replay both contribute to planning, but each is associated with distinct model-based and model-free decision strategies.