
Released

Poster

Automatic Task Decomposition using Compositional Reinforcement Learning

MPG Authors

Dayan,  P
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (publicly accessible)
There are no publicly accessible full texts available in PuRe
Supplementary material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Tano, P., Dayan, P., & Pouget, A. (2022). Automatic Task Decomposition using Compositional Reinforcement Learning. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2022), Lisboa, Portugal.


Citation link: https://hdl.handle.net/21.11116/0000-000A-0340-A
Abstract
Decomposing complex tasks into their simpler components is often the only way for animals to make any meaningful progress at all. We show that reusing the traditional reward prediction error machinery at multiple hierarchical levels allows complex tasks to be automatically decomposed in a compositional manner, leading to fast and flexible reinforcement learning. In this compositional reinforcement learning (CRL) framework, the agent computes a set of predictions for each state in the form of hierarchically organized general value functions (GVFs). Level 0 GVFs predict whether continuing straight along cardinal directions in the state space will lead to a rewarded location, while a level P GVF predicts whether the same simple straight-ahead policy leads to any location with a high value in any of the level P-1 GVFs. Learning involves two steps: (1) learning the mapping from states to GVFs and (2) learning the policy from the GVFs. Both steps are fast in environments with natural cardinal directions and strong compositional structure. Learning the mapping from states to the GVFs with TD learning is fast because it involves simple policies that have low entropy in their outcomes and efficiently explore the state space, while learning the mapping from GVFs to policy is greatly simplified by the compositional structure of the GVFs and the simple mapping from the cardinal directions to available actions. In rapidly changing environments, as is typical for the real world, CRL leads to remarkably fast learning. For instance, CRL vastly outperforms traditional approaches in a maze task in which the maze changes frequently, or when a robotic arm learns to reach for an object whose location varies over trials. This work provides a biologically plausible framework to study task decomposition in animals confronted with rapidly changing environments.
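
To make the two-step scheme concrete, the following is a minimal tabular sketch in Python under several assumptions not stated in the abstract: a small deterministic gridworld with four cardinal actions and a single rewarded cell, TD(0) updates along the straight-ahead rollouts for the level-0 GVFs, a single lifting sweep for a level-1 GVF, and a hand-coded greedy readout standing in for the learned GVF-to-policy mapping. All names (GridWorld-style helpers, learn_level0_gvfs, lift_gvfs, policy_from_gvfs) and hyperparameters are illustrative, not taken from the poster.

import numpy as np

N = 8                                          # grid side length (assumed)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # E, W, S, N cardinal moves
GOAL = (6, 6)                                  # rewarded location (assumed)

def step(state, a):
    """Move one cell in direction a, clipping at the walls."""
    r = min(max(state[0] + a[0], 0), N - 1)
    c = min(max(state[1] + a[1], 0), N - 1)
    return (r, c)

def rollout_straight(state, a, max_len=N):
    """States visited under the simple 'keep going straight' policy."""
    path, s = [], state
    for _ in range(max_len):
        s = step(s, a)
        path.append(s)
    return path

def learn_level0_gvfs(alpha=0.5, gamma=0.95, episodes=200):
    """TD(0) estimate, per cardinal direction, of whether going straight
    from each state eventually reaches the rewarded location."""
    gvf = np.zeros((len(ACTIONS), N, N))
    for _ in range(episodes):
        s = (np.random.randint(N), np.random.randint(N))
        for i, a in enumerate(ACTIONS):
            prev = s
            for nxt in rollout_straight(s, a):
                r = 1.0 if nxt == GOAL else 0.0
                target = r + gamma * gvf[i][nxt]
                gvf[i][prev] += alpha * (target - gvf[i][prev])
                prev = nxt
    return gvf

def lift_gvfs(lower, gamma=0.95):
    """Level P GVF: does going straight reach a state whose best
    level P-1 GVF value is high? (One sweep of the straight-ahead policy.)"""
    upper = np.zeros_like(lower)
    best_lower = lower.max(axis=0)             # best direction at each state
    for i, a in enumerate(ACTIONS):
        for r0 in range(N):
            for c0 in range(N):
                g, disc = 0.0, 1.0
                for nxt in rollout_straight((r0, c0), a):
                    g = max(g, disc * best_lower[nxt])
                    disc *= gamma
                upper[i, r0, c0] = g
    return upper

def policy_from_gvfs(gvf_levels, state):
    """Greedy readout: pick the cardinal action whose GVF is largest at the
    lowest informative level; direction-to-action mapping is the identity."""
    for gvf in gvf_levels:                     # prefer lower (more direct) levels
        vals = gvf[:, state[0], state[1]]
        if vals.max() > 1e-3:
            return ACTIONS[int(np.argmax(vals))]
    return ACTIONS[np.random.randint(len(ACTIONS))]

level0 = learn_level0_gvfs()
level1 = lift_gvfs(level0)
print(policy_from_gvfs([level0, level1], (0, 0)))

In this toy run, no straight line from the corner (0, 0) hits the goal, so all level-0 GVFs there are near zero; the level-1 GVF for the eastward direction is high because going east reaches states from which a level-0 GVF (going south) is high, so the readout heads east. This is only meant to illustrate how the hierarchy composes simple straight-ahead predictions; in the poster the GVF-to-policy mapping is itself learned rather than hand-coded.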