
Item Details

  Automatic Task Decomposition using Compositional Reinforcement Learning

Tano, P., Dayan, P., & Pouget, A. (2022). Automatic Task Decomposition using Compositional Reinforcement Learning. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2022), Lisboa, Portugal.


Basic Information

Item Permalink: https://hdl.handle.net/21.11116/0000-000A-0340-A
Version Permalink: https://hdl.handle.net/21.11116/0000-000C-9149-E
Genre: Poster

Files


Creators

Creators:
Tano, P., Author
Dayan, P.¹, Author
Pouget, A., Author
Affiliations:
¹ Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3017468

Content

Keywords: -
Abstract: Decomposing complex tasks into their simpler components is often the only way for animals to make any meaningful progress at all. We show that reusing the traditional reward prediction error machinery at multiple hierarchical levels allows complex tasks to be automatically decomposed in a compositional manner, leading to fast and flexible reinforcement learning. In this compositional reinforcement learning (CRL) framework, the agent computes a set of predictions for each state in the form of hierarchically organized general value functions (GVFs). Level 0 GVFs predict whether continuing straight along cardinal directions in the state space will lead to a rewarded location, while a level P GVF predicts whether the same simple straight-ahead policy leads to any location with a high value in any of the level P-1 GVFs. Learning involves two steps: (1) learning the mapping from state to GVFs and (2) learning the policy from the GVFs. Both steps are fast in environments with natural cardinal directions and strong compositional structure. Learning the mapping from states to the GVFs with TD learning is fast because it involves simple policies, which have low entropy in their outcomes and are able to efficiently explore the state space; learning the mapping from GVFs to policy, in turn, is greatly simplified by the compositional structure of the GVFs and the simple mapping from the cardinal directions to available actions. In rapidly changing environments, as is typical for the real world, CRL leads to remarkably fast learning. For instance, CRL vastly outperforms traditional approaches in a maze task in which the maze changes frequently, or when learning to reach for an object, whose location varies over trials, with a robotic arm. This work provides a biologically plausible framework to study task decomposition in animals confronted with rapidly changing environments.
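The level-0 GVFs described in the abstract can be illustrated with a minimal sketch. The following is our own toy reconstruction (not the authors' code): a 5x5 gridworld with one rewarded cell, in which each level-0 GVF predicts the discounted return of the fixed "keep going straight" policy in one cardinal direction, learned with ordinary TD(0). The grid size, goal location, and learning parameters are arbitrary choices for illustration.

```python
import random

# Toy illustration of level-0 GVFs: one GVF per cardinal direction, each
# predicting the discounted reward obtained by walking straight in that
# direction until hitting the rewarded cell or a wall. Learned with TD(0).

SIZE, GOAL, GAMMA, ALPHA = 5, (4, 2), 0.9, 0.5
DIRS = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}

# gvf[d][(x, y)] ~ expected discounted reward of the straight-ahead
# policy for direction d, started from cell (x, y).
gvf = {d: {(x, y): 0.0 for x in range(SIZE) for y in range(SIZE)}
       for d in DIRS}

def step(state, d):
    """One step of the straight-ahead policy: (next_state, reward, done)."""
    x, y = state
    dx, dy = DIRS[d]
    nx, ny = x + dx, y + dy
    if not (0 <= nx < SIZE and 0 <= ny < SIZE):   # walked into a wall
        return state, 0.0, True
    if (nx, ny) == GOAL:                          # reached the rewarded cell
        return (nx, ny), 1.0, True
    return (nx, ny), 0.0, False

random.seed(0)
for _ in range(2000):                             # TD(0) over random rollouts
    d = random.choice(list(DIRS))
    s = (random.randrange(SIZE), random.randrange(SIZE))
    done = s == GOAL
    while not done:
        s2, r, done = step(s, d)
        target = r + (0.0 if done else GAMMA * gvf[d][s2])
        gvf[d][s] += ALPHA * (target - gvf[d][s])
        s = s2
```

Because the straight-ahead policies are deterministic and low-entropy, the GVFs converge quickly: from (4, 4), heading North passes through the goal after two steps, so `gvf["N"][(4, 4)]` approaches `GAMMA**1 = 0.9`, while directions that run into a wall stay near zero. A level-1 GVF would make the same kind of prediction, but about reaching cells where some level-0 GVF is high; the policy is then read off compositionally from the direction whose GVF is largest.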

Details

Language: -
Date: 2022-03
Publication status: Published online
Pages: -
Publishing info: -
Table of contents: -
Review method: -
Identifiers (DOI, ISBN, etc.): -
Degree: -

Related Event

Event name: Computational and Systems Neuroscience Meeting (COSYNE 2022)
Venue: Lisboa, Portugal
Start / End date: 2022-03-17 - 2022-03-20


Source 1

Title: Computational and Systems Neuroscience Meeting (COSYNE 2022)
Type: Proceedings
Authors / Editors: -
Affiliations: -
Publisher, place: -
Pages: -
Volume / Issue: -
Sequence number: 2-105
Start / End page: 169
Identifiers (ISBN, ISSN, DOI, etc.): -