Modeling Human Exploration Through Resource-Rational Reinforcement Learning

Binz, M., & Schulz, E. (submitted). Modeling Human Exploration Through Resource-Rational Reinforcement Learning.


Locators

Locator: https://arxiv.org/pdf/2201.11817 (Any fulltext)
Description: -
OA-Status: Not specified

Creators

Creators:
Binz, M.¹, Author
Schulz, E.¹, Author
Affiliations:
¹ Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3189356

Content

Free keywords: -
Abstract: Equipping artificial agents with useful exploration mechanisms remains a challenge to this day. Humans, on the other hand, seem to manage the trade-off between exploration and exploitation effortlessly. In the present article, we put forward the hypothesis that they accomplish this by making optimal use of limited computational resources. We study this hypothesis by meta-learning reinforcement learning algorithms that sacrifice performance for a shorter description length (defined as the number of bits required to implement the given algorithm). The emerging class of models captures human exploration behavior better than previously considered approaches, such as Boltzmann exploration, upper confidence bound algorithms, and Thompson sampling. We additionally demonstrate that changing the description length in our class of models produces the intended effects: reducing description length captures the behavior of brain-lesioned patients while increasing it mirrors cognitive development during adolescence.
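
Illustrative sketch: the trade-off named in the abstract, expected reward against the bits needed to encode the learning algorithm, can be written as a single regularized objective. The sketch below is only one plausible reading of that idea under stated assumptions: a mean-field Gaussian over the algorithm's parameters, a KL term against a standard-normal prior as a variational bound on description length, a toy two-armed bandit, and a trade-off weight beta are all assumptions made here for illustration, not the authors' implementation.

# Minimal sketch of a description-length-regularized objective.
# Assumptions (not from the paper): Gaussian weight posterior, standard-normal
# prior, KL as the description-length proxy, and a toy two-armed bandit.
import torch
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)
n_arms, n_params = 2, 8
mu = torch.zeros(n_params, requires_grad=True)         # posterior mean over parameters
log_sigma = torch.zeros(n_params, requires_grad=True)  # posterior log-std
prior = Normal(torch.zeros(n_params), torch.ones(n_params))
arm_means = torch.tensor([0.3, 0.7])                   # toy bandit reward means

def expected_reward(params):
    # Map sampled parameters to arm preferences and score them on the bandit.
    logits = params.view(n_arms, -1).sum(dim=1)
    return (torch.softmax(logits, dim=0) * arm_means).sum()

beta = 0.1                                             # price per nat of description length
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)
for step in range(200):
    posterior = Normal(mu, log_sigma.exp())
    params = posterior.rsample()                       # reparameterized sample of the parameters
    description_length = kl_divergence(posterior, prior).sum()
    loss = -expected_reward(params) + beta * description_length
    opt.zero_grad()
    loss.backward()
    opt.step()

final_dl = kl_divergence(Normal(mu, log_sigma.exp()), prior).sum().item()
print(f"description-length bound after training: {final_dl:.2f} nats")

In this sketch, raising beta pulls the parameters toward the prior, shortening the description length and leaving arm choices closer to uniform; lowering it recovers a near-greedy policy. That is the qualitative performance-versus-compression trade-off the abstract describes.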

Details

Language(s): -
Dates: 2022-11
Publication Status: Submitted
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.48550/arXiv.2201.11817
Degree: -
