  Finding structure in multi-armed bandits

Schulz, E., Franklin, N., & Gershman, S. (2020). Finding structure in multi-armed bandits. Cognitive Psychology, 119: 101261, pp. 1-35. doi:10.1016/j.cogpsych.2019.101261.

Basic
Item Permalink: http://hdl.handle.net/21.11116/0000-0005-D582-7
Version Permalink: http://hdl.handle.net/21.11116/0000-0005-D583-6
Genre: Journal Article


Creators

Creators:
Schulz, E.¹, Author
Franklin, N. T., Author
Gershman, S. J., Author
Affiliations:
¹ External Organizations, ou_persistent22

Content

Free keywords: -
Abstract: How do humans search for rewards? This question is commonly studied using multi-armed bandit tasks, which require participants to trade off exploration and exploitation. Standard multi-armed bandits assume that each option has an independent reward distribution. However, learning about options independently is unrealistic, since in the real world options often share an underlying structure. We study a class of structured bandit tasks, which we use to probe how generalization guides exploration. In a structured multi-armed bandit, options have a correlation structure dictated by a latent function. We focus on bandits in which rewards are linear functions of an option’s spatial position. Across 5 experiments, we find evidence that participants utilize functional structure to guide their exploration, and also exhibit a learning-to-learn effect across rounds, becoming progressively faster at identifying the latent function. Our experiments rule out several heuristic explanations and show that the same findings obtain with non-linear functions. Comparing several models of learning and decision making, we find that the best model of human behavior in our tasks combines three computational mechanisms: (1) function learning, (2) clustering of reward distributions across rounds, and (3) uncertainty-guided exploration. Our results suggest that human reinforcement learning can utilize latent structure in sophisticated ways to improve efficiency.
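The setting the abstract describes can be sketched in a few lines of code. This is a minimal illustration, not the paper's exact model: rewards are a linear function of an arm's spatial position, so a Bayesian linear regression lets a single pull inform beliefs about every arm, and an upper-confidence-bound rule implements uncertainty-guided exploration. The arm count, slope, noise level, and exploration bonus below are illustrative assumptions, not values from the experiments.

```python
import numpy as np

def features(positions):
    """[1, position] design matrix for a linear latent reward function."""
    positions = np.asarray(positions, dtype=float)
    return np.column_stack([np.ones_like(positions), positions])

class StructuredBanditLearner:
    """Bayesian linear regression over arm positions + UCB exploration.

    Because all arms share the latent linear function, observing one arm
    shrinks uncertainty about every arm (generalization), unlike a
    standard bandit that treats arms independently.
    """

    def __init__(self, positions, prior_prec=1e-3, noise_prec=1.0):
        self.X = features(positions)
        self.noise_prec = noise_prec
        self.A = prior_prec * np.eye(2)  # posterior precision over weights
        self.b = np.zeros(2)             # accumulated weighted observations

    def update(self, arm, reward):
        x = self.X[arm]
        self.A += self.noise_prec * np.outer(x, x)
        self.b += self.noise_prec * reward * x

    def predict(self):
        """Posterior mean and standard deviation of each arm's value."""
        S = np.linalg.inv(self.A)
        mean = self.X @ (S @ self.b)
        var = np.einsum("ij,jk,ik->i", self.X, S, self.X)  # x_i^T S x_i
        return mean, np.sqrt(var)

    def choose(self, c=2.0):
        """Pick the arm maximizing value estimate + uncertainty bonus."""
        mean, sd = self.predict()
        return int(np.argmax(mean + c * sd))

if __name__ == "__main__":
    # Illustrative environment: 8 arms on a line, f(p) = 2p + 1 + noise.
    rng = np.random.default_rng(0)
    positions = np.arange(8)
    learner = StructuredBanditLearner(positions)
    for _ in range(50):
        arm = learner.choose()
        reward = 2.0 * positions[arm] + 1.0 + rng.normal(0.0, 1.0)
        learner.update(arm, reward)
```

Note that this sketch covers only one of the paper's three mechanisms (function learning with uncertainty-guided exploration); the clustering of reward distributions across rounds that drives the learning-to-learn effect is not modeled here.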

Details

Language(s): -
Dates: 2020-06
Publication Status: Published in print
Pages: -
Publishing info: -
Table of Contents: -
Rev. Method: -
Identifiers: DOI: 10.1016/j.cogpsych.2019.101261
Degree: -


Source 1

Title: Cognitive Psychology
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: Academic Press
Pages: -
Volume / Issue: 119
Sequence Number: 101261
Start / End Page: 1 - 35
Identifier: ISSN: 0010-0285
CoNE: https://pure.mpg.de/cone/journals/resource/954922645010