
Released

Paper

Teaching Inverse Reinforcement Learners via Features and Demonstrations

MPS-Authors

Singla, Adish
Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society;

Fulltext (public)

arXiv:1810.08926.pdf
(Preprint), 613KB

Citation

Haug, L., Tschiatschek, S., & Singla, A. (2018). Teaching Inverse Reinforcement Learners via Features and Demonstrations. Retrieved from http://arxiv.org/abs/1810.08926.


Cite as: https://hdl.handle.net/21.11116/0000-0003-4E3C-4
Abstract
Learning near-optimal behaviour from an expert's demonstrations typically
relies on the assumption that the learner knows the features that the true
reward function depends on. In this paper, we study the problem of learning
from demonstrations in the setting where this is not the case, i.e., where
there is a mismatch between the worldviews of the learner and the expert. We
introduce a natural quantity, the teaching risk, which measures the potential
suboptimality of policies that look optimal to the learner in this setting. We
show that bounds on the teaching risk guarantee that the learner is able to
find a near-optimal policy using standard algorithms based on inverse
reinforcement learning. Based on these findings, we suggest a teaching scheme
in which the expert can decrease the teaching risk by updating the learner's
worldview, and thus ultimately enable her to find a near-optimal policy.
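The abstract describes the teaching risk as the potential suboptimality of policies that look optimal to a learner whose features differ from the expert's. As a minimal illustrative sketch (not the paper's implementation), one natural reading under a linear-reward assumption is the magnitude of the true reward weight's component that is invisible to the learner, i.e. the norm of the projection of the true weight vector onto the orthogonal complement of the learner's feature subspace. The names `A` (learner's feature matrix, rows spanning the learner's worldview) and `w_true` (true reward weights) are assumptions for illustration:

```python
import numpy as np

def teaching_risk(A, w_true):
    """Sketch of a teaching-risk-style quantity (assumed linear-reward setting).

    A      : (k, d) matrix whose rows span the learner's feature subspace.
    w_true : (d,) true reward weight vector known to the expert.

    Returns the norm of the component of w_true that lies outside the
    learner's subspace -- the part of the true reward the learner cannot see.
    """
    # Orthonormal basis for the row space of A (the learner's worldview).
    Q, _ = np.linalg.qr(A.T)
    # Component of w_true visible to the learner.
    w_visible = Q @ (Q.T @ w_true)
    # Residual component: invisible to the learner.
    return np.linalg.norm(w_true - w_visible)
```

Under this reading, a risk of zero means every reward-relevant feature direction is visible to the learner, so policies optimal in the learner's worldview are optimal under the true reward; the expert's teaching scheme can then be understood as adding feature directions to `A` that shrink this residual.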