
Released

Paper

Admissible Policy Teaching through Reward Design

MPS-Authors

Banihashem, Kiarash (Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society)
Singla, Adish (Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society)
Gan, Jiarui (Group R. Majumdar, Max Planck Institute for Software Systems, Max Planck Society)
Radanovic, Goran (Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society)

External Resource
No external resources are shared
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)

arXiv:2201.02185.pdf (Preprint), 901KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Banihashem, K., Singla, A., Gan, J., & Radanovic, G. (2022). Admissible Policy Teaching through Reward Design. Retrieved from https://arxiv.org/abs/2201.02185.


Cite as: https://hdl.handle.net/21.11116/0000-0009-F7C5-2
Abstract
We study reward design strategies for incentivizing a reinforcement learning
agent to adopt a policy from a set of admissible policies. The goal of the
reward designer is to modify the underlying reward function cost-efficiently
while ensuring that any approximately optimal deterministic policy under the
new reward function is admissible and performs well under the original reward
function. This problem can be viewed as a dual to the problem of optimal reward
poisoning attacks: instead of forcing an agent to adopt a specific policy, the
reward designer incentivizes an agent to avoid taking actions that are
inadmissible in certain states. Perhaps surprisingly, and in contrast to the
problem of optimal reward poisoning attacks, we first show that the reward
design problem for admissible policy teaching is computationally challenging,
and it is NP-hard to find an approximately optimal reward modification. We then
proceed by formulating a surrogate problem whose optimal solution approximates
the optimal solution to the reward design problem in our setting, but is more
amenable to optimization techniques and analysis. For this surrogate problem,
we present characterization results that provide bounds on the value of the
optimal solution. Finally, we design a local search algorithm to solve the
surrogate problem and showcase its utility using simulation-based experiments.
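The abstract's high-level idea, searching locally over reward modifications so that the induced optimal policy stays admissible while the modification cost shrinks, can be illustrated with a generic sketch. Everything below (the toy MDP, the choice of admissible sets, the squared-L2 cost, the penalty initialization, and the perturbation scheme) is an illustrative assumption, not the paper's actual formulation or algorithm:

```python
import numpy as np

# Toy MDP (illustrative): n_s states, n_a actions. P[s, a] is the
# next-state distribution, R[s, a] the original reward function.
n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # shape (n_s, n_a, n_s)
R = rng.uniform(0.0, 1.0, size=(n_s, n_a))

# Admissible actions per state: here the designer wants the agent to
# avoid action 1 in state 0 (an arbitrary illustrative choice).
admissible = {0: {0}, 1: {0, 1}, 2: {0, 1}}

def optimal_policy(R_hat, iters=500):
    """Greedy deterministic policy via value iteration under reward R_hat."""
    Q = np.zeros((n_s, n_a))
    for _ in range(iters):
        Q = R_hat + gamma * P @ Q.max(axis=1)
    return Q.argmax(axis=1)

def is_admissible(pi):
    return all(int(pi[s]) in admissible[s] for s in range(n_s))

def cost(delta):
    """Cost of the modification: squared L2 distance from the original reward."""
    return float(np.sum(delta ** 2))

def local_search(steps=400, step=0.5):
    """Start from a feasible (heavily penalized) modification, then greedily
    shrink its cost while the induced optimal policy remains admissible."""
    delta = np.zeros((n_s, n_a))
    for s in range(n_s):
        for a in range(n_a):
            if a not in admissible[s]:
                # Penalty larger than any possible advantage gap here
                # (rewards in [0, 1], gamma = 0.9), so feasibility holds.
                delta[s, a] = -12.0
    best = delta
    for _ in range(steps):
        s, a = rng.integers(n_s), rng.integers(n_a)
        for sign in (1.0, -1.0):
            cand = best.copy()
            cand[s, a] += sign * step
            if cost(cand) < cost(best) and is_admissible(optimal_policy(R + cand)):
                best = cand
    return best

delta = local_search()
pi = optimal_policy(R + delta)
print("policy:", pi, "admissible:", is_admissible(pi), "cost:", round(cost(delta), 2))
```

The sketch captures only the outer loop: candidate modifications are accepted when they are cheaper and still keep every optimal action admissible. It does not reflect the paper's surrogate formulation, its approximation guarantees, or its handling of approximately optimal policies.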