Released

Paper

Corruption Robust Offline Reinforcement Learning with Human Feedback

MPS-Authors

Nika, Andi
Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society

Singla, Adish
Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society

Radanović, Goran
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

External Resource
No external resources are shared
Fulltext (public)

arXiv:2402.06734.pdf (Preprint), 489 KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Mandal, D., Nika, A., Kamalaruban, P., Singla, A., & Radanović, G. (2024). Corruption Robust Offline Reinforcement Learning with Human Feedback. Retrieved from https://arxiv.org/abs/2402.06734.


Cite as: https://hdl.handle.net/21.11116/0000-000E-7F84-F
Abstract
We study data corruption robustness for reinforcement learning with human feedback (RLHF) in an offline setting. Given an offline dataset of pairs of trajectories along with feedback about human preferences, an $\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or trajectory features manipulated), capturing an adversarial attack or noisy human preferences. We aim to design algorithms that identify a near-optimal policy from the corrupted data, with provable guarantees. Existing theoretical works have separately studied the settings of corruption-robust RL (learning from scalar rewards directly under corruption) and offline RLHF (learning from human feedback without corruption); however, they are inapplicable to our problem of dealing with corrupted data in the offline RLHF setting. To this end, we design novel corruption-robust offline RLHF methods under various assumptions on the coverage of the data-generating distributions. At a high level, our methodology robustifies an offline RLHF framework by first learning a reward model along with confidence sets and then learning a pessimistic optimal policy over the confidence set. Our key insight is that learning the optimal policy can be done by leveraging an offline corruption-robust RL oracle in different ways (e.g., a zero-order oracle or a first-order oracle), depending on the data coverage assumptions. To our knowledge, ours is the first work that provides provably corruption-robust offline RLHF methods.
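
The abstract's two-stage recipe (learn a reward model with a confidence set, then optimize a pessimistic policy over that set) can be illustrated with a minimal sketch. Everything below is a simplifying assumption rather than the paper's actual method: linear rewards over trajectory features, a Bradley-Terry preference model, an L2-ball confidence set with a hand-picked radius, a plain (non-robust) logistic-regression fit standing in for the paper's robust estimator/oracle, and a small finite set of candidate policies summarized by their expected feature vectors.

```python
# Minimal sketch of "reward confidence set + pessimistic policy selection",
# under simplifying assumptions NOT taken from the paper (see lead-in above).

import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 5, 2000, 0.1                      # feature dim, #preference pairs, corruption rate

theta_star = rng.normal(size=d)               # ground-truth (linear) reward parameter
phi_a = rng.normal(size=(n, d))               # features of trajectory A in each pair
phi_b = rng.normal(size=(n, d))               # features of trajectory B in each pair
diff = phi_a - phi_b

# Bradley-Terry preferences, then flip an eps-fraction to model corruption.
p_pref = 1.0 / (1.0 + np.exp(-diff @ theta_star))
y = (rng.random(n) < p_pref).astype(float)    # 1 if A preferred, else 0
flip = rng.random(n) < eps
y[flip] = 1.0 - y[flip]

# Stage 1: estimate the reward parameter by regularized logistic regression on the
# corrupted preferences; a robust estimator would replace this plain MLE.
theta = np.zeros(d)
lr, lam = 0.1, 1e-2
for _ in range(500):
    grad = diff.T @ (1.0 / (1.0 + np.exp(-diff @ theta)) - y) / n + lam * theta
    theta -= lr * grad

# Confidence set: an L2 ball of radius `radius` around the estimate. In the theory
# this radius would be derived (and grow with eps); here it is just a hyperparameter.
radius = 0.5

# Stage 2: pessimistic policy selection. Each candidate policy is summarized by its
# expected feature vector; its pessimistic value is the worst case over the ball:
#   min_{||u|| <= radius} (theta + u) @ phi_pi = theta @ phi_pi - radius * ||phi_pi||.
candidate_policy_features = rng.normal(size=(10, d))
pessimistic_values = candidate_policy_features @ theta - radius * np.linalg.norm(
    candidate_policy_features, axis=1
)
best = int(np.argmax(pessimistic_values))
print("chosen policy:", best, "true value:", candidate_policy_features[best] @ theta_star)
```

The pessimism step is the key design choice: rather than trusting the point estimate fitted on possibly corrupted preferences, the policy is scored against the worst-case reward inside the confidence set, which penalizes policies whose features are poorly covered by the data.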