  Corruption Robust Offline Reinforcement Learning with Human Feedback

Mandal, D., Nika, A., Kamalaruban, P., Singla, A., & Radanović, G. (2024). Corruption Robust Offline Reinforcement Learning with Human Feedback. Retrieved from https://arxiv.org/abs/2402.06734.

Files

arXiv:2402.06734.pdf (Preprint), 489 KB
Name: arXiv:2402.06734.pdf
Description: File downloaded from arXiv at 2024-03-01 14:10
OA-Status: Green
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Mandal, Debmalya (1), Author
Nika, Andi (2), Author
Kamalaruban, Parameswaran (1), Author
Singla, Adish (2), Author
Radanović, Goran (3), Author
Affiliations:
(1) External Organizations, ou_persistent22
(2) Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society, ou_2541698
(3) Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society, ou_2105291

Content

Free keywords: Computer Science, Learning, cs.LG; Computer Science, Artificial Intelligence, cs.AI
Abstract: We study data corruption robustness for reinforcement learning with human feedback (RLHF) in an offline setting. Given an offline dataset of pairs of trajectories along with feedback about human preferences, an $\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or trajectory features manipulated), capturing an adversarial attack or noisy human preferences. We aim to design algorithms that identify a near-optimal policy from the corrupted data, with provable guarantees. Existing theoretical works have separately studied the settings of corruption-robust RL (learning from scalar rewards directly under corruption) and offline RLHF (learning from human feedback without corruption); however, they are inapplicable to our problem of dealing with corrupted data in the offline RLHF setting. To this end, we design novel corruption-robust offline RLHF methods under various assumptions on the coverage of the data-generating distributions. At a high level, our methodology robustifies an offline RLHF framework by first learning a reward model along with confidence sets and then learning a pessimistic optimal policy over the confidence set. Our key insight is that learning an optimal policy can be done by leveraging an offline corruption-robust RL oracle in different ways (e.g., a zero-order or a first-order oracle), depending on the data coverage assumptions. To our knowledge, ours is the first work that provides provably corruption-robust offline RLHF methods.
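
To make the two-stage recipe described in the abstract concrete, below is a minimal, illustrative sketch (not the authors' algorithm): it generates synthetic pairwise preferences with an $\varepsilon$-fraction of flipped labels, fits a linear Bradley-Terry reward model, and then selects among a set of candidate policies using a pessimistic value over a crude confidence ball around the estimate. All specifics (dimension, candidate policies, confidence radius, plain logistic fit) are assumptions made for illustration only.

```python
# Minimal sketch of a two-stage offline RLHF pipeline with corrupted preferences.
# NOT the paper's method: the reward fit, confidence radius, and candidate
# policies are all illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 5, 2000, 0.1            # feature dim, #preference pairs, corruption fraction
theta_star = rng.normal(size=d)     # true (unknown) linear reward parameter

# Synthetic trajectory-feature differences and Bradley-Terry preference labels.
x = rng.normal(size=(n, d))                        # phi(tau_1) - phi(tau_2) per pair
p = 1.0 / (1.0 + np.exp(-x @ theta_star))          # P(tau_1 preferred over tau_2)
y = (rng.random(n) < p).astype(float)
flip = rng.random(n) < eps                         # adversary flips an eps-fraction of labels
y[flip] = 1.0 - y[flip]

# Stage 1: fit a reward model (plain logistic regression via gradient descent).
theta = np.zeros(d)
for _ in range(500):
    q = 1.0 / (1.0 + np.exp(-x @ theta))
    theta -= 0.1 * x.T @ (q - y) / n

# Stage 2: pessimistic policy selection over a confidence ball ||theta' - theta|| <= r.
# For a linear reward, the worst case inside the ball is value - r * ||feature||.
r = 0.5                                             # illustrative radius, not a derived bound
candidate_features = rng.normal(size=(10, d))       # expected features of candidate policies
pessimistic_values = candidate_features @ theta - r * np.linalg.norm(candidate_features, axis=1)
best = int(np.argmax(pessimistic_values))
print("selected candidate:", best, "pessimistic value:", pessimistic_values[best])
```

In the abstract's terms, the confidence ball and the pessimistic selection stand in for the learned confidence set and the pessimistic policy optimization step; the paper's actual methods replace the plain logistic fit with corruption-robust estimation and invoke zero-order or first-order offline RL oracles rather than scoring a fixed candidate list.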

Details

Language(s): eng - English
Dates: 2024-02-09; 2024
Publication Status: Published online
Pages: 46 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2402.06734
URI: https://arxiv.org/abs/2402.06734
BibTeX Citekey: Mandal2402.06734
Degree: -
