  Corruption Robust Offline Reinforcement Learning with Human Feedback

Mandal, D., Nika, A., Kamalaruban, P., Singla, A., & Radanović, G. (2024). Corruption Robust Offline Reinforcement Learning with Human Feedback. Retrieved from https://arxiv.org/abs/2402.06734.


Basic Data

Genre: Research Paper

Files

arXiv:2402.06734.pdf (Preprint), 489 KB
Name: arXiv:2402.06734.pdf
Description: File downloaded from arXiv at 2024-03-01 14:10
OA Status: Green
Visibility: Public
MIME Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Mandal, Debmalya 1, Author
Nika, Andi 2, Author
Kamalaruban, Parameswaran 1, Author
Singla, Adish 2, Author
Radanović, Goran 3, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society, ou_2541698
3 Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society, ou_2105291

Content

Keywords: Computer Science, Learning, cs.LG; Computer Science, Artificial Intelligence, cs.AI
Abstract: We study data corruption robustness for reinforcement learning with human feedback (RLHF) in an offline setting. Given an offline dataset of pairs of trajectories along with feedback about human preferences, an $\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or trajectory features manipulated), capturing an adversarial attack or noisy human preferences. We aim to design algorithms that identify a near-optimal policy from the corrupted data, with provable guarantees. Existing theoretical works have separately studied the settings of corruption-robust RL (learning from scalar rewards directly under corruption) and offline RLHF (learning from human feedback without corruption); however, they are inapplicable to our problem of dealing with corrupted data in the offline RLHF setting. To this end, we design novel corruption-robust offline RLHF methods under various assumptions on the coverage of the data-generating distributions. At a high level, our methodology robustifies an offline RLHF framework by first learning a reward model along with confidence sets and then learning a pessimistic optimal policy over the confidence set. Our key insight is that learning the optimal policy can be done by leveraging an offline corruption-robust RL oracle in different ways (e.g., a zero-order oracle or a first-order oracle), depending on the data coverage assumptions. To our knowledge, ours is the first work that provides provably corruption-robust offline RLHF methods.
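
The following is a minimal, illustrative Python sketch of the high-level recipe described in the abstract: fit a reward model from preference feedback, build a confidence set around the estimate, and select the policy with the best worst-case value over that set. It assumes a linear reward model, Bradley-Terry preferences, and a finite set of candidate policies summarized by their expected feature vectors; all names, constants, and the choice of confidence radius are hypothetical and not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 4                                   # feature dimension (illustrative)
    theta_true = rng.normal(size=d)         # unknown "true" reward parameters

    # Offline preference dataset: feature differences phi(tau_1) - phi(tau_2)
    # and labels y = 1 if tau_1 is preferred; an eps-fraction of labels is flipped.
    n, eps = 500, 0.1
    diffs = rng.normal(size=(n, d))
    probs = 1.0 / (1.0 + np.exp(-diffs @ theta_true))   # Bradley-Terry preference probabilities
    labels = (rng.random(n) < probs).astype(float)
    flip = rng.random(n) < eps                          # corrupted feedback
    labels[flip] = 1.0 - labels[flip]

    # (1) Learn the reward model: logistic-regression MLE via gradient ascent.
    theta_hat = np.zeros(d)
    for _ in range(2000):
        pred = 1.0 / (1.0 + np.exp(-diffs @ theta_hat))
        theta_hat += 0.1 * diffs.T @ (labels - pred) / n

    # (2) Confidence set: an L2 ball around theta_hat; in a robust analysis the
    # radius would be calibrated to the corruption level eps (this value is made up).
    radius = 0.5 + 2.0 * eps

    # (3) Pessimistic policy choice: each candidate policy pi is summarized by its
    # expected feature vector mu_pi; the worst-case value over the ball has the
    # closed form <theta_hat, mu_pi> - radius * ||mu_pi||.
    candidate_mus = {f"pi_{i}": rng.normal(size=d) for i in range(5)}
    pessimistic_value = {
        name: mu @ theta_hat - radius * np.linalg.norm(mu)
        for name, mu in candidate_mus.items()
    }
    best = max(pessimistic_value, key=pessimistic_value.get)
    print("chosen policy:", best, "pessimistic value:", round(pessimistic_value[best], 3))

With a linear reward and an L2 confidence ball, the pessimistic value of each candidate policy reduces to the closed form computed in step (3); the paper's actual algorithms instead invoke an offline corruption-robust RL oracle and come with coverage-dependent guarantees.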

Details

Language(s): eng - English
Date: 2024-02-09, 2024
Publication Status: Published online
Pages: 46 p.
Place, Publisher, Edition: -
Table of Contents: -
Review Method: -
Identifiers: arXiv: 2402.06734
URI: https://arxiv.org/abs/2402.06734
BibTeX Citekey: Mandal2402.06734
Degree Type: -
