  Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks

Mohammadi, M., Nöther, J., Mandal, D., Singla, A., & Radanovic, G. (2023). Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks. Retrieved from https://arxiv.org/abs/2302.13851.

Basic data

Genre: Research paper

Files

arXiv:2302.13851.pdf (Preprint), 2MB
 
File permalink:
-
Name:
arXiv:2302.13851.pdf
Description:
File downloaded from arXiv at 2023-03-06 10:04
OA status:
Visibility:
Private
MIME type / checksum:
application/pdf
Technical metadata:
Copyright date:
-
Copyright info:
-

External references


Creators

Creators:
Mohammadi, Mohammad (1), Author
Nöther, Jonathan (2), Author
Mandal, Debmalya (1), Author
Singla, Adish (3), Author
Radanovic, Goran (1), Author
Affiliations:
(1) Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society, ou_2105291
(2) External Organizations, ou_persistent22
(3) Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society, ou_2541698

Content

Keywords: Computer Science, Learning, cs.LG; Computer Science, Artificial Intelligence, cs.AI; Computer Science, Cryptography and Security, cs.CR; Computer Science, Multiagent Systems, cs.MA
Abstract: In targeted poisoning attacks, an attacker manipulates an agent-environment
interaction to force the agent into adopting a policy of interest, called
target policy. Prior work has primarily focused on attacks that modify standard
MDP primitives, such as rewards or transitions. In this paper, we study
targeted poisoning attacks in a two-agent setting where an attacker implicitly
poisons the effective environment of one of the agents by modifying the policy
of its peer. We develop an optimization framework for designing optimal
attacks, where the cost of the attack measures how much the solution deviates
from the assumed default policy of the peer agent. We further study the
computational properties of this optimization framework. Focusing on a tabular
setting, we show that in contrast to poisoning attacks based on MDP primitives
(transitions and (unbounded) rewards), which are always feasible, it is NP-hard
to determine the feasibility of implicit poisoning attacks. We provide
characterization results that establish sufficient conditions for the
feasibility of the attack problem, as well as an upper and a lower bound on the
optimal cost of the attack. We propose two algorithmic approaches for finding
an optimal adversarial policy: a model-based approach with tabular policies and
a model-free approach with parametric/neural policies. We showcase the efficacy
of the proposed algorithms through experiments.
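As a rough reading of the optimization framework described in this abstract (an illustrative sketch only; the symbols below are assumptions, not the paper's own notation), the attacker searches for a peer policy that stays close to the assumed default peer policy while making the target policy the victim's best response in the effective environment induced by that peer:

\min_{\pi_{\mathrm{peer}}} \; \mathrm{Cost}(\pi_{\mathrm{peer}}, \pi_{\mathrm{default}}) \quad \text{subject to} \quad \pi^{\dagger} \in \arg\max_{\pi_{\mathrm{victim}}} V(\pi_{\mathrm{victim}}, \pi_{\mathrm{peer}}),

where \pi^{\dagger} is the target policy, V(\pi_{\mathrm{victim}}, \pi_{\mathrm{peer}}) is the victim's value when learning against the fixed peer policy, and \mathrm{Cost} measures how far the adversarial peer policy deviates from the default one, matching the attack cost described above. The feasibility question mentioned in the abstract asks whether any peer policy satisfies this constraint at all.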

Details

einblenden:
ausblenden:
Language(s): eng - English
Date: 2023-02-27, 2023
Publication status: Published online
Pages: 27 p.
Place, publisher, edition: -
Table of contents: -
Review method: -
Identifiers: arXiv: 2302.13851
URI: https://arxiv.org/abs/2302.13851
BibTex Citekey: Mohammadi2302.13851
Degree type: -

Event

Decision

Project information

Source