Conference Paper

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks

MPS-Authors

/persons/resource/persons287481
Mohammadi, Mohammad
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

/persons/resource/persons287474
Mandal, Debmalya
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

/persons/resource/persons216578
Singla, Adish
Group A. Singla, Max Planck Institute for Software Systems, Max Planck Society

/persons/resource/persons246492
Radanovic, Goran
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

Citation

Mohammadi, M., Nöther, J., Mandal, D., Singla, A., & Radanovic, G. (2023). Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks. In N. Agmon, B. An, A. Ricci, & W. Yeoh (Eds.), AAMAS '23 (pp. 1835-1844). Liverpool: IFAAMAS.


Cite as: https://hdl.handle.net/21.11116/0000-000C-B6A2-F
Abstract
In targeted poisoning attacks, an attacker manipulates the agent-environment interaction to force the agent into adopting a policy of interest, called the target policy. Prior work has primarily focused on attacks that modify standard MDP primitives, such as rewards or transitions. In this paper, we study targeted poisoning attacks in a two-agent setting where an attacker implicitly poisons the effective environment of one of the agents by modifying the policy of its peer. We develop an optimization framework for designing optimal attacks, where the cost of the attack measures how much the solution deviates from the assumed default policy of the peer agent. We further study the computational properties of this optimization framework. Focusing on a tabular setting, we show that, in contrast to poisoning attacks based on MDP primitives (transitions and unbounded rewards), which are always feasible, it is NP-hard to determine whether an implicit poisoning attack is feasible. We provide characterization results that establish sufficient conditions for the feasibility of the attack problem, as well as an upper and a lower bound on the optimal cost of the attack. We propose two algorithmic approaches for finding an optimal adversarial policy: a model-based approach with tabular policies and a model-free approach with parametric/neural policies. We showcase the efficacy of the proposed algorithms through experiments.
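
As a reading aid, the attack problem described in the abstract can be written schematically as a constrained optimization over the peer's policy. The notation below is chosen for this sketch and is not necessarily the paper's: pi_2 is the peer policy the attacker controls, pi_2^def its assumed default, pi^dagger the target policy for the victim agent, M(pi_2) the single-agent MDP the victim faces once the peer's policy is fixed, and d the attack-cost measure.

    \begin{align*}
    \min_{\pi_2}\quad & d\bigl(\pi_2,\ \pi_2^{\mathrm{def}}\bigr)
      \qquad\text{(attack cost: deviation from the default peer policy)}\\
    \text{s.t.}\quad  & \pi^{\dagger}\ \text{is the (unique) optimal policy of the victim in } M(\pi_2).
    \end{align*}

The feasibility question the abstract shows to be NP-hard is precisely whether any peer policy pi_2 satisfies the constraint.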
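To make the tabular, model-based setting concrete, here is a minimal, self-contained sketch of a feasibility check plus a brute-force search over deterministic peer policies. Everything in it is an illustrative assumption rather than the paper's code: the array layout P[s, a1, a2, s'] and R[s, a1, a2], the per-state 0/1 deviation cost, and the helper names induced_mdp, target_is_optimal, and brute_force_attack. The exhaustive enumeration is exponential in the number of states, in line with the NP-hardness result; the paper's actual algorithms are more refined.

    import numpy as np
    from itertools import product

    def induced_mdp(P, R, pi2):
        # Fix the peer's deterministic policy pi2[s] -> a2; the victim then
        # faces a single-agent MDP with transitions P1[s, a1, s'] and
        # rewards R1[s, a1].
        S = P.shape[0]
        P1 = np.stack([P[s, :, pi2[s], :] for s in range(S)])  # (S, A1, S)
        R1 = np.stack([R[s, :, pi2[s]] for s in range(S)])     # (S, A1)
        return P1, R1

    def target_is_optimal(P1, R1, pi_target, gamma, margin=1e-8):
        # A policy is optimal iff it is greedy w.r.t. its own Q-function;
        # the strict margin asks for unique optimality among deterministic
        # policies.
        S, A1 = R1.shape
        idx = np.arange(S)
        P_pi = P1[idx, pi_target, :]                 # (S, S)
        r_pi = R1[idx, pi_target]                    # (S,)
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        Q = R1 + gamma * P1 @ V                      # (S, A1)
        on_target = Q[idx, pi_target]
        Q_off = Q.copy()
        Q_off[idx, pi_target] = -np.inf
        return bool(np.all(on_target >= Q_off.max(axis=1) + margin))

    def brute_force_attack(P, R, pi_target, pi2_default, gamma=0.9):
        # Enumerate all deterministic peer policies (tiny instances only!)
        # and return a feasible one deviating from the default in the
        # fewest states; returns (None, inf) if the attack is infeasible.
        S, A2 = P.shape[0], P.shape[2]
        best_pi2, best_cost = None, np.inf
        for cand in product(range(A2), repeat=S):
            pi2 = np.array(cand)
            P1, R1 = induced_mdp(P, R, pi2)
            if target_is_optimal(P1, R1, pi_target, gamma):
                cost = int(np.sum(pi2 != pi2_default))
                if cost < best_cost:
                    best_pi2, best_cost = pi2, cost
        return best_pi2, best_cost

The optimality check relies on a standard MDP fact: a policy is optimal if and only if it is greedy with respect to its own Q-function, and a strict margin certifies uniqueness among deterministic policies, matching the sketch's reading of "forcing" the victim into the target policy.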