
Paper

Adversarial Training against Location-Optimized Adversarial Patches

MPS-Authors

Rao, Sukrut
Computer Graphics, MPI for Informatics, Max Planck Society

Stutz, David
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Schiele, Bernt
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2005.02313.pdf
(Preprint), 9KB

Citation

Rao, S., Stutz, D., & Schiele, B. (2020). Adversarial Training against Location-Optimized Adversarial Patches. Retrieved from https://arxiv.org/abs/2005.02313.


Cite as: https://hdl.handle.net/21.11116/0000-0007-80D0-C
Abstract
Deep neural networks have been shown to be susceptible to adversarial
examples -- small, imperceptible changes constructed to cause
misclassification in otherwise highly accurate image classifiers. As a
practical alternative, recent work proposed so-called adversarial patches:
clearly visible, but adversarially crafted rectangular patches in images. These
patches can easily be printed and applied in the physical world. While defenses
against imperceptible adversarial examples have been studied extensively,
robustness against adversarial patches is poorly understood. In this work, we
first devise a practical approach to obtain adversarial patches while actively
optimizing their location within the image. Then, we apply adversarial training
on these location-optimized adversarial patches and demonstrate significantly
improved robustness on CIFAR10 and GTSRB. Additionally, in contrast to
adversarial training on imperceptible adversarial examples, our adversarial
patch training does not reduce accuracy.
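
To make the abstract's approach more concrete, the following is a minimal, hypothetical PyTorch sketch of adversarial patch training, not the authors' implementation. Patch pixels are optimized by gradient ascent on the cross-entropy loss, and the patch location is chosen greedily from a few random candidate positions as a simple stand-in for the paper's location-optimization strategies; the tiny model, patch size, step sizes, and synthetic CIFAR10-sized data are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_patch(images, patch, y0, x0):
    # Paste the patch into a copy of the images at top-left corner (y0, x0).
    patched = images.clone()
    ph, pw = patch.shape[-2:]
    patched[:, :, y0:y0 + ph, x0:x0 + pw] = patch
    return patched


def patch_attack(model, images, labels, patch_size=8, steps=10,
                 step_size=0.05, candidates=4):
    # Craft a patch that maximizes the cross-entropy loss; among a few random
    # candidate locations, keep the one that hurts the classifier most
    # (an assumed greedy stand-in for the paper's location optimization).
    _, _, h, w = images.shape
    best_loss, best_images = None, images
    for _ in range(candidates):
        y0 = torch.randint(0, h - patch_size + 1, (1,)).item()
        x0 = torch.randint(0, w - patch_size + 1, (1,)).item()
        patch = torch.rand(images.size(0), 3, patch_size, patch_size,
                           device=images.device, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(
                model(apply_patch(images, patch, y0, x0)), labels)
            grad, = torch.autograd.grad(loss, patch)
            with torch.no_grad():
                patch += step_size * grad.sign()  # ascend the loss
                patch.clamp_(0.0, 1.0)            # keep valid pixel values
        with torch.no_grad():
            loss = F.cross_entropy(
                model(apply_patch(images, patch, y0, x0)), labels)
            if best_loss is None or loss > best_loss:
                best_loss = loss
                best_images = apply_patch(images, patch, y0, x0)
    return best_images.detach()


# Toy adversarial training loop on synthetic CIFAR10-sized data: every batch
# is replaced by its patched counterpart before the usual training step.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
for _ in range(2):
    images = torch.rand(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))
    adv_images = patch_attack(model, images, labels)
    optimizer.zero_grad()
    F.cross_entropy(model(adv_images), labels).backward()
    optimizer.step()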