Adversarial Training against Location-Optimized Adversarial Patches

Rao, S., Stutz, D., & Schiele, B. (2020). Adversarial Training against Location-Optimized Adversarial Patches. Retrieved from https://arxiv.org/abs/2005.02313.


Files

arXiv:2005.02313.pdf (Preprint), 9KB
Name:
arXiv:2005.02313.pdf
Description:
File downloaded from arXiv at 2020-12-03 07:36
OA-Status: -
Visibility:
Public
MIME-Type / Checksum:
application/xhtml+xml / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Rao, Sukrut (1), Author
Stutz, David (2), Author
Schiele, Bernt (2), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition (cs.CV); Computer Science, Cryptography and Security (cs.CR); Computer Science, Learning (cs.LG); Statistics, Machine Learning (stat.ML)
Abstract: Deep neural networks have been shown to be susceptible to adversarial examples -- small, imperceptible changes constructed to cause misclassification in otherwise highly accurate image classifiers. As a practical alternative, recent work proposed so-called adversarial patches: clearly visible, but adversarially crafted, rectangular patches in images. These patches can easily be printed and applied in the physical world. While defenses against imperceptible adversarial examples have been studied extensively, robustness against adversarial patches is poorly understood. In this work, we first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image. Then, we apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB. Additionally, in contrast to adversarial training on imperceptible adversarial examples, our adversarial patch training does not reduce accuracy.
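The attack the abstract describes alternates two steps: optimizing the patch's pixel values to maximize the classifier's loss, and optimizing the patch's location within the image. A minimal PyTorch sketch of that idea follows; it is illustrative only, not the paper's actual algorithm -- all names, the signed-gradient pixel update, the random-shift location search, and the hyperparameters (`patch_size`, step counts, `step_size`) are assumptions.

```python
import torch
import torch.nn as nn

def apply_patch(images, patch, y, x):
    """Paste the patch onto a copy of the image batch at position (y, x)."""
    out = images.clone()
    ph, pw = patch.shape[-2:]
    out[:, :, y:y + ph, x:x + pw] = patch
    return out

def patch_attack(model, images, labels, patch_size=8, pixel_steps=10,
                 loc_steps=5, step_size=0.05):
    """Craft an adversarial patch by alternating signed-gradient ascent on
    the patch pixels with a greedy random search over the patch location
    (a new location is kept only if it increases the loss)."""
    loss_fn = nn.CrossEntropyLoss()
    _, _, H, W = images.shape
    # Random initial patch content and location (images assumed in [0, 1]).
    patch = torch.rand(1, images.size(1), patch_size, patch_size)
    y = torch.randint(0, H - patch_size + 1, (1,)).item()
    x = torch.randint(0, W - patch_size + 1, (1,)).item()
    for _ in range(loc_steps):
        # Pixel optimization at the current location: ascend the loss.
        for _ in range(pixel_steps):
            p = patch.clone().requires_grad_(True)
            loss = loss_fn(model(apply_patch(images, p, y, x)), labels)
            loss.backward()
            with torch.no_grad():
                patch = (patch + step_size * p.grad.sign()).clamp(0, 1)
        # Location optimization: try a random new position, keep the better one.
        ny = torch.randint(0, H - patch_size + 1, (1,)).item()
        nx = torch.randint(0, W - patch_size + 1, (1,)).item()
        with torch.no_grad():
            cur = loss_fn(model(apply_patch(images, patch, y, x)), labels)
            new = loss_fn(model(apply_patch(images, patch, ny, nx)), labels)
        if new > cur:
            y, x = ny, nx
    return apply_patch(images, patch, y, x)
```

For adversarial training, such patched batches would replace (or augment) the clean batches in the usual training loop, so the model learns to classify images correctly despite the worst-case patch.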

Details

Language(s): eng - English
Dates: 2020-05-05, 2020
 Publication Status: Published online
 Pages: 18 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 2005.02313
BibTex Citekey: Rao_arXiv2005.02313
URI: https://arxiv.org/abs/2005.02313
 Degree: -
