  Accurate, reliable and fast robustness evaluation

Brendel, W., Rauber, J., Kümmerer, M., Ustyuzhaninov, I., & Bethge, M. (2020). Accurate, reliable and fast robustness evaluation. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 32 (pp. 12817-12827). Red Hook, NY, USA: Curran.

Basic
Genre: Conference Paper

Files

NEURIPS-2019-Brendel.pdf (Abstract), 16MB
Name: NEURIPS-2019-Brendel.pdf
Description: -
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -


Creators

Creators:
Brendel, W, Author
Rauber, J, Author
Kümmerer, M, Author
Ustyuzhaninov, I, Author
Bethge, M (1, 2), Author
Affiliations:
1: Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
 Abstract: Throughout the past five years, the susceptibility of neural networks to minimal adversarial perturbations has moved from a peculiar phenomenon to a core issue in Deep Learning. Despite much attention, however, progress towards more robust models is significantly impaired by the difficulty of evaluating the robustness of neural network models. Today's methods are either fast but brittle (gradient-based attacks), or they are fairly reliable but slow (score- and decision-based attacks). We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning. These findings are carefully validated across a diverse set of six different models and hold for L2 and L-infinity in both targeted as well as untargeted scenarios. Implementations will be available in all major toolboxes (Foolbox, CleverHans and ART). We hope that this class of attacks will make robustness evaluations easier and more reliable, thus contributing to more signal in the search for more robust machine learning models.
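The abstract describes a family of gradient-based adversarial attacks evaluated under L2 and L-infinity norms. As a rough illustration of the general idea only (not the paper's actual algorithm), the following is a minimal PGD-style L-infinity attack on a toy linear softmax classifier; the model, step size, and epsilon are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_grad(W, x, y):
    """Cross-entropy loss of a linear classifier W @ x and its gradient w.r.t. x."""
    p = softmax(W @ x)
    loss = -np.log(p[y])
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    # d loss / d x = W^T (p - onehot(y))
    grad_x = W.T @ (p - onehot)
    return loss, grad_x

def pgd_linf(W, x, y, eps=0.3, step=0.1, iters=10):
    """Gradient-sign ascent on the loss, projected into an L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(iters):
        _, g = loss_and_grad(W, x_adv, y)
        x_adv = x_adv + step * np.sign(g)          # ascend the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project back into the ball
    return x_adv
```

Such attacks are fast because they need only one gradient per step, but, as the abstract notes, they can be brittle when gradients are masked; the paper's contribution is making this class of attack reliable under those conditions.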

Details

Language(s): -
Dates: 2019-12, 2020-06
Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -

Event

Title: Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019)
Place of Event: Vancouver, Canada
Start-/End Date: 2019-12-09 - 2019-12-13


Source 1

Title: Advances in Neural Information Processing Systems 32
Source Genre: Proceedings
 Creator(s):
Wallach, H, Editor
Larochelle, H, Editor
Beygelzimer, A, Editor
d'Alché-Buc, F, Editor
Fox, E, Editor
Garnett, R, Editor
Affiliations:
-
Publ. Info: Red Hook, NY, USA : Curran
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 12817 - 12827
Identifier: ISBN: 978-1-7138-0793-3