
Item Details


Released

Paper

Disentangling Adversarial Robustness and Generalization

MPS-Authors
/persons/resource/persons228449

Stutz,  David
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

/persons/resource/persons45383

Schiele,  Bernt
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

URL
There are no locators available
Full Text (public)

arXiv:1812.00740.pdf
(preprint), 3MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Stutz, D., Hein, M., & Schiele, B. (2018). Disentangling Adversarial Robustness and Generalization. Retrieved from http://arxiv.org/abs/1812.00740.


Cite as: http://hdl.handle.net/21.11116/0000-0002-A285-0
Abstract
Obtaining deep networks that are robust against adversarial examples and generalize well is an open problem. A recent hypothesis even states that both robust and accurate models are impossible, i.e., adversarial robustness and generalization are conflicting goals. In an effort to clarify the relationship between robustness and generalization, we assume an underlying, low-dimensional data manifold and show that: 1. regular adversarial examples leave the manifold; 2. adversarial examples constrained to the manifold, i.e., on-manifold adversarial examples, exist; 3. on-manifold adversarial examples are generalization errors, and on-manifold adversarial training boosts generalization; and 4. regular robustness is independent of generalization. These findings imply that both robust and accurate models are possible. However, different models (architectures, training strategies, etc.) can exhibit different robustness and generalization characteristics. To confirm our claims, we present extensive experiments on synthetic data (with access to the true manifold) as well as on EMNIST, Fashion-MNIST and CelebA.
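
To make the contrast between regular and on-manifold adversarial examples concrete, the following is a minimal, hypothetical PyTorch sketch. It assumes a pretrained classifier and a generative decoder approximating the data manifold (the paper approximates the manifold with class-specific VAE-GANs); the function names, the L_inf perturbation model, and all hyperparameters here are illustrative assumptions, not the authors' exact setup.

import torch
import torch.nn.functional as F

def regular_adversarial_example(classifier, x, y, eps=0.03, steps=10, lr=0.01):
    # L_inf PGD in image space; per finding 1, such perturbations
    # typically leave the underlying data manifold.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(classifier(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + lr * grad.sign()).detach()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project onto the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)           # keep a valid image
    return x_adv

def on_manifold_adversarial_example(classifier, decoder, z, y, eps=0.1, steps=10, lr=0.02):
    # Perturb in the latent space of the generative model and decode, so the
    # result stays (approximately) on the learned manifold (finding 2).
    delta = torch.zeros_like(z)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(classifier(decoder(z + delta)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = torch.clamp((delta + lr * grad.sign()).detach(), -eps, eps)
    return decoder(z + delta).detach()

Under these assumptions, on-manifold adversarial training would amount to training the classifier on such decoded latent-space perturbations, which, per finding 3, acts like data augmentation and improves generalization rather than trading it off.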