Meeting Abstract

Lack of Robustness in Artificial Neural Networks


Bethge, M. (2019). Lack of Robustness in Artificial Neural Networks. Neuroforum, 25(Supplement 1): S23-1, 179.

Cite as: https://hdl.handle.net/21.11116/0000-0003-1F71-C
Deep neural networks have become a ubiquitous tool in a broad range of AI applications. Resembling important aspects of rapid feed-forward visual processing in the ventral stream, they can be trained to match human behavior on standardized pattern recognition tasks. Outside the training distribution, however, the decision making of artificial neural networks diverges sharply from that of biological visual systems. I will give an overview of the lack of robustness in deep neural networks and present recent results from my lab on quantifying and overcoming these discrepancies.
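The abstract itself contains no technical detail, but the classic illustration of this lack of robustness is the fast gradient sign method (FGSM): because a trained network behaves near-linearly around an input, a tiny per-coordinate perturbation aligned against the gradient sign accumulates across many dimensions and flips the decision. A minimal sketch on a hand-built linear classifier (all weights and inputs below are hypothetical, chosen only to make the margin arithmetic transparent; this is not the speaker's method):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000

# Linear score sign(w @ x) as a stand-in for a network's local
# linearization -- the property FGSM exploits.
signs = rng.permutation(np.r_[np.ones(d // 2), -np.ones(d // 2)])
w = 0.01 * signs                 # many individually tiny weights

# Input with a clear positive margin: w @ x == 0.5
x = 0.5 + 0.05 * signs

def predict(w, x):
    return 1 if w @ x > 0 else -1

# FGSM step: move every coordinate by eps against the gradient sign.
# The score drops by eps * ||w||_1 = 0.1 * 10 = 1.0, flipping the
# label although no single coordinate changed by more than 0.1.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(predict(w, x), predict(w, x_adv))
```

The point of the toy example is that the damage scales with the input dimension: each weight is negligible on its own, yet a perturbation of at most `eps` per coordinate shifts the score by `eps` times the L1 norm of the weights, which is exactly the effect that makes high-dimensional image classifiers so easy to fool.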