Keywords:
-
Abstract:
The recent breakthrough in deep learning has led to a rapid proliferation of artificial neural networks that successfully perform complex computations such as object recognition or semantic image segmentation. Unlike in the past, the complexity of these networks appears essential to their success and cannot easily be replaced by much simpler architectures. In trying to understand how deep neural networks achieve robust perceptual interpretations of sensory stimuli, we face questions similar to those in neuroscience, even though their full connectome is known and the responses of all their neurons to arbitrary stimuli are easy to obtain. How can we obtain precise descriptions of neural responses without relying on the specifics of the implementation? Can we characterize the knowledge that such networks have acquired about the world, and how it is represented? I will present recent results from my lab on assessing the meaning of neural representations in high-performing convolutional neural networks. More generally, I will argue that the rise of deep neural networks offers a particular opportunity for computational neuroscience to advance its concepts and tools for understanding complex computational neural systems, and I hope to spark stimulating discussions on how we could use this opportunity.