  Probing neural representations of scene perception in a hippocampally dependent task using artificial neural networks

Frey, M., Doeller, C. F., & Barry, C. (2023). Probing neural representations of scene perception in a hippocampally dependent task using artificial neural networks. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/CVPR52729.2023.00210.

Basic

Genre: Conference Paper

Files

Frey_pre.pdf (Preprint), 8MB
Name: Frey_pre.pdf
Description: -
OA-Status: Green
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Locators

Locator: https://arxiv.org/abs/2303.06367 (Preprint)
Description: -
OA-Status: Green

Creators

Creators:
Frey, Markus1, 2, Author
Doeller, Christian F.1, 2, Author
Barry, Caswell3, Author
Affiliations:
1Kavli Institute, Norwegian University of Science and Technology, Trondheim, Norway
2Department Psychology (Doeller), MPI for Human Cognitive and Brain Sciences, Max Planck Society
3Cell & Developmental Biology, University College London, United Kingdom

Content

Free keywords: Visualization; Artificial neural networks; Transforms; Object segmentation; Benchmark testing; Visual systems; Brain modeling
Abstract: Deep artificial neural networks (DNNs) trained through backpropagation provide effective models of the mammalian visual system, accurately capturing the hierarchy of neural responses from primary visual cortex to inferior temporal cortex (IT) [41, 43]. However, the ability of these networks to explain representations in higher cortical areas is comparatively poor and considerably less well characterized. For example, DNNs have been less successful as a model of the egocentric-to-allocentric transformation embodied by circuits in retrosplenial and posterior parietal cortex. We describe a novel scene perception benchmark inspired by a hippocampally dependent task, designed to probe the ability of DNNs to transform scenes viewed from different egocentric perspectives. Using a network architecture inspired by the connectivity between temporal lobe structures and the hippocampus, we demonstrate that DNNs trained using a triplet loss can learn this task. Moreover, by enforcing a factorized latent space, we can split information propagation into "what" and "where" pathways, which we use to reconstruct the input. This allows us to beat the state of the art for unsupervised object segmentation on the CATER and MOVi-A, B, C benchmarks.
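The abstract states that the network is trained with a triplet loss to relate scenes viewed from different egocentric perspectives. A minimal sketch of a standard triplet margin loss is shown below; this is not the paper's implementation, and the 2-D embeddings and margin value are purely hypothetical:

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: pull the anchor embedding toward the
    positive (e.g. the same scene from another viewpoint) and push
    it away from the negative (a different scene)."""
    d_pos = math.dist(anchor, positive)   # anchor-positive distance
    d_neg = math.dist(anchor, negative)   # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)

# Hypothetical 2-D embeddings, for illustration only.
anchor   = (1.0, 0.0)    # scene A, viewpoint 1
positive = (0.9, 0.1)    # scene A, viewpoint 2
negative = (-1.0, 0.5)   # scene B

loss = triplet_loss(anchor, positive, negative)
```

When the positive already lies closer to the anchor than the negative by more than the margin, the loss is zero; otherwise the gradient pulls same-scene embeddings together, which is the property such a benchmark would test.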

Details

Language(s): eng - English
Dates: 2023-08-22
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1109/CVPR52729.2023.00210
Degree: -

Source 1

Title: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -