Evaluating alignment between humans and neural network representations in image-based learning tasks

Demircan, C., Saanum, T., Pettini, L., Binz, M., Baczkowski, B., Doeller, C., et al. (2024). Evaluating alignment between humans and neural network representations in image-based learning tasks. In Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024).

Basic

Genre: Conference Paper


Locators

Description: -
OA-Status: Not specified

Creators

Creators:
Demircan, C1, Author
Saanum, T1, Author
Pettini, L, Author
Binz, M1, Author
Baczkowski, BJ, Author
Doeller, CF, Author
Garvert, MM, Author
Schulz, E1, Author
Affiliations:
1Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3189356

Content

Free keywords: -
 Abstract: Humans represent scenes and objects in rich feature spaces, carrying information that allows us to generalise about category memberships and abstract functions with few examples. What determines whether a neural network model generalises like a human? We tested how well the representations of pretrained neural network models mapped to human learning trajectories across two tasks where humans had to learn continuous relationships and categories of natural images. In these tasks, both human participants and neural networks successfully identified the relevant stimulus features within a few trials, demonstrating effective generalisation. We found that while training dataset size was a core determinant of alignment with human choices, contrastive training with multi-modal data (text and imagery) was a common feature of currently publicly available models that predicted human generalisation. Intrinsic dimensionality of representations had different effects on alignment for different model types. Lastly, we tested three sets of human-aligned representations and found no consistent improvements in predictive accuracy compared to the baselines. In conclusion, pretrained neural networks can serve to extract representations for cognitive models, as they appear to capture some fundamental aspects of cognition that are transferable across tasks. Both our paradigms and modelling approach offer a novel way to quantify alignment between neural networks and humans and extend cognitive science into more naturalistic domains.

Details

Language(s): -
Dates: 2024-12
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: -
Degree: -

Event

Title: Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)
Place of Event: Vancouver, Canada
Start-/End Date: 2024-12-11 - 2024-12-15


Source 1

Title: Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -