  Distributed representation of behaviorally-relevant object dimensions in the human brain

Contier, O., & Hebart, M. N. (2022). Distributed representation of behaviorally-relevant object dimensions in the human brain. Poster presented at Vision Sciences Society Annual Meeting (VSS), St. Pete Beach, FL, USA.


Creators

Contier, Oliver¹, Author
Hebart, Martin N.¹, Author

Affiliations:
¹ Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_3158378

Content

 Abstract: Humans can identify and categorize visually-presented objects rapidly and without much effort, yet for our everyday interactions with the world some object dimensions (e.g. shape or function) matter more than others. While these behaviorally-relevant dimensions are believed to form the basis of our mental representations of objects, their characterization typically depends on small-scale experiments with synthetic stimuli, often with pre-defined dimensions, thus leaving open the large-scale structure of the behavioral representations on which we ground object recognition and categorization. To fill this gap, we used large-scale online crowdsourcing of behavioral choices in a triplet odd-one-out similarity task. Based on natural images of 1,854 distinct objects and ~1.5 million behavioral responses, we developed a data-driven computational model (sparse positive embedding) that identifies object dimensions by learning to predict behavior in this task. Despite this dataset representing only 0.15% of all possible trials, cross-validated performance was excellent, correctly predicting 63% of individual human responses and approaching noise ceiling (67%). Further, the similarity structure between objects derived from those dimensions exhibited a close correspondence to a reference similarity matrix of 48 objects (r = 0.90). The model identified 49 interpretable dimensions, representing degrees of taxonomic membership (e.g. food), function (e.g. transportation), and perceptual properties (e.g. shape, texture, color). The dimensions were predictive of external behavior, including human typicality judgments, category membership, and object feature norms, suggesting that the dimensions reflect mental representations of objects that generalize beyond the similarity task. Further, independent participants (n = 20) were able to assign values to the dimensions of 20 separate objects, reproducing their similarity structure with high accuracy (r = 0.84). Together, these results reveal an interpretable representational space that accurately describes human similarity judgments for thousands of objects, thus offering a pathway towards a generative model of visual similarity judgments based on the comparison of behaviorally-relevant object dimensions.
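As a rough illustration of the choice rule described in the abstract, the sketch below assumes the sparse positive embedding predicts odd-one-out responses via a softmax over pairwise dot-product similarities of non-negative object vectors, a common formulation for this task; the embedding values, object indices, and function name here are placeholders for illustration, not the authors' fitted model.

import numpy as np

# Hypothetical non-negative, sparse-ish embedding with the sizes reported
# in the abstract (1,854 objects, 49 dimensions); values are random placeholders.
rng = np.random.default_rng(0)
n_objects, n_dims = 1854, 49
X = rng.gamma(1.0, 0.1, size=(n_objects, n_dims))

def odd_one_out_probs(i, j, k, X):
    """Predicted probability that each object in the triplet (i, j, k) is the odd one out.

    The object excluded from the most similar pair is the odd one out, so the
    probability of choosing k is driven by the dot-product similarity of i and j.
    """
    s_ij = X[i] @ X[j]
    s_ik = X[i] @ X[k]
    s_jk = X[j] @ X[k]
    logits = np.array([s_jk, s_ik, s_ij])  # odd-one-out: i, j, k respectively
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Example: predicted response distribution for one arbitrary triplet
print(odd_one_out_probs(0, 1, 2, X))

Under this formulation, model accuracy on held-out triplets (the 63% reported in the abstract) is simply the fraction of trials on which the highest-probability option matches the participant's choice.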

Details

show
hide
Language(s):
 Dates: 2022-05
 Publication Status: Not specified

Event

Title: Vision Sciences Society Annual Meeting (VSS)
Place of Event: St. Pete Beach, FL, USA
Start-/End Date: 2022-05-13 - 2022-05-18
