
Released

Poster

Distributed representation of behaviorally-relevant object dimensions in the human brain

MPG Authors

Contier, Oliver
Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Hebart, Martin N.
Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Full texts (open access)
No freely accessible full texts are available in PuRe.
Supplementary material (open access)
No freely accessible supplementary materials are available.
Citation

Contier, O., & Hebart, M. N. (2022). Distributed representation of behaviorally-relevant object dimensions in the human brain. Poster presented at Vision Sciences Society Annual Meeting (VSS), St. Pete Beach, FL, USA.


Citation link: https://hdl.handle.net/21.11116/0000-000B-1B37-A
Abstract
Humans can identify and categorize visually presented objects rapidly and without much effort, yet for our everyday interactions with the world some object dimensions (e.g. shape or function) matter more than others. While these behaviorally-relevant dimensions are believed to form the basis of our mental representations of objects, their characterization typically depends on small-scale experiments with synthetic stimuli, often with pre-defined dimensions, thus leaving open the large-scale structure of the behavioral representations on which we ground object recognition and categorization. To fill this gap, we used large-scale online crowdsourcing of behavioral choices in a triplet odd-one-out similarity task. Based on natural images of 1,854 distinct objects and ~1.5 million behavioral responses, we developed a data-driven computational model (sparse positive embedding) that identifies object dimensions by learning to predict behavior in this task. Despite this dataset representing only 0.15% of all possible trials, cross-validated performance was excellent, correctly predicting 63% of individual human responses and approaching the noise ceiling (67%). Further, the similarity structure between objects derived from those dimensions exhibited a close correspondence to a reference similarity matrix of 48 objects (r = 0.90). The model identified 49 interpretable dimensions, representing degrees of taxonomic membership (e.g. food), function (e.g. transportation), and perceptual properties (e.g. shape, texture, color). The dimensions were predictive of external behavior, including human typicality judgments, category membership, and object feature norms, suggesting that the dimensions reflect mental representations of objects that generalize beyond the similarity task. Further, independent participants (n = 20) were able to assign values to the dimensions of 20 separate objects, reproducing their similarity structure with high accuracy (r = 0.84).
Together, these results reveal an interpretable representational space that accurately describes human similarity judgments for thousands of objects, thus offering a pathway towards a generative model of visual similarity judgments based on the comparison of behaviorally-relevant object dimensions.
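The core idea of predicting odd-one-out choices from a learned embedding can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes, as in published sparse positive embedding models, that pairwise similarity is the dot product of non-negative embedding vectors, and that the predicted odd one out is the object left out of the most similar pair. The embedding matrix and object indices below are toy values for illustration.

```python
import numpy as np

def predict_odd_one_out(X, triplet):
    """Predict the odd one out in a triplet of objects.

    X is an (objects x dimensions) array of non-negative embedding
    weights; similarity between two objects is the dot product of
    their rows. The odd one out is the object excluded from the
    pair with the highest similarity.
    """
    i, j, k = triplet
    candidates = {
        k: X[i] @ X[j],  # if i and j are most similar, k is odd
        j: X[i] @ X[k],  # if i and k are most similar, j is odd
        i: X[j] @ X[k],  # if j and k are most similar, i is odd
    }
    return max(candidates, key=candidates.get)

# Toy embedding: objects 0 and 1 load on the same dimension,
# object 2 loads on a different one.
X = np.array([
    [1.0, 0.1, 0.0],
    [0.9, 0.2, 0.0],
    [0.0, 0.0, 1.0],
])
print(predict_odd_one_out(X, (0, 1, 2)))  # -> 2
```

Fitting such a model then amounts to adjusting X so that these predictions match observed human choices across many triplets, with sparsity and positivity constraints keeping the resulting dimensions interpretable.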