

Preprint

THINGS+: New norms and metadata for the THINGS database of 1,854 object concepts and 26,107 natural object images

MPS-Authors

Hebart, Martin N.
Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Fulltext (public)

Stoinski_pre.pdf
(Preprint), 2MB

Citation

Stoinski, L. M., Perkuhn, J., & Hebart, M. N. (2022). THINGS+: New norms and metadata for the THINGS database of 1,854 object concepts and 26,107 natural object images. PsyArXiv. doi:10.31234/osf.io/exu9f.


Cite as: https://hdl.handle.net/21.11116/0000-000B-9CBE-0
Abstract
To study visual object processing, the need for well-curated object concepts and images has grown significantly in recent years. To address this need, we previously developed THINGS (Hebart et al., 2019), a large-scale database of 1,854 systematically sampled object concepts with 26,107 high-quality naturalistic images of these concepts. With THINGS+ we aim to extend THINGS by adding concept-specific and image-specific norms and metadata. Concept-specific norms were collected for all 1,854 object concepts for the object properties real-world size, manmadeness, preciousness, liveliness, heaviness, naturalness, ability to move, graspability, holdability, ability to be moved, pleasantness, and arousal. Further, we extended high-level categorization to 53 superordinate categories and collected typicality ratings for members of all 53 categories. Image-specific metadata include measures of nameability and recognizability for objects in all 26,107 images. To this end, we asked participants to provide labels for prominent objects depicted in each of the 26,107 images and measured the alignment with the original object concept. Finally, to allow example images to be presented in publications without copyright restrictions, we identified one new public domain image per object concept. In this study, we demonstrate a high consistency of property ratings (r = 0.92-0.99, M = 0.98, SD = 0.34) and typicality ratings (r = 0.88-0.98, M = 0.96, SD = 0.19), with arousal ratings as the only exception (r = 0.69). Correlations of our data with external norms were moderate to high for object properties (r = 0.44-0.95, M = 0.85, SD = 0.32) and typicality scores (r = 0.72-0.88, M = 0.79, SD = 0.18), again with the lowest validity for arousal (r = 0.30-0.52). To summarize, THINGS+ provides a broad, externally validated extension to existing object norms and an important extension to THINGS as a general resource of object concepts, images, and category memberships. Our norms, metadata, and images offer a detailed selection of stimuli and control variables for a wide range of research on object processing and semantic memory.