  THINGS-fMRI/MEG: A large-scale multimodal neuroimaging dataset of responses to natural object images

Contier, O., Dickter, A. H., Teichmann, L., & Hebart, M. N. (2022). THINGS-fMRI/MEG: A large-scale multimodal neuroimaging dataset of responses to natural object images. Poster presented at Vision Sciences Society Annual Meeting (V-VSS), Virtual.

Creators

Contier, Oliver1, Author
Dickter, Adam H.2, Author
Teichmann, Lina, Author
Hebart, Martin N.1, Author
Affiliations:
1Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_3158378              
2Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda MD, United States of America, ou_persistent22              

Content

Free keywords: -
 Abstract: A detailed understanding of visual object representations in brain and behavior is fundamentally limited by the number of stimuli that can be presented in any one experiment. Ideally, the space of objects should be sampled in a representative manner, with (1) maximal breadth of the stimulus material and (2) minimal bias in the object categories. Such a dataset would allow the detailed study of object representations and provide a basis for testing and comparing computational models of vision and semantics. Towards this end, we recently developed the large-scale object image database THINGS of more than 26,000 images of 1,854 object concepts sampled representatively from the American English language (Hebart et al., 2019). Here we introduce THINGS-fMRI and THINGS-MEG, two large-scale brain imaging datasets using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). Over the course of 12 scanning sessions, 7 participants (fMRI: n = 3, MEG: n = 4) were presented with images from the THINGS database (fMRI: 8,740 images of 720 concepts; MEG: 22,448 images of 1,854 concepts) while they carried out an oddball detection task. To reduce noise, participants’ heads were stabilized and repositioned between sessions using custom head casts. To facilitate use by other researchers, the data were converted to the Brain Imaging Data Structure format (BIDS; Gorgolewski et al., 2016) and preprocessed with fMRIPrep (Esteban et al., 2018). Estimates of the noise ceiling and general quality control demonstrate overall high data quality, with only small overall displacement between sessions. By carrying out a broad and representative multimodal sampling of object representations in humans, we hope this dataset will be of use for visual neuroscience and computational vision research alike.
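The abstract notes that the data were converted to the BIDS format, which encodes subject, session, task, and run in a fixed filename pattern. As a minimal sketch of that convention (the task label `things` and the zero-padded two-digit indices are illustrative assumptions, not taken from this record):

```python
def bids_func_filename(subject: int, session: int, task: str, run: int) -> str:
    """Build a BIDS-style path for a functional image:
    sub-XX/ses-XX/func/sub-XX_ses-XX_task-<task>_run-XX_bold.nii.gz
    Entity values here are hypothetical examples, not the dataset's actual labels.
    """
    stem = f"sub-{subject:02d}_ses-{session:02d}_task-{task}_run-{run:02d}_bold.nii.gz"
    return f"sub-{subject:02d}/ses-{session:02d}/func/{stem}"

# One fMRI participant scanned over 12 sessions, as described in the abstract:
paths = [bids_func_filename(1, ses, "things", 1) for ses in range(1, 13)]
print(paths[0])
# → sub-01/ses-01/func/sub-01_ses-01_task-things_run-01_bold.nii.gz
```

Tools such as fMRIPrep, mentioned in the abstract, rely on exactly this entity structure to locate and preprocess each session's images automatically.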

Details

Language(s):
 Dates: 2022-05
 Publication Status: Not specified
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -

Event

Title: Vision Sciences Society Annual Meeting (V-VSS)
Place of Event: Virtual
Start-/End Date: 2022-05-13 - 2022-05-18
