Poster

THINGS-fMRI/MEG: A large-scale multimodal neuroimaging dataset of responses to natural object images

MPS-Authors

Contier, Oliver
Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Hebart, Martin N.
Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Citation

Contier, O., Dickter, A. H., Teichmann, L., & Hebart, M. N. (2022). THINGS-fMRI/MEG: A large-scale multimodal neuroimaging dataset of responses to natural object images. Poster presented at the Vision Sciences Society Annual Meeting (V-VSS), Virtual.


Cite as: https://hdl.handle.net/21.11116/0000-000B-3178-7
Abstract
A detailed understanding of visual object representations in brain and behavior is fundamentally limited by the number of stimuli that can be presented in any one experiment. Ideally, the space of objects should be sampled in a representative manner, with (1) maximal breadth of the stimulus material and (2) minimal bias in the object categories. Such a dataset would allow the detailed study of object representations and provide a basis for testing and comparing computational models of vision and semantics. Towards this end, we recently developed THINGS, a large-scale object image database of more than 26,000 images of 1,854 object concepts sampled representatively from the American English language (Hebart et al., 2019). Here we introduce THINGS-fMRI and THINGS-MEG, two large-scale brain imaging datasets acquired with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). Over the course of 12 scanning sessions, 7 participants (fMRI: n = 3, MEG: n = 4) were presented with images from the THINGS database (fMRI: 8,740 images of 720 concepts; MEG: 22,448 images of 1,854 concepts) while they carried out an oddball detection task. To reduce noise, participants’ heads were stabilized and repositioned between sessions using custom head casts. To facilitate use by other researchers, the data were converted to the Brain Imaging Data Structure format (BIDS; Gorgolewski et al., 2016) and preprocessed with fMRIPrep (Esteban et al., 2018). Noise-ceiling estimates and general quality control demonstrate overall high data quality, with only small displacement between sessions. By carrying out a broad and representative multimodal sampling of object representations in humans, we hope this dataset will be of use to visual neuroscience and computational vision research alike.
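
Because the data are shared in BIDS format, they can be indexed programmatically with standard tooling. Below is a minimal sketch using the pybids library, assuming a local copy of the fMRI dataset; the directory path and the subject/session labels are hypothetical placeholders, not taken from the released dataset.

# Minimal sketch (Python, pybids): index a local copy of the BIDS-formatted
# dataset and list its functional runs. The dataset path and the
# subject/session labels below are hypothetical placeholders.
from bids import BIDSLayout  # pip install pybids

layout = BIDSLayout("/data/things-fmri")  # hypothetical local path

# Enumerate available subjects and sessions
print(layout.get_subjects())
print(layout.get_sessions())

# Fetch the BOLD runs for one subject and session
bold_files = layout.get(subject="01", session="01",
                        suffix="bold", extension=".nii.gz")
for f in bold_files:
    print(f.path)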