  THINGS-fMRI/MEG: A large-scale multimodal neuroimaging dataset of responses to natural object images

Contier, O., Dickter, A. H., Teichmann, L., & Hebart, M. N. (2022). THINGS-fMRI/MEG: A large-scale multimodal neuroimaging dataset of responses to natural object images. Poster presented at Vision Sciences Society Annual Meeting (V-VSS), Virtual.


Creators:
Contier, Oliver1, Author
Dickter, Adam H.2, Author
Teichmann, Lina, Author
Hebart, Martin N.1, Author
Affiliations:
1 Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society
2 Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States of America

Content

Keywords: -
Abstract: A detailed understanding of visual object representations in brain and behavior is fundamentally limited by the number of stimuli that can be presented in any one experiment. Ideally, the space of objects should be sampled in a representative manner, with (1) maximal breadth of the stimulus material and (2) minimal bias in the object categories. Such a dataset would allow the detailed study of object representations and provide a basis for testing and comparing computational models of vision and semantics. Towards this end, we recently developed the large-scale object image database THINGS of more than 26,000 images of 1,854 object concepts sampled representatively from the American English language (Hebart et al., 2019). Here we introduce THINGS-fMRI and THINGS-MEG, two large-scale brain imaging datasets using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). Over the course of 12 scanning sessions, 7 participants (fMRI: n = 3, MEG: n = 4) were presented with images from the THINGS database (fMRI: 8,740 images of 720 concepts; MEG: 22,448 images of 1,854 concepts) while they carried out an oddball detection task. To reduce noise, participants' heads were stabilized and repositioned between sessions using custom head casts. To facilitate use by other researchers, the data were converted to the Brain Imaging Data Structure format (BIDS; Gorgolewski et al., 2016) and preprocessed with fMRIPrep (Esteban et al., 2018). Estimates of the noise ceiling and general quality control demonstrate overall high data quality, with only small overall displacement between sessions. By carrying out a broad and representative multimodal sampling of object representations in humans, we hope this dataset will be of use for visual neuroscience and computational vision research alike.

Details

Language(s):
Date: 2022-05
Publication status: Not specified
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: -
Degree type: -

Event

Title: Vision Sciences Society Annual Meeting (V-VSS)
Venue: Virtual
Start/End date: 2022-05-13 - 2022-05-18
