
Released

Journal Article

A standardized framework to test event-based experiments

MPS-Authors

Lepauvre, Alex
Research Group Neural Circuits, Consciousness, and Cognition, Max Planck Institute for Empirical Aesthetics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;


Melloni, Lucia
Research Group Neural Circuits, Consciousness, and Cognition, Max Planck Institute for Empirical Aesthetics, Max Planck Society;
Department of Neurology, NYU Grossman School of Medicine;
Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program;

External Resource
No external resources are shared
Fulltext (public)

ncc-24-lep-02-standardized.pdf
(Publisher version), 2MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Lepauvre, A., Hirschhorn, R., Bendtz, K., Mudrik, L., & Melloni, L. (2024). A standardized framework to test event-based experiments. Behavior Research Methods. doi:10.3758/s13428-024-02508-y.


Cite as: https://hdl.handle.net/21.11116/0000-000F-E883-7
Abstract
The replication crisis in experimental psychology and neuroscience has received much attention recently. This has led to wide acceptance of measures to improve scientific practices, such as preregistration and registered reports. Less effort has been devoted to performing and reporting the results of systematic tests of the functioning of the experimental setup itself. Yet inaccuracies in the performance of the experimental setup may affect the results of a study, lead to replication failures, and, importantly, impede the ability to integrate results across studies. Prompted by challenges we experienced when deploying studies across six laboratories collecting electroencephalography (EEG)/magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and intracranial EEG (iEEG) data, here we describe a framework for both testing and reporting the performance of the experimental setup. In addition, 100 researchers were surveyed to provide a snapshot of current common practices and community standards concerning the testing of experimental setups in published experiments. Most researchers reported testing their experimental setups; almost none, however, published the tests performed or their results. Tests were diverse, targeting different aspects of the setup. Through simulations, we demonstrate how even slight inaccuracies can impact the final results. We end with a standardized, open-source, step-by-step protocol for testing (visual) event-related experiments, shared via protocols.io. The protocol aims to provide researchers with a benchmark for future replications and insight into research quality, helping to improve the reproducibility of results, accelerate multicenter studies, increase robustness, and enable integration across studies.
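The kind of effect the abstract's simulations refer to can be illustrated with a minimal sketch (not the authors' actual simulation code): if single-trial stimulus onsets carry random timing error, the across-trial average of an event-locked response is smeared and its peak attenuated. All names and parameter values below (the Gaussian "component", its latency and width, the 30 ms jitter) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                          # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)   # epoch time axis in seconds

def single_trial_response(t, latency=0.17, width=0.03):
    # Idealized single-trial event-related response: a Gaussian
    # "component" peaking at `latency` with temporal `width`.
    return np.exp(-0.5 * ((t - latency) / width) ** 2)

def average_with_jitter(jitter_sd, n_trials=200):
    # Each trial's true onset is displaced by a normally distributed
    # timing error (SD = jitter_sd seconds), mimicking inaccuracies in
    # the experimental setup; then trials are averaged as usual.
    shifts = rng.normal(0.0, jitter_sd, n_trials)
    trials = np.stack([single_trial_response(t - s) for s in shifts])
    return trials.mean(axis=0)

peak_no_jitter = average_with_jitter(0.0).max()
peak_jitter = average_with_jitter(0.030).max()  # 30 ms SD timing error
print(f"peak without jitter: {peak_no_jitter:.2f}")
print(f"peak with 30 ms jitter: {peak_jitter:.2f}")
```

With a 30 ms component and 30 ms timing jitter, the averaged peak drops to roughly 70% of its jitter-free amplitude, showing how a setup inaccuracy of a few tens of milliseconds can substantially bias downstream measures even though each single trial is recorded correctly.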