
Released

Journal Article

Addressing publication bias in Meta-Analysis: Empirical findings from community-augmented meta-analyses of infant language development

MPS-Authors

Bergmann, Christina
Language Development Department, MPI for Psycholinguistics, Max Planck Society

External Resource

Link to Dataset on PsychArchives
(Supplementary material)

Fulltext (public)

TsujiCristiaFrankBergmann2020_ZfP.pdf
(Publisher version), 2MB

Citation

Tsuji, S., Cristia, A., Frank, M. C., & Bergmann, C. (2020). Addressing publication bias in Meta-Analysis: Empirical findings from community-augmented meta-analyses of infant language development. Zeitschrift für Psychologie, 228(1), 50-61. doi:10.1027/2151-2604/a000393.


Cite as: https://hdl.handle.net/21.11116/0000-0004-B703-A
Abstract
Meta-analyses are an indispensable research synthesis tool for characterizing bodies of literature and advancing theories. One important open question concerns the inclusion of unpublished data in meta-analyses. Finding such studies can be effortful, but their exclusion potentially leads to consequential biases like overestimation of a literature’s mean effect. We address two questions about unpublished data using MetaLab, a collection of community-augmented meta-analyses focused on developmental psychology. First, we assess to what extent MetaLab datasets include gray literature, and through which search strategies it is unearthed. We find that an average of 11% of datapoints are from unpublished literature; standard search strategies like database searches, complemented with individualized approaches like including authors’ own data, contribute the majority of this literature. Second, we analyze the effect of including versus excluding unpublished literature on estimates of effect size and publication bias, and find that this decision does not affect outcomes. We discuss lessons learned and implications.
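
The second analysis the abstract describes, re-estimating the mean effect and publication bias with and without the unpublished datapoints, is in essence a sensitivity check. The sketch below illustrates that logic under stated assumptions: the effect sizes, variances, and publication flags are invented for illustration, and the DerSimonian-Laird random-effects estimator and Egger's regression test are common choices standing in for whatever estimators the paper actually uses; this is not the authors' MetaLab analysis pipeline.

import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes (e.g., Hedges' g), sampling
# variances, and publication status -- invented, not MetaLab data.
effects = np.array([0.42, 0.31, 0.55, 0.18, 0.47, 0.25, 0.10, 0.05])
variances = np.array([0.02, 0.03, 0.05, 0.04, 0.02, 0.06, 0.05, 0.07])
published = np.array([True, True, True, True, True, True, False, False])

def random_effects_mean(y, v):
    """DerSimonian-Laird random-effects estimate of the mean effect."""
    w = 1.0 / v
    mean_fe = np.sum(w * y) / np.sum(w)        # fixed-effect mean
    q = np.sum(w * (y - mean_fe) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)    # between-study variance
    w_re = 1.0 / (v + tau2)
    mean_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mean_re, se_re

def egger_test(y, v):
    """Egger's regression test: a non-zero intercept when regressing
    standardized effects on precision suggests funnel-plot asymmetry,
    one common indicator of publication bias."""
    se = np.sqrt(v)
    fit = sm.OLS(y / se, sm.add_constant(1.0 / se)).fit()
    return fit.params[0], fit.pvalues[0]       # intercept, p-value

for label, mask in [("all data", slice(None)), ("published only", published)]:
    mean, se = random_effects_mean(effects[mask], variances[mask])
    intercept, p = egger_test(effects[mask], variances[mask])
    print(f"{label}: g = {mean:.2f} (SE {se:.2f}), "
          f"Egger intercept = {intercept:.2f} (p = {p:.2f})")

Comparing the two printed lines is the core of the sensitivity check: if the random-effects mean and the bias diagnostic barely move when the unpublished datapoints are dropped, the inclusion decision does not affect outcomes, which is what the paper reports for the MetaLab datasets.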