PublicationBiasBenchmark

PublicationBiasBenchmark is an R package for benchmarking publication bias correction methods through simulation studies. It provides:
- Predefined data-generating mechanisms from the literature
- Functions for running meta-analytic methods on simulated data
- Pre-simulated datasets and pre-computed results for reproducible benchmarks
- Tools for visualizing and comparing method performance

All datasets and results are hosted on OSF: https://doi.org/10.17605/OSF.IO/EXF3M

For the methodology of living synthetic benchmarks, please cite:

Bartoš, F., Pawel, S., & Siepe, B. S. (2025). Living synthetic benchmarks: A neutral and cumulative framework for simulation studies. arXiv preprint. https://doi.org/10.48550/arXiv.2510.19489

For the publication bias benchmark R package, please cite:

Bartoš, F., Pawel, S., & Siepe, B. S. (2025). PublicationBiasBenchmark: Benchmark for publication bias correction methods (version 0.1.0). https://github.com/FBartos/PublicationBiasBenchmark

Overviews of the benchmark results are available as articles on the package website.

Contributor guidelines for extending the package with data-generating mechanisms, methods, and results are also available on the package website.

Illustrations of how to use the precomputed datasets, results, and measures are available there as well.

The rest of this file gives an overview of the main features of the package.

Installation

# Install from GitHub (requires the remotes package)
# install.packages("remotes")
remotes::install_github("FBartos/PublicationBiasBenchmark")

Usage

library(PublicationBiasBenchmark)

Simulating From Existing Data-Generating Mechanisms

# obtain a data.frame with pre-defined conditions
dgm_conditions("Stanley2017")

# simulate the data from the second condition
df <- simulate_dgm("Stanley2017", 2)

# fit a method
run_method("RMA", df)
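
The same simulate-and-fit cycle scales to all pre-defined conditions. The following is a minimal sketch, assuming that run_method() returns a data.frame-like summary; inspect the returned object to confirm its actual structure.

# sketch: fit RMA once to each pre-defined condition (illustrative only)
conditions <- dgm_conditions("Stanley2017")
fits <- lapply(seq_len(nrow(conditions)), function(i) {
  df <- simulate_dgm("Stanley2017", i)
  run_method("RMA", df)  # assumed to return a data.frame-like summary
})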

Using Pre-Simulated Datasets

# download the pre-simulated datasets
# (a directory for storing the package resources must be specified first)
PublicationBiasBenchmark.options(resources_directory = "/path/to/files")
download_dgm_datasets("no_bias")

# retrieve the first repetition of the first condition from the downloaded datasets
retrieve_dgm_dataset("no_bias", condition_id = 1, repetition_id = 1)
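
Stored repetitions can also be re-analyzed locally in a loop. A minimal sketch, assuming each retrieved dataset is a data.frame accepted by run_method() and that at least ten repetitions exist:

# sketch: re-fit RMA on the first ten stored repetitions of condition 1
fits <- lapply(1:10, function(r) {
  df <- retrieve_dgm_dataset("no_bias", condition_id = 1, repetition_id = r)
  run_method("RMA", df)
})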

Using Pre-Computed Results

# download the pre-computed results
download_dgm_results("no_bias")

# retrieve results for the first repetition of the first condition of RMA from the downloaded results
retrieve_dgm_results("no_bias", method = "RMA", condition_id = 1, repetition_id = 1)

# retrieve all results across all conditions and repetitions
retrieve_dgm_results("no_bias")
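
The full results table can then be summarized with base R. The column names used below (estimate, method, condition_id) are assumptions for illustration; call str() on the retrieved object to see the actual layout.

# sketch: average estimate per method and condition (assumed column names)
res <- retrieve_dgm_results("no_bias")
str(res)  # inspect the actual columns first
aggregate(estimate ~ method + condition_id, data = res, FUN = mean)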

Using Pre-Computed Measures

# download the pre-computed measures
download_dgm_measures("no_bias")

# retrieve the bias measure for the first condition of RMA from the downloaded measures
retrieve_dgm_measures("no_bias", measure = "bias", method = "RMA", condition_id = 1)

# retrieve all measures across all methods and conditions
retrieve_dgm_measures("no_bias")
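
Retrieved measures can be filtered for quick method comparisons. A minimal sketch, assuming columns named measure, method, condition_id, and value; the actual names may differ, so check the retrieved object first.

# sketch: compare bias across methods in condition 1 (assumed column names)
ms <- retrieve_dgm_measures("no_bias")
subset(ms, measure == "bias" & condition_id == 1, select = c(method, value))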

Simulating From an Existing DGM With Custom Settings

# define simulation settings
sim_settings <- list(
  n_studies     = 100,
  mean_effect   = 0.3,
  heterogeneity = 0.1
)

# check whether the settings are valid
# (validation is a separate step so that it does not slow down repeated simulations)
validate_dgm_setting("no_bias", sim_settings)

# simulate the data
df <- simulate_dgm("no_bias", sim_settings)

# fit a method
run_method("RMA", df)
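
The custom-settings workflow supports small Monte Carlo studies of your own. A minimal sketch, assuming run_method() returns an object with an element named estimate (an assumed name) holding the effect size estimate:

# sketch: Monte Carlo estimate of RMA's bias under the custom settings
estimates <- replicate(100, {
  df  <- simulate_dgm("no_bias", sim_settings)
  fit <- run_method("RMA", df)
  fit$estimate  # assumed element name; inspect the returned object
})
mean(estimates) - sim_settings$mean_effect  # average estimate minus true effect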

Key Functions

Data-Generating Mechanisms

Method Estimation And Results

Performance Measures And Results

Available Data-Generating Mechanisms

See methods("dgm") for the full list.

Available Methods

See methods("method") for the full list.

Available Performance Measures

See ?measures for the full list of performance measures and their Monte Carlo standard errors.

DGM OSF Repositories

All DGMs are linked to the OSF repository (https://osf.io/exf3m/), which hosts the corresponding pre-simulated datasets, pre-computed results, and pre-computed measures.

References

Alinaghi, N., & Reed, W. R. (2018). Meta-analysis and publication bias: How well does the FAT-PET-PEESE procedure work? Research Synthesis Methods, 9(2), 285–311. https://doi.org/10.1002/jrsm.1298
Andrews, I., & Kasy, M. (2019). Identification of and correction for publication bias. American Economic Review, 109(8), 2766–2794. https://doi.org/10.1257/aer.20180310
Bartoš, F., Maier, M., Wagenmakers, E.-J., Doucouliagos, H., & Stanley, T. (2023). Robust Bayesian meta-analysis: Model-averaging across complementary publication bias adjustment methods. Research Synthesis Methods, 14(1), 99–116. https://doi.org/10.1002/jrsm.1594
Bom, P. R., & Rachinger, H. (2019). A kinked meta-regression model for publication bias correction. Research Synthesis Methods, 10(4), 497–514. https://doi.org/10.1002/jrsm.1352
Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2(2), 115–144. https://doi.org/10.1177/2515245919847196
Duval, S. J., & Tweedie, R. L. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455–463. https://doi.org/10.1111/j.0006-341X.2000.00455.x
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve and effect size: Correcting for publication bias using only significant results. Perspectives on Psychological Science, 9(6), 666–681. https://doi.org/10.1177/1745691614553988
Stanley, T. D., & Doucouliagos, H. (2014). Meta-regression approximations to reduce publication selection bias. Research Synthesis Methods, 5(1), 60–78. https://doi.org/10.1002/jrsm.1095
Stanley, T. D., & Doucouliagos, H. (2024). Harnessing the power of excess statistical significance: Weighted and iterative least squares. Psychological Methods, 29(2), 407–420. https://doi.org/10.1037/met0000502
Stanley, T. D., Doucouliagos, H., & Ioannidis, J. P. (2017). Finding the power to reduce publication bias. Statistics in Medicine, 36(10), 1580–1598. https://doi.org/10.1002/sim.7228
van Aert, R. C. M., & van Assen, M. A. L. M. (2025). Correcting for publication bias in a meta-analysis with the p-uniform* method. Psychonomic Bulletin & Review. https://osf.io/preprints/metaarxiv/zqjr9/
van Assen, M. A. L. M., van Aert, R. C. M., & Wicherts, J. M. (2015). Meta-analysis using effect size distributions of only statistically significant studies. Psychological Methods, 20(3), 293–309. https://doi.org/10.1037/met0000025
Vevea, J. L., & Hedges, L. V. (1995). A general linear model for estimating effect size in the presence of publication bias. Psychometrika, 60(3), 419–435. https://doi.org/10.1007/BF02294384
