metaLong provides a coherent workflow for synthesising evidence from
studies that report outcomes at multiple follow-up time points. The
package covers:

- ml_meta() fits a random-effects model at each time point using robust
  variance estimation (RVE) with Tipton small-sample corrections.
- ml_sens() computes the time-varying Impact Threshold for a Confounding
  Variable (ITCV).
- ml_benchmark() compares observed covariate partial correlations
  against the ITCV threshold.
- ml_spline() fits a natural cubic spline over time.
- ml_fragility() identifies how many study removals flip significance.

library(metaLong)
dat <- sim_longitudinal_meta(
k = 10,
times = c(0, 6, 12, 24),
mu = c("0" = 0.30, "6" = 0.50, "12" = 0.42, "24" = 0.20),
tau = 0.20,
seed = 42
)
head(dat, 6)
#>   study time         vi        yi pub_year quality   n
#> 1   s01    0 0.11040314 0.7255240     2003    0.58 372
#> 2   s01    6 0.09375956 0.8372688     2003    0.58 372
#> 3   s01   12 0.05795592 0.7716921     2003    0.58 372
#> 4   s01   24 0.05334272 0.3893750     2003    0.58 372
#> 5   s02    0 0.03387102 0.3167792     2001    0.77 223
#> 6   s02    6 0.10110551 0.2722546     2001    0.77 223

The data are in long format: one row per study x time combination.
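Getting data into this shape from the more common wide layout (one row
per study) can be sketched with base R's stats::reshape; the wide column
names and values below are hypothetical:

```r
# Hypothetical wide-format input: one row per study, one yi/vi pair per time.
wide <- data.frame(
  study = c("s01", "s02"),
  yi.0 = c(0.73, 0.32), vi.0 = c(0.110, 0.034),
  yi.6 = c(0.84, 0.27), vi.6 = c(0.094, 0.101)
)

# reshape() pivots the paired yi.*/vi.* columns into long format,
# yielding one row per study x time combination.
long <- reshape(wide, direction = "long",
                varying = c("yi.0", "vi.0", "yi.6", "vi.6"),
                timevar = "time", idvar = "study")
long[order(long$study, long$time), c("study", "time", "yi", "vi")]
```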
ml_meta()

ml_meta() fits an intercept-only random-effects model at each time point
with CR2 sandwich variance and Satterthwaite degrees of freedom.
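Conceptually, each per-time fit is an inverse-variance random-effects
pool. A minimal base-R sketch using the DerSimonian-Laird tau^2
estimator and made-up yi/vi values (the package's actual engine is
rma.uni with CR2 robust variance, which this does not reproduce):

```r
# Illustrative effect sizes and sampling variances at a single time point.
yi <- c(0.73, 0.32, 0.55, 0.41, 0.62)
vi <- c(0.11, 0.03, 0.06, 0.05, 0.08)

# DerSimonian-Laird estimate of between-study variance tau^2.
wi       <- 1 / vi
theta_fe <- sum(wi * yi) / sum(wi)        # fixed-effect pooled mean
Q        <- sum(wi * (yi - theta_fe)^2)   # Cochran's Q
df       <- length(yi) - 1
tau2     <- max(0, (Q - df) / (sum(wi) - sum(wi^2) / sum(wi)))

# Random-effects pooled estimate and standard error.
wi_re <- 1 / (vi + tau2)
theta <- sum(wi_re * yi) / sum(wi_re)
se    <- sqrt(1 / sum(wi_re))
c(theta = theta, se = se, tau2 = tau2)
```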
meta <- ml_meta(dat, yi = "yi", vi = "vi", study = "study", time = "time")
#> Registered S3 method overwritten by 'clubSandwich':
#>   method    from
#>   bread.mlm sandwich
print(meta)
#>
#> === metaLong: Longitudinal Pooled Effects ===
#> Engine: rma.uni
#> Time points: 4
#> Small-sample correction: TRUE
#>
#> time  k theta    se    df p_val ci_lb ci_ub  tau2
#>    0 10 0.315 0.077 6.565 0.005 0.131 0.498 0.000
#>    6 10 0.602 0.073 5.638 0.000 0.419 0.784 0.001
#>   12 10 0.534 0.097 7.705 0.001 0.308 0.761 0.023
#>   24 10 0.417 0.081 7.769 0.001 0.229 0.606 0.010

ml_sens()

ml_sens() computes ITCV_alpha(t): the minimum partial
correlation an omitted confounder must have with both treatment and
outcome to render the result non-significant.
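The threshold itself follows Frank's (2000) construction: find the
smallest correlation that would still be significant at the given alpha,
then ask how much confounding is needed to pull the observed effect
below it. A base-R sketch with illustrative numbers (how metaLong maps
the pooled estimate to r_obs and df is not shown here):

```r
# Illustrative values; metaLong derives these per time point.
r_obs <- 0.80   # observed effect expressed as a correlation
df    <- 8      # residual degrees of freedom
alpha <- 0.05

# Smallest correlation that is still significant at the given alpha.
t_crit <- qt(1 - alpha / 2, df)
r_crit <- t_crit / sqrt(t_crit^2 + df)

# ITCV: the product of the omitted confounder's correlations with
# treatment and outcome needed to pull r_obs down to the threshold.
itcv <- (r_obs - r_crit) / (1 - abs(r_crit))
itcv
```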
sens <- ml_sens(dat, meta, yi = "yi", vi = "vi",
study = "study", time = "time")
print(sens)
#>
#> === metaLong: Longitudinal Sensitivity (ITCV) ===
#> delta (fragility benchmark): 0.15
#> ITCV_alpha range: [ 0.691 , 0.961 ]
#> Fragile proportion: 0
#>
#> time theta    sy  itcv itcv_alpha fragile
#>    0 0.315 0.242 0.890      0.691   FALSE
#>    6 0.602 0.174 0.980      0.961   FALSE
#>   12 0.534 0.298 0.935      0.848   FALSE
#>   24 0.417 0.220 0.941      0.849   FALSE

ml_spline()

Key trajectory summaries come from ml_spline(), which fits a natural
cubic spline to the pooled effects over time:

spl <- ml_spline(meta, df = 2)
print(spl)
#>
#> === metaLong: Spline Time Trend ===
#> Spline df: 2
#> Weighted R-squared: 0.807
#> Nonlinearity test p-value: 0.291
#>
#> Prediction range: time in [ 0 , 24 ]

ml_plot()

(Trajectory plot not reproduced here.)

ml_benchmark()

ml_benchmark() regresses the effect sizes on each observed study-level
covariate and flags covariates whose partial correlation exceeds the
ITCV_alpha threshold.
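The comparison being made can be sketched with a plain correlation; the
values below are invented, and the package's actual partialling and
weighting may differ:

```r
# Study-level effect sizes and a covariate at one time point (made up).
yi      <- c(0.73, 0.32, 0.55, 0.41, 0.62, 0.48, 0.29, 0.66, 0.51, 0.37)
quality <- c(0.58, 0.41, 0.77, 0.55, 0.63, 0.82, 0.44, 0.70, 0.75, 0.60)

# ITCV_alpha threshold for this time point (taken from the output above).
itcv_alpha <- 0.691

# Does the observed covariate-effect correlation exceed the threshold?
r_partial <- cor(yi, quality)
beats     <- abs(r_partial) > itcv_alpha
list(r_partial = round(r_partial, 3), beats_threshold = beats)
```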
bench <- ml_benchmark(
dat, meta, sens,
yi = "yi", vi = "vi", study = "study", time = "time",
covariates = c("pub_year", "quality")
)
print(bench)
#>
#> === metaLong: Benchmark Calibration ===
#>
#> Fragility summary by time:
#> time itcv_alpha n_covariates n_beats prop_beats any_beats
#>    0      0.691            2       0          0     FALSE
#>    6      0.961            2       0          0     FALSE
#>   12      0.848            2       0          0     FALSE
#>   24      0.849            2       0          0     FALSE
#>
#> time covariate  k r_partial itcv_alpha beats_threshold p_val
#>    0  pub_year 10    -0.331      0.691           FALSE 0.477
#>    0   quality 10    -0.495      0.691           FALSE 0.380
#>    6  pub_year 10    -0.279      0.961           FALSE 0.577
#>    6   quality 10     0.598      0.961           FALSE 0.242
#>   12  pub_year 10     0.167      0.848           FALSE 0.702
#>   12   quality 10     0.068      0.848           FALSE 0.900
#>   24  pub_year 10    -0.119      0.849           FALSE 0.800
#>   24   quality 10     0.236      0.849           FALSE 0.664

ml_fragility()

The fragility index is the minimum number of study removals that flip
significance at a given time point.
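With max_k = 1, the search amounts to leave-one-out refits. A
fixed-effect base-R sketch with illustrative data (the package refits
the full random-effects model instead):

```r
# Illustrative effect sizes and sampling variances at one time point.
yi <- c(0.73, 0.32, 0.55, 0.41, 0.62, 0.48, 0.29, 0.66, 0.51, 0.37)
vi <- c(0.11, 0.03, 0.06, 0.05, 0.08, 0.04, 0.09, 0.07, 0.05, 0.06)

# Fixed-effect pooled p-value for a set of studies.
pool_p <- function(yi, vi) {
  wi    <- 1 / vi
  theta <- sum(wi * yi) / sum(wi)
  se    <- sqrt(1 / sum(wi))
  2 * pnorm(-abs(theta / se))
}

# Refit with each study removed in turn.
p_loo <- vapply(seq_along(yi), function(i) pool_p(yi[-i], vi[-i]),
                numeric(1))

# TRUE if removing any single study loses significance at p < .05,
# i.e. the fragility index at this time point would be 1.
any_flip <- any(p_loo >= 0.05)
```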
frag <- ml_fragility(dat, meta,
yi = "yi", vi = "vi", study = "study", time = "time",
max_k = 1L, seed = 1)
print(frag)
#>
#> === metaLong: Fragility Analysis ===
#>
#> time k_studies p_original sig_original fragility_index fragility_quotient study_removed
#>    0        10      0.005         TRUE              NA                 NA          <NA>
#>    6        10      0.000         TRUE              NA                 NA          <NA>
#>   12        10      0.001         TRUE              NA                 NA          <NA>
#>   24        10      0.001         TRUE              NA                 NA          <NA>

References

Frank, K. A. (2000). Impact of a confounding variable on a regression coefficient. Sociological Methods & Research, 29(2), 147–194. doi:10.1177/0049124100029002003
Hedges, L. V., Tipton, E., & Johnson, M. C. (2010). Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods, 1(1), 39–65. doi:10.1002/jrsm.5
Tipton, E. (2015). Small sample adjustments for robust variance estimation with meta-regression. Psychological Methods, 20(3), 375–393. doi:10.1037/met0000011