Introduction to the appraise Package

Introduction

appraise is an R package for bias-aware evidence synthesis in systematic reviews. It quantifies uncertainty in effect estimates by explicitly modeling multiple sources of bias and combining study-specific posterior distributions using a posterior mixture model.

Unlike traditional meta-analysis, appraise does not assume a single pooled likelihood. Instead, uncertainty due to bias, random error, and study relevance is propagated directly into posterior inference.

Study-level bias specification and prior simulation

Biases are explicitly modeled using user-specified prior distributions. The same data structure used internally by the Shiny application can be constructed programmatically.

bias_spec <- build_bias_specification(
  num_biases = 2,                                     # two bias sources are specified
  b_types = "Confounding",
  s_types = "Selection Bias",
  ab_params = list(Confounding = c(2, 5)),            # prior parameters for the confounding bias
  skn_params = list(`Selection Bias` = c(0, 0.2, 5))  # skew-normal prior parameters for the selection bias
)

if (requireNamespace("sn", quietly = TRUE)) {
  xi_samples <- simulate_bias_priors(bias_spec, n_draws = 2000)
} else {
  xi_samples <- NULL
  message("Package 'sn' not available; skipping skew-normal bias simulation.")
}

The resulting samples represent uncertainty due to bias alone and form the building blocks of posterior inference.
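
The object returned by simulate_bias_priors() can be inspected with standard R tools before it is used downstream; the check below is a minimal sketch and assumes nothing about its internal structure, which is package-specific.

if (!is.null(xi_samples)) {
  str(xi_samples)  # inspect the simulated bias draws (structure is package-specific)
}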

Study-level posterior inference

Given an observed estimate and standard error, appraise fits a Bayesian model that combines sampling uncertainty with bias uncertainty. To ensure the vignette runs on CRAN without requiring CmdStan, we illustrate posterior inference using simulated posterior draws.

set.seed(123)

# Mock posterior draws representing a study-level posterior
theta_draws <- rnorm(2000, mean = -0.5, sd = 0.15)

mid_draws <- theta_draws  # midpoint samples used downstream
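
To make the combination of sampling and bias uncertainty concrete, the sketch below illustrates the general idea of propagating bias uncertainty into a study estimate by subtracting additive bias draws from sampling-distribution draws. This is a simplified illustration only, not the model fitted by appraise, and the estimate, standard error, and bias prior used here are assumed values.

# Illustrative only: additive bias adjustment of a single study estimate.
# All numbers below are assumed for demonstration purposes.
y_obs <- -0.5    # observed effect estimate (e.g., a log odds ratio)
se    <- 0.15    # its standard error
sampling_draws <- rnorm(2000, mean = y_obs, sd = se)    # sampling uncertainty
bias_draws     <- rnorm(2000, mean = 0.05, sd = 0.10)   # assumed additive bias prior
adjusted_draws <- sampling_draws - bias_draws           # bias-adjusted draws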

Probability of exceeding a clinically or policy-relevant threshold

Users must specify a threshold \(\tau\) representing a clinically or policy-relevant effect size. The posterior probability

\[ P(\theta > \tau) \]

is computed directly from posterior draws.

tau <- 0  # illustrative threshold; in practice choose a clinically or policy-relevant value
posterior_probability(mid_draws, tau)  # threshold argument assumed here; see ?posterior_probability
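
As a quick check, the same probability can be estimated directly as the proportion of posterior draws above the threshold. The line below is a plain base-R computation using the illustrative threshold tau defined above.

# Direct Monte Carlo estimate of P(theta > tau): proportion of draws above tau
mean(mid_draws > tau)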

Evidence synthesis via posterior mixture models

When multiple studies are available, appraise combines study-specific posteriors using a weighted mixture model.

\[ p(\theta \mid \text{evidence}) = \sum_{k=1}^K w_k \, p_k(\theta \mid \text{data}_k) \]

where \(w_k\) reflects the relevance or credibility of study \(k\).

theta_list <- list(
  theta_draws,
  rnorm(2000, -0.4, 0.2)
)

weights <- c(0.6, 0.4)

mix <- posterior_mixture(theta_list, weights)
mix$summary
#>        mean        2.5%       97.5% 
#> -0.45853423 -0.79474735 -0.08603389
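
As a rough check on the mixture formula above, the weighted mixture can also be approximated directly by resampling draws from each study-level posterior in proportion to its weight. The snippet below is a minimal base-R sketch of this idea, not the package's implementation.

# Monte Carlo approximation of the weighted posterior mixture (illustrative)
n_mix <- 4000
n_per_study <- round(weights * n_mix)   # draws contributed by each study
mix_draws <- unlist(Map(function(d, n) sample(d, size = n, replace = TRUE),
                        theta_list, n_per_study))
quantile(mix_draws, probs = c(0.025, 0.5, 0.975))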

Relationship to the Shiny application

The AppRaise Shiny application provides a graphical interface to the same functions described in this vignette. All statistical computations are performed using exported package functions; the app adds interactivity, visualization, and workflow support.

References

Kabali C (2025). AppRaise: Software for quantifying evidence uncertainty in systematic reviews using a posterior mixture model. Journal of Evaluation in Clinical Practice, 31, 1-12. https://doi.org/10.1111/jep.70272.
