
causens: an R package for causal sensitivity analysis methods

Introduction

Methods to estimate causal quantities often rely on the assumption of no unmeasured confounding. In practice, however, this assumption rarely holds. causens is an R package that provides methods to estimate causal quantities in the presence of unmeasured confounding. In particular, causens implements methods from the following three schools of thought: frequentist sensitivity-function methods (Brumback et al. 2004; Li et al. 2011), Bayesian sensitivity analysis, and a Monte Carlo approach to causal sensitivity analysis.

All implemented methods are accessible via the causens function with the method parameter. The package also provides a simulate_data function to generate synthetic data on which to test the methods.

Installation

The package can be installed from GitHub via devtools.

library(devtools)
devtools::install_github("Kuan-Liu-Lab/causens")

Methods

Summary of the Unmeasured Confounder Problem

In causal inference, the potential outcome framework is often used to define causal quantities. For instance, if a treatment \(Z\) is binary, we define \(Y(1)\) and \(Y(0)\) as the outcomes under treatment and control, respectively, as if we could intervene and set the treatment a priori. In observational settings this is not possible, but we can still estimate causal estimands, e.g. the average treatment effect \(\tau = \mathbb{E}[Y(1) - Y(0)]\), from observational data under causal assumptions that relate the potential outcome variables to their observational counterparts:

  1. Consistency: \(Y = Y(1)Z + Y(0)(1 - Z)\)
  2. Conditional exchangeability: \(Y(1), Y(0) \perp Z | X\)
  3. Positivity: \(0 < P(Z = 1 | X) < 1\)

where \(X\) is a set of observed confounders that can be used to adjust for confounding. In practice, it is often difficult to know whether these assumptions hold, and in particular, whether all confounding variables are contained in \(X\).

From here on, we let \(U\) denote the set of unmeasured confounders, although, in causens, we assume \(U\) is univariate for simplicity.

Simulated Data Mechanism

We posit the following data generating process:

\[\begin{align*} X_1 &\sim \text{Normal}(0, 1) \\ X_2 &\sim \text{Normal}(0, 1) \\ X_3 &\sim \text{Normal}(0, 1) \\ U &\sim \text{Bern}(0.5) \\ Z &\sim \text{Bern}(\text{logit}^{-1}(\alpha_{uz} U + \alpha_{xz} X_3)) \\ Y_1 &\sim \text{Normal}(\beta_{uy} U + 0.5 X_1 + 0.5 X_2 - 0.5 X_3) \\ Y_0 &\sim \text{Normal}(0.5 X_1 + 0.5 X_2 - 0.5 X_3) \, . \end{align*}\]

Using consistency, we define \(Y = Y_1 Z + Y_0 (1 - Z)\). Note that simData.R provides options that make the simulation procedure more flexible. Parameters \(\alpha_{uz}\) and \(\beta_{uy}\) dictate how the unmeasured confounder \(U\) affects the treatment \(Z\) and the outcome \(Y\), respectively; by default, they are set to 0.2 and 0.5.
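As a standalone illustration, the data-generating process above can be sketched in base R. The variable names and the \(\alpha_{xz}\) value below are illustrative, not the package's internals; in practice, the package's simulate_data function should be used instead.

```r
# Standalone sketch of the data-generating process above (illustrative only).
set.seed(123)
n        <- 10000
alpha_uz <- 0.2  # default effect of U on Z
beta_uy  <- 0.5  # default effect of U on Y
alpha_xz <- 0.5  # assumed value; the default is not stated above

X1 <- rnorm(n); X2 <- rnorm(n); X3 <- rnorm(n)
U  <- rbinom(n, 1, 0.5)                                   # unmeasured confounder
Z  <- rbinom(n, 1, plogis(alpha_uz * U + alpha_xz * X3))  # plogis = logit^{-1}
Y1 <- rnorm(n, mean = beta_uy * U + 0.5 * X1 + 0.5 * X2 - 0.5 * X3)
Y0 <- rnorm(n, mean = 0.5 * X1 + 0.5 * X2 - 0.5 * X3)
Y  <- Y1 * Z + Y0 * (1 - Z)                               # consistency
```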

Frequentist Methods (Brumback et al. 2004, Li et al. 2011)

Within the potential outcome framework, the primary assumption that requires adjustment is conditional exchangeability. First, latent conditional exchangeability, which includes \(U\), can be articulated as follows:

\[\begin{align*} Y(1), Y(0) \perp Z | X, U \end{align*}\]

Second, we define the sensitivity function \(c(z, e)\), where \(z\) is a valid treatment indicator and \(e\) is a propensity score value:

\[\begin{align*} c(z, e) = \mathbb{E}\Big[Y(z) \mid Z = 1, e\Big] - \mathbb{E}\Big[Y(z) \mid Z = 0, e\Big] \end{align*}\]

When there are no unmeasured confounders, i.e. \(U = \emptyset\), we have \(c(z, e) = 0\) for all \(z\) and \(e\).
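To see how the sensitivity function enters estimation, note that by the law of total expectation, consistency, and the definition of \(c(1, e)\),

\[\begin{align*} \mathbb{E}[Y(1) \mid e] &= e \, \mathbb{E}[Y(1) \mid Z = 1, e] + (1 - e) \, \mathbb{E}[Y(1) \mid Z = 0, e] \\ &= \mathbb{E}[Y \mid Z = 1, e] - (1 - e) \, c(1, e) \, , \end{align*}\]

so the observed mean among the treated overstates \(\mathbb{E}[Y(1) \mid e]\) by \((1 - e) \, c(1, e)\). Symmetrically, \(\mathbb{E}[Y(0) \mid e] = \mathbb{E}[Y \mid Z = 0, e] + e \, c(0, e)\).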

Given a valid sensitivity function, we can estimate the average treatment effect via a weighting approach akin to inverse probability weighting.

For parsimony, we provide an API for constant and linear sensitivity functions, but we will eventually allow any valid user-defined sensitivity function.
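To make the weighting idea concrete, here is a minimal sketch of a sensitivity-function-corrected IPW estimator under a constant sensitivity function, following the decomposition of \(\mathbb{E}[Y(z) \mid e]\) implied by the definition of \(c(z, e)\). The helper name is hypothetical and this is not the package's implementation.

```r
# Conceptual sketch: bias-correct outcomes with a constant sensitivity
# function, then apply inverse probability weighting. Illustrative only.
sf_corrected_ate <- function(Y, Z, X, c1 = 0.25, c0 = 0.25) {
  e <- fitted(glm(Z ~ X, family = binomial))  # estimated propensity scores
  # Treated outcomes overstate E[Y(1) | e] by (1 - e) * c(1, e);
  # control outcomes understate E[Y(0) | e] by e * c(0, e).
  Y_adj <- ifelse(Z == 1, Y - c1 * (1 - e), Y + c0 * e)
  mean(Z * Y_adj / e) - mean((1 - Z) * Y_adj / (1 - e))  # IPW on corrected Y
}
```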

# Simulate data
data <- simulate_data(
  N = 10000, seed = 123, alpha_uz = 1,
  beta_uy = 1, treatment_effects = 1
)

# Treatment model is incorrect since U is "missing"
causens_sf(Z ~ X.1 + X.2 + X.3, "Y", data = data,
           c1 = 0.25, c0 = 0.25)$estimated_ate
#> [1] 1.005025
plot_causens(Z ~ X.1 + X.2 + X.3, data, "Y",
             c1_upper = 0.5, c1_lower = 0, r = 1, by = 0.01)

Bayesian Methods

In the Bayesian framework, we adjust for unmeasured confounding by explicitly modelling \(U\) and its relationships with \(Z\) and \(Y\). Using a JAGS backend to carry out the MCMC procedure, we can estimate the average treatment effect by marginalizing over \(U\). We assume the following data-generating mechanism and Bayesian model in {causens}:

\[\begin{align*} X_1, X_2, X_3 &\stackrel{iid}{\sim} \text{Normal}(0, 1) \\ U \mid X &\sim \text{Bern}\left(\text{logit}^{-1}(\gamma_{ux} X)\right) \\ Z \mid U, X &\sim \text{Bern}\left(\text{logit}^{-1}(\alpha_{uz} U + \alpha_{xz} X)\right) \\ Y \mid U, X, Z &\sim \text{Normal}(\beta_{uy} U + \beta_{xy} X + \tau Z) \end{align*}\]
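For intuition, the model above corresponds roughly to a JAGS specification along the following lines. This is an illustrative sketch, not the model file bundled with causens; note that JAGS parameterizes dnorm by mean and precision, and the priors shown are placeholder choices.

```r
# Illustrative JAGS model string for the Bayesian sensitivity model above.
jags_model <- "
model {
  for (i in 1:N) {
    U[i] ~ dbern(ilogit(gamma_ux * X[i]))                    # U | X
    Z[i] ~ dbern(ilogit(alpha_uz * U[i] + alpha_xz * X[i]))  # Z | U, X
    Y[i] ~ dnorm(beta_uy * U[i] + beta_xy * X[i] + tau * Z[i], prec_y)
  }
  # Weakly informative priors (illustrative choices)
  gamma_ux ~ dnorm(0, 0.1)
  alpha_uz ~ dnorm(0, 0.1)
  alpha_xz ~ dnorm(0, 0.1)
  beta_uy  ~ dnorm(0, 0.1)
  beta_xy  ~ dnorm(0, 0.1)
  tau      ~ dnorm(0, 0.1)   # average treatment effect
  prec_y   ~ dgamma(0.1, 0.1)
}
"
```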

data <- simulate_data(
  N = 1000, alpha_uz = 0.5, beta_uy = 0.2,
  seed = 123, treatment_effects = 1,
  y_type = "continuous"
)

bayesian_causens(
  Z ~ X.1 + X.2 + X.3, Y ~ X.1 + X.2 + X.3,
  U ~ X.1 + X.2 + X.3, data
)

Monte Carlo Approach to Causal Sensitivity Analysis

Warning: this approach currently only works for binary outcomes.

The Monte Carlo approach can be applied as follows:

data <- simulate_data(
  N = 1000, alpha_uz = 0.2, beta_uy = 0.5,
  seed = 123, treatment_effects = 1, y_type = "binary",
  informative_u = FALSE
)

causens_monte_carlo("Y", "Z", c("X.1", "X.2", "X.3"), data)$estimated_ate
#> [1] 0.9929688
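The package's exact algorithm is not detailed here, but the general flavour of Monte Carlo sensitivity analysis can be sketched as: repeatedly draw sensitivity parameters from a prior, impute the unmeasured confounder \(U\) under those parameters, and re-estimate the treatment effect adjusting for the imputed \(U\). Below is a generic base-R sketch of that idea; all names, priors, and modelling choices are illustrative and are not causens' internals.

```r
# Generic Monte Carlo sensitivity analysis sketch for a binary outcome;
# this illustrates the general idea, not causens' exact algorithm.
mc_sensitivity <- function(Y, Z, n_draws = 200) {
  replicate(n_draws, {
    # 1. Draw sensitivity parameters from a prior (uniform here, illustrative)
    p_u1    <- runif(1, 0.3, 0.7)  # P(U = 1 | Z = 1)
    p_u0    <- runif(1, 0.3, 0.7)  # P(U = 1 | Z = 0)
    beta_uy <- runif(1, 0, 1)      # effect of U on Y (log-odds scale)
    # 2. Impute the unmeasured confounder given the drawn parameters
    U <- rbinom(length(Z), 1, ifelse(Z == 1, p_u1, p_u0))
    # 3. Re-estimate the treatment effect, holding U's effect fixed via offset
    coef(glm(Y ~ Z + offset(beta_uy * U), family = binomial))["Z"]
  })
}
```

The result is a distribution of bias-corrected treatment effect estimates, whose spread reflects uncertainty about the sensitivity parameters.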
