
likelihood.model

Likelihood-based statistical inference in the Fisherian tradition for R.

Why likelihood?

Classical statistics often jumps from data to p-values or posterior distributions. The likelihood approach takes a different path: the likelihood function is the complete summary of what the data say about the parameters. No priors, no long-run frequency guarantees – just the evidence in the data.

This package makes it easy to:

- fit any standard R distribution by name, with or without censored observations
- compute likelihood intervals, support, and evidence in the Fisherian tradition
- bootstrap standard errors and confidence intervals
- compare models with likelihood ratio tests, AIC, and BIC

Installation

# From CRAN
install.packages("likelihood.model")

# Development version from GitHub
devtools::install_github("queelius/likelihood.model")

Quick start

Fit a distribution

library(likelihood.model)

# Any R distribution works: "norm", "weibull", "exp", "gamma", ...
model <- likelihood_name("weibull", ob_col = "time", censor_col = "status")
mle <- fit(model)(df, par = c(shape = 1, scale = 1))

coef(mle)      # parameter estimates
se(mle)        # standard errors
confint(mle)   # confidence intervals

Handle censored data properly

Ignoring censoring biases your estimates. likelihood.model handles it correctly:

df <- data.frame(
  time   = c(1.2, 3.4, 5.0, 2.1, 5.0),
  status = c("exact", "exact", "right", "exact", "right")
)

model <- likelihood_name("weibull", ob_col = "time", censor_col = "status")
mle <- fit(model)(df, par = c(shape = 1, scale = 1))

Right-censored observations contribute through the survival function rather than the density – the likelihood framework handles this seamlessly.
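Concretely, with exact and right-censored observations the likelihood factors as follows (a standard survival-analysis identity, not package-specific notation):

```latex
L(\theta) \;=\; \prod_{i \in \text{exact}} f(t_i \mid \theta)
\prod_{j \in \text{right}} S(t_j \mid \theta),
\qquad S(t) = 1 - F(t)
```

Exact observations enter through the density f; a right-censored time tells us only that the event exceeded t, so it enters through the survival function S.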

Fisherian inference

Instead of confidence intervals that rely on repeated-sampling arguments, likelihood intervals identify parameter values well-supported by the data:

# Likelihood interval: {theta : L(theta)/L(theta_hat) >= 1/8}
li <- likelihood_interval(mle, df, model, k = 8)

# How well does the data support theta1 over theta2?
evidence(model, df, theta1 = c(2, 3), theta2 = c(1, 5))

Three ways to build a model

Approach               | Use when                                                | Example
---------------------- | ------------------------------------------------------- | -------
likelihood_name()      | Standard R distribution                                 | likelihood_name("norm", "x")
Specialized model      | You need analytical derivatives or a closed-form MLE    | weibull_uncensored("x"), exponential_lifetime("t")
likelihood_contr_model | Heterogeneous observation types (mixed exact/censored)  | Custom likelihood contributions

likelihood_name() – the quick path

Wraps any R distribution that has d<name> and p<name> functions. Supports exact, left-censored, right-censored, and interval-censored observations automatically.
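To see what such a wrapper has to compute, here is a base-R sketch of the per-observation log-likelihood contributions for the normal distribution; the function name contrib() is illustrative, not part of the package API:

```r
# Log-likelihood contribution of one observation, by censoring type.
# Uses only base R dnorm()/pnorm() to mirror the d<name>/p<name> idea.
contrib <- function(x, type, mean, sd) {
  switch(type,
    exact    = dnorm(x, mean, sd, log = TRUE),                       # f(x)
    left     = pnorm(x, mean, sd, log.p = TRUE),                     # F(x)
    right    = pnorm(x, mean, sd, lower.tail = FALSE, log.p = TRUE), # S(x)
    interval = log(pnorm(x[2], mean, sd) - pnorm(x[1], mean, sd))    # F(b) - F(a)
  )
}

contrib(0, "exact", 0, 1)  # log f(0) = -log(sqrt(2*pi))
contrib(0, "right", 0, 1)  # log S(0) = log(0.5)
```

The full log-likelihood is then just the sum of these contributions over the rows of the data frame.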

model <- likelihood_name("norm", ob_col = "x", censor_col = "censor")

Specialized models – the fast path

When a model supplies analytical derivatives, estimation can be 10-100x faster than with numerical differentiation:

# Weibull with hand-derived score and Hessian
model <- weibull_uncensored("x")

# Exponential with closed-form MLE (bypasses optim entirely)
model <- exponential_lifetime("t", censor_col = "status")
mle <- fit(model)(df)  # no initial guess needed

likelihood_contr_model – the flexible path

For complex models with different observation types, define type-specific log-likelihood contributions:

model <- likelihood_contr_model$new(
  obs_type = function(df) df$type,
  logliks = list(exact = loglik_exact, censored = loglik_censored)
)

The likelihood calculus

Every likelihood model provides a consistent set of operations:

loglik(model)       # log-likelihood function
score(model)        # score function (gradient)
hess_loglik(model)  # Hessian matrix
fim(model)          # Fisher information matrix

If you provide only loglik(), the package computes score() and hess_loglik() via numerical differentiation (Richardson extrapolation). Provide analytical versions for better speed and accuracy.
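Richardson extrapolation combines central differences at two step sizes so that the leading error term cancels. A minimal base-R sketch of the idea for a one-variable derivative (illustrative only, not the package's internal implementation):

```r
# Central difference D(h) = (f(x+h) - f(x-h)) / (2h) has error O(h^2);
# the combination (4*D(h/2) - D(h)) / 3 cancels that term, leaving O(h^4).
richardson_deriv <- function(f, x, h = 1e-3) {
  D <- function(h) (f(x + h) - f(x - h)) / (2 * h)
  (4 * D(h / 2) - D(h)) / 3
}

richardson_deriv(sin, 1)  # ~ cos(1) = 0.5403023
```

The same trick, applied coordinate-wise to the log-likelihood, yields a numerical score; applying it twice yields a numerical Hessian.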

Inference toolkit

Maximum likelihood estimation

mle <- fit(model)(df, par = c(1, 1))
coef(mle)           # point estimates
vcov(mle)           # variance-covariance matrix
confint(mle)        # Wald confidence intervals
summary(mle)        # full summary

Bootstrap

boot <- sampler(model, df = df, par = c(1, 1))(n = 1000)
se(boot)            # bootstrap standard errors
confint(boot)       # bootstrap confidence intervals (BCa)
bias(boot)          # bootstrap bias estimate
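The idea behind the bootstrap is to resample rows of df with replacement and re-estimate on each resample. A base-R sketch, with a closed-form exponential MLE (rate = 1/mean) standing in for fit() — the names here are illustrative, not the package API:

```r
set.seed(42)
df <- data.frame(t = rexp(50, rate = 2))

# Closed-form MLE for the exponential rate stands in for a full fit.
mle_rate <- function(d) 1 / mean(d$t)

# Resample rows with replacement, re-estimate, collect the replicates.
boot_rates <- replicate(1000, {
  idx <- sample(nrow(df), replace = TRUE)
  mle_rate(df[idx, , drop = FALSE])
})

sd(boot_rates)                         # bootstrap standard error
quantile(boot_rates, c(0.025, 0.975))  # percentile interval
```

sampler() packages this loop up for any likelihood model, and confint() on the result applies the BCa correction rather than the raw percentiles shown here.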

Model comparison

lrt(null_model, alt_model, df,            # likelihood ratio test
    null_par = p0, alt_par = p1, dof = 1)
aic(mle)                                   # Akaike information criterion
bic(mle)                                   # Bayesian information criterion

Fisherian likelihood inference

support(mle, theta, df, model)             # S(theta) = logL(theta) - logL(theta_hat)
relative_likelihood(mle, theta, df, model) # R(theta) = L(theta) / L(theta_hat)
likelihood_interval(mle, df, model, k = 8) # {theta : R(theta) >= 1/k}
profile_loglik(mle, df, model, param = 1)  # profile out nuisance parameters
evidence(model, df, theta1, theta2)        # log-likelihood ratio
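For a one-parameter model, the 1/k likelihood interval can be found directly by solving R(theta) = 1/k on either side of the MLE. A base-R sketch for an exponential sample (illustrative only, not how likelihood_interval() is implemented):

```r
set.seed(1)
x <- rexp(40, rate = 1.5)

loglik   <- function(rate) sum(dexp(x, rate, log = TRUE))
rate_hat <- 1 / mean(x)  # closed-form MLE

# R(theta) >= 1/8  <=>  logL(theta) - logL(theta_hat) >= log(1/8)
g <- function(rate) loglik(rate) - loglik(rate_hat) - log(1 / 8)
lower <- uniroot(g, c(1e-6, rate_hat))$root
upper <- uniroot(g, c(rate_hat, 10 * rate_hat))$root
c(lower, upper)  # rates with at least 1/8 the maximized likelihood
```

Every rate inside this interval is supported by the data to within a factor of 8 of the best-supported value — no repeated-sampling argument required.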

Ecosystem

likelihood.model interoperates with two companion packages:

Learn more
