This vignette explains how to interpret the diagnostic tools provided by trafficCAR. These diagnostics are designed to answer three questions:

- Does the model fit the data overall?
- Does spatial autocorrelation remain in the residuals after fitting?
- Can the model reproduce key summary features of the observed data?
The diagnostics are intentionally simple and global. They are meant to flag problems early, not to replace detailed model criticism.
The residuals() method for a traffic_fit object provides three types of residuals:
Raw residuals: \[ r_i = y_i - \hat{\mu}_i \]
Structured residuals (spatial effect): \[ r_i^{(s)} = \hat{x}_i \]
Unstructured residuals: \[ r_i^{(u)} = y_i - (\hat{\mu}_i - \hat{x}_i) \]
Raw residuals reflect overall lack of fit. Unstructured residuals are particularly important: they represent the portion of the data that should be approximately independent if the spatial model is adequate.
Typical usage:
r_raw <- residuals(fit, type = "raw")
r_un <- residuals(fit, type = "unstructured")
summary(r_raw)
summary(r_un)

Interpretation guidelines:

- Raw residuals that are large or strongly patterned indicate overall lack of fit.
- Unstructured residuals should behave approximately like independent noise; visible structure suggests the spatial component is not capturing the dependence adequately.
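From the definitions above, the unstructured residuals satisfy \[ r_i^{(u)} = r_i + r_i^{(s)}, \] so the three types can be cross-checked directly. A minimal sketch, assuming a "structured" residual type exists alongside the "raw" and "unstructured" types shown above:

r_s <- residuals(fit, type = "structured")  # "structured" type name is an assumption
# raw + structured should reproduce the unstructured residuals
all.equal(r_un, r_raw + r_s)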
Spatial autocorrelation in residuals is assessed using Moran's I via moran_residuals().
Interpretation depends on the residual type:

- For raw residuals, some spatial autocorrelation is expected, since the spatial effect has not been removed.
- For unstructured residuals, significant autocorrelation indicates that the spatial model has not fully captured the dependence in the data.
Permutation-based p-values should be interpreted as global diagnostics. A small p-value for unstructured residuals is a strong indication of model misspecification (e.g., missing covariates or inappropriate neighborhood structure).
If residual variance is zero, Moran's I is undefined and returned as NA. This typically occurs in saturated or near-saturated models.
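A minimal usage sketch; the argument and element names below are assumptions, so check ?moran_residuals for the actual interface:

mi <- moran_residuals(fit, type = "unstructured")  # argument name assumed
if (is.na(mi$statistic)) {                         # element names assumed
  message("Residual variance is zero; Moran's I is undefined")
} else {
  c(I = mi$statistic, p = mi$p.value)
}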
Posterior predictive checks (PPCs) compare observed summary statistics to their distribution under replicated data generated from the fitted model.
Several summary statistics of the observed data are reported alongside their replicated distributions, and each is accompanied by a posterior predictive p-value:
\[ \text{p-value} = \Pr\left( T(y^{\text{rep}}) \ge T(y) \mid y \right) \]
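In code, this p-value is simply the fraction of replicates whose statistic meets or exceeds the observed one. A minimal sketch, assuming yrep is a matrix of replicated data sets with one row per posterior draw (how replicates are extracted depends on the fitting backend):

T_obs <- mean(y)                # observed statistic, here the mean
T_rep <- apply(yrep, 1, mean)   # same statistic on each replicated data set
p_val <- mean(T_rep >= T_obs)   # posterior predictive p-value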
Interpretation guidelines:

- p-values near 0.5 indicate that the model reproduces the statistic well.
- p-values near 0 or 1 indicate that the observed statistic is extreme relative to its replicated distribution, flagging a discrepancy worth investigating.
PPCs are not formal hypothesis tests. They are descriptive tools intended to highlight discrepancies between the model and the data.
A recommended diagnostic workflow, following the order of the sections above, is:

1. Inspect summaries of the raw and unstructured residuals with residuals().
2. Test the unstructured residuals for remaining spatial autocorrelation with moran_residuals().
3. Compare observed summary statistics with their replicated distributions via posterior predictive checks.

A consolidated sketch of this workflow appears after the list.
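The residual and Moran's I calls follow the earlier examples; the PPC helper shown here is hypothetical, since the vignette does not name the function that runs the checks:

r_un <- residuals(fit, type = "unstructured")
summary(r_un)
moran_residuals(fit, type = "unstructured")  # argument name assumed
ppc(fit)                                     # hypothetical PPC helper; see the package reference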
Consistent signals across these diagnostics provide strong evidence for or against model adequacy.
The diagnostics provided here are intentionally conservative:

- They are global summaries rather than observation-level checks.
- They are designed to flag potential problems, not to certify model adequacy.
These tools are best viewed as a first line of model checking rather than a complete diagnostic framework.