SteppedPower
SteppedPower offers tools for power and sample size calculation as well as design diagnostics for longitudinal mixed model settings, with a focus on stepped wedge designs. Other implemented study design types are parallel, parallel with baseline period(s) and crossover designs. Further design types can be easily defined by the user.
Currently, normal outcomes and binomial outcomes with identity link are implemented. The following random effects can be specified: random cluster intercept, random treatment effect, random subject specific intercept and random time effect. The covariance structure can be compound symmetry or autoregressive.
This package is modularised in order to be flexible and easy to use (and hopefully easy to maintain as well). At the same time, the use of sparse matrix classes from the Matrix package makes the computation of large designs feasible.
A common approach to model the correlation in longitudinal studies is the use of random effects (Hussey and Hughes 2007; Li et al. 2020). Such a model has the form
\[y_{ijk}= T_{ij} \theta + c_i + \mu_j + e_{ijk}\]
with \(y_{ijk}\) the response of subject \(k\) in cluster \(i\) at time \(j\), \(T_{ij}\) the treatment indicator of cluster \(i\) at time \(j\), \(\theta\) the treatment effect, \(c_i\sim N(0,\tau^2)\) a random cluster intercept, \(\mu_j\) a fixed time (period) effect and \(e_{ijk}\sim N(0,\sigma^2)\) the residual error.
For power calculation, the standard deviation of random effects is assumed to be known. Let’s define \(\beta:=(\theta,\mu')'\) and \(\omega_{ijk}:=c_i+e_{ijk}\). This leads to a compact and more general notation of the above equation:
\[\begin{align} y_{ijk}&= X_{ij}\beta + \omega_{ijk}\\ \text{or, in matrix notation:} \qquad \\ y&=X\beta + \omega \end{align}\]
Here \(X\) is the corresponding design matrix and \(\omega\sim N(0,\Omega)\), where \(\Omega\) is a compound-symmetry (syn. exchangeable) covariance matrix defined by \(\tau\) and \(\sigma\). We are thus in a weighted least squares setting, so the variance of \(\hat\beta\) is
\[ \text{Var}(\hat\beta) = {(X'\Omega^{-1}X)^{-1}}\]
We can then calculate the power of a z-test
\[ \text{power} = \Phi\left(\frac{\theta_A-\theta_0}{\sqrt{\text{Var}(\hat \theta)}}- Z_{1-\frac{\alpha}{2}}\right) \]
where \(\text{Var}(\hat \theta)\) is the diagonal element of \(\text{Var}(\hat\beta)=(X'\Omega^{-1}X)^{-1}\) that corresponds to \(\hat\theta\).
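To make this concrete, here is a minimal hand-rolled sketch of the calculation in base R; the design (two clusters, three periods) and the values tau = 0.5, sigma = 1 and theta_A = 1 are illustrative assumptions, not taken from the text.
trt <- c(0, 1, 1,  0, 0, 1)                    ## treatment status per cluster-period
per <- factor(rep(1:3, times = 2))             ## period as a factor
X   <- cbind(trt, model.matrix(~ per))         ## design matrix: treatment + time
tau <- 0.5 ; sigma <- 1
blk   <- tau^2 + diag(sigma^2, 3)              ## compound-symmetry block of one cluster
Omega <- kronecker(diag(2), blk)               ## block-diagonal covariance matrix
VarBeta <- solve(t(X) %*% solve(Omega) %*% X)  ## (X' Omega^-1 X)^-1
theta_A <- 1
pnorm(theta_A / sqrt(VarBeta[1, 1]) - qnorm(0.975))  ## power of the z-test; [1,1] is the treatment coefficient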
Extensions to the above formula implemented in this package are a random treatment effect, a random time effect and a random subject-specific intercept, which leads to the following extended model formula:
\[y_{ijk}= T_{ij} (\theta + d_i) + c_i + \mu_j + t_{ij} + s_{ijk} + e_{ijk}\] with \(d_i\) a random treatment effect, \(t_{ij}\) a random time (cluster-period) effect with standard deviation \(\gamma\) and \(s_{ijk}\) a random subject-specific intercept with standard deviation \(\psi\).
A different approach to establish a covariance structure is to directly assume cluster-cell correlations, as proposed in the literature and introduced to the context of stepped wedge designs in (Li, Turner, and Preisser 2018). They use \(\boldsymbol{\alpha}:=(\alpha_0, \alpha_1, \alpha_2)\), where \(\alpha_0:=\operatorname{corr}({y_{ijk},y_{ijk'}})\) for \(k\neq k'\) defines the within-period correlation, \(\alpha_1:=\operatorname{corr}({y_{ijk},y_{ij'k'}})\) for \(j\neq j', \, k\neq k'\) defines the inter-period correlation and \(\alpha_2:=\operatorname{corr}({y_{ijk},y_{ij'k}})\) for \(j\neq j'\) defines the within-individual correlation.
One receives an easily interpretable correlation matrix, albeit at the cost of some flexibility. We can translate \(\boldsymbol{\alpha}\) into a representation with random effects, given a marginal variance \(\sigma^2_{\text{marg}}\).
\[ \begin{align*} \tau^2 &:= {\sigma^2_{\text{marg}} \, \alpha_1} \\ \gamma^2 &:= {\sigma^2_{\text{marg}} (\alpha_0-\alpha_1)} \\ \psi^2 &:= {\sigma^2_{\text{marg}} (\alpha_2-\alpha_1)} \\ \sigma^2_{\text{res}} &:= {\sigma^2_{\text{marg}} (1-\alpha_0 - \alpha_2 + \alpha_1)} \end{align*} \]
Note that \(\boldsymbol{\alpha}\) as defined above cannot specify a random treatment effect.
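As a small illustration of this translation (the alpha values and the marginal variance below are made-up numbers, chosen only so that all terms are non-negative):
alpha0 <- .05 ; alpha1 <- .025 ; alpha2 <- .2  ## assumed correlations
sigma2_marg <- 1                               ## assumed marginal variance
tau       <- sqrt(sigma2_marg * alpha1)
gamma     <- sqrt(sigma2_marg * (alpha0 - alpha1))
psi       <- sqrt(sigma2_marg * (alpha2 - alpha1))
sigma_res <- sqrt(sigma2_marg * (1 - alpha0 - alpha2 + alpha1))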
For most users, the most important function is probably glsPower. It calls several auxiliary functions, which are briefly discussed here. This section is not essential for the usage of SteppedPower, but it might be helpful for designing non-standard, user-defined settings.
glsPower is essentially just a flexible wrapper for the function compute_glsPower, which does the actual computation. compute_glsPower then calls construct_DesMat and construct_CovMat.
construct_DesMat builds the design matrix, which consists of the treatment status, usually built by construct_trtMat, and the time adjustment, usually built by construct_timeadjust. There is also the option to pass a user-defined definition of the treatment status to construct_DesMat. If not specified, the number of timepoints is guessed as the fewest number of periods (timepoints) possible with the given design, i.e. two for cross-over designs or the number of waves plus one for stepped wedge designs.
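As a quick illustration (using the same call pattern as in the showcase towards the end of this vignette), the treatment matrix of a stepped wedge design with three waves of two clusters each can be inspected directly:
DM <- construct_DesMat(c(2, 2, 2))  ## three waves of two clusters, four timepoints
DM$trtMat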
construct_CovMat builds the covariance matrix (explicitly). It uses construct_CovBlk to construct the blocks for each cluster, which are then combined into a block diagonal matrix.
The plot.glsPower method makes up to four plots, visualising the influence of cluster-period cells, the information content of cluster-period cells, the treatment matrix \(T\) and the covariance matrix (selectable by the which argument).
By default, only the first plot is returned. The information content is also plotted if the object to be plotted contains the corresponding matrix (which is usually the case). The influence plot is constructed by exploiting the approximation to a weighted least squares setting, in which the estimator \(\hat \beta\) is a linear function of the data \(y\):
\[\hat\beta = \underbrace{(X'\Omega^{-1}X)^{-1}(X'\Omega^{-1})}_{=:\text{M}}\cdot y\]
with \(X\) the design matrix and \(\Omega\) the covariance matrix as above. The matrix \(M\) gives an impression of the importance of clusters and time periods with regard to the estimated coefficients \(\hat\beta\). The first row of \(M\) corresponds to the coefficient of the treatment status, i.e. the treatment effect.
The plot.glsPower method visualises this first row of \(M\) as a matrix where rows and columns correspond to clusters and time periods, respectively. Furthermore, to give a rough comparison of importance between clusters (or between time periods), the sum of absolute weights per row (or per column) is also shown.
glsPwr <- glsPower(Cl=c(3,2,3), mu0=0, mu1=1, sigma=1, tau=.5, verbose=2)
plot(glsPwr,which=1, show_colorbar=FALSE)$WgtPlot
The information content, as described in (Kasza and Forbes 2019), looks like this:
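Assuming that which = 2 selects the information content plot (consistent with the ordering of the four plots described above), it can be produced analogously to the influence plot:
plot(glsPwr, which = 2)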
Caution: The influence plot (above) yields out-of-bag estimates for the influence of observations on \(\hat\theta\), but not for \(\text{Var}(\hat\theta)\). The information content (below) visualises the relative change in \(\text{Var}(\hat\theta)\) when one particular cluster period is omitted.
When the argument power is passed to glsPower, the sample size needed is calculated, under the assumption of equally sized clusters and periods.
glsPower(Cl=c(3,3,3), mu0=0, mu1=.2, sigma=1, tau=0, power=.8)
#> Power = 0.8074
#> Significance level (two sided) = 0.05
#> Needed N per cluster per period = 50
So in this setting, you would need \(50\) individuals per period in each cluster to achieve a target power of \(80\%\).
This might be a proof of concept rather than an example with practical relevance, but let's try to compare the means of two groups. For two groups of 10 observations each, the power of a Z-test can be calculated as follows:
glsPower(Cl=c(10,10), mu0=0,mu1=1.2,sigma=1, tau=0, N=1,
dsntype="parallel", timepoints=1)$power
#> [1] 0.7652593
## the same:
glsPower(Cl=c(1,1), mu0=0,mu1=1.2, sigma=1, tau=0, N=10,
dsntype="parallel", timepoints=1)$power
#> [1] 0.7652593
A quick note on t-tests: it is much more challenging to use SteppedPower to reproduce settings in which the variance is assumed to be unknown, most prominently the well-known t-test. This package implements some (experimental) heuristics for guessing the denominator degrees of freedom, but they yield scaled Wald tests rather than t-tests. The main difference is that the distribution under the alternative is assumed to be symmetric, whereas the t-test assumes a non-central (hence skewed) t-distribution.
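For comparison, the exact two-sample t-test power for the example above can be computed with base R; it comes out somewhat lower than the z-test power, because the t-test accounts for the variance being estimated:
power.t.test(n = 10, delta = 1.2, sd = 1, sig.level = 0.05)$power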
Periods in which no cluster switches to the intervention are specified by inserting zeros into the Cl argument, e.g. Cl=c(4,4,4,0).
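The call that creates the object mod1 referenced below is not shown here; a plausible reconstruction (mu0, mu1, sigma and tau are illustrative values) that reproduces the 3 x 5 treatment matrix displayed below would be:
mod1 <- glsPower(Cl = c(1, 1, 1, 0), mu0 = 0, mu1 = 1, sigma = 1, tau = .5, verbose = 2)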
The treatment matrix is then stored under mod1$DesignMatrix$trtMat:
0 | 1 | 1 | 1 | 1 |
0 | 0 | 1 | 1 | 1 |
0 | 0 | 0 | 1 | 1 |
The argument N defines the cluster size. N can be a scalar (the same size for all clusters and periods), a vector with one entry per cluster, or a matrix with one row per cluster and one column per timepoint.
Suppose you do not plan to observe all clusters over the whole study period. Rather, clusters that switch early to the intervention are not observed until the end, and analogously, observation starts later in clusters that switch towards the end of the study. This is sometimes called an 'incomplete SWD' (Hemming et al. 2015).
There are two ways to achieve this in SteppedPower, both via the incomplete argument. One can either pass a scalar, which then defines the number of observed periods before and after the switch from control to intervention in each cluster.
If for example the study consists of eight clusters in four sequences (i.e. five timepoints), and we observe two timepoints before and after the switch, then we receive
incompletePwr <- glsPower(Cl=rep(2,4), sigma=2, tau=.6, mu0=0,mu1=.5, N=80,
incomplete=2, verbose=2)
incompletePwr
#> Power = 0.8221
#> Significance level (two sided) = 0.05
A slightly more tedious, but more flexible way is to define a matrix where each row corresponds to either a cluster or a wave of clusters and each column corresponds to a timepoint. If a cluster is not observed at a specific timepoint, set the value in the corresponding cell to 0. For the example above, such a matrix would look like this:
TM <- toeplitz(c(1,1,0,0))
incompleteMat1 <- cbind(TM[,1:2],rep(1,4),TM[,3:4])
incompleteMat2 <- incompleteMat1[rep(1:4,each=2),]
A matrix where each row represents a wave of clusters
1 | 1 | 1 | 0 | 0 |
1 | 1 | 1 | 1 | 0 |
0 | 1 | 1 | 1 | 1 |
0 | 0 | 1 | 1 | 1 |
or each row represents a cluster
1 | 1 | 1 | 0 | 0 |
1 | 1 | 1 | 0 | 0 |
1 | 1 | 1 | 1 | 0 |
1 | 1 | 1 | 1 | 0 |
0 | 1 | 1 | 1 | 1 |
0 | 1 | 1 | 1 | 1 |
0 | 0 | 1 | 1 | 1 |
0 | 0 | 1 | 1 | 1 |
Now all that’s left to do is to plug that into the main function:
incompletePwr1 <- glsPower(Cl=rep(2,4), sigma=2, tau=.6, mu0=0, mu1=.5, N=80,
incomplete=incompleteMat1, verbose=2)
incompletePwr2 <- glsPower(Cl=rep(2,4), sigma=2, tau=.6, mu0=0, mu1=.5, N=80,
incomplete=incompleteMat2, verbose=2)
all.equal(incompletePwr,incompletePwr1)
#> [1] "Component \"DesignMatrix\": Component \"incompMat\": 'is.NA' value mismatch: 0 in current 12 in target"
all.equal(incompletePwr,incompletePwr2)
#> [1] "Component \"DesignMatrix\": Component \"incompMat\": 'is.NA' value mismatch: 0 in current 12 in target"
We can also have a quick look at the projection matrix, where we see that the clusters have a weight of exactly zero at the timepoints in which they are not observed:
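A sketch of such a plot, reusing the plotting pattern from above (with which = 1 selecting the influence/weight plot):
plot(incompletePwr1, which = 1, show_colorbar = FALSE)$WgtPlot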
The incomplete argument with matrix input also works for other design types, but presumably makes most sense in the context of stepped wedge designs.
The most common method for modelling potential secular trends is to include the time period as a factor in the analysis model (Hussey and Hughes 2007; Hemming et al. 2015; Hemming and Taljaard 2020).
For diagnostic purposes (or for bold users), some other adjustment options are implemented.
TimeAdj1 <- glsPower(Cl=rep(2,4), mu0=0, mu1=1, sigma=1, tau=0,
timeAdjust="linear", verbose=2)
TimeAdj2 <- glsPower(Cl=rep(2,4), mu0=0, mu1=1, sigma=1, tau=0,
timeAdjust="factor", verbose=2)
Design matrix of the first cluster with linear adjustment for secular trend:
trt | Intercept | time |
---|---|---|
0 | 1 | 0.2 |
1 | 1 | 0.4 |
1 | 1 | 0.6 |
1 | 1 | 0.8 |
1 | 1 | 1.0 |
Design matrix of the first cluster with categorical adjustment for secular trend:
trt | Intercept | period 2 | period 3 | period 4 | period 5 |
---|---|---|---|---|---|
0 | 1 | 0 | 0 | 0 | 0 |
1 | 1 | 1 | 0 | 0 | 0 |
1 | 1 | 0 | 1 | 0 | 0 |
1 | 1 | 0 | 0 | 1 | 0 |
1 | 1 | 0 | 0 | 0 | 1 |
In a closed cohort, the patients are observed over the whole study period. The same correlation structure arises in cross-sectional stepped wedge designs if subclusters exist (such as wards within clinics). The argument psi denotes the standard deviation of a random subject (or subcluster) specific intercept.
The power is calculated on aggregated cluster means:
Closed1 <- glsPower(mu0=0, mu1=5, Cl=rep(3,3),
sigma=5, tau=1, psi=3,
N=3, verbose=2)
Closed1
#> Power = 0.8524
#> Significance level (two sided) = 0.05
With INDIV_LVL=TRUE, the calculation is done on the individual level. This yields the same results but is far more computationally expensive and is mainly intended for diagnostic purposes.
The AR argument mentioned above specifies the autocorrelation of the random effects. It offers a convenient way to model open cohort designs. The third element of AR can be seen as the estimated probability that a subject included in period \(j\) is also included in period \(j+1\), instead of being replaced by a new subject.
Expanding on the closed cohort example above, one could assume that subjects have a 75% chance to reappear in the next study period.
Open1 <- glsPower(mu0=0, mu1=5, Cl=rep(3,3),
sigma=5, tau=1, psi=3, AR=c(1,1,.75), N=3)
Closed1$power
#> [1] 0.8524223
Open1$power
#> [1] 0.8284796
So this would result in a power loss of roughly 2.4% compared to a closed cohort design in this case.
The proposed covariance structure is then a mixture of an exchangeable structure (on the cluster level) and an autoregressive structure (on the subject level). Let's have a look at a toy example with three clusters, each consisting of three subjects. The probability that an individual reappears in two consecutive periods is estimated to be 60%. The covariance matrix (on the subject level) could be visualised as follows:
Open2Indiv <- glsPower(mu0=0, mu1=10, Cl=c(1,1,1,0),
sigma=1, tau=5, psi=10, AR=c(1,1,.60),
N=3, verbose=2, INDIV_LVL=TRUE)
plot(Open2Indiv, which=4, show_colorbar=FALSE)$CMplot
Note that you can zoom in to get a better look at the structure. The autoregressive structure on the subject level is visible in the small boxes along the main diagonal, and the block-exchangeable structure on the cluster level becomes evident.
There exist some formulae that do not explicitly construct design and covariance matrices, most notably (Kasza et al. 2020) and (Li 2020). Both include the one presented in (Hussey and Hughes 2007) as a special case.
These formulae are also implemented in SteppedPower. As a showcase, here is an example that is loosely based on the example in (Hussey and Hughes 2007):
trtMat <- construct_DesMat(c(6,6,6,6))$trtMat
mu0 <- 0.05 ; mu1 <- 0.032 ; N <- 100
tau <- .025 ; sigma <- sqrt(.041*.959)
gamma <- 0.01 ; psi <- .1 ; chi <- .5 ; AR <- .5
VarClosed_Li computes the variance of the treatment effect estimator for stepped wedge designs with an autoregressive (syn. proportional decay) structure. This variance can then be used to compute the power for the setting in question:
tmp <- VarClosed_Li(trtMat, tau=tau, psi=psi, N=N, AR=AR)
tTestPwr(mu0-mu1, se=sqrt(tmp), df=Inf)
#> [1] 0.7870855
Explicit calculation yields the same:
Kasza et al. offer an extension to open cohort designs by the use of an attrition or churn rate chi (Kasza et al. 2020). It can be understood as the estimated probability that a subject included in period \(j\) is also included in period \(j'\) for \(j\ne j'\). Note that this differs from the approach mentioned above, as it results in a mixture of exchangeable covariance matrices.
Since this differs from the way SteppedPower intends to model open cohort designs, it requires a bit of tinkering to reproduce the results of the closed formula with this package. This can be done with the help of the random time effect (gamma).
tmp <- VarClosed_Kasza(trtMat, sigma=sigma, tau=tau, gamma=gamma, psi=psi, N=N, chi=0)
tTestPwr(mu0-mu1, se=sqrt(tmp), df = Inf)
#> [1] 0.7145816
glsPower(Cl = rep(6,4), N=N, mu0=mu0, mu1=mu1, verbose=0,
sigma=sigma, tau=tau, gamma=gamma, psi=psi)
#> [1] 0.7145816
tmp <- VarClosed_Kasza(trtMat, sigma=sigma, tau=tau, gamma=gamma, psi=psi, N=N, chi=1)
tTestPwr(mu0-mu1, sqrt(tmp), df = Inf)
#> [1] 0.6451082
glsPower(Cl = rep(6,4), N=N, mu0=mu0, mu1=mu1, verbose=0,
sigma=sigma, tau=tau,
gamma=sqrt(gamma^2+psi^2/N), psi=0)
#> [1] 0.6451082
tmp <- VarClosed_Kasza(trtMat, sigma=sigma, tau=tau, gamma=gamma, psi=psi, N=N, chi=chi)
tTestPwr(mu0-mu1, sqrt(tmp), df = Inf)
#> [1] 0.6778561
glsPower(Cl = rep(6,4), N=N, mu0=mu0, mu1=mu1, verbose=0,
sigma=sigma, tau=tau,
gamma=sqrt(gamma^2+chi*psi^2/N), psi=sqrt(1-chi)*psi)
#> [1] 0.6778561
#> R version 4.4.0 (2024-04-24 ucrt)
#> Platform: x86_64-w64-mingw32/x64
#> Running under: Windows 10 x64 (build 19045)
#>
#> Matrix products: default
#>
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] plotly_4.10.4 ggplot2_3.5.1 Matrix_1.7-0 SteppedPower_0.3.5
#> [5] knitr_1.46
#>
#> loaded via a namespace (and not attached):
#> [1] sass_0.4.9 utf8_1.2.4 generics_0.1.3 tidyr_1.3.1
#> [5] lattice_0.22-6 digest_0.6.35 magrittr_2.0.3 evaluate_0.23
#> [9] grid_4.4.0 pwr_1.3-0 fastmap_1.1.1 jsonlite_1.8.8
#> [13] httr_1.4.7 purrr_1.0.2 fansi_1.0.6 crosstalk_1.2.1
#> [17] viridisLite_0.4.2 scales_1.3.0 lazyeval_0.2.2 jquerylib_0.1.4
#> [21] cli_3.6.2 rlang_1.1.3 munsell_0.5.1 withr_3.0.0
#> [25] cachem_1.0.8 yaml_2.3.8 tools_4.4.0 parallel_4.4.0
#> [29] dplyr_1.1.4 colorspace_2.1-0 Rfast_2.1.0 RcppZiggurat_0.1.6
#> [33] vctrs_0.6.5 R6_2.5.1 lifecycle_1.0.4 htmlwidgets_1.6.4
#> [37] pkgconfig_2.0.3 RcppParallel_5.1.7 pillar_1.9.0 bslib_0.7.0
#> [41] gtable_0.3.5 glue_1.7.0 data.table_1.15.4 Rcpp_1.0.12
#> [45] xfun_0.43 tibble_3.2.1 tidyselect_1.2.1 rstudioapi_0.16.0
#> [49] farver_2.1.1 htmltools_0.5.8.1 rmarkdown_2.26 compiler_4.4.0