The package contains functions to obtain the operational characteristics (power, type I error, percentage of studies proceeding to the second stage, average and quantiles of total sample sizes) of bioequivalence studies in adaptive sequential Two-Stage Designs (TSD) via simulations.
Version 0.5.4 built 2021-11-20 with R 4.1.2 (stable release on CRAN NA).
Since the many letters denoting the methods given by various authors might be confusing, I classified the methods into two ‘types’:
It should be noted that the adjusted alphas do not necessarily have to be the same in both stages. Below is a summary of the conditions used in the decision schemes of the published methods.
Golkowski et al. (2014).
Kieser and Rauch (2015).
König et al. (2014), Kieser and Rauch (2015), Wassmer and Brannath (2016), Maurer et al. (2018).
Defaults employed if not specified in the function call:
| function | theta0 | target power | usePE | Nmax | max.n | fCrit | fClower |
|---|---|---|---|---|---|---|---|
| power.tsd() | 0.95 | 0.80 | FALSE | Inf | – | – | – |
| power.tsd.fC() | 0.95 | 0.80 | FALSE | – | Inf | "PE" | 0.80 |
| power.tsd.KM() | 0.95 | 0.80 | – | 150 | – | – | – |
| power.tsd.ssr() | 0.95 | 0.80 | FALSE | – | Inf | – | – |
| power.tsd.GS() | 0.95 | – | – | – | – | "PE" | 0.80 |
| power.tsd.in() | 0.95 | 0.80 | FALSE | – | Inf | "CI" | 0.95 |
| power.tsd.p() | 0.95 | 0.80 | FALSE | Inf | – | – | – |
All functions are for a 2×2×2 crossover design except power.tsd.p(), which is for a two-group parallel design.
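A minimal call of the parallel-design function could look like the sketch below; CV and n1 are arbitrary illustrative values, all other arguments at their defaults.
# two-group parallel design; CV and n1 chosen only for illustration
power.tsd.p(CV = 0.40, n1 = 48)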
If usePE = TRUE, the point estimate from the interim analysis is used in the sample size estimation of the second stage. If the estimated total sample size exceeds max.n, the second stage will be forced to max.n - n1 (i.e., max.n is not a futility criterion).
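As an illustration of both arguments, a hypothetical call of power.tsd.fC() (which, according to the table of defaults above, supports both usePE and max.n) might look like this:
# use the interim point estimate in the stage-2 sample size estimation and
# cap the total sample size at 60 (stage 2 is then forced to at most 60 - n1)
power.tsd.fC(CV = 0.20, n1 = 12, usePE = TRUE, max.n = 60)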
The method used for interim power and sample size estimation is specified by the argument pmethod. It defaults to "nct" (approximation by the noncentral t-distribution), except in power.tsd.GS(), where the total sample size is already fixed.
The BE limits are specified by the arguments theta1 and theta2 (defaulting to 0.80 and 1.25). The number of simulations is specified by the argument nsims. It defaults to 1e5 if simulating power and to 1e6 if simulating the empiric type I error (i.e., theta0 set to the value of theta1 or theta2).
Futility Criteria in the Interim
- Nmax: The study will stop if the estimated total sample size exceeds Nmax.
- fCrit ("PE" or "CI"): The study will stop if the point estimate (for "PE") or the confidence interval (for "CI") in the interim lies outside the range fClower … 1/fClower; see the sketch after this list.
  - "PE": fClower defaults to 0.80.
  - "CI": fClower defaults to 0.925 (except in function power.tsd.in(), where it defaults to 0.95).
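A hypothetical example of such a futility criterion on the interim confidence interval (CV and n1 as in the examples further below; fCrit and fClower stated explicitly):
# stop for futility if the interim CI lies outside 0.925 ... 1/0.925
power.tsd.fC(CV = 0.20, n1 = 12, fCrit = "CI", fClower = 0.925)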
Further functions of the package: sampleN2.TOST(), interim.tsd.in(), final.tsd.in().
Before running the examples attach the library.
library(Power2Stage)
If not noted otherwise, defaults are employed.
Power estimation by the shifted central t-distribution.
power.tsd(CV = 0.20, n1 = 12, pmethod = "shifted")
# TSD with 2x2 crossover
# Method B: alpha (s1/s2) = 0.0294 0.0294
# Target power in power monitoring and sample size est. = 0.8
# Power calculation via shifted central t approx.
# CV1 and GMR = 0.95 in sample size est. used
# No futility criterion
# BE acceptance range = 0.8 ... 1.25
#
# CV = 0.2; n(stage 1) = 12; GMR = 0.95
#
# 1e+05 sims at theta0 = 0.95 (p(BE) = 'power').
# p(BE) = 0.84454
# p(BE) s1 = 0.41333
# Studies in stage 2 = 56.45%
#
# Distribution of n(total)
# - mean (range) = 20.7 (12 ... 82)
# - percentiles
# 5% 50% 95%
# 12 18 40
Explore the empiric type I error at the upper BE-limit.
power.tsd(CV = 0.20, n1 = 12, pmethod = "shifted",
theta0 = 1.25)[["pBE"]]
# [1] 0.046352
Power estimation of ‘Method C’ by the shifted central t-distribution.
power.tsd(method = "C", CV = 0.20, n1 = 12, pmethod = "shifted")
# TSD with 2x2 crossover
# Method C: alpha0 = 0.05, alpha (s1/s2) = 0.0294 0.0294
# Target power in power monitoring and sample size est. = 0.8
# Power calculation via shifted central t approx.
# CV1 and GMR = 0.95 in sample size est. used
# No futility criterion
# BE acceptance range = 0.8 ... 1.25
#
# CV = 0.2; n(stage 1) = 12; GMR = 0.95
#
# 1e+05 sims at theta0 = 0.95 (p(BE) = 'power').
# p(BE) = 0.8496
# p(BE) s1 = 0.42656
# Studies in stage 2 = 53.7%
#
# Distribution of n(total)
# - mean (range) = 20.6 (12 ... 82)
# - percentiles
# 5% 50% 95%
# 12 18 40
Slightly better than ‘Method B’ in terms of power in both stages, and fewer studies are expected to proceed to the second stage.
Explore the empiric type I error at the upper BE-limit (1 million simulations).
power.tsd(method = "C", CV = 0.20, n1 = 12, pmethod = "shifted",
theta0 = 1.25)[["pBE"]]
# [1] 0.051238
Slight inflation of the type I error (although considered negligible by the authors). However, a stronger adjustment (adjusted α 0.0280) controls the type I error.
power.tsd(method = "C", alpha = rep(0.0280, 2), CV = 0.20,
n1 = 12, pmethod = "shifted", theta0 = 1.25)[["pBE"]]
# [1] 0.049903
Data given by Potvin et al. in Example 2: 12 subjects in stage 1, PE 1.0876, CV 0.18213, all defaults of the function used.
interim.tsd.in(GMR = 0.95, GMR1 = 1.0876, CV1 = 0.18213, n1 = 12)
# TSD with 2x2 crossover
# Inverse Normal approach
# - Maximum combination test with weights for stage 1 = 0.5 0.25
# - Significance levels (s1/s2) = 0.02635 0.02635
# - Critical values (s1/s2) = 1.93741 1.93741
# - BE acceptance range = 0.8 ... 1.25
# - Observed point estimate from stage 1 is not used for SSR
# - With conditional error rates and conditional estimated target power
#
# Interim analysis after first stage
# - Derived key statistics:
# z1 = 3.10000, z2 = 1.70344
# Repeated CI = (0.92491, 1.27891)
# Median unbiased estimate = NA
# - No futility criterion met
# - Test for BE not positive (not considering any futility rule)
# - Calculated n2 = 6
# - Decision: Continue to stage 2 with 6 subjects
The second stage should be initiated with 6 subjects. Note that with interim.tsd.in(..., fCrit = "No", ssr.conditional = "no") 8 subjects would be required, as in the methods of Potvin et al. (see the call sketched below).
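Filling in the elided arguments with the stage-1 data from the interim call above (an assumption; only fCrit and ssr.conditional differ from that call):
# without conditional SSR and without a futility criterion the estimated
# stage-2 sample size is 8 (cf. Potvin et al.)
interim.tsd.in(GMR = 0.95, GMR1 = 1.0876, CV1 = 0.18213, n1 = 12,
               fCrit = "No", ssr.conditional = "no")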
The second stage is performed in 8 subjects, PE 0.9141, CV 0.25618.
final.tsd.in(GMR1 = 1.0876, CV1 = 0.18213, n1 = 12,
GMR2 = 0.9141, CV2 = 0.25618, n2 = 8)
# TSD with 2x2 crossover
# Inverse Normal approach
# - Maximum combination test with weights for stage 1 = 0.5 0.25
# - Significance levels (s1/s2) = 0.02635 0.02635
# - Critical values (s1/s2) = 1.93741 1.93741
# - BE acceptance range = 0.8 ... 1.25
#
# Final analysis after second stage
# - Derived key statistics:
# z1 = 2.87952, z2 = 2.60501
# Repeated CI = (0.87690, 1.17356)
# Median unbiased estimate = 1.0135
# - Decision: BE achieved
The study passed with a (repeated) CI of 87.69–117.36%. Although slightly more conservative, this is the same conclusion as the one based on the 94.12% CI of 88.45–116.38% reported by Potvin et al.
Performed on a Xeon E3-1245v3 3.4 GHz, 8 MB cache, 16 GB RAM, R 4.1.2 64 bit on Windows 7.
‘Method B’ (CV 0.20, n1 12).
# method power seconds
# shifted 0.84454 1.09
# nct 0.84266 1.61
# exact 0.84260 31.98
Despite being the fastest, the shifted central t-distribution should only be used to compare with published methods. The noncentral t-distribution is a good compromise between speed and accuracy and hence the default in all functions. The exact method based on Owen’s Q-function is time-consuming and therefore not recommended for validating a custom method over a narrow grid of n1/CV combinations. However, for designing a new study it is the method of choice.
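A minimal sketch to reproduce such a comparison on one’s own machine (arguments as in the ‘Method B’ example above; timings will of course differ):
# compare power and run time of the three pmethod options
for (m in c("shifted", "nct", "exact")) {
  rt <- system.time(res <- power.tsd(CV = 0.20, n1 = 12, pmethod = m))
  cat(sprintf("%-8s power %.5f  %6.2f seconds\n",
              m, res[["pBE"]], rt[["elapsed"]]))
}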
Blinded sample size re-estimation (α 0.03505, CV 0.239, n1 10, target power 0.90), 1 million simulations for the empiric type I error.
# method TIE seconds
# ls 0.049054 3.67
# shifted 0.046106 12.85
# nct 0.046319 18.24
# exact 0.046319 429.10
The crude large sample approximation (pmethod = "ls") should only be used to compare with the published method.
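Assuming the benchmark above used power.tsd.ssr() with theta0 at the upper BE limit, the empiric type I error with the large sample approximation could be explored along these lines (argument names as in the table of defaults; a sketch, not the exact benchmark code):
# blinded SSR, large sample approximation, TIE simulated at the upper BE limit
power.tsd.ssr(alpha = 0.03505, CV = 0.239, n1 = 10, targetpower = 0.90,
              pmethod = "ls", theta0 = 1.25, nsims = 1e6)[["pBE"]]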
You can install the released version of Power2Stage from CRAN with …
<- "Power2Stage"
package <- package %in% installed.packages()
inst if (length(package[!inst]) > 0) install.packages(package[!inst])
… and the development version from GitHub with
# install.packages("devtools")
devtools::install_github("Detlew/Power2Stage")
install_github() skips installation from a GitHub remote if the SHA-1 has not changed since the last install. Use force = TRUE to force installation.
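For example (the same call as above, only with force added):
devtools::install_github("Detlew/Power2Stage", force = TRUE)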
Inspect this information for reproducibility. Of particular
importance are the versions of R and the packages used to create this
workflow. It is considered good practice to record this information with
every analysis.
Version 0.5.4 built 2021-11-20 with R 4.1.2.
options(width = 80)
devtools::session_info()
# - Session info --------------------------------------------------------------
# hash: bow and arrow, play or pause button, registered
#
# setting value
# version R version 4.1.2 (2021-11-01)
# os Windows 10 x64 (build 19043)
# system x86_64, mingw32
# ui RTerm
# language en
# collate German_Germany.1252
# ctype German_Germany.1252
# tz Europe/Berlin
# date 2021-11-20
# pandoc 2.14.0.3 @ C:/Program Files/RStudio/bin/pandoc/ (via rmarkdown)
#
# - Packages -------------------------------------------------------------------
# package * version date (UTC) lib source
# cachem 1.0.6 2021-08-19 [1] CRAN (R 4.1.1)
# callr 3.7.0 2021-04-20 [1] CRAN (R 4.1.1)
# cli 3.1.0 2021-10-27 [1] CRAN (R 4.1.1)
# crayon 1.4.2 2021-10-29 [1] CRAN (R 4.1.1)
# cubature 2.0.4.2 2021-05-13 [1] CRAN (R 4.1.0)
# desc 1.4.0 2021-09-28 [1] CRAN (R 4.1.1)
# devtools 2.4.2 2021-06-07 [1] CRAN (R 4.1.0)
# digest 0.6.28 2021-09-23 [1] CRAN (R 4.1.1)
# ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.1.1)
# evaluate 0.14 2019-05-28 [1] CRAN (R 4.1.1)
# fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.1.1)
# fs 1.5.0 2020-07-31 [1] CRAN (R 4.1.1)
# glue 1.4.2 2020-08-27 [1] CRAN (R 4.1.1)
# htmltools 0.5.2 2021-08-25 [1] CRAN (R 4.1.1)
# knitr 1.36 2021-09-29 [1] CRAN (R 4.1.1)
# lifecycle 1.0.1 2021-09-24 [1] CRAN (R 4.1.1)
# magrittr 2.0.1 2020-11-17 [1] CRAN (R 4.1.1)
# memoise 2.0.0 2021-01-26 [1] CRAN (R 4.1.1)
# mvtnorm 1.1-3 2021-10-08 [1] CRAN (R 4.1.1)
# pkgbuild 1.2.0 2020-12-15 [1] CRAN (R 4.1.1)
# pkgload 1.2.3 2021-10-13 [1] CRAN (R 4.1.1)
# Power2Stage * 0.5-4 2021-11-20 [1] local
# PowerTOST 1.5-3 2021-01-18 [1] CRAN (R 4.1.1)
# prettyunits 1.1.1 2020-01-24 [1] CRAN (R 4.1.1)
# processx 3.5.2 2021-04-30 [1] CRAN (R 4.1.1)
# ps 1.6.0 2021-02-28 [1] CRAN (R 4.1.1)
# purrr 0.3.4 2020-04-17 [1] CRAN (R 4.1.1)
# R6 2.5.1 2021-08-19 [1] CRAN (R 4.1.1)
# Rcpp 1.0.7 2021-07-07 [1] CRAN (R 4.1.1)
# remotes 2.4.1 2021-09-29 [1] CRAN (R 4.1.1)
# rlang 0.4.12 2021-10-18 [1] CRAN (R 4.1.1)
# rmarkdown 2.11 2021-09-14 [1] CRAN (R 4.1.1)
# rprojroot 2.0.2 2020-11-15 [1] CRAN (R 4.1.1)
# rstudioapi 0.13 2020-11-12 [1] CRAN (R 4.1.1)
# sessioninfo 1.2.1 2021-11-02 [1] CRAN (R 4.1.2)
# stringi 1.7.5 2021-10-04 [1] CRAN (R 4.1.1)
# stringr 1.4.0 2019-02-10 [1] CRAN (R 4.1.1)
# TeachingDemos 2.12 2020-04-07 [1] CRAN (R 4.1.1)
# testthat 3.1.0 2021-10-04 [1] CRAN (R 4.1.1)
# usethis 2.1.3 2021-10-27 [1] CRAN (R 4.1.1)
# withr 2.4.2 2021-04-18 [1] CRAN (R 4.1.1)
# xfun 0.27 2021-10-18 [1] CRAN (R 4.1.1)
# yaml 2.2.1 2020-02-01 [1] CRAN (R 4.1.1)
#
# [1] C:/Program Files/R/library
# [2] C:/Program Files/R/R-4.1.2/library
#
# ------------------------------------------------------------------------------