This article explores approximating a design that uses the exact binomial method of Chan and Bohidar (1998) with a time-to-event design derived by the method of Lachin and Foulkes (1986). This allows spending functions to be used to derive boundaries for the exact method. The time-to-event design not only sets boundaries for the Chan and Bohidar (1998) method, but also allows specification of enrollment duration and study duration to determine the enrollment rates and sample size required. This vignette also illustrates the concept of super-superiority often used in prevention studies. Finally, since this procedure is new as of November 2023, we suggest checks and potential revisions to spending function choices to optimize design boundaries.
We begin with the assumption that we will require a large sample size due to an endpoint with a small incidence rate. This could apply to a vaccine study or other prevention study in which a relatively small number of events is expected.
Paralleling the notation of Chan and Bohidar (1998), we assume \(N_C, P_C\) to be binomial sample size and probability of an event for each participant assigned to control; for the experimental treatment group, these are labeled \(N_E, P_E\). Vaccine efficacy is defined as
\[\pi = 1 - P_E/P_C.\]
The parameter \(\pi\) is often labeled as VE for vaccine efficacy. Taking into account the randomization ratio \(r\) (experimental / control) the approximate probability that any given event is in the experimental group is
\[ \begin{aligned} p &= rP_E/(rP_E+ P_C)\\ &= r/(r + P_C/P_E)\\ &= r/(r + (1-\pi)^{-1}). \end{aligned} \]
As noted, this approximation is dependent on a large sample size and small probability of events. The above can be inverted to obtain
\[\pi = 1 - \frac{1}{r(1/p-1)}. \]
For our example of interest, we begin with an alternate hypothesis vaccine efficacy \(\pi_1 = 0.7\) and experimental:control randomization ratio \(r=3\). This converts to an alternate hypothesis (approximate) probability that any event is in the experimental group of
#> [1] 0.4736842
We use the inversion formula to revert this to \(\pi_1 = 0.7\)
#> [1] 0.7
Letting the null hypothesis vaccine efficacy be \(\pi_0 = 0.3\), our exact binomial null hypothesis probability that an event is in the experimental group is
#> [1] 0.6774194
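The conversions above can be reproduced directly from the formulas; the helper function names below are ours for illustration, not part of any package:

```r
ratio <- 3 # experimental:control randomization ratio

# P(event is in experimental group) given vaccine efficacy
ve_to_p <- function(ve, r) r / (r + 1 / (1 - ve))
# Inversion back to vaccine efficacy
p_to_ve <- function(p, r) 1 - 1 / (r * (1 / p - 1))

ve_to_p(0.7, ratio) # alternate hypothesis: 0.4736842
ve_to_p(0.3, ratio) # null hypothesis: 0.6774194
p_to_ve(ve_to_p(0.7, ratio), ratio) # recovers 0.7
```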
We also translate several vaccine efficacy values to proportion of events in the experimental group:
# Packages used for table display (assumed loaded once in the vignette)
library(dplyr)
library(gt)
library(tibble)

ratio <- 3 # experimental:control randomization ratio (also set with the design parameters below)
ve <- c(.5, .6, .65, .7, .75, .8)
prob_experimental <- ratio / (ratio + 1 / (1 - ve))
tibble(VE = ve, "P(Experimental)" = prob_experimental) %>%
  gt() %>%
  tab_options(data_row.padding = px(1)) %>%
  fmt_number(columns = 2, decimals = 3)
VE | P(Experimental) |
---|---|
0.50 | 0.600 |
0.60 | 0.545 |
0.65 | 0.512 |
0.70 | 0.474 |
0.75 | 0.429 |
0.80 | 0.375 |
Chapter 12 of Jennison and Turnbull (2000) walks through how to design and analyze such a study using a fixed or group sequential design. The time-to-event approximation provides an initial approximation to computing bounds; more importantly, it provides sample size and study duration approximations that are not given by the Jennison and Turnbull approach.
For a time-to-event formulation with exponential failure rates \(\lambda_C\) for control and \(\lambda_E\) for experimental group assigned participants, we would define
\[\pi = 1 - \lambda_E / \lambda_C\]
which is 1 minus the hazard ratio often used in time-to-event studies. In the following we examine how closely the time-to-event method using asymptotic distributional assumptions can approximate an appropriate exact binomial design. We will also define a planned number of events at each of \(K\) planned analyses by \(D_k, 1\le k\le K\).
We begin by specifying parameters. The `alpha` and `beta` parameters will not be met exactly due to the discrete group sequential probability calculations performed. The current version includes only designs that use non-binding futility bounds or no futility bounds. The design is generated by first using asymptotic theory for a time-to-event design with specified spending functions. This design is then adapted to a design using the exact binomial method of Chan and Bohidar (1998). The randomization ratio (experimental/control) was assumed to be 3:1 as in the Logunov et al. (2021) trial.
alpha <- 0.025 # Type I error
beta <- 0.1 # Type II error (1 - power)
k <- 3 # number of analyses in group sequential design
timing <- c(.45, .7) # Relative timing of interim analyses compared to final
sfu <- sfHSD # Efficacy bound spending function (Hwang-Shih-DeCani)
sfupar <- -3 # Parameter for efficacy spending function
sfl <- sfHSD # Futility bound spending function (Hwang-Shih-DeCani)
sflpar <- -3 # Futility bound spending function parameter
timename <- "Month" # Time unit
failRate <- .002 # Exponential failure rate
dropoutRate <- .0001 # Exponential dropout rate
enrollDuration <- 8 # Enrollment duration
trialDuration <- 24 # Planned trial duration
VE1 <- .7 # Alternate hypothesis vaccine efficacy
VE0 <- .3 # Null hypothesis vaccine efficacy
ratio <- 3 # Experimental/Control enrollment ratio
test.type <- 4 # 1 for one-sided, 4 for non-binding futility
Now we generate the design. If resulting alpha and beta do not satisfy requirements, adjust parameters above until a satisfactory result is obtained.
# Derive Group Sequential Design
# This determines final sample size
x <- gsSurv(
k = k, test.type = test.type, alpha = alpha, beta = beta, timing = timing,
sfu = sfu, sfupar = sfupar, sfl = sfl, sflpar = sflpar,
lambdaC = failRate, eta = dropoutRate,
# Translate vaccine efficacy to HR
hr = 1 - VE1, hr0 = 1 - VE0,
R = enrollDuration, T = trialDuration,
minfup = trialDuration - enrollDuration, ratio = ratio
)
Now we convert this to a design with integer event counts at
analyses. This is achieved by rounding interim analysis event counts
from the above design and rounding up the final analysis event count.
This will result in a slight change in event fractions at interim
analyses as well as a slight change from the targeted 90% power. We now
explain the rationale behind the spending function choices. Recall that
the hazard ratio (HR) is 1 minus the VE. The ~HR at bound
represents the approximate hazard ratio required to cross a bound. Thus,
small HR’s at the interim analyses along with small cumulative \(\alpha\)-spending suggest crossing an
interim efficacy bound would provide a result strong enough to
potentially justify the new treatment. The hazard ratio of ~0.69 (VE ~
0.31) for the interim 1 futility bound means that the efficacy trend
would be essentially no better than the null hypothesis if the futility
bound were crossed. The second analysis futility bound with approximate
VE of 0.5 would be worth discussion with a data monitoring committee as
well as other planners for the trial; a custom spending function could
be used to set both the first and second interim bounds to desired
levels.
xx <- toInteger(x)
gsBoundSummary(xx,
tdigits = 1, logdelta = TRUE, deltaname = "HR", Nname = "Events",
exclude = c("B-value", "CP", "CP H1", "PP")
) %>%
gt() %>%
tab_header(
title = "Initial group sequential approximation",
subtitle = "Integer event counts at analyses"
) %>%
tab_options(data_row.padding = px(1))
Initial group sequential approximation
Integer event counts at analyses

Analysis | Value | Efficacy | Futility
---|---|---|---
IA 1: 44% | Z | 2.6864 | 0.0424
Events: 3606 | p (1-sided) | 0.0036 | 0.4831
Events: 30 | ~HR at bound | 0.2255 | 0.6876
Month: 12.8 | Spending | 0.0036 | 0.0144
  | P(Cross) if HR=0.7 | 0.0036 | 0.5169
  | P(Cross) if HR=0.3 | 0.3231 | 0.0144
IA 2: 69% | Z | 2.4494 | 0.9143
Events: 3606 | p (1-sided) | 0.0072 | 0.1803
Events: 47 | ~HR at bound | 0.3067 | 0.5144
Month: 17.9 | Spending | 0.0055 | 0.0220
  | P(Cross) if HR=0.7 | 0.0091 | 0.8287
  | P(Cross) if HR=0.3 | 0.6455 | 0.0364
Final | Z | 2.0296 | 2.0296
Events: 3606 | p (1-sided) | 0.0212 | 0.0212
Events: 68 | ~HR at bound | 0.3965 | 0.3965
Month: 24.2 | Spending | 0.0159 | 0.0636
  | P(Cross) if HR=0.7 | 0.0239 | 0.9761
  | P(Cross) if HR=0.3 | 0.9022 | 0.0978
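The `~HR at bound` values can be back-calculated from the Z bounds: with \(D\) events and randomization ratio \(r\), the asymptotic standard error of \(\log(\text{HR})\) is approximately \(\sqrt{(1+r)^2/(rD)}\). A base-R sketch (our reconstruction, not package code):

```r
hr0 <- 0.7 # null hypothesis hazard ratio (1 - VE0)
ratio <- 3 # experimental/control randomization ratio

# Approximate HR needed to reach Z-value z at an analysis with d events
hr_at_bound <- function(z, d, r, hr0) {
  se_log_hr <- sqrt((1 + r)^2 / (r * d)) # asymptotic SE of log(HR)
  hr0 * exp(-z * se_log_hr)
}

hr_at_bound(2.6864, 30, ratio, hr0) # IA 1 efficacy bound: ~0.2255
hr_at_bound(0.0424, 30, ratio, hr0) # IA 1 futility bound: ~0.6876
```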
A textual summary for the design is:
Asymmetric two-sided group sequential design with non-binding futility bound, 3 analyses, time-to-event outcome with sample size 3606 and 68 events required, 90 percent power, 2.5 percent (1-sided) Type I error to detect a hazard ratio of 0.3 with a null hypothesis hazard ratio of 0.7. Enrollment and total study durations are assumed to be 8 and 24.2 months, respectively. Efficacy bounds derived using a Hwang-Shih-DeCani spending function with gamma = -3. Futility bounds derived using a Hwang-Shih-DeCani spending function with gamma = -3.
We now convert this to an exact binomial design. The bound counts are described in the initial table displayed. `N` is the total event count and `a` is the maximum number of events in the experimental group to cross the efficacy bound. For example, if 12 or fewer of 30 events at interim 1 are in the experimental group, the efficacy bound has been crossed. The futility bound is in `b`; at the first interim, if 21 or more of 30 total events are in the experimental group, then the futility bound would be crossed and the alternate hypothesis could be rejected. The second and third tables below give probabilities of crossing the upper (futility) and lower (efficacy) bounds under the null (`theta` = 0.6774) and alternate (`theta` = 0.4737) hypotheses, respectively; these calculations are done under exact binomial distributional assumptions.
We produce a summary table. Code for this is provided in the R markdown for this vignette provided with the package. This combines information from the time-to-event design for calendar timing of analyses (Time) and expected sample size at each analysis (N) along with bounds and operating characteristics for the design.
Design Bounds and Operating Characteristics

Analysis | Time | N | Cases | Success cases¹ | Futility cases¹ | Efficacy VE² | Futility VE² | alpha³,⁵ | beta³ | VE 50%⁴ | VE 60%⁴ | VE 65%⁴ | VE 70%⁴ | VE 75%⁴ | VE 80%⁴
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 12.9 | 3602 | 30 | 12 | 21 | 0.78 | 0.22 | 0.0016 | 0.0103 | 0.02 | 0.08 | 0.15 | 0.27 | 0.45 | 0.69
2 | 17.9 | 3602 | 47 | 23 | 30 | 0.68 | 0.41 | 0.0064 | 0.0223 | 0.09 | 0.27 | 0.44 | 0.65 | 0.84 | 0.96
3 | 24.0 | 3602 | 68 | 37 | 38 | 0.60 | 0.58 | 0.0174 | 0.0994 | 0.21 | 0.54 | 0.74 | 0.90 | 0.98 | 1.00

¹ Experimental case counts at the success and futility bounds; counts between the two bounds do not stop the trial
² Exact vaccine efficacy required to cross bound
³ Cumulative spending at each analysis
⁴ Cumulative power at each analysis by underlying vaccine efficacy
⁵ alpha-spending for efficacy ignores the non-binding futility bound
The initial approximation of bounds for the exact binomial design was generated from the time-to-event design as follows. First, we computed nominal one-sided p-values under the null hypothesis for the efficacy bounds using the normal approximation underlying the time-to-event design:
#> [1] 0.003610924 0.007155713 0.021200261
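These nominal p-values are simply one-sided normal tail probabilities of the efficacy Z bounds; in the vignette they would come from the design object, but they can be checked from the printed Z values:

```r
# Efficacy Z bounds from the group sequential design summary above
z_efficacy <- c(2.6864, 2.4494, 2.0296)
# One-sided nominal p-values: approximately 0.00361 0.00716 0.02120
pnorm(-z_efficacy)
```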
Then we took the inverse binomial distribution for these p-values assuming the targeted total number of cases to obtain:
#> [1] 12 23 37
These are actually the same as the final efficacy bounds computed above:
#> [1] 12 23 37
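One way to reconstruct this inverse binomial step in base R (our sketch, not the package's internal code): `qbinom()` returns the smallest count whose cumulative probability reaches the given level, so subtracting 1 gives the largest count whose cumulative binomial probability is no greater than the nominal p-value.

```r
p0 <- 3 / (3 + 1 / 0.7)    # null hypothesis P(experimental): 0.6774194
events <- c(30, 47, 68)    # planned events at each analysis
nominal_p <- c(0.003611, 0.007156, 0.021200) # nominal efficacy p-values

# Largest experimental-event counts with cumulative binomial tail
# probability at most the nominal level under the null hypothesis
qbinom(nominal_p, size = events, prob = p0) - 1
#> [1] 12 23 37
```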
This and the initial approximation for the futility bound are returned from `toBinomialExact()`:
#> [1] 12 23 37
#> [1] 20 29 38
For the futility bound, only a slight adjustment was required for the final bound:
#> [1] 21 30 38
The vaccine efficacy at the bounds should be checked to see whether the evidence would be convincing enough to be accepted as a clinically relevant benefit in addition to a statistically significant one (efficacy bounds), or a less than relevant benefit (futility bounds).
Next, we look at \(\alpha\)- and \(\beta\)-spending for the time-to-event design to compare to the exact bounds, which are restricted by the discrete possible counts at each analysis. For both \(\alpha\)- and \(\beta\)-spending, the exact binomial design spends no more at each analysis than allowed by the corresponding spending function. We also confirm that changing any of the exact bounds by 1 would exceed the targeted spending.
# Exact design cumulative alpha-spending at efficacy bounds
# (non-binding)
nb <- gsBinomialExact(
  k = xb$k, theta = xb$theta, n.I = xb$n.I,
  b = xb$n.I + 1, a = xb$lower$bound
)
cumsum(nb$lower$prob[, 1])
#> Analysis 1 Analysis 2 Analysis 3
#> 0.001619438 0.006447739 0.017397214
# Targeted cumulative alpha-spending from the design:
#> [1] 0.003610924 0.009107476 0.025000000
Above we see that the achieved \(\alpha\)-spending ignoring the futility bound is controlled at the targeted level at each analysis; because the bounds are restricted to integer counts, spending falls below the target. For this particular example, we show below that increasing any efficacy bound by 1 not only exceeds the targeted cumulative spending at that analysis (diagonal elements), but also exceeds the targeted total spending of 0.025 at the final analysis (final column). The latter property will not always hold. It is worth choosing your spending function carefully (in this case, via the spending function parameter) to ensure cumulative spending for the exact design is close to the targeted \(\alpha\). For this case, had the Hwang-Shih-DeCani efficacy spending parameter been chosen as \(\gamma=-4\) instead of \(\gamma=-3\), the second property would not hold.
# Check that increasing any bound goes above cumulative spend
excess_alpha_spend <- matrix(0, nrow = nb$k, ncol=nb$k)
for(i in 1:xb$k){
a <- xb$lower$bound
a[i] <- a[i] + 1
excess_alpha_spend[i,] <-
cumsum(gsBinomialExact(k = xb$k, theta = xb$theta, n.I = xb$n.I, b= xb$n.I + 1, a = a)$lower$prob[,1])
}
excess_alpha_spend
#> [,1] [,2] [,3]
#> [1,] 0.004979222 0.008826964 0.01933025
#> [2,] 0.001619438 0.013174419 0.02171228
#> [3,] 0.001619438 0.006447739 0.02856667
# Exact design cumulative beta-spending at futility bounds:
#> Analysis 1 Analysis 2 Analysis 3
#> 0.01033516 0.02225609 0.09941943
# Targeted cumulative beta-spending from the design:
#> [1] 0.0144437 0.0364299 0.1000000
Since the futility bound at the final analysis is only 1 plus the efficacy bound, it cannot be lowered in the following. However, we see that changing either interim futility bound by 1 would exceed the targeted interim spending (diagonal elements) and the total Type II error spending (third column). Again, the spending function had to be carefully chosen to ensure the second of these properties; e.g., with the O'Brien-Fleming-like spending function the second property did not hold.
# Check that increasing any bound goes above cumulative spend
excess_beta_spend <- matrix(0, nrow = nb$k - 1, ncol=nb$k)
for(i in 1:(xb$k - 1)){
b <- xb$upper$bound
b[i] <- b[i] - 1
excess_beta_spend[i,] <-
cumsum(as.numeric(gsBinomialExact(k = xb$k, theta = xb$theta, n.I = xb$n.I, b = b, a = xb$lower$bound)$upper$prob[,2]))
}
excess_beta_spend
#> [,1] [,2] [,3]
#> [1,] 0.02618462 0.03428681 0.1059802
#> [2,] 0.01033516 0.03746257 0.1034467
We now use the interim analysis outcomes from the SPUTNIK trial to show how to update the bounds above when observed event counts differ from plan. The first database lock was on November 18, 2020 and included 20 endpoint cases. The second database lock, with 78 endpoint cases, was one week later on November 24, 2020. These counts are slightly different from the targeted event counts above, so bounds must be updated using the trial's spending functions based on the altered information fractions compared to plan; see the table below. At the second analysis of 78 endpoint cases, 16 were in the experimental group, comfortably crossing the efficacy bound, which required 44 or fewer experimental cases. Given the rapid accrual of endpoints, the futility bound would likely have been irrelevant at the first interim. Since Type I error assumed a non-binding futility bound, that bound could be ignored for decision-making purposes without inflating Type I error. The VE of −0.33 at the first futility bound favored placebo. Since the targeted event count at the second analysis was overrun, we exceed the targeted power and never reach the allowed \(\beta\)-spending for Type II error. Note that for this table, the expected sample size and calendar timing are no longer needed.
We hide the code to produce the table; this is available in package vignette code.
Updated Bounds for Actual Analyses from SPUTNIK trial

Analysis | Cases | Success cases¹ | Futility cases¹ | Efficacy VE² | Futility VE² | alpha³,⁵ | beta³ | VE 65%⁴ | VE 75%⁴ | VE 85%⁴
---|---|---|---|---|---|---|---|---|---|---
1 | 20 | 6 | 16 | 0.86 | −0.33 | 0.0006 | 0.0030 | 0.05 | 0.18 | 0.57
2 | 78 | 44 | 45 | 0.57 | 0.55 | 0.0239 | 0.0450 | 0.85 | 0.99 | 1.00

¹ Experimental case counts; counts between success and futility bounds do not stop the trial
² Exact vaccine efficacy required to cross bound
³ Cumulative spending at each analysis
⁴ Cumulative power at each analysis by underlying vaccine efficacy
⁵ Efficacy spending ignores non-binding futility bound
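As a rough check on the observed result, the inversion formula from the beginning of the vignette converts the observed case split at the second analysis (16 of 78 cases in the experimental group) into an approximate vaccine efficacy estimate. This is our illustration only, not the estimator used in the trial publication:

```r
ratio <- 3       # experimental:control randomization ratio
p_hat <- 16 / 78 # observed proportion of cases in experimental group
# Invert p = r / (r + (1 - VE)^-1) to estimate VE
ve_hat <- 1 - 1 / (ratio * (1 / p_hat - 1))
ve_hat
#> [1] 0.9139785
```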
We have provided an extended example to show that a Chan and Bohidar (1998) exact binomial design using spending function bounds can be derived in a two-step process that delivers sample size and bounds by 1) deriving a related time-to-event design using asymptotic methods and then 2) converting it to an exact binomial design. Adjustments were made to the targeted Type I and Type II error probabilities in the asymptotic approximation to ensure the exact binomial Type I and Type II error rates were achieved. The method seems a reasonable and straightforward approach to developing a complete design that accounts for the impact of enrollment, failure rates, dropout rates, and trial duration.