For time-to-event data, the piecewise constant hazard function for k different constant hazards is
\(\begin{eqnarray} h(t) &=& \begin{cases} \lambda_1 & 0\leq t<\tau_1\\ \lambda_2 & \tau_1\leq t <\tau_2\\ \vdots & \vdots \\ \lambda_{k} & t\geq\tau_{k - 1} \end{cases} \end{eqnarray}\).
The survival function \(S(t)\), cumulative distribution function \(F(t)\), and cumulative hazard function \(H(t)\) are related through \(S(t) = \exp(-H(t)) = 1 - F(t)\). If \(U \sim \mathrm{Unif}(0,1)\), then setting \(F(X) = U\) is equivalent to \(1-\exp(-H(X)) = U\), so \(X=H^{-1}(-\log(1-U))\). Note that \(-\log(1-U) \sim \mathrm{Exp}(1)\). Thus, we can draw \(x\) from a standard exponential distribution and substitute it into the inverse cumulative hazard function.
\(\begin{eqnarray} H^{-1}(t)&=&\begin{cases} \frac{t}{\lambda_1} & 0\leq t < \lambda_1\tau_1\\ \frac{t - \lambda_1\tau_1}{\lambda_2}+\tau_1 & \lambda_1\tau_1 \leq t < \lambda_1\tau_1 + \lambda_2(\tau_2-\tau_1)\\ \vdots & \vdots \\ \frac{t - \left(\lambda_1\tau_1 + \sum_{i = 2}^{k - 2} \lambda_i (\tau_i - \tau_{i - 1})\right)}{\lambda_{k - 1}}+\tau_{k - 2} & \lambda_1\tau_1 + \sum_{i = 2}^{k - 2} \lambda_i (\tau_i - \tau_{i - 1}) \leq t < \lambda_1\tau_1 + \sum_{i = 2}^{k - 1} \lambda_i (\tau_i - \tau_{i - 1})\\ \frac{t - \left(\lambda_1\tau_1 + \sum_{i = 2}^{k - 1} \lambda_i (\tau_i - \tau_{i - 1})\right)}{\lambda_k}+\tau_{k - 1} & t \geq \lambda_1\tau_1 + \sum_{i = 2}^{k - 1} \lambda_i (\tau_i - \tau_{i - 1})\\ \end{cases} \end{eqnarray}\)
The pw_exp_sim function simulates time-to-event outcomes using a piecewise constant hazard function.
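The inverse-transform scheme above can be sketched in base R. This is an illustrative helper only, not necessarily how the package's pw_exp_sim function is implemented:

```r
# Illustrative base-R sketch of the inverse-transform scheme above;
# not necessarily the package's pw_exp_sim() implementation.
rpwexp <- function(n, hazard, cutpoint) {
  # hazard: c(lambda_1, ..., lambda_k); cutpoint: c(tau_1, ..., tau_{k-1})
  x <- rexp(n)                                   # -log(1 - U) ~ Exp(1)
  # cumulative hazard at each cutpoint, H(tau_j)
  H_tau <- cumsum(hazard[-length(hazard)] * diff(c(0, cutpoint)))
  sapply(x, function(t) {
    j <- findInterval(t, H_tau) + 1              # piece containing t
    (t - c(0, H_tau)[j]) / hazard[j] + c(0, cutpoint)[j]
  })
}

set.seed(42)
times <- rpwexp(1000, hazard = c(0.012, 0.008), cutpoint = 30)
summary(times)
```

With a single hazard and no cutpoints, this reduces to an ordinary exponential draw, which provides a quick sanity check.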
Let \(d_{ij}\) and \(t_{ij}\) denote the event indicator and the event (or censoring) time for the \(i\)th subject in the \(j\)th interval of the current data, respectively. Let \(a_0\) and \(b_0\) denote the shape and rate parameters of a gamma distribution, respectively. Then, under vague (flat) priors, the posterior distribution of the \(j\)th piecewise hazard rate for the current data is \[\lambda_{j} \sim \mathcal{G}amma\left(a_0+D_j,\,b_0+T_j\right),\] where \(D_j=\sum_id_{ij}\) and \(T_j=\sum_it_{ij}\). When historical data are present, let \(d_{0ij}\) and \(t_{0ij}\) denote the event indicator and the event (or censoring) time for the \(i\)th subject in the \(j\)th interval of the historical data, respectively. The weight given to the historical data in the study design and analysis is denoted by \(\hat\alpha\) and is computed using the discount prior approach. The posterior distribution of the \(j\)th piecewise hazard rate incorporating the historical data is \[\lambda_{j} \sim \mathcal{G}amma\left(a_0+D_j + \hat\alpha D_{0j},\,b_0+T_j + \hat\alpha T_{0j}\right),\] where \(D_j=\sum_id_{ij}\), \(T_j=\sum_it_{ij}\), \(D_{0j}=\sum_id_{0ij}\), and \(T_{0j}=\sum_it_{0ij}\). Even though there is a closed-form solution for the difference of gamma-distributed random variables, we use Monte Carlo simulation to estimate the treatment difference. The posterior chain of the log-hazard difference between the treatment and control groups is estimated as \(\log\tilde{\lambda}_{jT} - \log\tilde{\lambda}_{jC}\), where \(\tilde{\lambda}_{jT}\) is the posterior chain of the hazard in the treatment group and \(\tilde{\lambda}_{jC}\) is the posterior chain of the hazard in the control group.
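The Monte Carlo comparison just described can be sketched in base R. The event counts and follow-up totals below are hypothetical, chosen only to make the sketch runnable:

```r
# Illustrative Monte Carlo sketch of the gamma-posterior comparison above,
# using hypothetical event counts and follow-up totals for one interval j.
a0 <- 0.1; b0 <- 0.1          # vague gamma prior (shape, rate)
D_T <- 17; T_T <- 900         # events / total follow-up, treatment (hypothetical)
D_C <- 21; T_C <- 870         # events / total follow-up, control (hypothetical)

set.seed(123)
lambda_T <- rgamma(10000, shape = a0 + D_T, rate = b0 + T_T)  # posterior chain, treatment
lambda_C <- rgamma(10000, shape = a0 + D_C, rate = b0 + T_C)  # posterior chain, control
log_hr <- log(lambda_T) - log(lambda_C)  # posterior chain of the log-hazard difference

mean(log_hr < 0)  # posterior probability that the treatment hazard is lower
```

The same draws would simply gain \(\hat\alpha D_{0j}\) and \(\hat\alpha T_{0j}\) in the shape and rate arguments when historical data are discounted in.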
Unlike traditional R functions, the bayesCT package relies on inputs piped through a set of wrapper functions. All the wrapper functions are illustrated below, along with details on their arguments for simulation and analysis.
Key arguments of these wrapper functions include the following:

- interim_look: each value must be smaller than total_sample_size.
- lambda (in enrollment_rate): the enrollment rate, specified as patients per day; can be a scalar or a vector. The default is 0.3, i.e. 0.3 patients per day.
- discount_function: allowed values are weibull, scaledweibull, and identity. The scaledweibull discount function scales the output of the Weibull CDF to have a maximum value of 1; the identity discount function uses the posterior probability directly as the discount weight. The default value is "identity".
- method: the value mc estimates alpha for each Monte Carlo iteration; the alternate value fixed estimates alpha once and holds it fixed throughout the analysis. See the bdpsurvival vignette (vignette("bdpsurvival-vignette", package = "bayesDP")) for more details.
- type: use type = "survival" for a trial with time-to-event outcomes.

In the following sections, we discuss the design of adaptive trials using bayesCT for time-to-event outcomes. We illustrate examples of one-arm and two-arm trials using the wrapper functions described above.
In the example below, we will illustrate how to compute power, type 1 error, and other clinical trial characteristics for an objective performance criterion (OPC) trial with hazard rates and hypothesis described as follows, \[H_0: P(S_{treatment} > 70) \leq 0.5 \qquad H_A: P(S_{treatment} > 70) > 0.5.\]
The most important wrapper functions are study_details and survival_outcome (especially since there are no default values).
The piecewise constant hazard rates are simulated using 0.012 and 0.008 with the cutpoint at time 30. The total sample size is 200 with a study length of 70 days. A 10% loss to follow-up is assumed. Based on this information, the adaptive trial is simulated 2 times to obtain the following output (NOTE: to reproduce the vignette quickly, we reduce the number of simulations to 2; in practice you should use a much larger value, e.g., 10000). The aforementioned inputs were chosen for illustration purposes only.
value <- survival_outcome(hazard_treatment = c(0.012, 0.008),
cutpoint = 30) %>%
study_details(total_sample_size = 200,
study_period = 70,
interim_look = NULL,
prop_loss_to_followup = 0.1)
# Simulate 2 trials
output <- value %>%
simulate(no_of_sim = 2)
# Structure of the simulation output
str(output)
#> List of 8
#> $ input :List of 5
#> ..$ hazard_treatment : num [1:2] 0.012 0.008
#> ..$ cutpoint : num 30
#> ..$ N_total : num 200
#> ..$ EndofStudy : num 70
#> ..$ prop_loss_to_followup: num 0.1
#> $ power :'data.frame': 1 obs. of 2 variables:
#> ..$ interim_looks: num 200
#> ..$ power : num 0
#> $ type1_error : num 0
#> $ est_final : num [1:2] 0.528 0.5
#> $ post_prob_accept_alternative: num [1:2] 0.782 0.498
#> $ N_enrolled : num [1:2] 200 200
#> $ stop_expect_success : num [1:2] 0 0
#> $ stop_futility : num [1:2] 0 0
To allow for early stopping for success or futility, we can add interim looks to the design. We’ll check for success or futility at the enrollment of the 180th subject. Upon adding this interim look, the trial is simulated 2 times to obtain the output.
# Adding interim looks
value <- value %>%
study_details(total_sample_size = 200,
study_period = 70,
interim_look = 180,
prop_loss_to_followup = 0.10)
# Simulate 2 trials
output <- value %>%
simulate(no_of_sim = 2)
# Structure of the simulation output
str(output)
#> List of 8
#> $ input :List of 6
#> ..$ hazard_treatment : num [1:2] 0.012 0.008
#> ..$ cutpoint : num 30
#> ..$ N_total : num 200
#> ..$ EndofStudy : num 70
#> ..$ prop_loss_to_followup: num 0.1
#> ..$ interim_look : num 180
#> $ power :'data.frame': 2 obs. of 2 variables:
#> ..$ interim_looks: num [1:2] 180 200
#> ..$ power : num [1:2] 0 0
#> $ type1_error : num 0
#> $ est_final : num [1:2] 0.553 0.49
#> $ post_prob_accept_alternative: num [1:2] 0.912 0.401
#> $ N_enrolled : num [1:2] 180 180
#> $ stop_expect_success : num [1:2] 1 0
#> $ stop_futility : num [1:2] 0 1
Patient enrollment is assumed to follow a Poisson process. The default enrollment rate is 0.3 patients per day. In this simulation we’ll introduce a step-wise Poisson process with rate \(\lambda\) as follows:
\[ \lambda = \left\{ \begin{array}{ll} 0.25 & \text{time} \in [0, 40) \\ 0.8 & \text{time} \in [40, \infty) \\ \end{array} \right. \]
This enrollment scheme is illustrated below.
value <- value %>%
enrollment_rate(lambda = c(0.25, 0.8),
time = 40)
output <- value %>%
simulate(no_of_sim = 2)
str(output)
#> List of 8
#> $ input :List of 8
#> ..$ hazard_treatment : num [1:2] 0.012 0.008
#> ..$ cutpoint : num 30
#> ..$ N_total : num 200
#> ..$ EndofStudy : num 70
#> ..$ prop_loss_to_followup: num 0.1
#> ..$ interim_look : num 180
#> ..$ lambda : num [1:2] 0.25 0.8
#> ..$ lambda_time : num 40
#> $ power :'data.frame': 2 obs. of 2 variables:
#> ..$ interim_looks: num [1:2] 180 200
#> ..$ power : num [1:2] 0.5 0.5
#> $ type1_error : num 0
#> $ est_final : num [1:2] 0.505 0.564
#> $ post_prob_accept_alternative: num [1:2] 0.553 0.954
#> $ N_enrolled : num [1:2] 180 180
#> $ stop_expect_success : num [1:2] 1 1
#> $ stop_futility : num [1:2] 0 0
The hypothesis wrapper function controls the probability of futility, the probability of accepting the alternative hypothesis, the probability of early success, the direction of the alternative hypothesis, and the treatment difference margin.
We’ll set the futility probability to 0.05, the expected success probability for early stopping to 0.90, and the final probability of accepting the alternative to 0.95. The alternative is "less" given the hypothesis specified above.
value <- value %>%
hypothesis(delta = 0.50,
futility_prob = 0.05,
prob_accept_ha = 0.95,
expected_success_prob = 0.90,
alternative = "less")
output <- value %>%
simulate(no_of_sim = 2)
str(output)
#> List of 8
#> $ input :List of 13
#> ..$ hazard_treatment : num [1:2] 0.012 0.008
#> ..$ cutpoint : num 30
#> ..$ N_total : num 200
#> ..$ EndofStudy : num 70
#> ..$ prop_loss_to_followup: num 0.1
#> ..$ interim_look : num 180
#> ..$ lambda : num [1:2] 0.25 0.8
#> ..$ lambda_time : num 40
#> ..$ h0 : num 0.5
#> ..$ futility_prob : num 0.05
#> ..$ prob_ha : num 0.95
#> ..$ expected_success_prob: num 0.9
#> ..$ alternative : chr "less"
#> $ power :'data.frame': 2 obs. of 2 variables:
#> ..$ interim_looks: num [1:2] 180 200
#> ..$ power : num [1:2] 0 0
#> $ type1_error : num 1
#> $ est_final : num [1:2] 0.529 0.595
#> $ post_prob_accept_alternative: num [1:2] 0.2204 0.0065
#> $ N_enrolled : num [1:2] 180 180
#> $ stop_expect_success : num [1:2] 0 0
#> $ stop_futility : num [1:2] 1 1
Next, we’ll illustrate imputation of outcomes for subjects lost to follow-up. We’ll carry out 10 imputations and draw 2000 values from the posterior of each imputation.
value <- value %>%
impute(no_of_impute = 10,
number_mcmc = 2000)
output <- value %>%
simulate(no_of_sim = 2)
str(output)
#> List of 8
#> $ input :List of 15
#> ..$ hazard_treatment : num [1:2] 0.012 0.008
#> ..$ cutpoint : num 30
#> ..$ N_total : num 200
#> ..$ EndofStudy : num 70
#> ..$ prop_loss_to_followup: num 0.1
#> ..$ interim_look : num 180
#> ..$ lambda : num [1:2] 0.25 0.8
#> ..$ lambda_time : num 40
#> ..$ h0 : num 0.5
#> ..$ futility_prob : num 0.05
#> ..$ prob_ha : num 0.95
#> ..$ expected_success_prob: num 0.9
#> ..$ alternative : chr "less"
#> ..$ N_impute : num 10
#> ..$ number_mcmc : num 2000
#> $ power :'data.frame': 2 obs. of 2 variables:
#> ..$ interim_looks: num [1:2] 180 200
#> ..$ power : num [1:2] 0 0
#> $ type1_error : num 1
#> $ est_final : num [1:2] 0.481 0.422
#> $ post_prob_accept_alternative: num [1:2] 0.678 0.981
#> $ N_enrolled : num [1:2] 180 180
#> $ stop_expect_success : num [1:2] 0 0
#> $ stop_futility : num [1:2] 1 1
The default non-informative gamma prior used in the simulation is \(\mathcal{G}amma(.1, .1)\). In our OPC trial simulation, we’ll change the default to \(\mathcal{G}amma(.2, .2)\), which increases the weight of the non-informative prior in the analysis. This prior is set using the gamma_prior wrapper function.
value <- value %>%
gamma_prior(a0 = .2,
b0 = .2)
output <- value %>%
simulate(no_of_sim = 2)
str(output)
#> List of 8
#> $ input :List of 16
#> ..$ hazard_treatment : num [1:2] 0.012 0.008
#> ..$ cutpoint : num 30
#> ..$ N_total : num 200
#> ..$ EndofStudy : num 70
#> ..$ prop_loss_to_followup: num 0.1
#> ..$ interim_look : num 180
#> ..$ lambda : num [1:2] 0.25 0.8
#> ..$ lambda_time : num 40
#> ..$ h0 : num 0.5
#> ..$ futility_prob : num 0.05
#> ..$ prob_ha : num 0.95
#> ..$ expected_success_prob: num 0.9
#> ..$ alternative : chr "less"
#> ..$ N_impute : num 10
#> ..$ number_mcmc : num 2000
#> ..$ prior : num [1:2] 0.2 0.2
#> $ power :'data.frame': 2 obs. of 2 variables:
#> ..$ interim_looks: num [1:2] 180 200
#> ..$ power : num [1:2] 0 0
#> $ type1_error : num 0.5
#> $ est_final : num [1:2] 0.601 0.47
#> $ post_prob_accept_alternative: num [1:2] 0.0055 0.772
#> $ N_enrolled : num [1:2] 180 180
#> $ stop_expect_success : num [1:2] 0 0
#> $ stop_futility : num [1:2] 1 1
Historical data is not required to run the simulation. However, if historical data is available, it can be incorporated into the analysis using the discount prior approach as implemented in the bayesDP R package.
In our OPC trial, we’ll illustrate historical data incorporation. We’ll simulate the historical data as follows.
hist_data <- data.frame(time = rexp(100, 0.011),
event = rbinom(100, 1, 0.8),
treatment = rep(1, 100))
str(hist_data)
#> 'data.frame': 100 obs. of 3 variables:
#> $ time : num 92.19 140.54 9.18 131.6 380.19 ...
#> $ event : int 0 1 0 1 1 1 0 1 1 1 ...
#> $ treatment: num 1 1 1 1 1 1 1 1 1 1 ...
We’ll incorporate this historical data using the weibull discount function. For more details on the historical data incorporation method and computation, please see https://CRAN.R-project.org/package=bayesDP.
value <- value %>%
historical_survival(time = hist_data$time,
treatment = hist_data$treatment,
event = hist_data$event,
discount_function = "weibull",
alpha_max = 1,
fix_alpha = FALSE,
weibull_scale = 0.135,
weibull_shape = 3,
method = "mc")
output <- value %>%
simulate(no_of_sim = 2)
str(output)
#> List of 8
#> $ input :List of 25
#> ..$ hazard_treatment : num [1:2] 0.012 0.008
#> ..$ cutpoint : num 30
#> ..$ N_total : num 200
#> ..$ EndofStudy : num 70
#> ..$ prop_loss_to_followup: num 0.1
#> ..$ interim_look : num 180
#> ..$ lambda : num [1:2] 0.25 0.8
#> ..$ lambda_time : num 40
#> ..$ h0 : num 0.5
#> ..$ futility_prob : num 0.05
#> ..$ prob_ha : num 0.95
#> ..$ expected_success_prob: num 0.9
#> ..$ alternative : chr "less"
#> ..$ N_impute : num 10
#> ..$ number_mcmc : num 2000
#> ..$ prior : num [1:2] 0.2 0.2
#> ..$ time0 : num [1:100] 92.19 140.54 9.18 131.6 380.19 ...
#> ..$ treatment0 : num [1:100] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$ event0 : int [1:100] 0 1 0 1 1 1 0 1 1 1 ...
#> ..$ discount_function : chr "weibull"
#> ..$ alpha_max : num 1
#> ..$ fix_alpha : logi FALSE
#> ..$ weibull_scale : num 0.135
#> ..$ weibull_shape : num 3
#> ..$ method : chr "mc"
#> $ power :'data.frame': 2 obs. of 2 variables:
#> ..$ interim_looks: num [1:2] 180 200
#> ..$ power : num [1:2] 0 0
#> $ type1_error : num 0.5
#> $ est_final : num [1:2] 0.591 0.561
#> $ post_prob_accept_alternative: num [1:2] 0.007 0.042
#> $ N_enrolled : num [1:2] 180 180
#> $ stop_expect_success : num [1:2] 0 0
#> $ stop_futility : num [1:2] 1 1
The above flow was for illustrative purposes. Instead of inputting parameters step by step, the trial parameters can be filled in all at once, as illustrated below. The pipe function connects all the inputs, and the trial is simulated 2 times to obtain results.
value <- survival_outcome(hazard_treatment = c(0.012, 0.008),
cutpoint = 30) %>%
enrollment_rate(lambda = c(0.25, 0.8),
time = 40) %>%
study_details(total_sample_size = 200,
study_period = 70,
interim_look = 180,
prop_loss_to_followup = 0.10) %>%
hypothesis(delta = 0.50,
futility_prob = 0.05,
prob_accept_ha = 0.95,
expected_success_prob = 0.90,
alternative = "less") %>%
impute(no_of_impute = 10,
number_mcmc = 2000) %>%
gamma_prior(a0 = .2,
b0 = .2) %>%
historical_survival(time = hist_data$time,
treatment = hist_data$treatment,
event = hist_data$event,
discount_function = "weibull",
alpha_max = 1,
fix_alpha = FALSE,
weibull_scale = 0.135,
weibull_shape = 3,
method = "mc") %>%
simulate(no_of_sim = 2)
str(value)
#> List of 8
#> $ input :List of 25
#> ..$ hazard_treatment : num [1:2] 0.012 0.008
#> ..$ cutpoint : num 30
#> ..$ lambda : num [1:2] 0.25 0.8
#> ..$ lambda_time : num 40
#> ..$ N_total : num 200
#> ..$ EndofStudy : num 70
#> ..$ interim_look : num 180
#> ..$ prop_loss_to_followup: num 0.1
#> ..$ h0 : num 0.5
#> ..$ futility_prob : num 0.05
#> ..$ prob_ha : num 0.95
#> ..$ expected_success_prob: num 0.9
#> ..$ alternative : chr "less"
#> ..$ N_impute : num 10
#> ..$ number_mcmc : num 2000
#> ..$ prior : num [1:2] 0.2 0.2
#> ..$ time0 : num [1:100] 92.19 140.54 9.18 131.6 380.19 ...
#> ..$ treatment0 : num [1:100] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$ event0 : int [1:100] 0 1 0 1 1 1 0 1 1 1 ...
#> ..$ discount_function : chr "weibull"
#> ..$ alpha_max : num 1
#> ..$ fix_alpha : logi FALSE
#> ..$ weibull_scale : num 0.135
#> ..$ weibull_shape : num 3
#> ..$ method : chr "mc"
#> $ power :'data.frame': 2 obs. of 2 variables:
#> ..$ interim_looks: num [1:2] 180 200
#> ..$ power : num [1:2] 0 0
#> $ type1_error : num 1
#> $ est_final : num [1:2] 0.515 0.482
#> $ post_prob_accept_alternative: num [1:2] 0.351 0.663
#> $ N_enrolled : num [1:2] 180 180
#> $ stop_expect_success : num [1:2] 0 0
#> $ stop_futility : num [1:2] 1 1
In this section, we will illustrate how to perform the design of a two-arm trial without the incorporation of historical data. The example will compute the type 1 error, power, and other outputs for a superiority trial. The study hypothesis is \[H_0: \lambda_{treatment} - \lambda_{control} \geq 0 \qquad H_A: \lambda_{treatment} - \lambda_{control} < 0.\]
Unlike the OPC trial above, we will not include interim looks. The hazard rates of the treatment group are 0.01 and 0.012, and the hazard rates of the control group are 0.015 and 0.017, with a cutpoint at 25 days. The total sample size is 250 with a study length of 100 days. A 10% loss to follow-up is assumed. The following code simulates the trial 2 times using the piping procedure.
value <- survival_outcome(hazard_treatment = c(0.01, 0.012),
hazard_control = c(0.015, 0.017),
cutpoint = 25) %>%
study_details(total_sample_size = 250,
study_period = 100,
interim_look = NULL,
prop_loss_to_followup = 0.10) %>%
hypothesis(delta = 0,
futility_prob = 0,
prob_accept_ha = 0.95,
expected_success_prob = 1,
alternative = "less") %>%
impute(no_of_impute = 25,
number_mcmc = 5000) %>%
enrollment_rate(lambda = c(0.8),
time = NULL) %>%
randomize(block_size = c(4, 6),
randomization_ratio = c(1, 1)) %>%
simulate(no_of_sim = 2)
str(value)
#> List of 8
#> $ input :List of 16
#> ..$ hazard_treatment : num [1:2] 0.01 0.012
#> ..$ cutpoint : num 25
#> ..$ hazard_control : num [1:2] 0.015 0.017
#> ..$ N_total : num 250
#> ..$ EndofStudy : num 100
#> ..$ prop_loss_to_followup: num 0.1
#> ..$ h0 : num 0
#> ..$ futility_prob : num 0
#> ..$ prob_ha : num 0.95
#> ..$ expected_success_prob: num 1
#> ..$ alternative : chr "less"
#> ..$ N_impute : num 25
#> ..$ number_mcmc : num 5000
#> ..$ lambda : num 0.8
#> ..$ block : num [1:2] 4 6
#> ..$ rand_ratio : num [1:2] 1 1
#> $ power :'data.frame': 1 obs. of 2 variables:
#> ..$ interim_looks: num 250
#> ..$ power : num 1
#> $ type1_error : num 0
#> $ est_final : num [1:2] -0.301 -0.714
#> $ post_prob_accept_alternative: num [1:2] 0.967 1
#> $ N_enrolled : num [1:2] 250 250
#> $ stop_expect_success : num [1:2] 0 0
#> $ stop_futility : num [1:2] 0 0
In this section, we will demonstrate how to run an adaptive Bayesian trial using bayesCT. A sample dataset is provided in the package. The dataset survivaldata contains the results of 100 subjects from a two-arm trial with time-to-event outcomes.
data(survivaldata)
head(survivaldata)
#> id time event treatment
#> 1 Patient1 3.7525645 0 1
#> 2 Patient2 6.8190574 1 1
#> 3 Patient3 18.9363831 0 1
#> 4 Patient4 26.8340478 0 1
#> 5 Patient5 0.6007468 1 1
#> 6 Patient6 17.4164113 1 1
The minimum input needed to run an adaptive Bayesian trial is the data itself. The data_survival function reads in the data. The treatment group (0 for control, 1 for treatment) and time inputs are essential for the analysis. However, if the event input is not provided, the function assumes all the events have occurred. A default analysis is carried out below.
input <- data_survival(time = survivaldata$time,
treatment = survivaldata$treatment,
event = survivaldata$event)
out <- input %>%
analysis(type = "survival")
str(out)
#> List of 15
#> $ prob_of_accepting_alternative: num 0.95
#> $ margin : num 0
#> $ alternative : chr "greater"
#> $ alpha_max : num 1
#> $ N_treatment : num 50
#> $ event_treatment : num 17
#> $ N_control : int 50
#> $ event_control : num 21
#> $ N_enrolled : num 100
#> $ N_complete : int 70
#> $ alpha_discount : NULL
#> $ post_prob_accept_alternative : num 0.0041
#> $ est_final : num -0.928
#> $ stop_futility : num 1
#> $ stop_expected_success : num 0
We’ll now illustrate using piping to carry out the complete analysis. First, we’ll assume the following hypothesis: \[H_0:\theta_{treatment} - \theta_{control} \geq 0.02 \quad H_A: \theta_{treatment} - \theta_{control} < 0.02.\] The delta and alternative used to analyze the trial are 0.02 and “less”, respectively. The probability of accepting the alternative is 0.95, the probability of stopping for futility is 0.05, and the probability of stopping for early success is 0.90. We will carry out imputations for subjects lost to follow-up. Additionally, we will use a \(\mathcal{G}amma(.2, .2)\) prior.
out <- data_survival(time = survivaldata$time,
treatment = survivaldata$treatment,
event = survivaldata$event) %>%
hypothesis(delta = 0.02,
futility_prob = 0.05,
prob_accept_ha = 0.95,
expected_success_prob = 0.90,
alternative = "less") %>%
impute(no_of_impute = 50,
number_mcmc = 10000) %>%
gamma_prior(a0 = .2,
b0 = .2) %>%
analysis(type = "survival")
str(out)
#> List of 15
#> $ prob_of_accepting_alternative: num 0.95
#> $ margin : num 0.02
#> $ alternative : chr "less"
#> $ alpha_max : num 1
#> $ N_treatment : num 50
#> $ event_treatment : num 17
#> $ N_control : int 50
#> $ event_control : num 21
#> $ N_enrolled : num 100
#> $ N_complete : int 70
#> $ alpha_discount : NULL
#> $ post_prob_accept_alternative : num 0.994
#> $ est_final : num -0.916
#> $ stop_futility : num 0
#> $ stop_expected_success : num 1