
Bayesian Response-Adaptive Design Analysis with BRADA

Riko Kelter



Introduction

The getting started vignette illustrated the basic features of the brada package. In this vignette, we show how to monitor a running trial.

Note that there are further vignettes which illustrate
  • how to apply and calibrate the predictive evidence value design with the brada package. This vignette is hosted at the Open Science Foundation.
  • how to monitor a running clinical trial with a binary endpoint by means of the brada package.

Monitoring a trial

To get started, first load the package:

library(brada)

Monitoring a trial with the brada package is straightforward via the monitor function. Suppose we have analyzed and calibrated a design according to our requirements and have ended up with the following one:

# Calibrated predictive probability (PP) design: at most Nmax = 30 patients,
# first interim analysis after nInit = 10 patients, then after every batch of
# 5 patients; stop for futility if PP < theta_L, for efficacy if PP > theta_U
design = brada(Nmax = 30, batchsize = 5, nInit = 10,
               p_true = 0.4, p0 = 0.4, p1 = 0.4,
               nsim = 100,
               theta_T = 0.90, theta_L = 0.1, theta_U = 1,
               method = "PP",
               cores = 2)

Now, suppose the trial is performed and the first ten patients show the response pattern \((0,1,0,0,0,0,0,1,0,0)\), where \(1\) encodes a response and \(0\) no response. Thus, there are \(2\) responses out of nInit=10 observations. To check whether the trial can be stopped for futility or efficacy based on theta_L=0.1 and theta_U=1, we run the monitor function as follows:

monitor(design, obs = c(0,1,0,0,0,0,0,1,0,0))
## --------- BRADA TRIAL MONITORING ---------
## Primary endpoint: binary
## Test of H_0: p <= 0.4 against H_1: p > 0.4
## Trial design: Predictive probability design
## Maximum sample size: 30
## First interim analysis at: 10
## Interim analyses after each 5 observations 
## Last interim analysis at: 25 observations 
## -----------------------------------------
## Current trial size: 10 patients 
## --------------- RESULTS -----------------
## Predictive probability of trial success: 0.00768
##  Futility threshold: 0.1
## Decision: Stop for futility
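
For orientation, here is the quantity behind this output, assuming the standard predictive probability formulation (the notation below is ours, not taken from the package): let \(x\) denote the responses observed among the current \(n\) patients and \(Y\) the (unknown) number of responses among the remaining \(N_{max}-n\) patients. Then

\[ PP = \sum_{y=0}^{N_{max}-n} P(Y = y \mid x) \cdot \mathbb{1}\{P(p > p_0 \mid x, Y = y) > \theta_T\}, \]

that is, the posterior predictive probability that the completed trial will declare success (posterior probability of \(H_1\) exceeding theta_T=0.90). The trial stops for futility if \(PP <\) theta_L and for efficacy if \(PP >\) theta_U.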

Thus, the results indicate that we should stop for futility. This agrees with the intuition that \(2\) responses out of \(10\) observations are quite unlikely if \(H_1:p>0.4\) holds.
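
Had the predictive probability fallen between theta_L and theta_U, the trial would continue, and monitoring would proceed by appending the next batch of observations and calling monitor again. A minimal sketch, assuming a hypothetical response pattern for the next five patients:

# Hypothetical continuation: the next batch of 5 observations is
# appended to the first 10, and monitor() re-evaluates at n = 15
obs_15 = c(0,1,0,0,0,0,0,1,0,0,  # first 10 patients (as above)
           1,0,0,1,0)            # next batch of 5 (hypothetical)
monitor(design, obs = obs_15)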

Note that the values of the p_true and nsim arguments in the brada call that returned the object design are irrelevant for monitoring. We could equally have simulated data under p_true=0.2 and nsim=3000 or any other values: the monitor function only takes the brada object and applies the design specified in the method argument of that object, in this case the predictive probability design. All necessary arguments are identified automatically by the monitor function. The predictive evidence value design can be monitored analogously; for details on that design and its calibration, see the vignette hosted at the Open Science Foundation.
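
To illustrate this point, one could rebuild the design under the different simulation settings mentioned above and monitor the same data; the monitoring decision is unchanged. A minimal sketch (design2 is a hypothetical name):

# Same design parameters, but simulated under p_true = 0.2 with nsim = 3000;
# the monitoring decision for the observed data is unaffected
design2 = brada(Nmax = 30, batchsize = 5, nInit = 10,
                p_true = 0.2, p0 = 0.4, p1 = 0.4,
                nsim = 3000,
                theta_T = 0.90, theta_L = 0.1, theta_U = 1,
                method = "PP",
                cores = 2)
monitor(design2, obs = c(0,1,0,0,0,0,0,1,0,0))  # same decision: stop for futility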





