Bayesian network meta analysis

Michael Seo and Christopher Schmid

2020-04-20

We describe how to run a Bayesian network meta-analysis using this package. First, we need to load the package.

#install.packages("bnma")
#or devtools::install_github("MikeJSeo/bnma")
library(bnma)

Preprocessing

It is essential to specify the input data in the correct format. We have chosen to use arm-level data with the following input variable names: Outcomes, N or SE, Study, and Treat. Outcomes contains the trial results. N is the number of respondents, used for the binary or multinomial model. SE is the standard error, used for the normal model. Study is the study indicator for the meta-analysis. Lastly, Treat is the treatment indicator for each arm. We use the parkinsons dataset for illustration.

parkinsons
#> $Outcomes
#>  [1] -1.22 -1.53 -0.70 -2.40 -0.30 -2.60 -1.20 -0.24 -0.59 -0.73 -0.18 -2.20
#> [13] -2.50 -1.80 -2.10
#> 
#> $SE
#>  [1] 0.504 0.439 0.282 0.258 0.505 0.510 0.478 0.265 0.354 0.335 0.442 0.197
#> [13] 0.190 0.200 0.250
#> 
#> $Treat
#>  [1] "Placebo"       "Ropinirole"    "Placebo"       "Pramipexole"  
#>  [5] "Placebo"       "Pramipexole"   "Bromocriptine" "Ropinirole"   
#>  [9] "Bromocriptine" "Ropinirole"    "Bromocriptine" "Bromocriptine"
#> [13] "Cabergoline"   "Bromocriptine" "Cabergoline"  
#> 
#> $Study
#>  [1] 1 1 2 2 3 3 3 4 4 5 5 6 6 7 7
#> 
#> $Treat.order
#> [1] "Placebo"       "Pramipexole"   "Ropinirole"    "Bromocriptine"
#> [5] "Cabergoline"

In order to run a network meta-analysis in JAGS, we need to relabel the study names into a numeric sequence, i.e. 1 to the total number of studies, and relabel the treatments into a numeric sequence according to the treatment order specified. If the treatment order is not specified, the default is alphabetical order. In the example below, we set Placebo as the baseline treatment, followed by Pramipexole, Ropinirole, Bromocriptine, and Cabergoline as the treatment order.

network <- with(parkinsons, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, SE = SE, response = "normal", Treat.order = Treat.order))
network$Treat.order 
#>               1               2               3               4               5 
#>       "Placebo"   "Pramipexole"    "Ropinirole" "Bromocriptine"   "Cabergoline"
network$Study.order
#> 1 2 3 4 5 6 7 
#> 1 2 3 4 5 6 7

Another important preprocessing step done by the network.data function is to reshape the arm-level data into study-level data. The study-level data are stored as r for Outcomes, t for Treat, and n or se for N or SE. Below we can see how Outcomes has been turned into a matrix. Similarly, if the Outcomes are multinomial, they are stored as a three-dimensional array.

network$r
#>       [,1]  [,2] [,3]
#> [1,] -1.22 -1.53   NA
#> [2,] -0.70 -2.40   NA
#> [3,] -0.30 -2.60 -1.2
#> [4,] -0.24 -0.59   NA
#> [5,] -0.73 -0.18   NA
#> [6,] -2.20 -2.50   NA
#> [7,] -1.80 -2.10   NA
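
The treatment indicators are reshaped the same way; following the naming above, they are stored as t and can be inspected directly (output not shown).

# network$t   # study-by-arm matrix of numeric treatment codes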

Priors

Priors can be set in the network.data function. Please take a look at the function documentation for the prior specifications. For the heterogeneity parameter of the random effects model, we follow the format used by gemtc, a similar Bayesian network meta-analysis R package: a list of length 3 whose first element is the distribution (one of dunif, dgamma, dhnorm, dwish) and whose remaining two elements are the parameters of that distribution. Here is an example.

network <- with(smoking, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, N = N, response = "binomial", mean.d = 0.1, hy.prior = list("dhnorm", 0, 5)))
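
As a variation, any of the distributions listed above can be supplied in the same three-element form; a sketch (not run), assuming a uniform prior on the heterogeneity standard deviation is appropriate for these data, would be:

# network <- with(smoking, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, N = N, response = "binomial", hy.prior = list("dunif", 0, 5)))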

Running the model

Now, to run the network, we use the function network.run. There are many parameters that can be modified, but the most important is n.run, which determines how many final observations the user wants. The Gelman-Rubin statistic is checked automatically every setsize iterations, and once the chains have converged we store the last half of the sequence. If the number of iterations stored is less than the number of observations the user wanted (n.run), the sampler runs longer to fill the requirement. One of the nice features of this package is that it checks for convergence automatically and gives an error if the chains have not converged. The parameters tested for convergence are the relative treatment effects, the baseline effect, and the heterogeneity parameter. The numbers printed while the model runs are the point estimates of the Gelman-Rubin statistic.

result <- network.run(network, n.run = 30000)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 50
#>    Unobserved stochastic nodes: 54
#>    Total graph size: 1129
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.002525
#> [1] 1.001945
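
The convergence checking described above can be tuned through additional arguments; below is a sketch (not run). setsize is mentioned above, while n.chains is an assumed argument name taken from the function documentation.

# result <- network.run(network, n.run = 30000, setsize = 10000, n.chains = 3)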

Model Summary

The package includes many summary tools. One of the more useful summaries might be the forest plot. Please look over the package documentation for more options.

network.forest.plot(result)

# draw.network.graph(network)
# network.autocorr.diag(result)
# network.autocorr.plot(result)
# network.cumrank.tx.plot(result)
# network.deviance.plot(result)
# network.gelman.plot(result)

Multinomial model

Another nice feature of this package is that datasets with multinomial outcomes can be analyzed. Here is an example.

network <- with(cardiovascular, network.data(Outcomes, Study, Treat, N, response = "multinomial"))
result <- network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 34
#>    Unobserved stochastic nodes: 37
#>    Total graph size: 1301
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.003068
#> [1] 1.002375
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>                 Mean      SD  Naive SE Time-series SE
#> d[1,1]      0.000000 0.00000 0.000e+00      0.0000000
#> d[2,1]     -0.104189 0.15246 3.937e-04      0.0011592
#> d[3,1]     -0.049053 0.11274 2.911e-04      0.0007009
#> d[1,2]      0.000000 0.00000 0.000e+00      0.0000000
#> d[2,2]     -0.188921 0.15422 3.982e-04      0.0011739
#> d[3,2]     -0.269899 0.11180 2.887e-04      0.0006535
#> sigma[1,1]  0.114946 0.05089 1.314e-04      0.0002477
#> sigma[2,1]  0.004592 0.03446 8.898e-05      0.0001639
#> sigma[1,2]  0.004592 0.03446 8.898e-05      0.0001639
#> sigma[2,2]  0.114287 0.05007 1.293e-04      0.0002292
#> 
#> 2. Quantiles for each variable:
#> 
#>                2.5%      25%       50%       75%    97.5%
#> d[1,1]      0.00000  0.00000  0.000000  0.000000  0.00000
#> d[2,1]     -0.40219 -0.20462 -0.105494 -0.004691  0.20001
#> d[3,1]     -0.27341 -0.12191 -0.048654  0.024403  0.17304
#> d[1,2]      0.00000  0.00000  0.000000  0.000000  0.00000
#> d[2,2]     -0.49446 -0.29021 -0.187330 -0.087036  0.11185
#> d[3,2]     -0.49332 -0.34205 -0.269110 -0.196654 -0.04965
#> sigma[1,1]  0.05203  0.08048  0.103799  0.136253  0.24247
#> sigma[2,1] -0.06287 -0.01429  0.003852  0.022660  0.07630
#> sigma[1,2] -0.06287 -0.01429  0.003852  0.022660  0.07630
#> sigma[2,2]  0.05197  0.08029  0.103549  0.135462  0.24019
#> 
#> 
#> $Treat.order
#> 1 2 3 
#> 1 2 3 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 31.17863 60.34814 91.52677 
#> 
#> $total_n
#> [1] 34
#> 
#> attr(,"class")
#> [1] "summary.network.result"

Adding covariates

We can add continuous or discrete covariates to fit a network meta-regression. If the covariate is continuous, it is centered. Discrete variables need to be in 0-1 dummy format. There are three different assumptions for the covariate effect: “common”, “independent”, and “exchangeable”.

network <- with(statins, network.data(Outcomes, Study, Treat, N=N, response = "binomial", Treat.order = c("Placebo", "Statin"), covariate = covariate, covariate.type = "discrete", covariate.model = "common"))
result <- network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 38
#>    Unobserved stochastic nodes: 41
#>    Total graph size: 877
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.021729
#> [1] 1.023726
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>              Mean     SD  Naive SE Time-series SE
#> beta1[1]  0.00000 0.0000 0.0000000       0.000000
#> beta1[2] -0.30766 0.2700 0.0006970       0.003645
#> d[1]      0.00000 0.0000 0.0000000       0.000000
#> d[2]     -0.06084 0.2107 0.0005440       0.002642
#> sd        0.25309 0.2011 0.0005194       0.005036
#> 
#> 2. Quantiles for each variable:
#> 
#>              2.5%     25%      50%      75%  97.5%
#> beta1[1]  0.00000  0.0000  0.00000  0.00000 0.0000
#> beta1[2] -0.91080 -0.4430 -0.28981 -0.15221 0.2056
#> d[1]      0.00000  0.0000  0.00000  0.00000 0.0000
#> d[2]     -0.47785 -0.1759 -0.06675  0.05008 0.3896
#> sd        0.01255  0.1030  0.20518  0.35357 0.7616
#> 
#> 
#> $Treat.order
#>         1         2 
#> "Placebo"  "Statin" 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 42.52369 25.26461 67.78831 
#> 
#> $total_n
#> [1] 38
#> 
#> attr(,"class")
#> [1] "summary.network.result"

The covariate plot shows how the relative effect changes as the covariate varies.

network.covariate.plot(result, base.treatment = "Placebo", comparison.treatment = "Statin")
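
A continuous covariate is handled analogously. Below is a minimal sketch, assuming a hypothetical dataset my_data that also contains a numeric covariate age; covariate.type = "continuous" is inferred from the description above rather than taken from the example.

# network2 <- with(my_data, network.data(Outcomes, Study, Treat, N = N, response = "binomial", covariate = age, covariate.type = "continuous", covariate.model = "exchangeable"))
# result2 <- network.run(network2)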

Baseline risk

Another useful addition to this package is the ability to model baseline risk. We can place a “common”, “independent”, or “exchangeable” assumption on the baseline slopes and an “independent” or “exchangeable” assumption on the baseline risk. Here we demonstrate a model with a common slope and exchangeable baseline risk.

network <- with(certolizumab, network.data(Outcomes = Outcomes, Treat = Treat, Study = Study, N = N, response = "binomial", Treat.order = Treat.order, baseline = "common", baseline.risk = "exchangeable"))
result <- network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 24
#>    Unobserved stochastic nodes: 34
#>    Total graph size: 670
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.017221
#> [1] 1.006261
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>            Mean     SD  Naive SE Time-series SE
#> B       -0.7761 0.2819 0.0007279       0.004051
#> b_bl[1]  0.0000 0.0000 0.0000000       0.000000
#> b_bl[2] -0.7761 0.2819 0.0007279       0.004051
#> b_bl[3] -0.7761 0.2819 0.0007279       0.004051
#> b_bl[4] -0.7761 0.2819 0.0007279       0.004051
#> b_bl[5] -0.7761 0.2819 0.0007279       0.004051
#> b_bl[6] -0.7761 0.2819 0.0007279       0.004051
#> b_bl[7] -0.7761 0.2819 0.0007279       0.004051
#> d[1]     0.0000 0.0000 0.0000000       0.000000
#> d[2]     1.8647 0.2340 0.0006041       0.001847
#> d[3]     2.1372 0.2125 0.0005487       0.002078
#> d[4]     2.0289 0.4301 0.0011106       0.005668
#> d[5]     1.6336 0.2066 0.0005335       0.002054
#> d[6]     0.3237 0.5851 0.0015108       0.009665
#> d[7]     2.1471 0.2959 0.0007641       0.002956
#> sd       0.1927 0.1819 0.0004697       0.003743
#> 
#> 2. Quantiles for each variable:
#> 
#>              2.5%      25%     50%     75%   97.5%
#> B       -1.288579 -0.92681 -0.7898 -0.6374 -0.1837
#> b_bl[1]  0.000000  0.00000  0.0000  0.0000  0.0000
#> b_bl[2] -1.288579 -0.92681 -0.7898 -0.6374 -0.1837
#> b_bl[3] -1.288579 -0.92681 -0.7898 -0.6374 -0.1837
#> b_bl[4] -1.288579 -0.92681 -0.7898 -0.6374 -0.1837
#> b_bl[5] -1.288579 -0.92681 -0.7898 -0.6374 -0.1837
#> b_bl[6] -1.288579 -0.92681 -0.7898 -0.6374 -0.1837
#> b_bl[7] -1.288579 -0.92681 -0.7898 -0.6374 -0.1837
#> d[1]     0.000000  0.00000  0.0000  0.0000  0.0000
#> d[2]     1.401265  1.75332  1.8633  1.9770  2.3188
#> d[3]     1.746155  2.01498  2.1323  2.2483  2.5730
#> d[4]     1.215907  1.77573  2.0264  2.2799  2.8729
#> d[5]     1.228598  1.52377  1.6344  1.7430  2.0379
#> d[6]    -0.884015 -0.04099  0.3480  0.7147  1.3918
#> d[7]     1.588295  1.99024  2.1455  2.2997  2.7357
#> sd       0.006733  0.07093  0.1494  0.2578  0.6528
#> 
#> 
#> $Treat.order
#>             1             2             3             4             5 
#>     "Placebo"         "CZP"  "Adalimumab"  "Etanercept"  "Infliximab" 
#>             6             7 
#>   "Rituximab" "Tocilizumab" 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 27.86369 18.61288 46.47657 
#> 
#> $total_n
#> [1] 24
#> 
#> attr(,"class")
#> [1] "summary.network.result"

Contrast data

We also added a model to analyze contrast-level data instead of arm-level data. The contrast-level format uses treatment differences relative to the control arm.
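
The bundled parkinsons_contrast dataset used below illustrates this format and can be printed to inspect it (output not shown).

# parkinsons_contrast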

network <- with(parkinsons_contrast, {
  contrast.network.data(Outcomes, Treat, SE, na, V, type = "random", mean.d = 0.01, prec.d = 0.1, hy.prior = list("dhnorm", 0.01, 0.01))
})
result <- contrast.network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 7
#>    Unobserved stochastic nodes: 13
#>    Total graph size: 197
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.022223
#> [1] 1.067457
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>         Mean     SD Naive SE Time-series SE
#> d[1]  0.0000 0.0000 0.000000       0.000000
#> d[2] -1.7872 0.5022 0.001297       0.004382
#> d[3] -0.4394 0.6008 0.001551       0.007742
#> d[4] -0.4737 0.5799 0.001497       0.008324
#> d[5] -0.7646 0.6995 0.001806       0.008774
#> sd    0.3819 0.4357 0.001125       0.010351
#> 
#> 2. Quantiles for each variable:
#> 
#>          2.5%     25%     50%      75%   97.5%
#> d[1]  0.00000  0.0000  0.0000  0.00000  0.0000
#> d[2] -2.73879 -2.0748 -1.7949 -1.50857 -0.8039
#> d[3] -1.59245 -0.8114 -0.4459 -0.07332  0.7363
#> d[4] -1.59142 -0.8356 -0.4845 -0.12836  0.6815
#> d[5] -2.08822 -1.1844 -0.7753 -0.36634  0.6488
#> sd    0.01063  0.1190  0.2634  0.49820  1.4360
#> 
#> 
#> $deviance
#>      Dbar        pD       DIC 
#>  6.505550  5.368292 11.873842 
#> 
#> $total_n
#> [1] 15
#> 
#> attr(,"class")
#> [1] "summary.contrast.network.result"

Unrelated Means Model

The unrelated mean effects (UME) model estimates separate, unrelated basic parameters. We do not assume consistency in this model. We can compare this model with the standard consistency model: if the parameter estimates are similar for both models and there is considerable overlap in the 95% credible intervals, we can conclude that there is no evidence of inconsistency in the network.

network <- with(smoking, {
  ume.network.data(Outcomes, Study, Treat, N = N, response = "binomial", type = "random")
})
result <- ume.network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 50
#>    Unobserved stochastic nodes: 57
#>    Total graph size: 1020
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.004331
#> [1] 1.003862
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>            Mean     SD  Naive SE Time-series SE
#> d[1,2]  0.33903 0.5789 0.0014948       0.002026
#> d[1,3]  0.85811 0.2727 0.0007042       0.001522
#> d[2,3] -0.05116 0.7415 0.0019146       0.002957
#> d[1,4]  1.41768 0.8776 0.0022659       0.006800
#> d[2,4]  0.65113 0.7317 0.0018892       0.003434
#> d[3,4]  0.20039 0.7773 0.0020071       0.003206
#> 
#> 2. Quantiles for each variable:
#> 
#>           2.5%      25%      50%    75% 97.5%
#> d[1,2] -0.8076 -0.03164  0.33672 0.7073 1.500
#> d[1,3]  0.3389  0.67888  0.85066 1.0284 1.423
#> d[2,3] -1.5353 -0.52661 -0.04852 0.4288 1.406
#> d[1,4] -0.2136  0.83337  1.37822 1.9623 3.251
#> d[2,4] -0.7906  0.17838  0.64714 1.1191 2.109
#> d[3,4] -1.3515 -0.30108  0.20550 0.7074 1.724
#> 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 53.48148 44.87569 98.35717 
#> 
#> $total_n
#> [1] 50
#> 
#> attr(,"class")
#> [1] "summary.ume.network.result"

Inconsistency model

We included another inconsistency model that can be used to test the consistency assumption. Here we specify a pair on which we want to node-split and test the consistency assumption. For instance, if we specify the treatment pair = c(3, 9), we are finding the difference between the direct and indirect evidence for treatments 3 and 9. The inconsistency estimate and the corresponding p-value are reported in the summary. If the p-value is small, we can reject the null hypothesis that the direct and indirect evidence agree. We can repeat this for all pairs in the network and identify pairs that might be inconsistent. Refer to Dias et al. (2010), “Checking consistency in mixed treatment comparison meta-analysis”, for more details.

network <- with(thrombolytic, nodesplit.network.data(Outcomes, Study, Treat, N, response = "binomial", pair = c(3,9), type = "fixed"))
result <- nodesplit.network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 102
#>    Unobserved stochastic nodes: 59
#>    Total graph size: 2263
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.001664
#> [1] 1.000705
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>                   Mean      SD  Naive SE Time-series SE
#> d[1]          0.000000 0.00000 0.000e+00      0.0000000
#> d[2]         -0.001632 0.03012 7.776e-05      0.0001867
#> d[3]         -0.160106 0.04325 1.117e-04      0.0003501
#> d[4]         -0.043972 0.04657 1.202e-04      0.0002421
#> d[5]         -0.112524 0.06012 1.552e-04      0.0004706
#> d[6]         -0.154435 0.07711 1.991e-04      0.0005689
#> d[7]         -0.464322 0.10049 2.595e-04      0.0005644
#> d[8]         -0.201179 0.22047 5.693e-04      0.0012689
#> d[9]          0.003942 0.03675 9.488e-05      0.0002142
#> diff          1.245755 0.42146 1.088e-03      0.0040217
#> direct        1.409803 0.41776 1.079e-03      0.0040097
#> oneminusprob  0.000700 0.02645 6.829e-05      0.0001187
#> prob          0.999300 0.02645 6.829e-05      0.0001187
#> 
#> 2. Quantiles for each variable:
#> 
#>                  2.5%      25%       50%      75%     97.5%
#> d[1]          0.00000  0.00000  0.000000  0.00000  0.000000
#> d[2]         -0.06057 -0.02195 -0.001750  0.01866  0.057675
#> d[3]         -0.24496 -0.18929 -0.160058 -0.13114 -0.075094
#> d[4]         -0.13575 -0.07549 -0.043969 -0.01236  0.046797
#> d[5]         -0.23016 -0.15325 -0.112521 -0.07214  0.005507
#> d[6]         -0.30519 -0.20676 -0.154438 -0.10253 -0.002451
#> d[7]         -0.66253 -0.53178 -0.464010 -0.39619 -0.267353
#> d[8]         -0.63507 -0.34937 -0.200726 -0.05281  0.229222
#> d[9]         -0.06797 -0.02084  0.004045  0.02872  0.075790
#> diff          0.46512  0.95607  1.229759  1.51695  2.121032
#> direct        0.63595  1.12180  1.393070  1.67972  2.276020
#> oneminusprob  0.00000  0.00000  0.000000  0.00000  0.000000
#> prob          1.00000  1.00000  1.000000  1.00000  1.000000
#> 
#> 
#> $deviance
#> NULL
#> 
#> $`Inconsistency estimate`
#> [1] 1.245755
#> 
#> $p_value
#> [1] 0.0014
#> 
#> attr(,"class")
#> [1] "summary.nodesplit.network.result"

Finding risk difference and relative risk instead of odds ratio with binomial outcomes

We estimate odds ratios using our NMA model and then apply these odds ratios to a baseline risk (i.e. the risk in the placebo group) to estimate absolute event rates for all treatments. Using these absolute event rates, we can then find the number needed to treat, the risk difference, and the relative risk. A potential caveat of this approach is that we are assuming the relative effects are independent of the absolute effect of the reference treatment.
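
As a quick illustration of the arithmetic, an odds ratio can be combined with a baseline risk as follows. The numbers are hypothetical, and we assume mean.A below is on the log-odds scale, as the PLOGIT summary measure in the commented metaprop code suggests.

p.pla  <- plogis(-2.27)                   # placebo (baseline) risk implied by the log-odds used below
lor    <- 0.35                            # a hypothetical posterior mean log-odds ratio d[k]
odds.k <- exp(lor) * p.pla / (1 - p.pla)  # treatment odds = odds ratio x baseline odds
p.k    <- odds.k / (1 + odds.k)           # absolute event rate under treatment k
RR  <- p.k / p.pla                        # relative risk
RD  <- p.k - p.pla                        # risk difference
NNT <- 1 / abs(RD)                        # number needed to treat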

# Using the metaprop function from the meta package, we meta-analyze the placebo event proportion.
#library(meta)
#placebo_index <- which(certolizumab$Treat == "Placebo")
#meta.pla <- metaprop(event = certolizumab$Outcomes[placebo_index], n = certolizumab$N[placebo_index], method = "GLMM", sm = "PLOGIT")
#mean.A = meta.pla$TE.random; prec.A = 1/meta.pla$tau^2

network <- with(certolizumab, network.data(Outcomes = Outcomes, Treat = Treat, Study = Study, N = N, response = "binomial", mean.A = -2.27, prec.A = 2.53))
result <- network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 24
#>    Unobserved stochastic nodes: 32
#>    Total graph size: 649
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.011156
#> [1] 1.005806
summary(result, extra.pars = c("RD", "RR"))
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>            Mean      SD  Naive SE Time-series SE
#> RD[1]  0.000000 0.00000 0.0000000      0.0000000
#> RD[2]  0.070699 0.14675 0.0003789      0.0009537
#> RD[3]  0.263177 0.28061 0.0007245      0.0025460
#> RD[4]  0.008845 0.12748 0.0003292      0.0011310
#> RD[5] -0.085987 0.05522 0.0001426      0.0002252
#> RD[6] -0.067262 0.09787 0.0002527      0.0005721
#> RD[7] -0.011156 0.10644 0.0002748      0.0006734
#> RR[1]  1.000000 0.00000 0.0000000      0.0000000
#> RR[2]  1.775129 1.76620 0.0045603      0.0114626
#> RR[3]  4.071673 4.11618 0.0106279      0.0316064
#> RR[4]  1.141033 1.51683 0.0039164      0.0128433
#> RR[5]  0.194868 0.28604 0.0007385      0.0020572
#> RR[6]  0.391111 1.06760 0.0027565      0.0064376
#> RR[7]  0.930839 1.22642 0.0031666      0.0079224
#> d[1]   0.000000 0.00000 0.0000000      0.0000000
#> d[2]   0.354207 1.11818 0.0028871      0.0066660
#> d[3]   1.489499 1.90566 0.0049204      0.0161439
#> d[4]  -0.231749 1.05606 0.0027267      0.0082425
#> d[5]  -1.991867 0.70349 0.0018164      0.0054202
#> d[6]  -1.984631 1.52260 0.0039313      0.0090483
#> d[7]  -0.492933 1.11085 0.0028682      0.0067660
#> sd     0.942261 0.66094 0.0017065      0.0104908
#> 
#> 2. Quantiles for each variable:
#> 
#>            2.5%      25%      50%      75%    97.5%
#> RD[1]  0.000000  0.00000  0.00000  0.00000  0.00000
#> RD[2] -0.101279 -0.01292  0.02957  0.10710  0.50530
#> RD[3] -0.097338  0.02694  0.18451  0.47075  0.85234
#> RD[4] -0.128777 -0.04841 -0.02057  0.01786  0.40019
#> RD[5] -0.214319 -0.11197 -0.07645 -0.05082 -0.01904
#> RD[6] -0.214442 -0.10662 -0.06911 -0.04089  0.13017
#> RD[7] -0.146071 -0.05714 -0.02691  0.00348  0.27457
#> RR[1]  1.000000  1.00000  1.00000  1.00000  1.00000
#> RR[2]  0.167443  0.84313  1.34117  2.08997  6.27160
#> RR[3]  0.154427  1.32019  2.89650  5.45415 14.89009
#> RR[4]  0.141632  0.47682  0.73159  1.20613  4.92293
#> RR[5]  0.035642  0.10740  0.15252  0.21253  0.58419
#> RR[6]  0.007274  0.06739  0.15291  0.33843  2.34806
#> RR[7]  0.066989  0.39033  0.65818  1.04271  3.70212
#> d[1]   0.000000  0.00000  0.00000  0.00000  0.00000
#> d[2]  -1.883565 -0.18987  0.33718  0.88235  2.65412
#> d[3]  -1.965639  0.31962  1.36055  2.54410  5.58998
#> d[4]  -2.052482 -0.80223 -0.34542  0.21397  2.22448
#> d[5]  -3.441462 -2.33173 -1.97880 -1.64351 -0.58831
#> d[6]  -5.033796 -2.80454 -1.97838 -1.16276  1.03514
#> d[7]  -2.800080 -1.01161 -0.46079  0.04705  1.68809
#> sd     0.120649  0.50606  0.78952  1.19148  2.71808
#> 
#> 
#> $Treat.order
#>             1             2             3             4             5 
#>  "Adalimumab"         "CZP"  "Etanercept"  "Infliximab"     "Placebo" 
#>             6             7 
#>   "Rituximab" "Tocilizumab" 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 27.02814 22.66387 49.69201 
#> 
#> $total_n
#> [1] 24
#> 
#> attr(,"class")
#> [1] "summary.network.result"