factorMerger: a set of tools to support results from post-hoc testing

Agnieszka Sitko

2017-06-30

Introduction

The aim of factorMerger is to provide a set of tools to support results from post hoc comparisons. Post hoc testing is an analysis performed after ANOVA to examine differences between group means (of some numeric response variable) for each pair of groups (groups being defined by a factor variable).

This project arose from the need for a post hoc testing method that gives a hierarchical interpretation of the relations between group means. Thereby, for a given significance level, we may divide groups into non-overlapping clusters.

Algorithm inputs

In the current version the factorMerger package supports the following parametric models:

  • Gaussian (one- or multi-dimensional),
  • binomial,
  • survival.

The set of hypotheses tested during merging may be either comprehensive or limited. This gives two possibilities:

  • all-to-all: all possible pairs of factor levels are compared,
  • successive: factor levels are preliminarily sorted and then only consecutive groups are tested for equality of means.

The factorMerger package also implements two strategies for a single iteration of the algorithm: one based on the Likelihood Ratio Test statistic and one based on agglomerative clustering (method = "hclust").

Generating samples

To visualize the functionalities of factorMerger we use samples in which the response variable is generated from one of the distributions listed above and the corresponding factor variable is sampled uniformly from a finite set of size \(k\).

To do so, we may use the function generateSample or generateMultivariateSample.

library(factorMerger) 
library(knitr)
library(dplyr)
# simulate 100 observations: a 3-dimensional Gaussian response
# and a grouping factor with 10 levels
randSample <- generateMultivariateSample(N = 100, k = 10, d = 3)
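
The returned object is a list holding the simulated response and the factor (as used in the calls below); a quick way to inspect it is base R's str:

# peek at the simulated data: a numeric response and a factor
str(randSample, max.level = 1)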

Merging factors

mergeFactors is a function that performs hierarchical post hoc testing. As arguments it takes, among others, the response (a numeric vector, a matrix, or a survival object), the factor that defines the groups, and optional settings such as family, successive, method, and abbreviate, all of which appear in the examples below.

By default (with the argument abbreviate = TRUE), factor levels are abbreviated and surrounded with brackets.

Multi-dimensional Gaussian model

Computations

fmAll <- mergeFactors(randSample$response, randSample$factor)

mergeFactors returns an object with information about the ‘merging history’.

mergingHistory(fmAll, showStats = TRUE) %>% 
    kable()

|    | groupA          | groupB          | model     | pvalVsFull | pvalVsPrevious |
|----|-----------------|-----------------|-----------|------------|----------------|
| 1  |                 |                 | -423.0896 | 1.0000     | 1.0000         |
| 11 | (D)             | (G)             | -423.1677 | 0.9869     | 0.9869         |
| 12 | (D)(G)          | (F)             | -423.3334 | 0.9985     | 0.9607         |
| 13 | (E)             | (A)             | -423.9513 | 0.9966     | 0.7726         |
| 14 | (I)             | (C)             | -424.8327 | 0.9941     | 0.6565         |
| 15 | (E)(A)          | (B)             | -426.5236 | 0.9752     | 0.3723         |
| 16 | (D)(G)(F)       | (I)(C)          | -428.3013 | 0.9455     | 0.3443         |
| 17 | (E)(A)(B)       | (H)             | -430.6818 | 0.8669     | 0.2124         |
| 18 | (E)(A)(B)(H)    | (J)             | -433.7445 | 0.7096     | 0.1192         |
| 19 | (E)(A)(B)(H)(J) | (D)(G)(F)(I)(C) | -440.3614 | 0.2250     | 0.0052         |

Each row of the above frame describes one step of the merging algorithm. The first two columns specify which groups were merged in a given iteration, and the column model gathers the loglikelihood of the model after merging (together with the Generalized Information Criterion, GIC, when it is reported). The last two columns are p-values of the Likelihood Ratio Test: against the full model (pvalVsFull) and against the model from the previous step (pvalVsPrevious).
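
To make the p-value columns concrete, here is a minimal sketch of a Likelihood Ratio Test between two nested models, assuming the standard chi-square reference distribution. This mimics, but is not, the package's internal code; the exact degrees of freedom depend on the model family and the number of merged groups.

# hedged sketch of an LRT p-value for two nested models:
# llBig, llSmall -- loglikelihoods of the larger and the smaller model,
# df -- number of parameters dropped by the merge
lrtPValue <- function(llBig, llSmall, df) {
    pchisq(2 * (llBig - llSmall), df = df, lower.tail = FALSE)
}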

If we set successive = TRUE, then at the beginning a one-dimensional response is fitted using isoMDS {MASS}. Next, in each step only groups whose means are close to each other are compared.
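
As a minimal sketch of that reduction step (mimicking, not reproducing, the package's internal code; isoMDS comes from the MASS package):

library(MASS)
# reduce the 3-dimensional simulated response to one dimension
# with non-metric multidimensional scaling
oneDimResponse <- isoMDS(dist(randSample$response), k = 1, trace = FALSE)$points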

fm <- mergeFactors(randSample$response, randSample$factor, 
                   successive = TRUE, 
                   method = "hclust")

mergingHistory(fm, showStats = TRUE) %>% 
    kable()

| groupA                | groupB    | model     | pvalVsFull | pvalVsPrevious |
|-----------------------|-----------|-----------|------------|----------------|
|                       |           | -423.0896 | 1.0000     | 1.0000         |
| (D)                   | (I)       | -423.5597 | 0.8418     | 0.8418         |
| (D)(I)                | (F)       | -423.8785 | 0.9648     | 0.9031         |
| (E)                   | (D)(I)(F) | -424.9144 | 0.9505     | 0.5987         |
| (G)                   | (C)       | -426.1487 | 0.9360     | 0.5205         |
| (J)                   | (A)       | -427.3879 | 0.9294     | 0.5140         |
| (B)                   | (G)(C)    | -429.5102 | 0.8571     | 0.2649         |
| (J)(A)                | (H)       | -432.3802 | 0.7030     | 0.1432         |
| (E)(D)(I)(F)          | (B)(G)(C) | -435.6037 | 0.5055     | 0.1042         |
| (E)(D)(I)(F)(B)(G)(C) | (J)(A)(H) | -440.3614 | 0.2250     | 0.0270         |

Final clusters

Algorithms implemented in the factorMerger package make it possible to create an unambiguous partition of a factor. Below we present how to extract the partition from the mergeFactors output.

  • predict new labels for observations
cutTree(fm)
#>   [1] (B)       (J)       (C)       (G)       (A)       (D)(I)(F) (J)      
#>   [8] (A)       (D)(I)(F) (C)       (A)       (B)       (B)       (J)      
#>  [15] (B)       (A)       (C)       (D)(I)(F) (D)(I)(F) (D)(I)(F) (D)(I)(F)
#>  [22] (B)       (E)       (E)       (J)       (G)       (D)(I)(F) (D)(I)(F)
#>  [29] (D)(I)(F) (C)       (B)       (B)       (C)       (G)       (D)(I)(F)
#>  [36] (D)(I)(F) (J)       (H)       (D)(I)(F) (G)       (D)(I)(F) (B)      
#>  [43] (D)(I)(F) (E)       (J)       (D)(I)(F) (B)       (B)       (E)      
#>  [50] (B)       (D)(I)(F) (E)       (D)(I)(F) (D)(I)(F) (B)       (D)(I)(F)
#>  [57] (E)       (B)       (C)       (C)       (D)(I)(F) (A)       (B)      
#>  [64] (D)(I)(F) (A)       (D)(I)(F) (J)       (D)(I)(F) (G)       (G)      
#>  [71] (C)       (B)       (D)(I)(F) (E)       (D)(I)(F) (A)       (B)      
#>  [78] (G)       (J)       (D)(I)(F) (C)       (D)(I)(F) (J)       (D)(I)(F)
#>  [85] (C)       (A)       (A)       (D)(I)(F) (D)(I)(F) (H)       (A)      
#>  [92] (E)       (A)       (D)(I)(F) (D)(I)(F) (D)(I)(F) (C)       (C)      
#>  [99] (A)       (J)      
#> Levels: (E) (D)(I)(F) (B) (G) (C) (J) (A) (H)

By default, cutTree returns the factor partition corresponding to the model optimal in terms of GIC (with penalty = 2). However, we may choose a different statistic to be used in cutting (stat = c("loglikelihood", "p-value", "GIC")). If loglikelihood or p-value is chosen, an exact threshold must be passed in the value parameter; cutTree then returns the partition for the smallest model whose statistic is higher than the threshold. If we choose GIC, then value is interpreted as the GIC penalty.

mH <- mergingHistory(fm, showStats = TRUE)
# loglikelihood of the model halfway along the merging path
thres <- mH$model[nrow(mH) / 2]
cutTree(fm, stat = "loglikelihood", value = thres)
#>   [1] (B)          (J)          (G)(C)       (G)(C)       (A)         
#>   [6] (E)(D)(I)(F) (J)          (A)          (E)(D)(I)(F) (G)(C)      
#>  [11] (A)          (B)          (B)          (J)          (B)         
#>  [16] (A)          (G)(C)       (E)(D)(I)(F) (E)(D)(I)(F) (E)(D)(I)(F)
#>  [21] (E)(D)(I)(F) (B)          (E)(D)(I)(F) (E)(D)(I)(F) (J)         
#>  [26] (G)(C)       (E)(D)(I)(F) (E)(D)(I)(F) (E)(D)(I)(F) (G)(C)      
#>  [31] (B)          (B)          (G)(C)       (G)(C)       (E)(D)(I)(F)
#>  [36] (E)(D)(I)(F) (J)          (H)          (E)(D)(I)(F) (G)(C)      
#>  [41] (E)(D)(I)(F) (B)          (E)(D)(I)(F) (E)(D)(I)(F) (J)         
#>  [46] (E)(D)(I)(F) (B)          (B)          (E)(D)(I)(F) (B)         
#>  [51] (E)(D)(I)(F) (E)(D)(I)(F) (E)(D)(I)(F) (E)(D)(I)(F) (B)         
#>  [56] (E)(D)(I)(F) (E)(D)(I)(F) (B)          (G)(C)       (G)(C)      
#>  [61] (E)(D)(I)(F) (A)          (B)          (E)(D)(I)(F) (A)         
#>  [66] (E)(D)(I)(F) (J)          (E)(D)(I)(F) (G)(C)       (G)(C)      
#>  [71] (G)(C)       (B)          (E)(D)(I)(F) (E)(D)(I)(F) (E)(D)(I)(F)
#>  [76] (A)          (B)          (G)(C)       (J)          (E)(D)(I)(F)
#>  [81] (G)(C)       (E)(D)(I)(F) (J)          (E)(D)(I)(F) (G)(C)      
#>  [86] (A)          (A)          (E)(D)(I)(F) (E)(D)(I)(F) (H)         
#>  [91] (A)          (E)(D)(I)(F) (A)          (E)(D)(I)(F) (E)(D)(I)(F)
#>  [96] (E)(D)(I)(F) (G)(C)       (G)(C)       (A)          (J)         
#> Levels: (E)(D)(I)(F) (B) (G)(C) (J) (A) (H)

In this example the data partition is created for the last model on the merging path whose loglikelihood is greater than -426.1487.
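
Analogously, a GIC-based cut takes the penalty through the value argument; the penalty of 4 below is an arbitrary illustration.

# partition for the model optimal under GIC with penalty 4
cutTree(fm, stat = "GIC", value = 4)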

  • get final clusters and clusters dictionary
getOptimalPartition(fm)
#> [1] "(E)"       "(D)(I)(F)" "(B)"       "(G)"       "(C)"       "(J)"      
#> [7] "(A)"       "(H)"

The function getOptimalPartition returns a vector with the final cluster names from the factorMerger object.

getOptimalPartitionDf(fm)
#>    orig      pred
#> 1   (B)       (B)
#> 2   (J)       (J)
#> 3   (C)       (C)
#> 4   (G)       (G)
#> 5   (A)       (A)
#> 6   (F) (D)(I)(F)
#> 9   (I) (D)(I)(F)
#> 18  (D) (D)(I)(F)
#> 23  (E)       (E)
#> 38  (H)       (H)

The function getOptimalPartitionDf returns a dictionary as a data frame. Each row gives the original label of a factor level and its new (cluster) label.

Similarly to cutTree, the functions getOptimalPartition and getOptimalPartitionDf take the arguments stat and value.
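
For example, the final clusters under a stricter, arbitrarily chosen GIC penalty could be extracted as follows:

# clusters for a GIC cut with an (arbitrary) penalty of 4
getOptimalPartition(fm, stat = "GIC", value = 4)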

Visualizations

We may plot the results using the function plot.

plot(fm, panel = "all", nodesSpacing = "equidistant", colorCluster = TRUE)

plot(fmAll, panel = "tree", statistic = "p-value", 
     nodesSpacing = "effects", colorCluster = TRUE)

plot(fm, colorCluster = TRUE, panel = "response")

The heatmap on the right shows group means of all variables included in the analysis.

plot(fm, colorCluster = TRUE, panel = "response", responsePanel = "profile")

In the above plots colours correspond to groups. The plot on the right shows the rankings of group means for all variables included in the algorithm.

It is also possible to plot GIC together with the merging path plot.

plot(fm, panel = "GIC", penalty = 5)

The model with the lowest GIC is marked.
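
For reference, assuming the usual definition of the Generalized Information Criterion, GIC = -2 * loglikelihood + penalty * p, where p is the number of model parameters (penalty = 2 corresponds to AIC, penalty = log(n) to BIC), a minimal helper might look like:

# hedged helper, assuming the standard GIC definition:
# ll -- model loglikelihood, p -- number of model parameters
gic <- function(ll, p, penalty = 2) {
    -2 * ll + penalty * p
}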

One-dimensional Gaussian model

oneDimRandSample <- generateSample(1000, 10)
oneDimFm <- mergeFactors(oneDimRandSample$response, oneDimRandSample$factor, 
                         method = "hclust")
mergingHistory(oneDimFm, showStats = TRUE) %>% 
    kable()

| groupA          | groupB             | model     | pvalVsFull | pvalVsPrevious |
|-----------------|--------------------|-----------|------------|----------------|
|                 |                    | -1444.168 | 1.0000     | 1.0000         |
| (D)             | (A)                | -1444.168 | 0.9873     | 0.9873         |
| (J)             | (G)                | -1444.169 | 0.9986     | 0.9592         |
| (D)(A)          | (J)(G)             | -1444.176 | 0.9994     | 0.9057         |
| (H)             | (F)                | -1444.186 | 0.9998     | 0.8869         |
| (B)             | (E)                | -1444.244 | 0.9995     | 0.7343         |
| (D)(A)(J)(G)    | (I)                | -1444.314 | 0.9995     | 0.7104         |
| (D)(A)(J)(G)(I) | (C)                | -1444.823 | 0.9884     | 0.3141         |
| (H)(F)          | (B)(E)             | -1445.641 | 0.9389     | 0.2014         |
| (H)(F)(B)(E)    | (D)(A)(J)(G)(I)(C) | -1448.795 | 0.4195     | 0.0121         |

plot(oneDimFm, palette = "Reds")

plot(oneDimFm, responsePanel = "boxplot", colorCluster = TRUE)

Binomial model

If family = "binomial", the response must have two values: 0 and 1 (1 is interpreted as success).
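
For instance, a response of the required form can be simulated directly with base R:

# a valid 0/1 response; 1 is interpreted as success
yBin <- rbinom(20, size = 1, prob = 0.3)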

binomRandSample <- generateSample(1000, 10, distr = "binomial")
table(binomRandSample$response, binomRandSample$factor) %>% 
    kable()

|   | I  | D  | B  | F  | J  | H  | C  | E  | A  | G  |
|---|----|----|----|----|----|----|----|----|----|----|
| 0 | 78 | 91 | 84 | 97 | 79 | 66 | 55 | 53 | 33 | 19 |
| 1 | 6  | 8  | 8  | 10 | 20 | 26 | 55 | 53 | 75 | 84 |

binomFm <- mergeFactors(binomRandSample$response, 
                        binomRandSample$factor, 
                        family = "binomial", 
                        successive = TRUE)
mergingHistory(binomFm, showStats = TRUE) %>% 
    kable()

|    | groupA             | groupB       | model     | pvalVsFull | pvalVsPrevious |
|----|--------------------|--------------|-----------|------------|----------------|
| 1  |                    |              | -479.8386 | 1.0000     | 1.0000         |
| 2  | (D)                | (B)          | -479.8503 | 0.8782     | 0.8782         |
| 6  | (C)                | (E)          | -479.8503 | 0.9883     | 1.0000         |
| 21 | (D)(B)             | (F)          | -479.8904 | 0.9914     | 0.7771         |
| 11 | (I)                | (D)(B)(F)    | -480.0009 | 0.9882     | 0.6382         |
| 22 | (J)                | (H)          | -480.8486 | 0.8464     | 0.1929         |
| 4  | (A)                | (G)          | -482.9523 | 0.3982     | 0.0402         |
| 12 | (I)(D)(B)(F)       | (J)(H)       | -495.5196 | 0.0001     | 0.0000         |
| 23 | (C)(E)             | (A)(G)       | -510.4017 | 0.0000     | 0.0000         |
| 13 | (I)(D)(B)(F)(J)(H) | (C)(E)(A)(G) | -644.2964 | 0.0000     | 0.0000         |

plot(binomFm, colorCluster = TRUE, penalty = 7)

plot(binomFm, gicPanelColor = "red")

Survival model

If family = "survival", the response must be of class Surv.

library(survival)
data(veteran)
survResponse <- Surv(time = veteran$time, 
                     event = veteran$status)
survivalFm <- mergeFactors(response = survResponse, 
                           factor = veteran$celltype, 
                           family = "survival")
mergingHistory(survivalFm, showStats = TRUE) %>% 
    kable()

|    | groupA       | groupB       | model     | pvalVsFull | pvalVsPrevious |
|----|--------------|--------------|-----------|------------|----------------|
| 1  |              |              | -493.0247 | 1.0000     | 1.0000         |
| 11 | (smll)       | (aden)       | -493.1951 | 0.5594     | 0.5594         |
| 12 | (sqms)       | (larg)       | -493.5304 | 0.6031     | 0.4128         |
| 13 | (sqms)(larg) | (smll)(aden) | -505.4491 | 0.0000     | 0.0000         |

plot(survivalFm)

plot(survivalFm, nodesSpacing = "effects", colorCluster = TRUE)