Overview of loose.rock

André Veríssimo

2019-05-06


Set of Functions to Use in Survival Analysis and in Data Science

Collection of functions to improve the workflow in survival analysis and data science. Among its many features are the generation of balanced test/train datasets, the live retrieval of protein-coding genes from two public databases, and the generation of random matrices with a given covariance structure.

The work has been mainly supported by two grants: FCT SFRH/BD/97415/2013 and the EU Commission under the SOUND project (contract number 633974).

Install

The only prerequisite is to install the biomaRt Bioconductor package, as it cannot be installed automatically from CRAN.

All other dependencies should be installed when running the install command.

# install bioconductor
## try http:// if https:// URLs are not supported
source("https://bioconductor.org/biocLite.R")
biocLite()

# install the package
biocLite('loose.rock', dependencies = TRUE)

# use the package
library(loose.rock)
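
Note that biocLite() has been deprecated in recent Bioconductor releases (3.8 and later). A minimal alternative sketch using BiocManager, assuming loose.rock is available from CRAN:

# alternative for recent R/Bioconductor versions where biocLite() is deprecated
if (!requireNamespace("BiocManager", quietly = TRUE))
  install.packages("BiocManager")
BiocManager::install("biomaRt")

# install loose.rock from CRAN (assumed to be available there)
install.packages("loose.rock", dependencies = TRUE)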

Overview

Libraries required for this vignette

library(dplyr)

Get a current list of protein coding genes

Showing only a random sample of 15 genes.

coding.genes() %>%
  arrange(external_gene_name) %>%
  # the braces expose the piped data frame as `.` so it can be used twice
  { slice(., sample(seq(nrow(.)), 15)) } %>%
  knitr::kable()
#> Coding genes from biomaRt: 22734 
#>    Coding genes from CCDS: 19632 
#>         Unique in biomaRt: 617 
#>            Unique in CCDS: 1229 
#> -------------------------------
#>                     genes: 23238
ensembl_gene_id external_gene_name
ENSG00000274958 ITPK1
ENSG00000163958 ZDHHC19
ENSG00000053747 LAMA3
ENSG00000136404 TM6SF1
ENSG00000180185 FAHD1
ENSG00000206439 TNF
ENSG00000188368 PRR19
ENSG00000236925 LTB
ENSG00000213949 ITGA1
ENSG00000079257 LXN
ENSG00000179950 PUF60
ENSG00000077549 CAPZB
ENSG00000121903 ZSCAN20
ENSG00000235443 ZNRD1
ENSG00000157119 KLHL40
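
The result is a regular data frame, so it can be manipulated with dplyr as usual. A small sketch, assuming the two columns shown above (TNF is simply one of the sampled genes):

genes <- coding.genes()            # same call as above (queries biomaRt/CCDS live)
# look up a single gene by its external name
genes %>% filter(external_gene_name == 'TNF')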

Balanced test/train dataset

This is especially relevant in survival analysis, or with a binary outcome that has few cases of one category, where those cases need to be well distributed between the test and train datasets, or across cross-validation folds.

The example below sets aside 90% of the data for the training set. As the samples are already divided into two sets (set1 and set2), the 90% split is performed within each set and the results are then joined (with the option join.all = TRUE).

set1 <- c(T,T,T,T,T,T,T,T,F,T,T,T,T,T,T,T,T,T,F,T)
set2 <- !set1
cat('Set1\n', set1, '\n\nSet2\n', set2, '\n\nTraining / Test set using logical indices\n\n')
set.seed(1985)
balanced.train.and.test(set1, set2, train.perc = .9)
#
set1 <- which(set1)
set2 <- which(set2)
cat('##### Same sets but using numeric indices\n\n', 'Set1\n', set1, '\n\nSet2\n', set2, '\n\nTraining / Test set using numeric indices\n')
set.seed(1985)
balanced.train.and.test(set1, set2, train.perc = .9)
#
#> Set1
#>  TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE 
#> 
#> Set2
#>  FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE 
#> 
#> Training / Test set using logical indices
#> 
#> $train
#>  [1]  1  2  3  4  5  6  7  8  9 10 11 12 14 15 17 18 20
#> 
#> $test
#> [1] 13 16 19
#> 
#> ##### Same sets but using numeric indices
#> 
#>  Set1
#>  1 2 3 4 5 6 7 8 10 11 12 13 14 15 16 17 18 20 
#> 
#> Set2
#>  9 19 
#> 
#> Training / Test set using numeric indices
#> $train
#>  [1]  1  2  3  4  5  6  7  8  9 10 11 12 14 15 17 18 20
#> 
#> $test
#> [1] 13 16 19
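
The returned $train and $test elements are plain row indices, so they can be used directly to split a dataset. A minimal sketch, where xdata and outcome are hypothetical data mirroring the example above:

# hypothetical imbalanced data: 20 samples, 3 covariates, events at positions 9 and 19
xdata   <- matrix(rnorm(20 * 3), ncol = 3)
outcome <- seq(20) %in% c(9, 19)

set.seed(1985)
split <- balanced.train.and.test(which(!outcome), which(outcome), train.perc = .9)

xdata.train <- xdata[split$train, ]   # roughly 90% of each class
xdata.test  <- xdata[split$test, ]    # remaining samples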

Generate synthetic matrix with covariance

xdata1 <- gen.synth.xdata(10, 5, .2)
xdata2 <- gen.synth.xdata(10, 5, .75)
#> Using .2^|i-j| to generate co-variance matrix
#> X generated
#>             X1         X2         X3          X4           X5
#> 1   0.58312957 -1.4177825 -1.5962307 -0.36925641  0.727465210
#> 2   0.41677037  1.1936101 -0.1339503 -1.76290605 -0.007014473
#> 3   1.42448267  1.4447604 -0.9234123  0.58587981 -1.055072673
#> 4  -1.05432665 -1.2694366 -0.7239440 -0.40329452 -1.907850014
#> 5   0.08636972 -0.1037405 -0.1262496  0.50288158  1.495560595
#> 6  -0.05170387  0.2428157  1.4867422 -0.65335638 -0.790914639
#> 7   1.26205122 -0.3761988  1.2515329 -0.05340056  0.339303448
#> 8  -0.91086475  1.0808547  0.6094629  1.50616045  0.683470155
#> 9  -0.04990854 -0.6898217  0.7417349  1.34572271 -0.007549189
#> 10 -1.70599973 -0.1050608 -0.5856860 -0.69843065  0.522601581
#> cov(X)
#>       X1    X2   X3    X4     X5
#> 1 1.0000 0.200 0.04 0.008 0.0016
#> 2 0.2000 1.000 0.20 0.040 0.0080
#> 3 0.0400 0.200 1.00 0.200 0.0400
#> 4 0.0080 0.040 0.20 1.000 0.2000
#> 5 0.0016 0.008 0.04 0.200 1.0000

#> Using .75^|i-j| to generate co-variance matrix (plotting correlation)
#> X generated
#>            X1          X2         X3          X4          X5
#> 1  -0.5352761  0.01866608 -0.3626905 -0.79410109 -1.23133292
#> 2  -1.3405531 -1.84315073 -1.2186326 -1.21589142 -0.08130708
#> 3   0.1230853 -0.15684536  0.5826062  1.11404563  0.92246564
#> 4   0.3834488  0.27160718  0.4574595 -0.23430815  0.92046339
#> 5  -1.6754005 -0.13201392  0.1192358  0.69832597  0.37141739
#> 6  -0.2885482 -0.74204784 -1.8912859 -0.90740440 -0.70642514
#> 7   0.2785362 -0.59850263 -0.5850362 -0.74983768 -1.57447837
#> 8   1.4110025  1.00281272  1.0552661  1.92476214  1.47005647
#> 9   1.2933359  1.86340386  0.7103639  0.04042754  0.45349389
#> 10  0.3503693  0.31607064  1.1327139  0.12398146 -0.54435325
#> cov(X)
#>          X1       X2     X3       X4        X5
#> 1 1.0000000 0.750000 0.5625 0.421875 0.3164062
#> 2 0.7500000 1.000000 0.7500 0.562500 0.4218750
#> 3 0.5625000 0.750000 1.0000 0.750000 0.5625000
#> 4 0.4218750 0.562500 0.7500 1.000000 0.7500000
#> 5 0.3164062 0.421875 0.5625 0.750000 1.0000000
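
The printed cov(X) above suggests that the sample covariance is made to match rho^|i-j| exactly. A quick sketch to check this against the theoretical matrix, assuming that interpretation:

# theoretical covariance matrix rho^|i-j| for rho = .2 and 5 variables
rho <- .2
theoretical <- outer(1:5, 1:5, function(i, j) rho^abs(i - j))
all.equal(cov(xdata1), theoretical, check.attributes = FALSE)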

Save in cache

run.cache() uses a cache to save and retrieve results. The cache key is automatically built from the function's arguments and source code, so that if any of those change, the result is recalculated and the cache is regenerated.

Caution: files are not deleted, so the cache directory can become rather big.

Set a temporary directory to save all caches (optional)

base.dir(tempdir())
#> [1] "/tmp/RtmpJwaylZ"

Run sum function twice

a <- run.cache(sum, 1, 2)
#> Saving in cache: /tmp/RtmpJwaylZ/561a/cache-generic_cache-H_561a43a3af7b265aed512a7995a46f89c382f78fdba4170e569495892b0076ba.RData
b <- run.cache(sum, 1, 2)
#> Loading from cache (not calculating): /tmp/RtmpJwaylZ/561a/cache-generic_cache-H_561a43a3af7b265aed512a7995a46f89c382f78fdba4170e569495892b0076ba.RData
all(a == b)
#> [1] TRUE
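
Any function can be cached this way, including user-defined ones. A small sketch (my.sum is a hypothetical slow function) showing that changing an argument produces a new cache entry:

# the first call calculates and saves, the second is served from cache,
# the third has different arguments and therefore creates a new entry
my.sum <- function(x, y) { Sys.sleep(1); x + y }
run.cache(my.sum, 1, 2)
run.cache(my.sum, 1, 2)
run.cache(my.sum, 1, 3)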

Run the rnorm function with explicit seeds (otherwise the second call would simply return the cached result, i.e. the same random numbers).

a <- run.cache(rnorm, 5, seed = 1985)
#> Saving in cache: /tmp/RtmpJwaylZ/9636/cache-generic_cache-H_96360922babcb9eeb480fabc9811eab598abaf087c10f3ef49e9093607089531.RData
b <- run.cache(rnorm, 5, seed = 2000)
#> Saving in cache: /tmp/RtmpJwaylZ/ab76/cache-generic_cache-H_ab768ab59eab0e3848e3f5b8c133baaa381eb1e6d5fda439f10847d911b0ace7.RData
all(a == b)
#> [1] FALSE
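
Since the seed is part of the cache key (note the different hashes above), repeating a call with the same seed is served from the cache. A sketch following directly from the example above:

# same call and same seed as `a` above -> loaded from cache, identical values
c <- run.cache(rnorm, 5, seed = 1985)
all(a == c)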

Proper

One such helper is proper(), a function that capitalizes each word of a string.

x <- "OnE oF sUcH iS a proPer function that capitalizes a string."
proper(x)
#> [1] "One Of Such Is A Proper Function That Capitalizes A String."

Custom colors and symbols

my.colors() and my.symbols() can be used to improve plot readability.

xdata <- -10:10
plot(xdata, 1/10 * xdata * xdata + 1, type = "l", pch = my.symbols(1), col = my.colors(1),
     cex = .9, xlab = '', ylab = '', ylim = c(0, 20))
grid(NULL, NULL, lwd = 2) # add a grid at the default tick positions
for (ix in 2:22) {
  points(xdata, 1/10 * xdata * xdata + ix, pch = my.symbols(ix), col = my.colors(ix), cex = .9)
}