
Introduction to rSVDdpd

library(rsvddpd)
library(microbenchmark)
library(matrixStats)
library(pcaMethods)
#> Loading required package: Biobase
#> Loading required package: BiocGenerics
#> Loading required package: parallel
#> 
#> Attaching package: 'BiocGenerics'
#> The following objects are masked from 'package:parallel':
#> 
#>     clusterApply, clusterApplyLB, clusterCall, clusterEvalQ,
#>     clusterExport, clusterMap, parApply, parCapply, parLapply,
#>     parLapplyLB, parRapply, parSapply, parSapplyLB
#> The following objects are masked from 'package:stats':
#> 
#>     IQR, mad, sd, var, xtabs
#> The following objects are masked from 'package:base':
#> 
#>     Filter, Find, Map, Position, Reduce, anyDuplicated, append,
#>     as.data.frame, basename, cbind, colnames, dirname, do.call,
#>     duplicated, eval, evalq, get, grep, grepl, intersect, is.unsorted,
#>     lapply, mapply, match, mget, order, paste, pmax, pmax.int, pmin,
#>     pmin.int, rank, rbind, rownames, sapply, setdiff, sort, table,
#>     tapply, union, unique, unsplit, which.max, which.min
#> Welcome to Bioconductor
#> 
#>     Vignettes contain introductory material; view with
#>     'browseVignettes()'. To cite Bioconductor, see
#>     'citation("Biobase")', and for packages 'citation("pkgname")'.
#> 
#> Attaching package: 'Biobase'
#> The following objects are masked from 'package:matrixStats':
#> 
#>     anyMissing, rowMedians
#> 
#> Attaching package: 'pcaMethods'
#> The following object is masked from 'package:stats':
#> 
#>     loadings

Introduction

Singular Value Decomposition (SVD) is a popular technique used in a wide range of applications: bioinformatics, image and signal processing, textual analysis, dimension reduction, and more.

However, the data matrix to which SVD is applied often contains outliers that are not in accord with the data generating mechanism. In such cases the usual SVD performs poorly, in the sense that the singular values and the left and right singular vectors can be very different from those that would have been obtained had the data matrix been free of outliers. Since data in practice is rarely free of outliers, a robust version of SVD is badly needed.

For illustration, consider the simple \(4\times 3\) matrix whose elements run from \(1\) to \(12\).

X <- matrix(1:12, nrow = 4, ncol = 3, byrow = TRUE)
X
#>      [,1] [,2] [,3]
#> [1,]    1    2    3
#> [2,]    4    5    6
#> [3,]    7    8    9
#> [4,]   10   11   12

Its singular value decomposition yields singular values of approximately \(25.5\), \(1.3\) and \(0\).

svd(X)
#> $d
#> [1] 2.546241e+01 1.290662e+00 2.503310e-15
#> 
#> $u
#>            [,1]        [,2]       [,3]
#> [1,] -0.1408767 -0.82471435  0.5418041
#> [2,] -0.3439463 -0.42626394 -0.6625522
#> [3,] -0.5470159 -0.02781353 -0.3003078
#> [4,] -0.7500855  0.37063688  0.4210560
#> 
#> $v
#>            [,1]        [,2]       [,3]
#> [1,] -0.5045331  0.76077568 -0.4082483
#> [2,] -0.5745157  0.05714052  0.8164966
#> [3,] -0.6444983 -0.64649464 -0.4082483

Now, note what happens when we contaminate a single entry of the matrix by a large outlying value.

X[2, 2] <- 100
svd(X)
#> $d
#> [1] 101.431313  18.313121   1.148165
#> 
#> $u
#>             [,1]       [,2]          [,3]
#> [1,] -0.02260136  0.1500488  0.9516017926
#> [2,] -0.98805726 -0.1540849  0.0008289283
#> [3,] -0.08969187  0.5758535  0.1322532569
#> [4,] -0.12323712  0.7887559 -0.2774210109
#> 
#> $v
#>             [,1]        [,2]         [,3]
#> [1,] -0.05752705  0.62535728 -0.778215212
#> [2,] -0.99499917 -0.09966888 -0.006539692
#> [3,] -0.08165348  0.77394728  0.627963626

All the singular values are now very different, namely \(101.4\), \(18.3\) and \(1.15\). In practical applications, where \(X\) represents a data matrix, this can pose a serious problem.

On the other hand, the rSVDdpd function from the rsvddpd package enables us to mitigate the effect of this outlier.

rSVDdpd(X, alpha = 0.3)
#> $d
#> [1] 25.4729767  1.2901311  0.0280741
#> 
#> $u
#>           [,1]        [,2]       [,3]
#> [1,] 0.1408240 -0.82503969  0.4775454
#> [2,] 0.3450057 -0.42538876 -0.8366693
#> [3,] 0.5467891 -0.02773026  0.2395712
#> [4,] 0.7497742  0.37092445  0.1205841
#> 
#> $v
#>           [,1]        [,2]       [,3]
#> [1,] 0.5043113  0.76325808  0.4038653
#> [2,] 0.5749968  0.05211381 -0.8164942
#> [3,] 0.6442428 -0.64398856  0.4125894

Since the function performs some randomized initialization under the hood, the result might not be exactly the same when you run the code again. However, the singular values should be quite close to those of the original \(X\) before we added the outlier.
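As a quick, illustrative sanity check, we can compare the robust singular values of the contaminated matrix against the classical singular values of the clean one (X_clean below is just the uncontaminated matrix rebuilt from scratch):

X_clean <- matrix(1:12, nrow = 4, ncol = 3, byrow = TRUE)  # X before contamination
round(svd(X_clean)$d, 2)             # classical SVD of the clean matrix
round(rSVDdpd(X, alpha = 0.3)$d, 2)  # robust SVD of the contaminated matrix

The first two robust singular values should agree with the clean ones to roughly two decimal places.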


Theoretical Background

Let us take a look at what rSVDdpd does under the hood. Recall that the singular value decomposition (SVD) of a matrix \(X\) splits it as

\[X_{n\times p} = U_{n \times r} D_{r\times r}V_{p\times r}^T\]

Here, \(r\) is the rank of the matrix \(X\), \(D\) is a diagonal matrix with non-negative real entries, and \(U, V\) are orthogonal matrices. Since we usually observe the data matrix \(X\) with errors, the model ends up being \(X = UDV^T + \epsilon\), where \(\epsilon\) is the error matrix.
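This factorization is easy to verify numerically, for instance with the (contaminated) \(X\) from above:

s <- svd(X)
all.equal(X, s$u %*% diag(s$d) %*% t(s$v))  # TRUE: X is recovered up to rounding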

For simplicity, we consider \(r = 1\), i.e. \(X \approx \lambda ab^T\), where \(a, b\) are vectors of appropriate dimensions. The usual SVD can be viewed as minimizing \(\sum_{i, j} (X_{ij} - \lambda a_i b_j)^2\) with respect to the choices of the \(a_i\)'s, \(b_j\)'s and \(\lambda\). This \(L_2\) norm is highly susceptible to outliers, hence people have generally tried to minimize an \(L_1\) norm instead.

Here, we instead use the density power divergence (popular in robust estimation as a bridge between robustness and efficiency) to quantify the size of the errors. In particular, we try to minimize the function

\[ H = \int \phi\left( \dfrac{x - \lambda a_ib_j}{\sigma} \right)^{(1 + \alpha)}dx - \dfrac{1}{np} \sum_{i=1}^{n} \sum_{j = 1}^{p} \phi\left( \dfrac{X_{ij} - \lambda a_ib_j}{\sigma} \right)^{\alpha} \]

with respect to the unknowns \(\lambda, a_i, b_j\) and \(\sigma^2\), where \(\phi(\cdot)\) is the standard normal density function. The above problem is non-convex, but it becomes convex when either the \(a_i\)'s or the \(b_j\)'s are held fixed; we therefore iterate between the two situations, fixing the \(a_i\)'s (respectively the \(b_j\)'s) and minimizing over the remaining quantities.
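To make the alternating scheme concrete, here is a minimal sketch for the \(\alpha = 0\) (plain \(L_2\)) case, where each half-step has a closed-form least-squares solution. This is purely illustrative and not the package's implementation; rSVDdpd performs the analogous alternation on \(H\) itself.

# Alternating rank-one fit under the L2 loss (illustrative sketch only)
rank1_alternate <- function(X, iters = 50) {
  b <- rnorm(ncol(X))
  b <- b / sqrt(sum(b^2))        # random unit-norm initialization
  for (k in seq_len(iters)) {
    a <- as.vector(X %*% b)      # b fixed: least-squares update of a
    a <- a / sqrt(sum(a^2))
    b <- as.vector(t(X) %*% a)   # a fixed: least-squares update of b
    lambda <- sqrt(sum(b^2))
    b <- b / lambda
  }
  list(d = lambda, u = a, v = b)
}
rank1_alternate(X)$d  # close to svd(X)$d[1]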


Features

Underflow and Overflow

Because of the use of the standard normal density and exponential functions, a naive implementation of the algorithm suffers from underflow and overflow, and the estimates tend to become NaN or Inf in some iterations when the data matrix contains very large or very small values. To deal with this, the rSVDdpd function first scales all elements of the data matrix to a suitable range and then performs the robust SVD algorithm. Finally, the scaling factor is adjusted to recover the original singular values.
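The rescaling is harmless because singular values scale linearly: \(cX\) has singular values \(cD\) with the same \(U\) and \(V\). A minimal sketch of the idea (the exact scaling rule used internally is an assumption here):

s <- max(abs(X))   # a convenient scale factor (assumed choice)
fit <- svd(X / s)  # decompose the rescaled matrix
fit$d * s          # singular values scale back; u and v are unchanged

rSVDdpd applies this rescaling transparently, as the following calls demonstrate.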

rSVDdpd(X * 1e6, alpha  = 0.3)
#> $d
#> [1] 25472980.29  1290130.68    28091.52
#> 
#> $u
#>           [,1]        [,2]       [,3]
#> [1,] 0.1408240  0.82503991  0.4775451
#> [2,] 0.3450068  0.42538806 -0.8366692
#> [3,] 0.5467889  0.02773012  0.2395718
#> [4,] 0.7497739 -0.37092478  0.1205851
#> 
#> $v
#>           [,1]        [,2]       [,3]
#> [1,] 0.5043113 -0.76326003  0.4038617
#> [2,] 0.5749969 -0.05210988 -0.8164944
#> [3,] 0.6442427  0.64398656  0.4125926
rSVDdpd(X * 1e-6, alpha = 0.3)
#> $d
#> [1] 2.547297e-05 1.290132e-06 2.804355e-08
#> 
#> $u
#>           [,1]        [,2]       [,3]
#> [1,] 0.1408241  0.82503930  0.4775461
#> [2,] 0.3450038  0.42538999 -0.8366694
#> [3,] 0.5467895  0.02773051  0.2395703
#> [4,] 0.7497747 -0.37092388  0.1205824
#> 
#> $v
#>           [,1]       [,2]       [,3]
#> [1,] 0.5043115 -0.7632546  0.4038716
#> [2,] 0.5749965 -0.0521207 -0.8164940
#> [3,] 0.6442429  0.6439921  0.4125837

As can be seen, rSVDdpd handles very large and very small entries nicely.

Permutation Invariance

Y <- X[, c(3, 1, 2)]
rSVDdpd(Y, alpha = 0.3)
#> $d
#> [1] 25.4729806  1.2901307  0.0280928
#> 
#> $u
#>           [,1]        [,2]       [,3]
#> [1,] 0.1408240 -0.82503992  0.4775450
#> [2,] 0.3450069 -0.42538801 -0.8366692
#> [3,] 0.5467889 -0.02773011  0.2395718
#> [4,] 0.7497738  0.37092480  0.1205852
#> 
#> $v
#>           [,1]        [,2]       [,3]
#> [1,] 0.6442427 -0.64398641  0.4125928
#> [2,] 0.5043112  0.76326017  0.4038615
#> [3,] 0.5749970  0.05210959 -0.8164944

As expected, the singular values do not change when the columns of the data matrix are permuted; the right singular vectors are simply permuted in the same way as the columns.
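We can check this directly: the rows of \(v\) for Y should be the rows of \(v\) for X, reordered by the same column permutation (up to the sign ambiguity inherent in any SVD):

fitX <- rSVDdpd(X, alpha = 0.3)
fitY <- rSVDdpd(Y, alpha = 0.3)
# compare up to sign; the tolerance allows for the randomized initialization
all.equal(abs(fitY$v), abs(fitX$v[c(3, 1, 2), ]), tolerance = 1e-3)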

Orthogonality of Left and Right Singular Vectors

An important property of SVD is that the matrices of left and right singular vectors are orthogonal. This property can be verified very easily.

crossprod(rSVDdpd(X, alpha = 0.3)$u)
#>               [,1]         [,2]          [,3]
#> [1,]  1.000000e+00 5.551115e-17 -4.163336e-17
#> [2,]  5.551115e-17 1.000000e+00  9.714451e-17
#> [3,] -4.163336e-17 9.714451e-17  1.000000e+00

As seen above, the off-diagonal entries are essentially zero. This is ensured by introducing a Gram-Schmidt orthogonalization step between successive iterations of the algorithm.
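For reference, here is a minimal sketch of what such an orthogonalization step looks like (illustrative only; the package's internal step may differ in detail):

gram_schmidt <- function(A) {
  Q <- A
  for (j in seq_len(ncol(A))) {
    for (k in seq_len(j - 1)) {
      Q[, j] <- Q[, j] - sum(Q[, j] * Q[, k]) * Q[, k]  # remove component along Q[, k]
    }
    Q[, j] <- Q[, j] / sqrt(sum(Q[, j]^2))              # normalize the column
  }
  Q
}
crossprod(gram_schmidt(matrix(rnorm(12), 4, 3)))  # approximately the identity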

Effect of Robustness Parameter

In the presence of outliers with large deviations, the performance of rSVDdpd is fairly robust to the choice of \(\alpha\), the robustness parameter. With \(\alpha = 0\), rSVDdpd corresponds to the usual svd function from the base package. As \(\alpha\) increases the robustness increases, i.e. even smaller deviations do not affect the singular values, but the variance of the estimators generally increases as well.
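For example, one can inspect how the leading singular value of the contaminated \(X\) changes across a few values of \(\alpha\):

sapply(c(0, 0.25, 0.5, 0.75, 1),
       function(a) rSVDdpd(X, alpha = a)$d[1])
# alpha = 0 should reproduce the outlier-inflated value (about 101), while
# positive alpha should pull it back towards the clean value (about 25)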

To demonstrate the effect of \(\alpha\) on running time, the microbenchmark package will be used.

microbenchmark::microbenchmark(svd(X), rSVDdpd(X, alpha = 0), rSVDdpd(X, alpha = 0.25), 
                               rSVDdpd(X, alpha = 0.5), rSVDdpd(X, alpha = 0.75), 
                               rSVDdpd(X, alpha = 1), times = 30)
#> Unit: microseconds
#>                      expr   min    lq     mean median    uq    max neval
#>                    svd(X)  42.8  63.5 100.9700  87.75 123.0  304.1    30
#>     rSVDdpd(X, alpha = 0) 136.4 172.7 292.9767 286.45 365.6 1002.5    30
#>  rSVDdpd(X, alpha = 0.25) 191.9 234.1 332.5533 264.85 453.0  590.8    30
#>   rSVDdpd(X, alpha = 0.5) 207.6 254.2 341.4333 302.95 442.5  566.5    30
#>  rSVDdpd(X, alpha = 0.75) 220.0 253.0 377.1700 323.40 486.6  649.6    30
#>     rSVDdpd(X, alpha = 1) 205.4 276.4 390.7900 380.35 474.9  868.0    30

Therefore, the execution time slightly increases with higher \(\alpha\).


Comparison with existing packages

To compare the performance of the usual SVD algorithm with that of rSVDdpd, one can use the simSVD function, which simulates data matrices from a model and then estimates the bias and MSE of the resulting estimates by a Monte Carlo approach.

First, we create the true SVD components, with singular vectors taken from the coefficients of orthogonal polynomials.

U <- as.matrix(stats::contr.poly(10)[, 1:3])
V <- as.matrix(stats::contr.poly(4)[, 1:3])
trueSVD <- list(d = c(10, 5, 3), u = U, v = V)  # true svd of the data matrix
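If the underlying true data matrix is needed explicitly, it is just the product of these components:

X_true <- U %*% diag(trueSVD$d) %*% t(V)  # rank-3 true matrix
dim(X_true)                               # 10 x 4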

We can now call the simSVD function to see the performance of the usual SVD algorithm under contamination by outliers.

res <- simSVD(trueSVD, svdfun = svd, B = 100, seed = 2021, outlier = TRUE, out_value = 25, tau = 0.9)
res
#> $Bias
#> [1] 28.20376 21.39073 11.20089
#> 
#> $MSE
#> [1] 845.4570 527.2264 208.7107
#> 
#> $Variance
#> [1] 50.00523 69.66283 83.25073
#> 
#> $Left
#> [1] 0.6713628 0.7145263 0.7491816
#> 
#> $Right
#> [1] 0.5724405 0.5660693 0.6095304

The following is the performance of the robustSvd function from the pcaMethods package.

res <- simSVD(trueSVD, svdfun = pcaMethods::robustSvd, B = 100, seed = 2021, outlier = TRUE, out_value = 25, tau = 0.9)
res
#> $Bias
#> [1] 16.29930 14.78907 16.33431
#> 
#> $MSE
#> [1] 477.2494 378.6364 387.5905
#> 
#> $Variance
#> [1] 211.5821 159.9197 120.7808
#> 
#> $Left
#> [1] 0.4711769 0.6602616 0.7136883
#> 
#> $Right
#> [1] 0.4039576 0.5323604 0.5199647

Now we compare the rSVDdpd function's performance with the other SVD implementations.

res <- simSVD(trueSVD, svdfun = rSVDdpd, B = 100, seed = 2021, outlier = TRUE, out_value = 25, tau = 0.9, alpha = 0.25)
res
#> $Bias
#> [1]  3.47466190  0.05686266 -0.22769722
#> 
#> $MSE
#> [1] 104.9849391   0.6843969   0.2378147
#> 
#> $Variance
#> [1] 92.9116638  0.6811636  0.1859687
#> 
#> $Left
#> [1] 0.08385757 0.09691123 0.10080858
#> 
#> $Right
#> [1] 0.05255743 0.07546599 0.08511308

And with \(\alpha = 0.75\), we have:

res <- simSVD(trueSVD, svdfun = rSVDdpd, B = 100, seed = 2021, outlier = TRUE, out_value = 25, tau = 0.9, alpha = 0.75)
res
#> $Bias
#> [1]  0.1903417 -0.1843705 -0.2716241
#> 
#> $MSE
#> [1] 0.4822773 0.1357128 0.1671783
#> 
#> $Variance
#> [1] 0.44604734 0.10172035 0.09339867
#> 
#> $Left
#> [1] 0.01421574 0.03210005 0.04942867
#> 
#> $Right
#> [1] 0.006691835 0.020217527 0.029774606

As can be seen, the bias and MSE are much smaller for the rSVDdpd algorithm.
