
Eigenvalues and Eigenvectors: Properties

Michael Friendly

2024-10-02

Setup

This vignette uses an example of a \(3 \times 3\) matrix to illustrate some properties of eigenvalues and eigenvectors. We could consider this to be the variance-covariance matrix of three variables, but the main point is that the matrix is square and symmetric, which guarantees that the eigenvalues \(\lambda_i\) are real numbers. Covariance matrices are also positive semi-definite, meaning that their eigenvalues are non-negative, \(\lambda_i \ge 0\).

A <- matrix(c(13, -4, 2, -4, 11, -2, 2, -2, 8), 3, 3, byrow=TRUE)
A
##      [,1] [,2] [,3]
## [1,]   13   -4    2
## [2,]   -4   11   -2
## [3,]    2   -2    8

Get the eigenvalues and eigenvectors using eigen(); this returns a named list, with eigenvalues named values and eigenvectors named vectors.

ev <- eigen(A)
# extract components
(values <- ev$values)
## [1] 17  8  7
(vectors <- ev$vectors)
##         [,1]    [,2]   [,3]
## [1,]  0.7454  0.6667 0.0000
## [2,] -0.5963  0.6667 0.4472
## [3,]  0.2981 -0.3333 0.8944

The eigenvalues are always returned in decreasing order, and the columns of vectors correspond, in the same order, to the elements of values.

Properties of eigenvalues and eigenvectors

The following steps illustrate the main properties of eigenvalues and eigenvectors. We use the notation \(A = V \Lambda V'\) to express the decomposition of the matrix \(A\), where \(V\) is the matrix of eigenvectors and \(\Lambda = diag(\lambda_1, \lambda_2, \dots, \lambda_p)\) is the diagonal matrix composed of the ordered eigenvalues, \(\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p\).
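This decomposition can be verified numerically by reassembling \(A\) from \(V\) and \(\Lambda\); a minimal check in base R (using the same matrix A as above):

```r
# Verify the spectral decomposition A = V Lambda V' numerically
A <- matrix(c(13, -4, 2, -4, 11, -2, 2, -2, 8), 3, 3, byrow=TRUE)
ev <- eigen(A)
V <- ev$vectors                 # matrix of eigenvectors (columns)
Lambda <- diag(ev$values)       # diagonal matrix of eigenvalues
# V %*% Lambda %*% t(V) recovers A, up to floating-point error
all.equal(V %*% Lambda %*% t(V), A)
## [1] TRUE
```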

  1. Orthogonality: The eigenvectors of a symmetric matrix are orthogonal, \(V' V = I\). zapsmall() is handy for cleaning up tiny values.
crossprod(vectors)
##           [,1]      [,2]      [,3]
## [1,] 1.000e+00 3.053e-16 5.551e-17
## [2,] 3.053e-16 1.000e+00 0.000e+00
## [3,] 5.551e-17 0.000e+00 1.000e+00
zapsmall(crossprod(vectors))
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    1    0
## [3,]    0    0    1
  2. trace(A) = sum of eigenvalues, \(\sum \lambda_i\).
library(matlib)   # use the matlib package
tr(A)
## [1] 32
sum(values)
## [1] 32
  3. sum of squared elements of A = sum of squared eigenvalues, \(\sum \lambda_i^2\).
sum(A^2)
## [1] 402
sum(values^2)
## [1] 402
  4. determinant = product of eigenvalues, \(\det(A) = \prod \lambda_i\). This means that the determinant will be zero if any \(\lambda_i = 0\).
det(A)
## [1] 952
prod(values)
## [1] 952
  5. rank = number of non-zero eigenvalues.
R(A)
## [1] 3
sum(values != 0)
## [1] 3
  6. eigenvalues of the inverse \(A^{-1}\) = 1/eigenvalues of A. The eigenvectors are the same, except for order, because eigenvalues are returned in decreasing order.
AI <- solve(A)
AI
##          [,1]    [,2]     [,3]
## [1,]  0.08824 0.02941 -0.01471
## [2,]  0.02941 0.10504  0.01891
## [3,] -0.01471 0.01891  0.13340
eigen(AI)$values
## [1] 0.14286 0.12500 0.05882
eigen(AI)$vectors
##        [,1]    [,2]    [,3]
## [1,] 0.0000  0.6667  0.7454
## [2,] 0.4472  0.6667 -0.5963
## [3,] 0.8944 -0.3333  0.2981
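The reciprocal relation can be confirmed directly: taking 1/values and re-sorting into decreasing order reproduces the eigenvalues of the inverse. A quick check (again defining A and values as above):

```r
# Eigenvalues of solve(A) are the reciprocals of those of A,
# re-sorted into decreasing order
A <- matrix(c(13, -4, 2, -4, 11, -2, 2, -2, 8), 3, 3, byrow=TRUE)
values <- eigen(A)$values       # 17, 8, 7
AI <- solve(A)
all.equal(eigen(AI)$values, sort(1/values, decreasing = TRUE))
## [1] TRUE
```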
  7. There are similar relations for other powers of a matrix, \(A^2, \dots, A^p\): values(mpower(A,p)) = values(A)^p, where mpower(A,2) = A %*% A, etc.
eigen(A %*% A)
## eigen() decomposition
## $values
## [1] 289  64  49
## 
## $vectors
##         [,1]    [,2]   [,3]
## [1,]  0.7454  0.6667 0.0000
## [2,] -0.5963  0.6667 0.4472
## [3,]  0.2981 -0.3333 0.8944
eigen(A %*% A %*% A)$values
## [1] 4913  512  343
eigen(mpower(A, 4))$values
## [1] 83521  4096  2401
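The power relation can also be checked in base R alone, without matlib::mpower, by multiplying out the power explicitly and comparing against values^p:

```r
# Eigenvalues of the fourth matrix power equal the fourth powers
# of the eigenvalues of A (base R only)
A <- matrix(c(13, -4, 2, -4, 11, -2, 2, -2, 8), 3, 3, byrow=TRUE)
values <- eigen(A)$values
A4 <- A %*% A %*% A %*% A       # mpower(A, 4) written out
all.equal(eigen(A4)$values, values^4)
## [1] TRUE
```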
