library(HypergeoMat)
Let \((a_1, \ldots, a_p)\) and \((b_1, \ldots, b_q)\) be two vectors of real or complex numbers, possibly empty, \(\alpha > 0\), and \(X\) a real symmetric or a complex Hermitian matrix. The corresponding hypergeometric function of a matrix argument is defined by \[ {}_pF_q^{(\alpha)} \left(\begin{matrix} a_1, \ldots, a_p \\ b_1, \ldots, b_q\end{matrix}; X\right) = \sum_{k=0}^\infty\sum_{\kappa \vdash k} \frac{{(a_1)}_\kappa^{(\alpha)} \cdots {(a_p)}_\kappa^{(\alpha)}} {{(b_1)}_\kappa^{(\alpha)} \cdots {(b_q)}_\kappa^{(\alpha)}} \frac{C_\kappa^{(\alpha)}(X)}{k!}. \] The inner sum is over the integer partitions \(\kappa\) of \(k\) (which we also denote by \(|\kappa| = k\)).

The symbol \({(\cdot)}_\kappa^{(\alpha)}\) is the generalized Pochhammer symbol, defined by \[ {(c)}_\kappa^{(\alpha)} = \prod_{i=1}^\ell\prod_{j=1}^{\kappa_i} \left(c - \frac{i-1}{\alpha} + j-1\right) \] when \(\kappa = (\kappa_1, \ldots, \kappa_\ell)\).

Finally, \(C_\kappa^{(\alpha)}\) is a Jack function. Given an integer partition \(\kappa\), \(\alpha > 0\), and a real symmetric or complex Hermitian matrix \(X\) of order \(n\), the Jack function \[ C_\kappa^{(\alpha)}(X) = C_\kappa^{(\alpha)}(x_1, \ldots, x_n) \] is a symmetric homogeneous polynomial of degree \(|\kappa|\) in the eigenvalues \(x_1, \ldots, x_n\) of \(X\).
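For illustration, the generalized Pochhammer symbol is straightforward to compute from this definition. Here is a minimal R sketch; the helper gpochhammer is not part of the HypergeoMat API, it merely transcribes the formula above.

gpochhammer <- function(c, kappa, alpha) {
  # product over the parts kappa_i of (c - (i-1)/alpha + j - 1), j = 1, ..., kappa_i
  out <- 1
  for (i in seq_along(kappa)) {
    j <- seq_len(kappa[i])
    out <- out * prod(c - (i - 1) / alpha + j - 1)
  }
  out
}
gpochhammer(3, kappa = c(2, 1), alpha = 2)  # (3*4) * 2.5 = 30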
The series defining the hypergeometric function does not always converge; see the references for a discussion of its convergence.
The inner sum in the definition of the hypergeometric function is over all partitions \(\kappa \vdash k\), but in fact \(C_\kappa^{(\alpha)}(X) = 0\) whenever \(\ell(\kappa)\), the number of non-zero parts of \(\kappa\), is strictly greater than \(n\); only partitions with at most \(n\) parts contribute to the sum.
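As a quick illustration of this restriction, one can count the contributing partitions with the partitions package (which is not a dependency of HypergeoMat; its parts function is assumed here):

library(partitions)
P <- parts(4)              # all partitions of k = 4, one partition per column
ell <- colSums(P > 0)      # number of non-zero parts of each partition
sum(ell <= 2)              # 3 of the 5 partitions of 4 contribute when n = 2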
For \(\alpha=1\), \(C_\kappa^{(\alpha)}\) is a Schur polynomial, and for \(\alpha = 2\) it is a zonal polynomial. In random matrix theory, the hypergeometric function appears for \(\alpha=2\), and \(\alpha\) is then omitted from the notation, implicitly assumed to be \(2\). This is the default value of \(\alpha\) in the HypergeoMat package.
Koev and Edelman (2006) provided an efficient algorithm for the evaluation of the truncated series \[ {}^{m}_{p}\!F_q^{(\alpha)} \left(\begin{matrix} a_1, \ldots, a_p \\ b_1, \ldots, b_q\end{matrix}; X\right) = \sum_{k=0}^m\sum_{\kappa \vdash k} \frac{{(a_1)}_\kappa^{(\alpha)} \cdots {(a_p)}_\kappa^{(\alpha)}} {{(b_1)}_\kappa^{(\alpha)} \cdots {(b_q)}_\kappa^{(\alpha)}} \frac{C_\kappa^{(\alpha)}(X)}{k!}. \]
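Because the algorithm evaluates a truncated series, a practical sanity check is to increase the truncation order and verify that the returned value stabilizes. The following sketch uses the package's hypergeomPFQ function, whose arguments are described just below; the eigenvalues 0.5 and -0.3 are an arbitrary choice.

sapply(c(5, 10, 15, 20), function(m) {
  hypergeomPFQ(m = m, a = c(3, 4), b = c(5, 6, 7), x = c(0.5, -0.3))
})
# successive values should agree to more and more digits when the series converges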
In the HypergeoMat package, \(m\) is called the truncation weight of the summation (because \(|\kappa|\) is called the weight of \(\kappa\)), the vector \((a_1, \ldots, a_p)\) is called the vector of upper parameters, and the vector \((b_1, \ldots, b_q)\) is called the vector of lower parameters. The user can enter either the matrix \(X\) or the vector \((x_1, \ldots, x_n)\) of the eigenvalues of \(X\).
For example, to compute \[ {}^{15}_{2}\!F_3^{(2)} \left(\begin{matrix} 3, 4 \\ 5, 6, 7\end{matrix}; \begin{pmatrix} 0.1 & 0.4 \\ 0.4 & 0.1 \end{pmatrix}\right) \] you have to enter (recall that \(\alpha=2\) is the default value)
hypergeomPFQ(m = 15, a = c(3,4), b = c(5,6,7), x = cbind(c(0.1,0.4),c(0.4,0.1)))
#> [1] 1.011526
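To evaluate the same function for another value of \(\alpha\), for instance the Schur case \(\alpha = 1\) mentioned above, pass it explicitly; we assume here that the corresponding argument of hypergeomPFQ is named alpha, with default value 2.

hypergeomPFQ(m = 15, a = c(3,4), b = c(5,6,7), x = cbind(c(0.1,0.4),c(0.4,0.1)),
             alpha = 1)  # Schur case (argument name assumed)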
We said that the hypergeometric function is defined for a real symmetric matrix or a complex Hermitian matrix \(X\). However, we do not impose this restriction in the HypergeoMat package: the user can enter any real or complex square matrix, or a real or complex vector of eigenvalues.
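For instance, the matrix of the example above has eigenvalues 0.5 and -0.3, so passing these eigenvalues directly should return the same value. Moreover, with a single eigenvalue the hypergeometric function of a matrix argument reduces to the classical \({}_pF_q\) series, whatever \(\alpha\); the sketch below checks this against a hand-rolled truncated sum, assuming a length-one vector of eigenvalues is accepted.

# same value as the matrix example above, computed from its eigenvalues
hypergeomPFQ(m = 15, a = c(3,4), b = c(5,6,7), x = c(0.5, -0.3))

# with a single eigenvalue, compare to the classical 2F3 series truncated at m = 15
x <- 0.2
poch <- function(c, k) if (k == 0) 1 else prod(c + 0:(k - 1))
classical <- sum(sapply(0:15, function(k) {
  poch(3, k) * poch(4, k) / (poch(5, k) * poch(6, k) * poch(7, k)) * x^k / factorial(k)
}))
hypergeomPFQ(m = 15, a = c(3,4), b = c(5,6,7), x = x)  # should match `classical`
classical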
Plamen Koev and Alan Edelman. The Efficient Evaluation of the Hypergeometric Function of a Matrix Argument. Mathematics of Computation, 75, 833-846, 2006.
Robb Muirhead. Aspects of Multivariate Statistical Theory. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, New York, 1982.
A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman and Hall, 1999.