The Wishart distribution on a random positive-definite matrix \({\boldsymbol{X}}_{q\times q}\) is denoted \({\boldsymbol{X}}\sim \operatorname{Wish}({\boldsymbol{\Psi}}, \nu)\), and defined as \({\boldsymbol{X}}= ({\boldsymbol{L}}{\boldsymbol{Z}})({\boldsymbol{L}}{\boldsymbol{Z}})'\), where:
\({\boldsymbol{\Psi}}_{q\times q} = {\boldsymbol{L}}{\boldsymbol{L}}'\) is the positive-definite matrix scale parameter,
\(\nu > q\) is the shape parameter,
\({\boldsymbol{Z}}_{q\times q}\) is a random lower-triangular matrix with elements
\[ Z_{ij} \begin{cases} \overset{\;\textrm{iid}\;}{\sim}\operatorname{Normal}(0,1) & i > j \\ = 0 & i < j, \end{cases} \qquad Z_{ii}^2 \overset{\:\textrm{ind}\:}{\sim}\chi^2_{(\nu-i+1)}, \quad i = 1,\ldots,q. \]
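As a quick illustration of this construction, here is a minimal base-R sketch (the function name sim_wish and all parameter values are made up for the example; this is not the package interface):

```r
# Illustrative sketch: draw X ~ Wish(Psi, nu) via X = (L Z)(L Z)'
sim_wish <- function(Psi, nu) {
  q <- nrow(Psi)
  L <- t(chol(Psi))                              # Psi = L L', L lower triangular
  Z <- matrix(0, q, q)
  Z[lower.tri(Z)] <- rnorm(q * (q - 1) / 2)      # iid N(0,1) below the diagonal
  diag(Z) <- sqrt(rchisq(q, df = nu - 1:q + 1))  # Z[i,i]^2 ~ chi^2_(nu - i + 1)
  LZ <- L %*% Z
  LZ %*% t(LZ)
}
set.seed(1)
X <- sim_wish(Psi = matrix(c(2, .5, .5, 1), 2, 2), nu = 5)
```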
The log-density of the Wishart distribution is
\[ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Psi}}, \nu) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}({\boldsymbol{\Psi}}^{-1} {\boldsymbol{X}}) + (q+1-\nu)\log |{\boldsymbol{X}}| + \nu \log |{\boldsymbol{\Psi}}| + \nu q \log(2) + 2 \log \Gamma_q(\textstyle{\frac{\nu }{2}})\right], \]
where \(\Gamma_n(x)\) is the multivariate Gamma function defined as
\[ \Gamma_n(x) = \pi^{n(n-1)/4} \prod_{j=1}^n \Gamma\big(x + \textstyle{\frac{1}{2}} (1-j)\big). \]
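Transcribing the log-density and the multivariate Gamma function directly gives, for example (illustrative names only; this is a sketch, not the package interface):

```r
# Illustrative sketch: log multivariate gamma and log-density of Wish(Psi, nu)
lmgamma <- function(x, n) {
  n * (n - 1) / 4 * log(pi) + sum(lgamma(x + (1 - 1:n) / 2))
}
ldens_wish <- function(X, Psi, nu) {
  q <- nrow(X)
  ldX <- as.numeric(determinant(X, logarithm = TRUE)$modulus)
  ldPsi <- as.numeric(determinant(Psi, logarithm = TRUE)$modulus)
  tr <- sum(diag(solve(Psi, X)))                 # tr(Psi^{-1} X)
  -0.5 * (tr + (q + 1 - nu) * ldX + nu * ldPsi +
            nu * q * log(2) + 2 * lmgamma(nu / 2, q))
}
```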
The Inverse-Wishart distribution \({\boldsymbol{X}}\sim \operatorname{InvWish}({\boldsymbol{\Psi}}, \nu)\) is defined as \({\boldsymbol{X}}^{-1} \sim \operatorname{Wish}({\boldsymbol{\Psi}}^{-1}, \nu)\). Its log-density is given by
\[ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Psi}}, \nu) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}({\boldsymbol{\Psi}}{\boldsymbol{X}}^{-1}) + (\nu+q+1) \log |{\boldsymbol{X}}| - \nu \log |{\boldsymbol{\Psi}}| + \nu q \log(2) + 2 \log \Gamma_q(\textstyle{\frac{\nu }{2}})\right]. \]
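Corresponding sketches for the Inverse-Wishart follow directly from the definition and the log-density above, reusing sim_wish() and lmgamma() (again, illustrative names only):

```r
# Illustrative sketches for InvWish(Psi, nu)
sim_iwish <- function(Psi, nu) solve(sim_wish(solve(Psi), nu))  # X^{-1} ~ Wish(Psi^{-1}, nu)
ldens_iwish <- function(X, Psi, nu) {
  q <- nrow(X)
  ldX <- as.numeric(determinant(X, logarithm = TRUE)$modulus)
  ldPsi <- as.numeric(determinant(Psi, logarithm = TRUE)$modulus)
  tr <- sum(diag(Psi %*% solve(X)))              # tr(Psi X^{-1})
  -0.5 * (tr + (nu + q + 1) * ldX - nu * ldPsi +
            nu * q * log(2) + 2 * lmgamma(nu / 2, q))
}
```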
If \({\boldsymbol{X}}_{q\times q} \sim \operatorname{Wish}({\boldsymbol{\Psi}},\nu)\), then for a nonzero vector \({\boldsymbol{a}}\in \mathbb R^q\) we have
\[ \frac{{\boldsymbol{a}}'{\boldsymbol{X}}{\boldsymbol{a}}}{{\boldsymbol{a}}'{\boldsymbol{\Psi}}{\boldsymbol{a}}} \sim \chi^2_{(\nu)}. \]
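This marginal property can be checked by simulation, reusing the sim_wish() sketch above (the particular \({\boldsymbol{\Psi}}\), \(\nu\), and \({\boldsymbol{a}}\) are arbitrary):

```r
# Illustrative Monte Carlo check of the chi-squared marginal
set.seed(2)
Psi <- matrix(c(2, .5, .5, 1), 2, 2)
nu <- 7; a <- c(1, -2)
draws <- replicate(1e4, drop(a %*% sim_wish(Psi, nu) %*% a) / drop(a %*% Psi %*% a))
ks.test(draws, "pchisq", df = nu)                # should not reject chi^2_nu
```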
The Matrix-Normal distribution on a random matrix \({\boldsymbol{X}}_{p \times q}\) is denoted \({\boldsymbol{X}}\sim \operatorname{MatNorm}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C)\), and defined as \({\boldsymbol{X}}= {\boldsymbol{L}}{\boldsymbol{Z}}{\boldsymbol{U}}+ {\boldsymbol{\Lambda}}\), where:
\({\boldsymbol{\Lambda}}_{p \times q}\) is the mean matrix parameter,
\({\boldsymbol{\Sigma}}_R = {\boldsymbol{L}}{\boldsymbol{L}}'\) is the \(p \times p\) positive-definite row-variance matrix parameter,
\({\boldsymbol{\Sigma}}_C = {\boldsymbol{U}}'{\boldsymbol{U}}\) is the \(q \times q\) positive-definite column-variance matrix parameter,
\({\boldsymbol{Z}}_{p \times q}\) is a random matrix with iid \(\operatorname{Normal}(0,1)\) elements.
The log-density of the Matrix-Normal distribution is
\[ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}\big({\boldsymbol{\Sigma}}_C^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})'{\boldsymbol{\Sigma}}_R^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})\big) + pq \log(2\pi) + p \log |{\boldsymbol{\Sigma}}_C| + q \log |{\boldsymbol{\Sigma}}_R|\right]. \]
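For example, a sketch of the sampling construction and of the log-density above (illustrative names, not the package interface):

```r
# Illustrative sketches for MatNorm(Lambda, SigmaR, SigmaC)
sim_matnorm <- function(Lambda, SigmaR, SigmaC) {
  p <- nrow(Lambda); q <- ncol(Lambda)
  L <- t(chol(SigmaR))                           # SigmaR = L L'
  U <- chol(SigmaC)                              # SigmaC = U'U, U upper triangular
  Z <- matrix(rnorm(p * q), p, q)                # iid N(0,1) entries
  L %*% Z %*% U + Lambda
}
ldens_matnorm <- function(X, Lambda, SigmaR, SigmaC) {
  p <- nrow(X); q <- ncol(X)
  R <- X - Lambda
  ldR <- as.numeric(determinant(SigmaR, logarithm = TRUE)$modulus)
  ldC <- as.numeric(determinant(SigmaC, logarithm = TRUE)$modulus)
  tr <- sum(diag(solve(SigmaC, t(R)) %*% solve(SigmaR, R)))
  -0.5 * (tr + p * q * log(2 * pi) + p * ldC + q * ldR)
}
```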
If \({\boldsymbol{X}}_{p \times q} \sim \operatorname{MatNorm}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C)\), then for nonzero vectors \({\boldsymbol{a}}\in \mathbb R^p\) and \({\boldsymbol{b}}\in \mathbb R^q\) we have
\[ {\boldsymbol{a}}' {\boldsymbol{X}}{\boldsymbol{b}}\sim \operatorname{Normal}({\boldsymbol{a}}' {\boldsymbol{\Lambda}}{\boldsymbol{b}}, {\boldsymbol{a}}'{\boldsymbol{\Sigma}}_R{\boldsymbol{a}}\cdot {\boldsymbol{b}}'{\boldsymbol{\Sigma}}_C{\boldsymbol{b}}). \]
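A quick simulation check of this bilinear marginal, reusing sim_matnorm() (parameter values are arbitrary):

```r
# Illustrative check: sample variance of a'Xb versus a'SigmaR a * b'SigmaC b
set.seed(3)
Lambda <- matrix(0, 3, 2)
SigmaR <- diag(3) + 0.3; SigmaC <- matrix(c(1, .4, .4, 2), 2, 2)
a <- c(1, 0, -1); b <- c(2, 1)
x <- replicate(1e4, drop(a %*% sim_matnorm(Lambda, SigmaR, SigmaC) %*% b))
c(var(x), drop(a %*% SigmaR %*% a) * drop(b %*% SigmaC %*% b))  # should be close
```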
The Matrix-Normal Inverse-Wishart Distribution on a random matrix \({\boldsymbol{X}}_{p \times q}\) and random positive-definite matrix \({\boldsymbol{V}}_{q\times q}\) is denoted \(({\boldsymbol{X}},{\boldsymbol{V}}) \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}, {\boldsymbol{\Psi}}, \nu)\), and defined as
\[ \begin{aligned} {\boldsymbol{X}}\mid {\boldsymbol{V}}& \sim \operatorname{MatNorm}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}, {\boldsymbol{V}}) \\ {\boldsymbol{V}}& \sim \operatorname{InvWish}({\boldsymbol{\Psi}}, \nu). \end{aligned} \]
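A draw from the MNIW distribution can be sketched directly from this hierarchy, reusing sim_iwish() and sim_matnorm() from above (illustrative names only):

```r
# Illustrative sketch: draw (X, V) ~ MNIW(Lambda, Sigma, Psi, nu)
sim_mniw <- function(Lambda, Sigma, Psi, nu) {
  V <- sim_iwish(Psi, nu)                        # V ~ InvWish(Psi, nu)
  X <- sim_matnorm(Lambda, Sigma, V)             # X | V ~ MatNorm(Lambda, Sigma, V)
  list(X = X, V = V)
}
```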
The MNIW distribution is the conjugate prior for the multivariate-response regression model
\[ {\boldsymbol{Y}}_{n \times q} \sim \operatorname{MatNorm}({\boldsymbol{X}}_{n\times p} {\boldsymbol{\beta}}_{p \times q}, {\boldsymbol{V}}, {\boldsymbol{\Sigma}}). \]
That is, if \(({\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Omega}}^{-1}, {\boldsymbol{\Psi}}, \nu)\), then
\[ {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}\mid {\boldsymbol{Y}}\sim \operatorname{MNIW}(\hat {\boldsymbol{\Lambda}}, \hat {\boldsymbol{\Omega}}^{-1}, \hat {\boldsymbol{\Psi}}, \hat \nu), \]
where
\[ \begin{aligned} \hat {\boldsymbol{\Omega}}& = {\boldsymbol{X}}'{\boldsymbol{V}}^{-1}{\boldsymbol{X}}+ {\boldsymbol{\Omega}} & \hat {\boldsymbol{\Psi}}& = {\boldsymbol{\Psi}}+ {\boldsymbol{Y}}'{\boldsymbol{V}}^{-1}{\boldsymbol{Y}}+ {\boldsymbol{\Lambda}}'{\boldsymbol{\Omega}}{\boldsymbol{\Lambda}}- \hat {\boldsymbol{\Lambda}}'\hat {\boldsymbol{\Omega}}\hat {\boldsymbol{\Lambda}} \\ \hat {\boldsymbol{\Lambda}}& = \hat {\boldsymbol{\Omega}}^{-1}({\boldsymbol{X}}'{\boldsymbol{V}}^{-1}{\boldsymbol{Y}}+ {\boldsymbol{\Omega}}{\boldsymbol{\Lambda}}) & \hat \nu & = \nu + n. \end{aligned} \]
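For example, a direct transcription of these updating equations (illustrative function, not the package interface):

```r
# Illustrative sketch: hyperparameters of the MNIW conjugate posterior
mniw_post <- function(Y, X, V, Lambda, Omega, Psi, nu) {
  Vi <- solve(V)
  Omega_hat <- t(X) %*% Vi %*% X + Omega
  Lambda_hat <- solve(Omega_hat, t(X) %*% Vi %*% Y + Omega %*% Lambda)
  Psi_hat <- Psi + t(Y) %*% Vi %*% Y + t(Lambda) %*% Omega %*% Lambda -
    t(Lambda_hat) %*% Omega_hat %*% Lambda_hat
  list(Lambda = Lambda_hat, Omega = Omega_hat, Psi = Psi_hat, nu = nu + nrow(Y))
}
```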
The Matrix-\(t\) distribution on a random matrix \({\boldsymbol{X}}_{p \times q}\) is denoted \({\boldsymbol{X}}\sim \operatorname{MatT}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu)\), and defined as the marginal distribution of \({\boldsymbol{X}}\) for \(({\boldsymbol{X}}, {\boldsymbol{V}}) \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu)\). Its log-density is given by
\[ \begin{aligned} \log p({\boldsymbol{X}}\mid {\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu) & = -\textstyle{\frac{1}{2}} \Big[(\nu+p)\log | I + {\boldsymbol{\Sigma}}_R^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}}){\boldsymbol{\Sigma}}_C^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})'| \\ & \phantom{= -\textstyle{\frac{1}{2}} \Big[} + q \log |{\boldsymbol{\Sigma}}_R| + p \log |{\boldsymbol{\Sigma}}_C| \\ & \phantom{= -\textstyle{\frac{1}{2}} \Big[} + pq \log(\pi) - 2 \log \Gamma_q(\textstyle{\frac{\nu+p}{2}}) + 2 \log \Gamma_q(\textstyle{\frac{\nu}{2}})\Big]. \end{aligned} \]
If \({\boldsymbol{X}}_{p\times q} \sim \operatorname{MatT}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu)\), then for nonzero vectors \({\boldsymbol{a}}\in \mathbb R^p\) and \({\boldsymbol{b}}\in \mathbb R^q\) we have
\[ \frac{{\boldsymbol{a}}'{\boldsymbol{X}}{\boldsymbol{b}}- \mu}{\sigma} \sim t_{(\nu -q + 1)}, \]
where \[ \mu = {\boldsymbol{a}}'{\boldsymbol{\Lambda}}{\boldsymbol{b}}, \qquad \sigma^2 = \frac{{\boldsymbol{a}}'{\boldsymbol{\Sigma}}_R{\boldsymbol{a}}\cdot {\boldsymbol{b}}'{\boldsymbol{\Sigma}}_C{\boldsymbol{b}}}{\nu - q + 1}. \]
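This scalar marginal can be checked by simulating \({\boldsymbol{X}}\) through the MNIW construction above, reusing sim_mniw() (parameter values are arbitrary):

```r
# Illustrative Monte Carlo check of the t marginal of the Matrix-t distribution
set.seed(4)
p <- 3; q <- 2; nu <- 8
Lambda <- matrix(0, p, q); SigmaR <- diag(p)
SigmaC <- matrix(c(1, .4, .4, 2), 2, 2)
a <- c(1, -1, .5); b <- c(.5, 1)
sigma <- sqrt(drop(a %*% SigmaR %*% a) * drop(b %*% SigmaC %*% b) / (nu - q + 1))
z <- replicate(1e4, drop(a %*% sim_mniw(Lambda, SigmaR, SigmaC, nu)$X %*% b) / sigma)
ks.test(z, "pt", df = nu - q + 1)                # should not reject t_(nu - q + 1)
```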
Consider the hierarchical multivariate normal model on \(q\)-dimensional vectors \({\boldsymbol{x}}\) and \({\boldsymbol{\mu}}\) given by
\[ \begin{aligned} {\boldsymbol{x}}\mid {\boldsymbol{\mu}}& \sim \operatorname{Normal}({\boldsymbol{\mu}}, {\boldsymbol{V}}) \\ {\boldsymbol{\mu}}& \sim \operatorname{Normal}({\boldsymbol{\lambda}}, {\boldsymbol{\Sigma}}). \end{aligned} \]
The random-effects normal distribution is defined as the posterior distribution \({\boldsymbol{\mu}}\sim p({\boldsymbol{\mu}}\mid {\boldsymbol{x}})\), which is given by
\[ {\boldsymbol{\mu}}\mid {\boldsymbol{x}}\sim \operatorname{Normal}\big({\boldsymbol{G}}({\boldsymbol{x}}-{\boldsymbol{\lambda}}) + {\boldsymbol{\lambda}}, {\boldsymbol{G}}{\boldsymbol{V}}\big), \qquad {\boldsymbol{G}}= {\boldsymbol{\Sigma}}({\boldsymbol{V}}+ {\boldsymbol{\Sigma}})^{-1}. \]
The notation for this distribution is \({\boldsymbol{\mu}}\sim \operatorname{RxNorm}({\boldsymbol{x}}, {\boldsymbol{V}}, {\boldsymbol{\lambda}}, {\boldsymbol{\Sigma}})\).
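A sketch of a single RxNorm draw using the posterior formula above (illustrative name only, not the package interface):

```r
# Illustrative sketch: draw mu ~ RxNorm(x, V, lambda, Sigma)
sim_rxnorm <- function(x, V, lambda, Sigma) {
  G <- Sigma %*% solve(V + Sigma)
  m <- drop(G %*% (x - lambda) + lambda)         # posterior mean
  S <- G %*% V                                   # posterior variance
  S <- (S + t(S)) / 2                            # symmetrize against rounding error
  drop(m + t(chol(S)) %*% rnorm(length(x)))      # Normal(m, S) draw
}
```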
The hierarchical Normal-Normal model is defined as
\[ \begin{aligned} {\boldsymbol{y}}_i \mid {\boldsymbol{\mu}}_i, {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}& \overset{\:\textrm{ind}\:}{\sim}\operatorname{Normal}({\boldsymbol{\mu}}_i, {\boldsymbol{V}}_i) \\ {\boldsymbol{\mu}}_i \mid {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}& \overset{\:\textrm{ind}\:}{\sim}\operatorname{Normal}({\boldsymbol{x}}_i'{\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) \\ ({\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) & \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Omega}}^{-1}, {\boldsymbol{\Psi}}, \nu), \end{aligned} \]
where:
\({\boldsymbol{y}}_i \in \mathbb R^q\) is the response vector for observation \(i\),
\({\boldsymbol{V}}_i\) is its known \(q \times q\) error variance,
\({\boldsymbol{x}}_i \in \mathbb R^p\) is its covariate vector,
\({\boldsymbol{\mu}}_i \in \mathbb R^q\) is its random effect,
\({\boldsymbol{\beta}}_{p \times q}\) and \({\boldsymbol{\Sigma}}_{q \times q}\) are the regression-coefficient and random-effects variance parameters.
Let \({\boldsymbol{Y}}_{n\times q} = ({\boldsymbol{y}}_{1},\ldots,{\boldsymbol{y}}_{n})'\), \({\boldsymbol{X}}_{n\times p} = ({\boldsymbol{x}}_{1},\ldots,{\boldsymbol{x}}_{n})'\), and \({\boldsymbol{\Theta}}_{n \times q} = ({\boldsymbol{\mu}}_{1},\ldots,{\boldsymbol{\mu}}_{n})'\). If interest lies in the posterior distribution \(p({\boldsymbol{\Theta}}, {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}\mid {\boldsymbol{Y}}, {\boldsymbol{X}})\), then a Gibbs sampler can be used to cycle through the following conditional distributions:
\[ \begin{aligned} {\boldsymbol{\mu}}_i \mid {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}, {\boldsymbol{Y}}, {\boldsymbol{X}}& \overset{\:\textrm{ind}\:}{\sim}\operatorname{RxNorm}({\boldsymbol{y}}_i, {\boldsymbol{V}}_i, {\boldsymbol{x}}_i'{\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) \\ {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}\mid {\boldsymbol{\Theta}}, {\boldsymbol{Y}}, {\boldsymbol{X}}& \sim \operatorname{MNIW}(\hat {\boldsymbol{\Lambda}}, \hat {\boldsymbol{\Omega}}^{-1}, \hat {\boldsymbol{\Psi}}, \hat \nu), \end{aligned} \]
where \(\hat {\boldsymbol{\Lambda}}\), \(\hat {\boldsymbol{\Omega}}\), \(\hat {\boldsymbol{\Psi}}\), and \(\hat \nu\) are obtained from the MNIW conjugate posterior formula with \({\boldsymbol{Y}}\gets {\boldsymbol{\Theta}}\) and \({\boldsymbol{V}}\gets {\boldsymbol{I}}_n\).
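Putting the pieces together, here is a minimal sketch of this Gibbs sampler, reusing mniw_post(), sim_mniw(), and sim_rxnorm() from above (all names are illustrative; the known error variances are passed as a list V of \(q \times q\) matrices \({\boldsymbol{V}}_1,\ldots,{\boldsymbol{V}}_n\)):

```r
# Illustrative Gibbs sampler for the hierarchical Normal-Normal model
gibbs_nnm <- function(Y, X, V, Lambda0, Omega0, Psi0, nu0, n_iter = 1000) {
  n <- nrow(Y)
  Theta <- Y                                     # initialize random effects at the data
  out <- vector("list", n_iter)
  for (m in seq_len(n_iter)) {
    # beta, Sigma | Theta: MNIW conjugate posterior with Y <- Theta and V <- I_n
    hp <- mniw_post(Theta, X, diag(n), Lambda0, Omega0, Psi0, nu0)
    bs <- sim_mniw(hp$Lambda, solve(hp$Omega), hp$Psi, hp$nu)
    # mu_i | beta, Sigma, y_i: random-effects normal conditional
    for (i in seq_len(n)) {
      Theta[i, ] <- sim_rxnorm(Y[i, ], V[[i]], drop(crossprod(X[i, ], bs$X)), bs$V)
    }
    out[[m]] <- list(beta = bs$X, Sigma = bs$V, Theta = Theta)
  }
  out
}
```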