
# Inverse-Wishart distribution

- Notation: ${\displaystyle {\mathcal {W}}^{-1}({\mathbf {\Psi } },\nu )}$
- Parameters: ${\displaystyle \nu >p-1}$ degrees of freedom (real); ${\displaystyle \mathbf {\Psi } >0}$, a ${\displaystyle p\times p}$ scale matrix (positive definite)
- Support: ${\displaystyle \mathbf {X} }$ is ${\displaystyle p\times p}$ positive definite
- PDF: ${\displaystyle {\frac {\left|\mathbf {\Psi } \right|^{\nu /2}}{2^{\nu p/2}\Gamma _{p}({\frac {\nu }{2}})}}\left|\mathbf {x} \right|^{-(\nu +p+1)/2}e^{-{\frac {1}{2}}\operatorname {tr} (\mathbf {\Psi } \mathbf {x} ^{-1})}}$, where ${\displaystyle \Gamma _{p}}$ is the multivariate gamma function and ${\displaystyle \operatorname {tr} }$ is the trace
- Mean: ${\displaystyle {\frac {\mathbf {\Psi } }{\nu -p-1}}}$ for ${\displaystyle \nu >p+1}$
- Mode: ${\displaystyle {\frac {\mathbf {\Psi } }{\nu +p+1}}}$[1]:406
- Variance: see below

In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution.

We say ${\displaystyle \mathbf {X} }$ follows an inverse Wishart distribution, denoted as ${\displaystyle \mathbf {X} \sim {\mathcal {W}}^{-1}(\mathbf {\Psi } ,\nu )}$, if its inverse ${\displaystyle \mathbf {X} ^{-1}}$ has a Wishart distribution ${\displaystyle {\mathcal {W}}(\mathbf {\Psi } ^{-1},\nu )}$. Important identities have been derived for the inverse-Wishart distribution.[2]

## Density

The probability density function of the inverse Wishart is:[3]

${\displaystyle f_{\mathbf {x} }({\mathbf {x} };{\mathbf {\Psi } },\nu )={\frac {\left|{\mathbf {\Psi } }\right|^{\nu /2}}{2^{\nu p/2}\Gamma _{p}({\frac {\nu }{2}})}}\left|\mathbf {x} \right|^{-(\nu +p+1)/2}e^{-{\frac {1}{2}}\operatorname {tr} (\mathbf {\Psi } \mathbf {x} ^{-1})}}$

where ${\displaystyle \mathbf {x} }$ and ${\displaystyle {\mathbf {\Psi } }}$ are ${\displaystyle p\times p}$ positive definite matrices, and ${\displaystyle \Gamma _{p}(\cdot )}$ is the multivariate gamma function.
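As a sanity check, the density can be evaluated directly from this formula and compared against SciPy's implementation. The following is a sketch assuming SciPy is available; the particular values of ν, Ψ, and x are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import invwishart
from scipy.special import multigammaln

# Arbitrary illustrative parameters
p, nu = 2, 5.0
Psi = np.array([[2.0, 0.3], [0.3, 1.0]])   # positive-definite scale matrix
x = np.array([[1.0, 0.2], [0.2, 0.8]])     # positive-definite evaluation point

# Log-density computed term by term from the formula above
_, logdet_Psi = np.linalg.slogdet(Psi)
_, logdet_x = np.linalg.slogdet(x)
log_pdf_manual = (
    0.5 * nu * logdet_Psi
    - 0.5 * nu * p * np.log(2.0)
    - multigammaln(0.5 * nu, p)           # log of the multivariate gamma function
    - 0.5 * (nu + p + 1) * logdet_x
    - 0.5 * np.trace(Psi @ np.linalg.inv(x))
)

# SciPy's implementation of the same density
log_pdf_scipy = invwishart.logpdf(x, df=nu, scale=Psi)
```

Working in log space with `slogdet` avoids overflow in the determinants and the gamma function for larger ν or p.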

## Theorems

### Distribution of the inverse of a Wishart-distributed matrix

If ${\displaystyle {\mathbf {X} }\sim {\mathcal {W}}({\mathbf {\Sigma } },\nu )}$ and ${\displaystyle {\mathbf {\Sigma } }}$ is of size ${\displaystyle p\times p}$, then ${\displaystyle \mathbf {A} ={\mathbf {X} }^{-1}}$ has an inverse Wishart distribution ${\displaystyle \mathbf {A} \sim {\mathcal {W}}^{-1}({\mathbf {\Sigma } }^{-1},\nu )}$.[4]
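This relationship is easy to verify by simulation: draw Wishart samples, invert each one, and compare the empirical mean with the inverse-Wishart mean ${\displaystyle \mathbf {\Sigma } ^{-1}/(\nu -p-1)}$. A sketch assuming SciPy is available; the parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, nu = 2, 8.0
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])

# Draw Wishart samples and invert each one
n = 20000
X = wishart.rvs(df=nu, scale=Sigma, size=n, random_state=rng)  # shape (n, p, p)
A = np.linalg.inv(X)                                           # each A_i = X_i^{-1}

# The theorem says A ~ W^{-1}(Sigma^{-1}, nu), whose mean is
# Sigma^{-1} / (nu - p - 1) for nu > p + 1
empirical_mean = A.mean(axis=0)
theoretical_mean = np.linalg.inv(Sigma) / (nu - p - 1)
```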

### Marginal and conditional distributions from an inverse Wishart-distributed matrix

Suppose ${\displaystyle {\mathbf {A} }\sim {\mathcal {W}}^{-1}({\mathbf {\Psi } },\nu )}$ has an inverse Wishart distribution. Partition the matrices ${\displaystyle {\mathbf {A} }}$ and ${\displaystyle {\mathbf {\Psi } }}$ conformably with each other

${\displaystyle {\mathbf {A} }={\begin{bmatrix}\mathbf {A} _{11}&\mathbf {A} _{12}\\\mathbf {A} _{21}&\mathbf {A} _{22}\end{bmatrix}},\;{\mathbf {\Psi } }={\begin{bmatrix}\mathbf {\Psi } _{11}&\mathbf {\Psi } _{12}\\\mathbf {\Psi } _{21}&\mathbf {\Psi } _{22}\end{bmatrix}}}$

where ${\displaystyle {\mathbf {A} _{ij}}}$ and ${\displaystyle {\mathbf {\Psi } _{ij}}}$ are ${\displaystyle p_{i}\times p_{j}}$ matrices. Then we have:

i) ${\displaystyle \mathbf {A} _{11}}$ is independent of ${\displaystyle \mathbf {A} _{11}^{-1}\mathbf {A} _{12}}$ and ${\displaystyle {\mathbf {A} }_{22\cdot 1}}$, where ${\displaystyle {\mathbf {A} _{22\cdot 1}}={\mathbf {A} }_{22}-{\mathbf {A} }_{21}{\mathbf {A} }_{11}^{-1}{\mathbf {A} }_{12}}$ is the Schur complement of ${\displaystyle {\mathbf {A} _{11}}}$ in ${\displaystyle {\mathbf {A} }}$;

ii) ${\displaystyle {\mathbf {A} _{11}}\sim {\mathcal {W}}^{-1}({\mathbf {\Psi } _{11}},\nu -p_{2})}$;

iii) ${\displaystyle {\mathbf {A} }_{11}^{-1}{\mathbf {A} }_{12}\mid {\mathbf {A} }_{22\cdot 1}\sim MN_{p_{1}\times p_{2}}({\mathbf {\Psi } }_{11}^{-1}{\mathbf {\Psi } }_{12},{\mathbf {A} }_{22\cdot 1}\otimes {\mathbf {\Psi } }_{11}^{-1})}$, where ${\displaystyle MN_{p\times q}(\cdot ,\cdot )}$ is a matrix normal distribution;

iv) ${\displaystyle {\mathbf {A} }_{22\cdot 1}\sim {\mathcal {W}}^{-1}({\mathbf {\Psi } }_{22\cdot 1},\nu )}$, where ${\displaystyle {\mathbf {\Psi } _{22\cdot 1}}={\mathbf {\Psi } }_{22}-{\mathbf {\Psi } }_{21}{\mathbf {\Psi } }_{11}^{-1}{\mathbf {\Psi } }_{12}}$.
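Property (ii) can be checked by Monte Carlo: sampling the full matrix and averaging the upper-left block should reproduce the mean of a ${\displaystyle {\mathcal {W}}^{-1}({\mathbf {\Psi } _{11}},\nu -p_{2})}$ distribution. A sketch assuming SciPy is available, with arbitrary parameter choices:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
p1, p2 = 2, 1
p, nu = p1 + p2, 10.0
Psi = np.array([[2.0, 0.3, 0.1],
                [0.3, 1.5, 0.2],
                [0.1, 0.2, 1.0]])

# Sample the full matrix and extract the upper-left p1 x p1 block
n = 20000
A = invwishart.rvs(df=nu, scale=Psi, size=n, random_state=rng)  # shape (n, p, p)
A11 = A[:, :p1, :p1]

# Property (ii): A11 ~ W^{-1}(Psi11, nu - p2), so its mean is
# Psi11 / ((nu - p2) - p1 - 1)
empirical_mean = A11.mean(axis=0)
theoretical_mean = Psi[:p1, :p1] / (nu - p2 - p1 - 1)
```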

### Conjugate distribution

Suppose we wish to make inference about a covariance matrix ${\displaystyle {\mathbf {\Sigma } }}$ whose prior ${\displaystyle {p(\mathbf {\Sigma } )}}$ has a ${\displaystyle {\mathcal {W}}^{-1}({\mathbf {\Psi } },\nu )}$ distribution. If the observations ${\displaystyle \mathbf {X} =[\mathbf {x} _{1},\ldots ,\mathbf {x} _{n}]}$ are independent p-variate Gaussian variables drawn from a ${\displaystyle N(\mathbf {0} ,{\mathbf {\Sigma } })}$ distribution, then the conditional distribution ${\displaystyle {p(\mathbf {\Sigma } \mid \mathbf {X} )}}$ has a ${\displaystyle {\mathcal {W}}^{-1}({\mathbf {A} }+{\mathbf {\Psi } },n+\nu )}$ distribution, where ${\displaystyle {\mathbf {A} }=\mathbf {X} \mathbf {X} ^{T}}$.

Because the prior and posterior distributions are the same family, we say the inverse Wishart distribution is conjugate to the multivariate Gaussian.
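In practice the conjugate update is a one-line computation: add the scatter matrix to the prior scale and the sample count to the degrees of freedom. A minimal sketch assuming SciPy is available; the true covariance and prior parameters are illustrative choices. Note the code stores observations as rows, so the scatter matrix ${\displaystyle \mathbf {A} }$ is computed as `X.T @ X` in that layout.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
p, nu = 2, 7.0
Psi = np.eye(p)                    # prior scale (illustrative choice)
Sigma_true = np.array([[1.0, 0.6], [0.6, 2.0]])

# Observe n zero-mean Gaussian vectors with covariance Sigma_true
n = 2000
X = multivariate_normal.rvs(mean=np.zeros(p), cov=Sigma_true, size=n,
                            random_state=rng)   # shape (n, p), rows = observations

# Conjugate update: posterior is W^{-1}(Psi + A, nu + n), with A the scatter matrix
A = X.T @ X
post_df = nu + n
post_scale = Psi + A

# Posterior mean (Psi + A) / (nu + n - p - 1) concentrates near Sigma_true
post_mean = post_scale / (post_df - p - 1)
```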

Due to its conjugacy to the multivariate Gaussian, it is possible to marginalize out (integrate out) the Gaussian's parameter ${\displaystyle \mathbf {\Sigma } }$, using the formula ${\displaystyle p(x)={\frac {p(x|\Sigma )p(\Sigma )}{p(\Sigma |x)}}}$ and the linear algebra identity ${\displaystyle v^{T}\Omega v={\text{tr}}(\Omega vv^{T})}$:

${\displaystyle f_{\mathbf {X} \,\mid \,\Psi ,\nu }(\mathbf {x} )=\int f_{\mathbf {X} \,\mid \,\mathbf {\Sigma } \,=\,\sigma }(\mathbf {x} )f_{\mathbf {\Sigma } \,\mid \,\mathbf {\Psi } ,\nu }(\sigma )\,d\sigma ={\frac {|\mathbf {\Psi } |^{\nu /2}\Gamma _{p}\left({\frac {\nu +n}{2}}\right)}{\pi ^{np/2}|\mathbf {\Psi } +\mathbf {A} |^{(\nu +n)/2}\Gamma _{p}({\frac {\nu }{2}})}}}$

This is useful because the covariance matrix ${\displaystyle \mathbf {\Sigma } }$ is not known in practice, whereas ${\displaystyle {\mathbf {\Psi } }}$ is specified a priori and ${\displaystyle {\mathbf {A} }}$ can be computed from the data, so the right-hand side can be evaluated directly. An inverse-Wishart prior can also be constructed from existing transferred prior knowledge.[5]
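The identity can be verified numerically: evaluating ${\displaystyle p(x|\Sigma )p(\Sigma )/p(\Sigma |x)}$ at any positive-definite ${\displaystyle \Sigma }$ must reproduce the closed form, since ${\displaystyle \Sigma }$ cancels. A sketch assuming SciPy is available, with arbitrary data and parameters:

```python
import numpy as np
from scipy.stats import invwishart, multivariate_normal
from scipy.special import multigammaln

p, nu = 2, 6.0
Psi = np.array([[1.5, 0.2], [0.2, 1.0]])
# p x n data matrix whose columns are observations (arbitrary values)
X = np.array([[0.5, -0.3], [1.2, 0.4], [-0.7, 0.9]]).T
n = X.shape[1]
A = X @ X.T

# Closed-form log marginal likelihood from the formula above
_, logdet_Psi = np.linalg.slogdet(Psi)
_, logdet_PA = np.linalg.slogdet(Psi + A)
log_ml_closed = (
    0.5 * nu * logdet_Psi
    + multigammaln(0.5 * (nu + n), p)
    - 0.5 * n * p * np.log(np.pi)
    - 0.5 * (nu + n) * logdet_PA
    - multigammaln(0.5 * nu, p)
)

# Same quantity via p(x) = p(x | Sigma) p(Sigma) / p(Sigma | x),
# evaluated at an arbitrary positive-definite Sigma (the choice cancels out)
Sigma = np.array([[1.0, 0.1], [0.1, 0.8]])
log_lik = multivariate_normal.logpdf(X.T, mean=np.zeros(p), cov=Sigma).sum()
log_prior = invwishart.logpdf(Sigma, df=nu, scale=Psi)
log_post = invwishart.logpdf(Sigma, df=nu + n, scale=Psi + A)
log_ml_identity = log_lik + log_prior - log_post
```

Because every term is exact, the two log marginal likelihoods agree to floating-point precision regardless of which ${\displaystyle \Sigma }$ is chosen.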

### Moments

The following is based on Press, S. J. (1982), "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degrees of freedom to be consistent with the p.d.f. definition above.

The mean:[4]:85

${\displaystyle \operatorname {E} (\mathbf {X} )={\frac {\mathbf {\Psi } }{\nu -p-1}}.}$

The variance of each element of ${\displaystyle \mathbf {X} }$:

${\displaystyle \operatorname {Var} (x_{ij})={\frac {(\nu -p+1)\psi _{ij}^{2}+(\nu -p-1)\psi _{ii}\psi _{jj}}{(\nu -p)(\nu -p-1)^{2}(\nu -p-3)}}}$

The variance of the diagonal uses the same formula as above with ${\displaystyle i=j}$, which simplifies to:

${\displaystyle \operatorname {Var} (x_{ii})={\frac {2\psi _{ii}^{2}}{(\nu -p-1)^{2}(\nu -p-3)}}.}$

The covariances of the elements of ${\displaystyle \mathbf {X} }$ are given by:

${\displaystyle \operatorname {Cov} (x_{ij},x_{k\ell })={\frac {2\psi _{ij}\psi _{k\ell }+(\nu -p-1)(\psi _{ik}\psi _{j\ell }+\psi _{i\ell }\psi _{kj})}{(\nu -p)(\nu -p-1)^{2}(\nu -p-3)}}}$
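As an internal consistency check, setting ${\displaystyle i=j=k=\ell }$ in the covariance formula recovers the diagonal-variance formula above: the numerator becomes ${\displaystyle 2\psi _{ii}^{2}(\nu -p)}$ and the factor ${\displaystyle (\nu -p)}$ cancels against the denominator. A small sketch (parameter values arbitrary):

```python
import numpy as np

def iw_cov(Psi, nu, i, j, k, l):
    """General element covariance Cov(x_ij, x_kl) from the formula above."""
    p = Psi.shape[0]
    num = (2 * Psi[i, j] * Psi[k, l]
           + (nu - p - 1) * (Psi[i, k] * Psi[j, l] + Psi[i, l] * Psi[k, j]))
    den = (nu - p) * (nu - p - 1) ** 2 * (nu - p - 3)
    return num / den

def iw_var_diag(Psi, nu, i):
    """Simplified diagonal variance 2*psi_ii^2 / ((nu-p-1)^2 (nu-p-3))."""
    p = Psi.shape[0]
    return 2 * Psi[i, i] ** 2 / ((nu - p - 1) ** 2 * (nu - p - 3))

Psi = np.array([[2.0, 0.3], [0.3, 1.0]])
nu = 9.0
# Setting i = j = k = l in the general formula recovers the diagonal variance
checks = [np.isclose(iw_cov(Psi, nu, i, i, i, i), iw_var_diag(Psi, nu, i))
          for i in range(2)]
```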

The results are expressed in the more succinct Kronecker product form by von Rosen[6] as follows.

${\displaystyle \mathbf {E} \left(W^{-1}\otimes W^{-1}\right)=c_{1}\Psi \otimes \Psi +c_{2}Vec(\Psi )Vec(\Psi )^{T}+c_{2}K_{pp}\Psi \otimes \Psi }$

${\displaystyle \mathbf {Cov} \left(W^{-1}\otimes W^{-1}\right)=(c_{1}-c_{3})\Psi \otimes \Psi +c_{2}Vec(\Psi )Vec(\Psi )^{T}+c_{2}K_{pp}\Psi \otimes \Psi }$

where ${\displaystyle c_{2}=\left[(\nu -p)(\nu -p-1)(\nu -p-3)\right]^{-1},\;\;c_{1}=(\nu -p-2)c_{2},\;c_{3}=(\nu -p-1)^{-2}}$ and ${\displaystyle K_{pp}}$ is a ${\displaystyle p^{2}\times p^{2}}$ commutation matrix. There is a typo in the paper whereby the coefficient of ${\displaystyle K_{pp}\Psi \otimes \Psi }$ is given as ${\displaystyle c_{1}}$ rather than ${\displaystyle c_{2}}$. Also, the expression for the mean square inverse Wishart, corollary 3.1, should read ${\displaystyle \mathbf {E} \left[W^{-1}W^{-1}\right]=(c_{1}+c_{2})\Sigma ^{-1}\Sigma ^{-1}+c_{2}\Sigma ^{-1}\mathbf {tr} (\Sigma ^{-1})}$.

To show how the interacting terms become sparse when the covariance is diagonal, let ${\displaystyle \Psi =\mathbf {I} _{3\times 3}}$ and introduce some arbitrary parameters ${\displaystyle u,v,w}$:

${\displaystyle \mathbf {E} \left(W^{-1}\otimes W^{-1}\right)=u\Psi \otimes \Psi +vVec(\Psi )Vec(\Psi )^{T}+wK_{pp}\Psi \otimes \Psi }$

then the second moment matrix becomes

${\displaystyle \mathbf {E} \left(W^{-1}\otimes W^{-1}\right)={\begin{bmatrix}u+v+w&\cdot &\cdot &\cdot &v&\cdot &\cdot &\cdot &v\\\cdot &u&\cdot &w&\cdot &\cdot &\cdot &\cdot &\cdot \\\cdot &\cdot &u&\cdot &\cdot &\cdot &w&\cdot &\cdot \\\cdot &w&\cdot &u&\cdot &\cdot &\cdot &\cdot &\cdot \\v&\cdot &\cdot &\cdot &u+v+w&\cdot &\cdot &\cdot &v\\\cdot &\cdot &\cdot &\cdot &\cdot &u&\cdot &w&\cdot \\\cdot &\cdot &w&\cdot &\cdot &\cdot &u&\cdot &\cdot \\\cdot &\cdot &\cdot &\cdot &\cdot &w&\cdot &u&\cdot \\v&\cdot &\cdot &\cdot &v&\cdot &\cdot &\cdot &u+v+w\\\end{bmatrix}}}$

The variances of the Wishart product are also obtained by Cook et al.[7] in the singular case and, by extension, in the full-rank case. In the complex case, the "white" inverse complex Wishart ${\displaystyle {\mathcal {W}}^{-1}(\mathbf {I} ,\nu ,p)}$ was shown by Shaman[8] to have diagonal statistical structure in which the leading diagonal elements are correlated, while all other elements are uncorrelated. It was also shown by Brennan and Reed,[9] using a matrix partitioning procedure, albeit in the complex variable domain, that the marginal pdf of the [1,1] diagonal element of this matrix has an inverse-chi-squared distribution. This extends easily to all diagonal elements, since ${\displaystyle {\mathcal {W}}^{-1}(\mathbf {I} ,\nu ,p)}$ is statistically invariant under orthogonal transformations, which include interchanges of diagonal elements.

For the inverse chi-squared distribution with arbitrary ${\displaystyle \nu _{c}}$ degrees of freedom, the pdf is

${\displaystyle {\text{Inv-}}\chi ^{2}(x;\nu _{c})={\frac {2^{-\nu _{c}/2}}{\Gamma (\nu _{c}/2)}}x^{-\nu _{c}/2-1}e^{-1/(2x)}.}$

The mean and variance are ${\displaystyle {\frac {1}{\nu _{c}-2}}{\text{ and }}{\frac {2}{(\nu _{c}-2)^{2}(\nu _{c}-4)}}}$ respectively. These two parameters are matched to the corresponding inverse Wishart diagonal moments when ${\displaystyle \nu _{c}=\nu -p+1}$, and hence the diagonal element marginal pdf of ${\displaystyle {\mathcal {W}}^{-1}(\mathbf {I} ,\nu ,p)}$ becomes:

${\displaystyle f_{x_{11}}(x_{11};\Psi ,\nu ,p)={\frac {2^{-(\nu -p+1)/2}}{\Gamma \left({\frac {\nu -p+1}{2}}\right)}}\,x_{11}^{-(\nu -p+1)/2-1}e^{-1/(2x_{11})}}$

which, below, is generalized to all diagonal elements. Note that the mean of the complex inverse Wishart is thus ${\displaystyle {\frac {\mathbf {I} }{\nu -p}}}$, which differs from the real-valued case, ${\displaystyle {\frac {\mathbf {I} }{\nu -p-1}}}$.

## Related distributions

A univariate specialization of the inverse-Wishart distribution is the inverse-gamma distribution. With ${\displaystyle p=1}$ (i.e. univariate) and ${\displaystyle \alpha =\nu /2}$, ${\displaystyle \beta =\mathbf {\Psi } /2}$ and ${\displaystyle x=\mathbf {X} }$ the probability density function of the inverse-Wishart distribution becomes

${\displaystyle p(x\mid \alpha ,\beta )={\frac {\beta ^{\alpha }\,x^{-\alpha -1}\exp(-\beta /x)}{\Gamma _{1}(\alpha )}}.}$

i.e., the inverse-gamma distribution, where ${\displaystyle \Gamma _{1}(\cdot )}$ is the ordinary Gamma function.
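This reduction can be confirmed numerically: for ${\displaystyle p=1}$, SciPy's inverse-Wishart density coincides with the inverse-gamma density under the parameter mapping above (in SciPy's invgamma, the scale argument plays the role of ${\displaystyle \beta }$). The values of ν and Ψ here are arbitrary:

```python
import numpy as np
from scipy.stats import invwishart, invgamma

# With p = 1, alpha = nu/2 and beta = Psi/2, the inverse Wishart collapses
# to the inverse-gamma distribution
nu, Psi = 5.0, 3.0
alpha, beta = nu / 2, Psi / 2

xs = np.linspace(0.1, 4.0, 50)
pdf_iw = invwishart.pdf(xs, df=nu, scale=Psi)   # 1-D case: scalar scale
pdf_ig = invgamma.pdf(xs, a=alpha, scale=beta)  # scale argument is beta
```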

The inverse Wishart distribution is a special case of the inverse matrix gamma distribution with shape parameter ${\displaystyle \alpha ={\frac {\nu }{2}}}$ and scale parameter ${\displaystyle \beta =2}$.

Another generalization has been termed the generalized inverse Wishart distribution, ${\displaystyle {\mathcal {GW}}^{-1}}$. A ${\displaystyle p\times p}$ positive definite matrix ${\displaystyle \mathbf {X} }$ is said to be distributed as ${\displaystyle {\mathcal {GW}}^{-1}(\mathbf {\Psi } ,\nu ,\mathbf {S} )}$ if ${\displaystyle \mathbf {Y} =\mathbf {X} ^{1/2}\mathbf {S} ^{-1}\mathbf {X} ^{1/2}}$ is distributed as ${\displaystyle {\mathcal {W}}^{-1}(\mathbf {\Psi } ,\nu )}$. Here ${\displaystyle \mathbf {X} ^{1/2}}$ denotes the symmetric matrix square root of ${\displaystyle \mathbf {X} }$, the parameters ${\displaystyle \mathbf {\Psi } ,\mathbf {S} }$ are ${\displaystyle p\times p}$ positive definite matrices, and the parameter ${\displaystyle \nu }$ is a positive scalar larger than ${\displaystyle 2p}$. Note that when ${\displaystyle \mathbf {S} }$ is equal to an identity matrix, ${\displaystyle {\mathcal {GW}}^{-1}(\mathbf {\Psi } ,\nu ,\mathbf {S} )={\mathcal {W}}^{-1}(\mathbf {\Psi } ,\nu )}$. This generalized inverse Wishart distribution has been applied to estimating the distributions of multivariate autoregressive processes.[10]

A different type of generalization is the normal-inverse-Wishart distribution, essentially the product of a multivariate normal distribution with an inverse Wishart distribution.

When the scale matrix is an identity matrix, ${\displaystyle \mathbf {\Psi } =\mathbf {I} }$, and ${\displaystyle \mathbf {\Phi } }$ is an arbitrary orthogonal matrix, replacement of ${\displaystyle \mathbf {X} }$ by ${\displaystyle \mathbf {\Phi } \mathbf {X} \mathbf {\Phi } ^{T}}$ does not change the pdf of ${\displaystyle \mathbf {X} }$, so ${\displaystyle {\mathcal {W}}^{-1}(\mathbf {I} ,\nu ,p)}$ belongs, in some sense, to the family of spherically invariant random processes (SIRPs).
Thus, an arbitrary p-vector ${\displaystyle V}$ with unit ${\displaystyle l_{2}}$ length ${\displaystyle V^{T}V=1}$ can be rotated into the vector ${\displaystyle \mathbf {\Phi } V=[1\;0\;0\cdots ]^{T}}$ without changing the pdf of ${\displaystyle V^{T}\mathbf {X} V}$; moreover, ${\displaystyle \mathbf {\Phi } }$ can be a permutation matrix that exchanges diagonal elements. It follows that the diagonal elements of ${\displaystyle \mathbf {X} }$ are identically inverse chi-squared distributed, with pdf ${\displaystyle f_{x_{11}}}$ from the previous section, though they are not mutually independent. The result is known in optimal portfolio statistics, as in Theorem 2 Corollary 1 of Bodnar et al.,[11] where it is expressed in the inverse form ${\displaystyle {\frac {V^{T}\mathbf {\Psi } V}{V^{T}\mathbf {X} V}}\sim \chi _{\nu -p+1}^{2}}$.
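The quadratic-form result can be checked by simulation: with ${\displaystyle \mathbf {\Psi } =\mathbf {I} }$, the ratio ${\displaystyle V^{T}\mathbf {\Psi } V/V^{T}\mathbf {X} V}$ should average to the ${\displaystyle \chi _{\nu -p+1}^{2}}$ mean, ${\displaystyle \nu -p+1}$. A sketch assuming SciPy is available, with an arbitrary direction vector:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
p, nu = 3, 10.0
Psi = np.eye(p)

# Any unit vector works, by the rotation invariance described above
v = np.array([1.0, 2.0, -1.0])
v /= np.linalg.norm(v)

n = 20000
X = invwishart.rvs(df=nu, scale=Psi, size=n, random_state=rng)  # shape (n, p, p)
# ratio_i = (v^T Psi v) / (v^T X_i v), claimed to be chi^2_{nu-p+1}
ratio = (v @ Psi @ v) / np.einsum('i,nij,j->n', v, X, v)

# Mean of chi^2 with nu - p + 1 degrees of freedom is nu - p + 1
empirical_mean = ratio.mean()
```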

## References

1. ^ A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 978-0-340-80752-1.
2. ^ Haff, LR (1979). "An identity for the Wishart distribution with applications". Journal of Multivariate Analysis. 9 (4): 531–544. doi:10.1016/0047-259x(79)90056-3.
3. ^ Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013-11-01). Bayesian Data Analysis, Third Edition (3rd ed.). Boca Raton: Chapman and Hall/CRC. ISBN 9781439840955.
4. ^ a b Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. Academic Press. ISBN 978-0-12-471250-8.
5. ^ Shahrokh Esfahani, Mohammad; Dougherty, Edward (2014). "Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification". IEEE Transactions on Bioinformatics and Computational Biology. 11 (1): 202–218. doi:10.1109/tcbb.2013.143. PMID 26355519.
6. ^ Rosen, Dietrich von (1988). "Moments for the Inverted Wishart Distribution". Scandinavian Journal of Statistics. 15: 97–109 – via JSTOR.
7. ^ Cook, R. D.; Forzani, Liliana (August 2019). "On the mean and variance of the generalized inverse of a singular Wishart matrix". Electronic Journal of Statistics. 5.
8. ^ Shaman, Paul (1980). "The Inverted Complex Wishart Distribution and Its Application to Spectral Estimation" (PDF). Journal of Multivariate Analysis. 10: 51–59.
9. ^ Brennan, L. E.; Reed, I. S. (January 1982). "An Adaptive Array Signal Processing Algorithm for Communications". IEEE Transactions on Aerospace and Electronic Systems. AES-18 (1): 120–130.
10. ^ Triantafyllopoulos, K. (2011). "Real-time covariance estimation for the local level model". Journal of Time Series Analysis. 32 (2): 93–107. arXiv:1311.0634. doi:10.1111/j.1467-9892.2010.00686.x.
11. ^ Bodnar, T.; Mazur, S.; Podgórski, K. (January 2015). "Singular Inverse Wishart Distribution with Application to Portfolio Theory". Working Papers in Statistics, No. 2. Department of Statistics, Lund University: 1–17.