# Marchenko–Pastur distribution

Plot of the Marchenko–Pastur distribution for various values of ${\displaystyle \lambda }$

In the mathematical theory of random matrices, the Marchenko–Pastur distribution, or Marchenko–Pastur law, describes the asymptotic behavior of singular values of large rectangular random matrices. The theorem is named after the Ukrainian mathematicians Vladimir Marchenko and Leonid Pastur, who proved this result in 1967.

If ${\displaystyle X}$ denotes an ${\displaystyle m\times n}$ random matrix whose entries are independent, identically distributed random variables with mean 0 and variance ${\displaystyle \sigma ^{2}<\infty }$, let

${\displaystyle Y_{n}={\frac {1}{n}}XX^{T}}$

and let ${\displaystyle \lambda _{1},\,\lambda _{2},\,\dots ,\,\lambda _{m}}$ be the eigenvalues of ${\displaystyle Y_{n}}$ (viewed as random variables). Finally, consider the random measure

${\displaystyle \mu _{m}(A)={\frac {1}{m}}\#\left\{\lambda _{j}\in A\right\},\quad A\subset \mathbb {R} .}$

Theorem. Assume that ${\displaystyle m,\,n\,\to \,\infty }$ so that the ratio ${\displaystyle m/n\,\to \,\lambda \in (0,+\infty )}$. Then ${\displaystyle \mu _{m}\,\to \,\mu }$ in distribution (with respect to the weak-* topology), where

${\displaystyle \mu (A)={\begin{cases}(1-{\frac {1}{\lambda }})\mathbf {1} _{0\in A}+\nu (A),&{\text{if }}\lambda >1\\\nu (A),&{\text{if }}0\leq \lambda \leq 1,\end{cases}}}$

and

${\displaystyle d\nu (x)={\frac {1}{2\pi \sigma ^{2}}}{\frac {\sqrt {(\lambda _{+}-x)(x-\lambda _{-})}}{\lambda x}}\,\mathbf {1} _{x\in [\lambda _{-},\lambda _{+}]}\,dx}$

with

${\displaystyle \lambda _{\pm }=\sigma ^{2}(1\pm {\sqrt {\lambda }})^{2}.}$
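The theorem can be checked empirically. Below is a minimal Monte Carlo sketch (not part of the article): it samples a large matrix with i.i.d. Gaussian entries, forms ${\displaystyle Y_{n}=XX^{T}/n}$, and verifies that the eigenvalues fall inside ${\displaystyle [\lambda _{-},\lambda _{+}]}$ and that the density ${\displaystyle \nu }$ has total mass 1 (since here ${\displaystyle \lambda \leq 1}$). The dimensions and the Gaussian choice are illustrative; the theorem only requires i.i.d. entries with finite variance.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, m, n = 1.0, 500, 1000            # aspect ratio lambda = m/n = 0.5
lam = m / n

# Sample X with i.i.d. N(0, sigma^2) entries and form Y = X X^T / n.
X = rng.normal(0.0, sigma, size=(m, n))
eigs = np.linalg.eigvalsh(X @ X.T / n)

# Support edges predicted by the Marchenko-Pastur law.
lam_minus = sigma**2 * (1 - np.sqrt(lam))**2
lam_plus = sigma**2 * (1 + np.sqrt(lam))**2

def mp_density(x):
    """Density of nu on [lam_minus, lam_plus]; zero elsewhere."""
    inside = (x > lam_minus) & (x < lam_plus)
    out = np.zeros_like(x)
    out[inside] = (np.sqrt((lam_plus - x[inside]) * (x[inside] - lam_minus))
                   / (2 * np.pi * sigma**2 * lam * x[inside]))
    return out

# Fraction of sampled eigenvalues inside the predicted support (should be ~1).
frac_inside = np.mean((eigs > lam_minus - 0.05) & (eigs < lam_plus + 0.05))

# Riemann-sum mass of nu (should be ~1 since lambda <= 1 here).
xs = np.linspace(lam_minus, lam_plus, 100_001)
mass = np.sum(mp_density(xs)) * (xs[1] - xs[0])

print(frac_inside, mass)
```

Because the mean eigenvalue equals ${\displaystyle \operatorname {tr} (Y_{n})/m}$, it should also be close to ${\displaystyle \sigma ^{2}}$ for large matrices.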

The Marchenko–Pastur law also arises as the free Poisson law in free probability theory, having rate ${\displaystyle 1/\lambda }$ and jump size ${\displaystyle \sigma ^{2}}$.

## Cumulative distribution function

Using the same notation, the cumulative distribution function reads

${\displaystyle F_{\lambda }(x)={\begin{cases}{\frac {\lambda -1}{\lambda }}\mathbf {1} _{x\in [0,\lambda _{-})}+\left({\frac {\lambda -1}{2\lambda }}+F(x)\right)\mathbf {1} _{x\in [\lambda _{-},\lambda _{+})}+\mathbf {1} _{x\in [\lambda _{+},\infty )},&{\text{if }}\lambda >1\\F(x)\mathbf {1} _{x\in [\lambda _{-},\lambda _{+})}+\mathbf {1} _{x\in [\lambda _{+},\infty )},&{\text{if }}0\leq \lambda \leq 1,\end{cases}}}$

where ${\textstyle F(x)={\frac {1}{2\pi \lambda }}\left(\pi \lambda +\sigma ^{-2}{\sqrt {(\lambda _{+}-x)(x-\lambda _{-})}}-(1+\lambda )\arctan {\frac {r(x)^{2}-1}{2r(x)}}+(1-\lambda )\arctan {\frac {\lambda _{-}r(x)^{2}-\lambda _{+}}{2\sigma ^{2}(1-\lambda )r(x)}}\right)}$ and ${\displaystyle r(x)={\sqrt {\frac {\lambda _{+}-x}{x-\lambda _{-}}}}}$.
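As a sanity check on the closed form, the following sketch (not part of the article; ${\displaystyle \sigma ^{2}=1}$ and ${\displaystyle \lambda =1/4}$ are illustrative choices) implements ${\displaystyle F(x)}$ for the case ${\displaystyle \lambda \leq 1}$ and compares it against a direct Riemann sum of the density ${\displaystyle \nu }$:

```python
import numpy as np

lam = 0.25                          # aspect ratio; sigma^2 = 1 assumed
lm = (1 - np.sqrt(lam))**2          # lambda_-
lp = (1 + np.sqrt(lam))**2          # lambda_+

def density(x):
    """Marchenko-Pastur density on (lm, lp) for sigma^2 = 1."""
    return np.sqrt((lp - x) * (x - lm)) / (2 * np.pi * lam * x)

def F(x):
    """Closed-form CDF on (lm, lp), as stated above, for sigma^2 = 1."""
    r = np.sqrt((lp - x) / (x - lm))
    return (np.pi * lam
            + np.sqrt((lp - x) * (x - lm))
            - (1 + lam) * np.arctan((r**2 - 1) / (2 * r))
            + (1 - lam) * np.arctan((lm * r**2 - lp)
                                    / (2 * (1 - lam) * r))
            ) / (2 * np.pi * lam)

# Cross-check F at an interior point against numerical integration.
x0 = 1.0
xs = np.linspace(lm + 1e-9, x0, 200_001)
numeric = np.sum(density(xs)) * (xs[1] - xs[0])
print(F(x0), numeric)
```

At the edges of the support the formula behaves as expected: ${\displaystyle F(x)\to 0}$ as ${\displaystyle x\to \lambda _{-}}$ and ${\displaystyle F(x)\to 1}$ as ${\displaystyle x\to \lambda _{+}}$.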

## Some transforms of this law

The Cauchy transform (which is the negative of the Stieltjes transformation), when ${\displaystyle \sigma ^{2}=1}$, is given by

${\displaystyle G_{\mu }(z)={\frac {z+\lambda -1-{\sqrt {(z-\lambda -1)^{2}-4\lambda }}}{2\lambda z}}}$

This gives an ${\displaystyle R}$-transform of:

${\displaystyle R_{\mu }(z)={\frac {1}{1-\lambda z}}}$
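The two transforms are linked by the standard identity ${\displaystyle R_{\mu }(G_{\mu }(z))+1/G_{\mu }(z)=z}$, which can be verified numerically. The sketch below (not part of the article; ${\displaystyle \lambda =1/2}$ and the evaluation point are illustrative) checks this at a real point outside the support:

```python
import numpy as np

lam = 0.5                           # sigma^2 = 1 assumed

def G(z):
    """Cauchy transform of the Marchenko-Pastur law (sigma^2 = 1)."""
    return (z + lam - 1 - np.sqrt((z - lam - 1)**2 - 4 * lam)) / (2 * lam * z)

def R(z):
    """R-transform of the Marchenko-Pastur law (sigma^2 = 1)."""
    return 1 / (1 - lam * z)

# Functional-inverse identity: R(G(z)) + 1/G(z) == z off the support.
z = 4.0
g = G(z)
print(R(g) + 1 / g)                 # ~ 4.0
```

For large ${\displaystyle z}$, ${\displaystyle G_{\mu }(z)\sim 1/z}$, consistent with ${\displaystyle \mu }$ being a probability measure.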

## Application to correlation matrices

When applied to correlation matrices, ${\displaystyle \sigma ^{2}=1}$ and ${\displaystyle \lambda =m/n}$, which leads to the bounds

${\displaystyle \lambda _{\pm }=\left(1\pm {\sqrt {\frac {m}{n}}}\right)^{2}.}$

Hence, eigenvalues of a correlation matrix below ${\displaystyle \lambda _{+}}$ are often assumed to arise by chance, while eigenvalues above ${\displaystyle \lambda _{+}}$ are taken to represent significant common factors. For instance, for the correlation matrix of a year-long series (i.e. 252 trading days) of 10 stock returns, ${\displaystyle \lambda _{+}=\left(1+{\sqrt {\frac {10}{252}}}\right)^{2}\approx 1.44}$. Of the 10 eigenvalues of the correlation matrix, only those above 1.44 would be considered significant.
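This filtering procedure can be sketched as follows (not part of the article; the simulated i.i.d. "returns" are a hypothetical stand-in for real stock data, so essentially no eigenvalue should clear the threshold):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 10, 252                              # 10 assets, 252 trading days

# Pure-noise returns: i.i.d. entries, no common factor present.
returns = rng.normal(size=(n, m))
corr = np.corrcoef(returns, rowvar=False)   # m x m correlation matrix
eigs = np.linalg.eigvalsh(corr)

# Marchenko-Pastur upper edge for correlation matrices.
lam_plus = (1 + np.sqrt(m / n))**2

# Eigenvalues above lam_plus would be flagged as significant factors.
significant = eigs[eigs > lam_plus]
print(lam_plus, len(significant))
```

With real returns containing a market factor, one would instead expect at least one eigenvalue well above ${\displaystyle \lambda _{+}}$.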