
# von Mises–Fisher distribution

In directional statistics, the von Mises–Fisher distribution (named after Ronald Fisher and Richard von Mises) is a probability distribution on the ${\displaystyle (p-1)}$-sphere in ${\displaystyle \mathbb {R} ^{p}}$. If ${\displaystyle p=2}$, the distribution reduces to the von Mises distribution on the circle.

The probability density function of the von Mises–Fisher distribution for the random p-dimensional unit vector ${\displaystyle \mathbf {x} \,}$ is given by:

${\displaystyle f_{p}(\mathbf {x} ;{\boldsymbol {\mu }},\kappa )=C_{p}(\kappa )\exp \left({\kappa {\boldsymbol {\mu }}^{T}\mathbf {x} }\right),}$

where ${\displaystyle \kappa \geq 0,\left\Vert {\boldsymbol {\mu }}\right\Vert =1\,}$ and the normalization constant ${\displaystyle C_{p}(\kappa )\,}$ is equal to

${\displaystyle C_{p}(\kappa )={\frac {\kappa ^{p/2-1}}{(2\pi )^{p/2}I_{p/2-1}(\kappa )}},}$

where ${\displaystyle I_{v}}$ denotes the modified Bessel function of the first kind of order ${\displaystyle v}$. If ${\displaystyle p=3}$, the normalization constant reduces to

${\displaystyle C_{3}(\kappa )={\frac {\kappa }{4\pi \sinh \kappa }}={\frac {\kappa }{2\pi (e^{\kappa }-e^{-\kappa })}}.}$
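The density and its normalization constant can be evaluated directly with SciPy's modified Bessel function. The following is an illustrative sketch (function names are my own, not from any standard library); it also checks that the general constant ${\displaystyle C_{p}(\kappa )}$ agrees with the closed form ${\displaystyle C_{3}(\kappa )}$ above when ${\displaystyle p=3}$.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def vmf_pdf(x, mu, kappa):
    """Density f_p(x; mu, kappa) of the von Mises-Fisher distribution.

    x and mu must be unit vectors of the same dimension p; kappa >= 0.
    """
    p = len(mu)
    # Normalization constant C_p(kappa) from the formula above
    c_p = kappa**(p / 2 - 1) / ((2 * np.pi)**(p / 2) * iv(p / 2 - 1, kappa))
    return c_p * np.exp(kappa * np.dot(mu, x))

# Sanity check for p = 3: C_3(kappa) = kappa / (4 pi sinh kappa),
# so the density at x = mu should equal C_3(kappa) * exp(kappa).
mu = np.array([0.0, 0.0, 1.0])
kappa = 2.0
c3 = kappa / (4 * np.pi * np.sinh(kappa))
print(np.isclose(vmf_pdf(mu, mu, kappa), c3 * np.exp(kappa)))  # True
```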

The parameters ${\displaystyle {\boldsymbol {\mu }}\,}$ and ${\displaystyle \kappa \,}$ are called the mean direction and concentration parameter, respectively. The greater the value of ${\displaystyle \kappa \,}$, the higher the concentration of the distribution around the mean direction ${\displaystyle {\boldsymbol {\mu }}\,}$. The distribution is unimodal for ${\displaystyle \kappa >0\,}$, and is uniform on the sphere for ${\displaystyle \kappa =0\,}$.

The von Mises–Fisher distribution for ${\displaystyle p=3}$, also called the Fisher distribution, was first used to model the interaction of electric dipoles in an electric field (Mardia & Jupp, 1999). Other applications are found in geology, bioinformatics, and text mining.

## Relation to normal distribution

Starting from a normal distribution

${\displaystyle G_{p}(\mathbf {x} ;{\boldsymbol {\mu }},\kappa )=\left({\sqrt {\frac {\kappa }{2\pi }}}\right)^{p}\exp \left(-\kappa {\frac {(\mathbf {x} -{\boldsymbol {\mu }})^{2}}{2}}\right),}$

the von Mises–Fisher distribution is obtained by expanding

${\displaystyle (\mathbf {x} -{\boldsymbol {\mu }})^{2}=\mathbf {x} ^{2}+{\boldsymbol {\mu }}^{2}-2{\boldsymbol {\mu }}^{T}\mathbf {x} ,}$

using the fact that ${\displaystyle \mathbf {x} }$ and ${\displaystyle {\boldsymbol {\mu }}}$ are unit vectors, and recomputing the normalization constant by integrating ${\displaystyle \mathbf {x} }$ over the unit sphere.
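Spelled out, since ${\displaystyle \mathbf {x} ^{2}={\boldsymbol {\mu }}^{2}=1}$, the exponent in ${\displaystyle G_{p}}$ simplifies as

```latex
\exp\left(-\kappa\,\frac{(\mathbf{x}-\boldsymbol{\mu})^{2}}{2}\right)
  = \exp\left(-\kappa\,\frac{2 - 2\boldsymbol{\mu}^{T}\mathbf{x}}{2}\right)
  = e^{-\kappa}\,\exp\left(\kappa\,\boldsymbol{\mu}^{T}\mathbf{x}\right),
```

and the constant factor ${\displaystyle e^{-\kappa }}$ is absorbed into the new normalization constant when integrating over the unit sphere.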

## Estimation of parameters

A series of N independent measurements ${\displaystyle x_{i}}$ is drawn from a von Mises–Fisher distribution. Define

${\displaystyle A_{p}(\kappa )={\frac {I_{p/2}(\kappa )}{I_{p/2-1}(\kappa )}}.\,}$

Then (Mardia & Jupp, 1999) the maximum likelihood estimates of ${\displaystyle \mu \,}$ and ${\displaystyle \kappa \,}$ are given by the sufficient statistic

${\displaystyle {\bar {x}}={\frac {1}{N}}\sum _{i=1}^{N}x_{i},}$

as

${\displaystyle \mu ={\bar {x}}/{\bar {R}},{\text{where }}{\bar {R}}=\|{\bar {x}}\|,}$

and

${\displaystyle \kappa =A_{p}^{-1}({\bar {R}}).}$

Thus ${\displaystyle \kappa \,}$ is the solution to

${\displaystyle A_{p}(\kappa )={\frac {\|\sum _{i=1}^{N}x_{i}\|}{N}}={\bar {R}}.}$

A simple approximation to ${\displaystyle \kappa }$ is (Sra, 2011)

${\displaystyle {\hat {\kappa }}={\frac {{\bar {R}}(p-{\bar {R}}^{2})}{1-{\bar {R}}^{2}}},}$

but a more accurate estimate can be obtained by iterating Newton's method a few times:

${\displaystyle {\hat {\kappa }}_{1}={\hat {\kappa }}-{\frac {A_{p}({\hat {\kappa }})-{\bar {R}}}{1-A_{p}({\hat {\kappa }})^{2}-{\frac {p-1}{\hat {\kappa }}}A_{p}({\hat {\kappa }})}},}$
${\displaystyle {\hat {\kappa }}_{2}={\hat {\kappa }}_{1}-{\frac {A_{p}({\hat {\kappa }}_{1})-{\bar {R}}}{1-A_{p}({\hat {\kappa }}_{1})^{2}-{\frac {p-1}{{\hat {\kappa }}_{1}}}A_{p}({\hat {\kappa }}_{1})}}.}$
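The whole estimation procedure above can be sketched in a few lines of Python. This is a minimal illustration with my own function names: the sample mean as sufficient statistic, the mean direction ${\displaystyle \mu ={\bar {x}}/{\bar {R}}}$, Sra's approximation for ${\displaystyle \kappa }$, and two Newton refinements of ${\displaystyle A_{p}(\kappa )={\bar {R}}}$.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def a_p(kappa, p):
    """A_p(kappa) = I_{p/2}(kappa) / I_{p/2-1}(kappa)."""
    return iv(p / 2, kappa) / iv(p / 2 - 1, kappa)

def fit_vmf(x):
    """ML estimates (mu, kappa) from an (N, p) array of unit row vectors."""
    n, p = x.shape
    x_bar = x.mean(axis=0)             # sufficient statistic
    r_bar = np.linalg.norm(x_bar)
    mu = x_bar / r_bar                 # mean direction estimate
    # Sra (2011) approximation for kappa
    kappa = r_bar * (p - r_bar**2) / (1 - r_bar**2)
    # Two Newton steps on A_p(kappa) - r_bar = 0, as in the iteration above
    for _ in range(2):
        ak = a_p(kappa, p)
        kappa -= (ak - r_bar) / (1 - ak**2 - (p - 1) / kappa * ak)
    return mu, kappa
```

A usage example: draw points clustered near the north pole, normalize them to the sphere, and fit; the recovered `mu` is a unit vector and `a_p(kappa, p)` matches ${\displaystyle {\bar {R}}}$ to high accuracy after the Newton steps.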

For N ≥ 25, the estimated spherical standard error of the sample mean direction can be computed as[1]

${\displaystyle {\hat {\sigma }}=\left({\frac {d}{N{\bar {R}}^{2}}}\right)^{1/2}}$

where

${\displaystyle d=1-{\frac {1}{N}}\sum _{i=1}^{N}(\mu ^{T}x_{i})^{2}.}$

It is then possible to approximate a ${\displaystyle 100(1-\alpha )\%}$ confidence cone about ${\displaystyle \mu }$ with semi-vertical angle

${\displaystyle q=\arcsin(e_{\alpha }^{1/2}{\hat {\sigma }}),}$ where ${\displaystyle e_{\alpha }=-\ln(\alpha ).}$

For example, for a 95% confidence cone, ${\displaystyle \alpha =0.05,e_{\alpha }=-\ln(0.05)=2.996,}$ and thus ${\displaystyle q=\arcsin(1.731{\hat {\sigma }}).}$
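Putting the spherical standard error and the confidence cone together, here is a hedged sketch (names are my own) that assumes the mean direction `mu` has already been estimated and that ${\displaystyle N\geq 25}$ as required above:

```python
import numpy as np

def confidence_cone(x, mu, alpha=0.05):
    """Semi-vertical angle (radians) of the 100(1-alpha)% confidence cone.

    x: (N, p) array of unit vectors, N >= 25; mu: estimated mean direction.
    """
    n = x.shape[0]
    r_bar = np.linalg.norm(x.mean(axis=0))
    d = 1 - np.mean((x @ mu)**2)
    sigma = np.sqrt(d / (n * r_bar**2))        # spherical standard error
    e_alpha = -np.log(alpha)                   # e.g. -ln(0.05) = 2.996
    return np.arcsin(np.sqrt(e_alpha) * sigma)
```

For a tightly concentrated sample the returned angle is small, and for ${\displaystyle \alpha =0.05}$ the factor ${\displaystyle e_{\alpha }^{1/2}}$ evaluates to the 1.731 quoted above.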

## Generalizations

The matrix von Mises–Fisher distribution has the density

${\displaystyle f_{n,p}(\mathbf {X} ;\mathbf {F} )\propto \exp(\operatorname {tr} (\mathbf {F} ^{T}\mathbf {X} ))}$

supported on the Stiefel manifold of ${\displaystyle n\times p}$ orthonormal p-frames ${\displaystyle \mathbf {X} }$, where ${\displaystyle \mathbf {F} }$ is an arbitrary ${\displaystyle n\times p}$ real matrix.[2][3]
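Since the density is stated only up to proportionality, the unnormalized form is straightforward to evaluate. A minimal sketch (my own naming), with a check that the argument really is an orthonormal p-frame:

```python
import numpy as np

def matrix_vmf_unnormalized(x_frame, f):
    """Unnormalized matrix vMF density exp(tr(F^T X)).

    x_frame: (n, p) matrix with orthonormal columns (X^T X = I_p);
    f: arbitrary (n, p) real parameter matrix F.
    """
    return np.exp(np.trace(f.T @ x_frame))

# Example with n = 3, p = 2: the columns of X are orthonormal
x_frame = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
f = np.ones((3, 2))
assert np.allclose(x_frame.T @ x_frame, np.eye(2))  # X is on the Stiefel manifold
print(matrix_vmf_unnormalized(x_frame, f))  # exp(2)
```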