
Multinomial distribution

Parameters: ${\displaystyle n>0}$ number of trials (integer); ${\displaystyle p_{1},\ldots ,p_{k}}$ event probabilities (${\displaystyle \Sigma p_{i}=1}$)
Support: ${\displaystyle x_{i}\in \{0,\dots ,n\},\ i\in \{1,\dots ,k\},{\text{ with }}\sum _{i}x_{i}=n}$
Probability mass function: ${\displaystyle {\frac {n!}{x_{1}!\cdots x_{k}!}}p_{1}^{x_{1}}\cdots p_{k}^{x_{k}}}$
Mean: ${\displaystyle \operatorname {E} (X_{i})=np_{i}}$
Variance: ${\displaystyle \operatorname {Var} (X_{i})=np_{i}(1-p_{i})}$; ${\displaystyle \operatorname {Cov} (X_{i},X_{j})=-np_{i}p_{j}~~(i\neq j)}$
Entropy: ${\displaystyle -\log(n!)-n\sum _{i=1}^{k}p_{i}\log(p_{i})+\sum _{i=1}^{k}\sum _{x_{i}=0}^{n}{\binom {n}{x_{i}}}p_{i}^{x_{i}}(1-p_{i})^{n-x_{i}}\log(x_{i}!)}$
Moment-generating function: ${\displaystyle {\biggl (}\sum _{i=1}^{k}p_{i}e^{t_{i}}{\biggr )}^{n}}$
Characteristic function: ${\displaystyle \left(\sum _{j=1}^{k}p_{j}e^{it_{j}}\right)^{n}}$ where ${\displaystyle i^{2}=-1}$
Probability-generating function: ${\displaystyle {\biggl (}\sum _{i=1}^{k}p_{i}z_{i}{\biggr )}^{n}{\text{ for }}(z_{1},\ldots ,z_{k})\in \mathbb {C} ^{k}}$

In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.

When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is greater than 1, it is the binomial distribution. When k is greater than 2 and n is 1, it is the categorical distribution.

The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a k-sided die n times.

Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p1, ..., pk, and n independent trials. Since the k outcomes are mutually exclusive and one must occur, we have pi ≥ 0 for i = 1, ..., k and ${\displaystyle \sum _{i=1}^{k}p_{i}=1}$. Then if the random variables Xi indicate the number of times outcome number i is observed over the n trials, the vector X = (X1, ..., Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk). While the trials are independent, their outcomes X are dependent because they must sum to n.

In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range ${\displaystyle 1\dots K}$; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.

Specification

Probability mass function

Suppose one does an experiment of extracting n balls of k different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color i (i = 1, ..., k) as Xi, and denote as pi the probability that a given extraction will be in color i. The probability mass function of this multinomial distribution is:

${\displaystyle {\begin{aligned}f(x_{1},\ldots ,x_{k};n,p_{1},\ldots ,p_{k})&{}=\Pr(X_{1}=x_{1}{\text{ and }}\dots {\text{ and }}X_{k}=x_{k})\\&{}={\begin{cases}{\displaystyle {n! \over x_{1}!\cdots x_{k}!}}p_{1}^{x_{1}}\times \cdots \times p_{k}^{x_{k}},\quad &{\text{when }}\sum _{i=1}^{k}x_{i}=n\\\\0&{\text{otherwise,}}\end{cases}}\end{aligned}}}$

for non-negative integers x1, ..., xk.
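As a minimal sketch, the probability mass function above can be computed directly with the standard library; the helper name `multinomial_pmf` is illustrative, not a standard API.

```python
from math import factorial, prod

def multinomial_pmf(x, n, p):
    """PMF of the multinomial distribution:
    n!/(x_1! ... x_k!) * p_1^x_1 * ... * p_k^x_k,
    and 0 when the counts do not sum to n (matching the case split above)."""
    if sum(x) != n:
        return 0.0
    coef = factorial(n)
    for xi in x:
        coef //= factorial(xi)
    return coef * prod(pi ** xi for pi, xi in zip(p, x))

# Two fair-coin flips: P(one head, one tail) = 2!/(1!1!) * 0.5 * 0.5 = 0.5
print(multinomial_pmf([1, 1], 2, [0.5, 0.5]))  # -> 0.5
```

The multinomial coefficient is accumulated with integer division, which stays exact because each intermediate quotient is itself a binomial coefficient.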

The probability mass function can be expressed using the gamma function as:

${\displaystyle f(x_{1},\dots ,x_{k};p_{1},\ldots ,p_{k})={\frac {\Gamma (\sum _{i}x_{i}+1)}{\prod _{i}\Gamma (x_{i}+1)}}\prod _{i=1}^{k}p_{i}^{x_{i}}.}$

This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.

Visualization

As slices of generalized Pascal's triangle

Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension—i.e. a simplex with a grid.[citation needed]

As polynomial coefficients

Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of ${\displaystyle (p+(1-p))^{n}}$ when expanded, one can interpret the multinomial distribution as the coefficients of ${\displaystyle (p_{1}+p_{2}+p_{3}+\cdots +p_{k})^{n}}$ when expanded. (Note that, just as for the binomial distribution, the coefficients must sum to 1, since the probabilities themselves sum to 1.) This is the origin of the name "multinomial distribution".

Properties

The expected number of times the outcome i was observed over n trials is

${\displaystyle \operatorname {E} (X_{i})=np_{i}.\,}$

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

${\displaystyle \operatorname {Var} (X_{i})=np_{i}(1-p_{i}).\,}$

The off-diagonal entries are the covariances:

${\displaystyle \operatorname {Cov} (X_{i},X_{j})=-np_{i}p_{j}\,}$

for i, j distinct.

All covariances are negative because for fixed n, an increase in one component of a multinomial vector requires a decrease in another component.

When these expressions are combined into a matrix with i, j element ${\displaystyle \operatorname {cov} (X_{i},X_{j}),}$ the result is a k × k positive-semidefinite covariance matrix of rank k − 1. In the special case where k = n and where the pi are all equal, the covariance matrix is the centering matrix.

The entries of the corresponding correlation matrix are

${\displaystyle \rho (X_{i},X_{i})=1.}$
${\displaystyle \rho (X_{i},X_{j})={\frac {\operatorname {Cov} (X_{i},X_{j})}{\sqrt {\operatorname {Var} (X_{i})\operatorname {Var} (X_{j})}}}={\frac {-p_{i}p_{j}}{\sqrt {p_{i}(1-p_{i})p_{j}(1-p_{j})}}}=-{\sqrt {\frac {p_{i}p_{j}}{(1-p_{i})(1-p_{j})}}}.}$

Note that the sample size drops out of this expression.

Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.

The support of the multinomial distribution is the set

${\displaystyle \{(n_{1},\dots ,n_{k})\in \mathbb {N} ^{k}\mid n_{1}+\cdots +n_{k}=n\}.\,}$

Its number of elements is

${\displaystyle {n+k-1 \choose k-1}.}$
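For small cases this stars-and-bars count can be checked by brute force; `math.comb` evaluates the binomial coefficient, and the values of n and k below are illustrative.

```python
from math import comb
from itertools import product

n, k = 6, 3  # illustrative values
# Enumerate all vectors (n_1, ..., n_k) of non-negative integers summing to n
support = [v for v in product(range(n + 1), repeat=k) if sum(v) == n]

print(len(support), comb(n + k - 1, k - 1))  # both are 28 when n = 6, k = 3
```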

Matrix notation

In matrix notation,

${\displaystyle \operatorname {E} (\mathbf {X} )=n\mathbf {p} ,\,}$

and

${\displaystyle \operatorname {Var} (\mathbf {X} )=n\lbrace \operatorname {diag} (\mathbf {p} )-\mathbf {p} \mathbf {p} ^{\rm {T}}\rbrace ,\,}$

with pT = the row vector transpose of the column vector p.
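These matrix formulas can be sketched in plain Python (no linear-algebra library assumed). Each row of the covariance matrix sums to zero, reflecting the constraint that the counts sum to n; the numerical values below are illustrative.

```python
n = 10
p = [0.2, 0.3, 0.5]
k = len(p)

mean = [n * pi for pi in p]  # E(X) = n p

# Var(X) = n { diag(p) - p p^T }
cov = [[n * ((p[i] if i == j else 0.0) - p[i] * p[j]) for j in range(k)]
       for i in range(k)]

for i in range(k):
    # Diagonal entries equal n p_i (1 - p_i) ...
    assert abs(cov[i][i] - n * p[i] * (1 - p[i])) < 1e-12
    # ... and every row sums to zero, so the matrix has rank at most k - 1.
    assert abs(sum(cov[i])) < 1e-12
```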

Example

Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?

Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large.

${\displaystyle \Pr(A=1,B=2,C=3)={\frac {6!}{1!2!3!}}(0.2^{1})(0.3^{2})(0.5^{3})=0.135}$

Sampling from a multinomial distribution

First, reorder the parameters ${\displaystyle p_{1},\ldots ,p_{k}}$ such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable X from a uniform (0, 1) distribution. The resulting outcome is the component

${\displaystyle j=\min \left\{j'\in \{1,\dots ,k\}:\left(\sum _{i=1}^{j'}p_{i}\right)-X\geq 0\right\}.}$

Setting ${\displaystyle X_{j}=1}$ and ${\displaystyle X_{i}=0}$ for ${\displaystyle i\neq j}$ gives one observation from the multinomial distribution with ${\displaystyle p_{1},\ldots ,p_{k}}$ and n = 1. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with n equal to the number of such repetitions.
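The procedure above can be sketched as follows; the function names are illustrative, and the code uses 0-based category indices where the text uses 1-based.

```python
import random

def draw_category(p, u=None):
    """Return the smallest index j whose cumulative probability reaches u."""
    if u is None:
        u = random.random()  # auxiliary uniform(0, 1) variable
    cumulative = 0.0
    for j, pj in enumerate(p):
        cumulative += pj
        if cumulative >= u:
            return j
    return len(p) - 1  # guard against floating-point round-off

def multinomial_sample(n, p):
    """Sum n single-trial (one-hot) observations into a count vector."""
    counts = [0] * len(p)
    for _ in range(n):
        counts[draw_category(p)] += 1
    return counts

random.seed(0)
print(multinomial_sample(6, [0.2, 0.3, 0.5]))  # counts sum to 6
```

Sorting the probabilities in descending order first, as the text suggests, only shortens the average scan; it does not change the distribution.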

To simulate from a multinomial distribution

Various methods may be used to simulate from a multinomial distribution. A very simple solution is to use a uniform pseudo-random number generator on (0,1). First, we divide the (0,1) interval into k subintervals with lengths equal to the probabilities of the k categories. Then, we generate n independent pseudo-random numbers, determine which of the k intervals each falls into, and count the number of occurrences in each interval.

Example

If we have:

 Categories                    1     2     3     4     5     6
 Probabilities                 0.15  0.2   0.3   0.16  0.12  0.07
 Upper limits of subintervals  0.15  0.35  0.65  0.81  0.93  1

Then, with software such as Excel, we may use the following recipe:

 Cell   Formula
 Ai     =RAND()
 Bi     =IF($Ai<0.15;1;0)
 Ci     =IF(AND($Ai>=0.15;$Ai<0.35);1;0)
 ...    ...
 Gi     =IF($Ai>=0.93;1;0)

After that, we can use a function such as SUMIF to accumulate the observed results by category and to calculate the estimated covariance matrix for each simulated sample.
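The same recipe can be sketched outside a spreadsheet; the cumulative upper limits below follow the interval-subdivision idea described above, and the sample size of 1000 is illustrative.

```python
import random
from bisect import bisect_left

probabilities = [0.15, 0.2, 0.3, 0.16, 0.12, 0.07]

# Upper limits of the k subintervals: 0.15, 0.35, 0.65, 0.81, 0.93, 1
limits = []
total = 0.0
for p in probabilities:
    total += p
    limits.append(total)

random.seed(1)
n = 1000
counts = [0] * len(probabilities)
for _ in range(n):
    u = random.random()
    counts[bisect_left(limits, u)] += 1  # index of the first limit >= u

print(counts)  # one multinomial sample; the entries sum to n
```

`bisect_left` does the job of the chain of IF formulas: it finds, in O(log k), the subinterval into which each uniform draw falls.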

Another way is to use a discrete random number generator. In that case, the categories must be labeled or relabeled with numeric values.

In both cases, the result is a sample from a multinomial distribution with k categories. This is analogous, in the continuous setting, to simulating k independent standard normal variables, that is, a multivariate normal distribution N(0,I) with k identically distributed and statistically independent components.

Since the counts of all categories have to sum to the number of trials, the counts of the categories are always negatively correlated.[1]

Equivalence tests for multinomial distributions

The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.

Let ${\displaystyle q}$ denote a theoretical multinomial distribution and let ${\displaystyle p}$ be a true underlying distribution. The distributions ${\displaystyle p}$ and ${\displaystyle q}$ are considered equivalent if ${\displaystyle d(p,q)<\varepsilon }$ for a distance ${\displaystyle d}$ and a tolerance parameter ${\displaystyle \varepsilon >0}$. The equivalence test problem is ${\displaystyle H_{0}=\{d(p,q)\geq \varepsilon \}}$ versus ${\displaystyle H_{1}=\{d(p,q)<\varepsilon \}}$. The true underlying distribution ${\displaystyle p}$ is unknown. Instead, the counting frequencies ${\displaystyle p_{n}}$ are observed, where ${\displaystyle n}$ is the sample size. An equivalence test uses ${\displaystyle p_{n}}$ to reject ${\displaystyle H_{0}}$. If ${\displaystyle H_{0}}$ can be rejected, then the equivalence between ${\displaystyle p}$ and ${\displaystyle q}$ is shown at a given significance level. An equivalence test for the Euclidean distance can be found in the textbook of Wellek (2010).[2] An equivalence test for the total variation distance is developed in Ostrovski (2017).[3] An exact equivalence test for a specific cumulative distance is proposed in Frey (2009).[4]
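For illustration, the total variation distance used in the test of Ostrovski (2017) can be computed from the observed frequencies; the actual decision rule of the test (critical values and confidence bounds) is omitted here, and the counts below are made-up example data.

```python
def total_variation(p, q):
    """d_TV(p, q) = (1/2) * sum_i |p_i - q_i| over k categories."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

q = [0.25, 0.25, 0.25, 0.25]             # theoretical multinomial parameters
counts = [30, 20, 25, 25]                # hypothetical observed counts, n = 100
p_n = [c / sum(counts) for c in counts]  # observed counting frequencies

# Distance between the observed frequencies and the theoretical distribution;
# equivalence at tolerance eps would require showing d_TV < eps.
print(total_variation(p_n, q))
```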

The distance between the true underlying distribution ${\displaystyle p}$ and a family of multinomial distributions ${\displaystyle {\mathcal {M}}}$ is defined by ${\displaystyle d(p,{\mathcal {M}})=\min _{h\in {\mathcal {M}}}d(p,h)}$. Then the equivalence test problem is given by ${\displaystyle H_{0}=\{d(p,{\mathcal {M}})\geq \varepsilon \}}$ and ${\displaystyle H_{1}=\{d(p,{\mathcal {M}})<\varepsilon \}}$. The distance ${\displaystyle d(p,{\mathcal {M}})}$ is usually computed using numerical optimization. Tests for this case have been developed recently in Ostrovski (2018).[5]

References

Citations

1. ^ "1.7 - The Multinomial Distribution | STAT 504". onlinecourses.science.psu.edu. Retrieved 2016-09-11.
2. ^ Wellek, Stefan (2010). Testing statistical hypotheses of equivalence and noninferiority. Chapman and Hall/CRC. ISBN 978-1439808184.
3. ^ Ostrovski, Vladimir (May 2017). "Testing equivalence of multinomial distributions". Statistics & Probability Letters. 124: 77–82. doi:10.1016/j.spl.2017.01.004. S2CID 126293429.
4. ^ Frey, Jesse (March 2009). "An exact multinomial test for equivalence". The Canadian Journal of Statistics. 37: 47–59. doi:10.1002/cjs.10000.
5. ^ Ostrovski, Vladimir (March 2018). "Testing equivalence to families of multinomial distributions with application to the independence model". Statistics & Probability Letters. 139: 61–66. doi:10.1016/j.spl.2018.03.014. S2CID 126261081.