# Discrete phase-type distribution

The discrete phase-type distribution is a probability distribution that results from a system of one or more interrelated geometric distributions occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of an absorbing Markov chain with one absorbing state. Each of the states of the Markov chain represents one of the phases.

It has a continuous-time equivalent: the phase-type distribution.

## Definition

A terminating Markov chain is a Markov chain where all states are transient, except one which is absorbing. Reordering the states, the transition probability matrix of a terminating Markov chain with ${\displaystyle m}$ transient states is

${\displaystyle {P}=\left[{\begin{matrix}{T}&\mathbf {T} ^{0}\\\mathbf {0} &1\end{matrix}}\right],}$

where ${\displaystyle {T}}$ is an ${\displaystyle m\times m}$ matrix and ${\displaystyle \mathbf {T} ^{0}+{T}\mathbf {1} =\mathbf {1} }$, so that ${\displaystyle \mathbf {T} ^{0}}$ is the column vector of exit probabilities into the absorbing state. The transition matrix is characterized entirely by its upper-left block ${\displaystyle {T}}$.
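This block structure can be checked numerically. In the sketch below the entries of ${T}$ are made-up example values, not taken from the article:

```python
import numpy as np

# Sub-transition matrix T among m = 2 transient states (example values).
T = np.array([[0.4, 0.3],
              [0.2, 0.5]])
ones = np.ones(2)

# Exit probabilities into the absorbing state: T^0 = 1 - T·1.
T0 = ones - T @ ones

# Full transition matrix P of the terminating Markov chain:
# transient block T, exit column T^0, and an absorbing last row.
P = np.block([[T, T0[:, None]],
              [np.zeros((1, 2)), np.ones((1, 1))]])

# Every row of a stochastic matrix sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```

Only ${T}$ had to be specified; the exit column and the absorbing row are forced by the constraint ${\mathbf {T} ^{0}+{T}\mathbf {1} =\mathbf {1}}$, which is what "characterized entirely by its upper-left block" means.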

Definition. A distribution on ${\displaystyle \{0,1,2,...\}}$ is a discrete phase-type distribution if it is the distribution of the first passage time to the absorbing state of a terminating Markov chain with finitely many states.

## Characterization

Fix a terminating Markov chain. Denote by ${\displaystyle {T}}$ the upper-left block of its transition matrix and by ${\displaystyle {\boldsymbol {\tau }}}$ the initial distribution over the transient states. The distribution of the first passage time to the absorbing state is denoted ${\displaystyle \mathrm {PH} _{d}({\boldsymbol {\tau }},{T})}$ or ${\displaystyle \mathrm {DPH} ({\boldsymbol {\tau }},{T})}$.

Its cumulative distribution function is

${\displaystyle F(k)=1-{\boldsymbol {\tau }}{T}^{k}\mathbf {1} ,}$

for ${\displaystyle k=1,2,...}$, and its density function is

${\displaystyle f(k)={\boldsymbol {\tau }}{T}^{k-1}\mathbf {T^{0}} ,}$

for ${\displaystyle k=1,2,...}$. It is assumed that the probability of the process starting in the absorbing state is zero. The factorial moments of the distribution are given by

${\displaystyle E[K(K-1)...(K-n+1)]=n!{\boldsymbol {\tau }}(I-{T})^{-n}{T}^{n-1}\mathbf {1} ,}$

where ${\displaystyle I}$ is the identity matrix of the appropriate dimension.
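The three formulas above are mutually consistent, which can be verified numerically. In the sketch below ${\boldsymbol {\tau }}$ and ${T}$ are made-up example values; the checks confirm that the density sums to the distribution function and that the ${n=1}$ factorial moment, ${\boldsymbol {\tau }}(I-{T})^{-1}\mathbf {1}$, matches the mean computed directly from the density:

```python
import numpy as np

# Example DPH(tau, T) with two transient states (values assumed for the sketch).
T = np.array([[0.4, 0.3],
              [0.2, 0.5]])
tau = np.array([0.6, 0.4])
ones = np.ones(2)
T0 = ones - T @ ones          # exit probabilities into the absorbing state
I = np.eye(2)

def pmf(k):
    """f(k) = tau T^(k-1) T^0 for k = 1, 2, ..."""
    return tau @ np.linalg.matrix_power(T, k - 1) @ T0

def cdf(k):
    """F(k) = 1 - tau T^k 1 for k = 1, 2, ..."""
    return 1.0 - tau @ np.linalg.matrix_power(T, k) @ ones

# The density sums to the distribution function: F(k) = f(1) + ... + f(k).
assert np.isclose(cdf(10), sum(pmf(k) for k in range(1, 11)))

# n = 1 factorial moment: E[K] = 1! tau (I-T)^(-1) T^0 1 = tau (I-T)^(-1) 1.
mean = tau @ np.linalg.inv(I - T) @ ones
assert np.isclose(mean, sum(k * pmf(k) for k in range(1, 200)))

# n = 2 factorial moment: E[K(K-1)] = 2! tau (I-T)^(-2) T 1.
fm2 = 2.0 * tau @ np.linalg.matrix_power(np.linalg.inv(I - T), 2) @ T @ ones
assert np.isclose(fm2, sum(k * (k - 1) * pmf(k) for k in range(1, 200)))
```

The truncation at ${k=200}$ is safe here because the tail of the density decays geometrically at the rate of the spectral radius of ${T}$.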

## Special cases

Just as the continuous-time phase-type distribution is a generalisation of the exponential distribution, the discrete-time distribution is a generalisation of the geometric distribution; for example: