Logistic distribution

Parameters: $\mu$, location (real); $s>0$, scale (real)
Support: $x\in (-\infty ,\infty )$
PDF: ${\frac {e^{-{\frac {x-\mu }{s}}}}{s\left(1+e^{-{\frac {x-\mu }{s}}}\right)^{2}}}$
CDF: ${\frac {1}{1+e^{-{\frac {x-\mu }{s}}}}}$
Mean: $\mu$
Median: $\mu$
Mode: $\mu$
Variance: ${\frac {s^{2}\pi ^{2}}{3}}$
Skewness: $0$
Excess kurtosis: $6/5$
Entropy: $\ln s+2$
MGF: $e^{\mu t}\mathrm {B} (1-st,1+st)$ for $t\in (-1/s,1/s)$, where $\mathrm {B}$ is the Beta function
CF: $e^{it\mu }{\frac {\pi st}{\sinh(\pi st)}}$

In probability theory and statistics, the logistic distribution is a continuous probability distribution. Its cumulative distribution function is the logistic function, which appears in logistic regression and feedforward neural networks. It resembles the normal distribution in shape but has heavier tails (higher kurtosis). The logistic distribution is a special case of the Tukey lambda distribution.


Specification

Probability density function

The probability density function (pdf) of the logistic distribution is given by:

$f(x;\mu ,s)={\frac {e^{-{\frac {x-\mu }{s}}}}{s\left(1+e^{-{\frac {x-\mu }{s}}}\right)^{2}}}={\frac {1}{s\left(e^{\frac {x-\mu }{2s}}+e^{-{\frac {x-\mu }{2s}}}\right)^{2}}}={\frac {1}{4s}}\operatorname {sech} ^{2}\!\left({\frac {x-\mu }{2s}}\right).$

Because the pdf can be expressed in terms of the square of the hyperbolic secant function "sech", the logistic distribution is sometimes referred to as the sech-square(d) distribution.
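The equivalence of the exponential and sech² forms can be verified numerically; a minimal Python sketch (parameter values are illustrative):

```python
import math

def logistic_pdf(x, mu=0.0, s=1.0):
    """Logistic pdf in its exponential form."""
    z = math.exp(-(x - mu) / s)
    return z / (s * (1.0 + z) ** 2)

def sech2_pdf(x, mu=0.0, s=1.0):
    """The same pdf written via the hyperbolic secant."""
    u = (x - mu) / (2.0 * s)
    return (1.0 / math.cosh(u)) ** 2 / (4.0 * s)

# The two forms agree to floating-point precision on a grid of points.
for x in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    assert abs(logistic_pdf(x, mu=1.0, s=2.0) - sech2_pdf(x, mu=1.0, s=2.0)) < 1e-12
```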

Cumulative distribution function

The logistic distribution receives its name from its cumulative distribution function (cdf), which is an instance of the family of logistic functions. The cumulative distribution function of the logistic distribution is also a shifted and scaled version of the hyperbolic tangent.

$F(x;\mu ,s)={\frac {1}{1+e^{-{\frac {x-\mu }{s}}}}}={\frac {1}{2}}+{\frac {1}{2}}\;\operatorname {tanh} \!\left({\frac {x-\mu }{2s}}\right).$

In this equation, x is the random variable, μ is the mean, and s is a scale parameter proportional to the standard deviation.
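The tanh identity can likewise be checked numerically; a short sketch with illustrative parameter values:

```python
import math

def logistic_cdf(x, mu=0.0, s=1.0):
    """CDF in its logistic-function form."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def tanh_cdf(x, mu=0.0, s=1.0):
    """The same CDF as a shifted, scaled hyperbolic tangent."""
    return 0.5 + 0.5 * math.tanh((x - mu) / (2.0 * s))

# Both expressions agree to floating-point precision.
for x in [-5.0, -1.0, 0.0, 0.7, 3.0]:
    assert abs(logistic_cdf(x, 1.0, 2.0) - tanh_cdf(x, 1.0, 2.0)) < 1e-12
```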

Quantile function

The inverse cumulative distribution function (quantile function) of the logistic distribution is a generalization of the logit function. Its derivative is called the quantile density function. They are defined as follows:

$Q(p;\mu ,s)=\mu +s\,\ln \left({\frac {p}{1-p}}\right).$

$Q'(p;s)={\frac {s}{p(1-p)}}.$

Alternative parameterization

An alternative parameterization of the logistic distribution can be derived by expressing the scale parameter, $s$ , in terms of the standard deviation, $\sigma$ , using the substitution $s\,=\,q\,\sigma$ , where $q\,=\,{\sqrt {3}}/{\pi }\,=\,0.551328895\ldots$ . With this substitution, the density, distribution, and quantile functions above can be rewritten directly in terms of $\mu$ and $\sigma$.
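As a sanity check on the substitution, the closed-form variance $s^{2}\pi ^{2}/3$ should reduce to $\sigma^{2}$ when $s = q\,\sigma$; a minimal sketch (the value of σ is illustrative):

```python
import math

sigma = 2.0                    # desired standard deviation (illustrative)
q = math.sqrt(3.0) / math.pi   # q = 0.551328895...
s = q * sigma                  # scale parameter of the alternative form

# The variance of Logistic(mu, s) is s^2 * pi^2 / 3; with s = q * sigma
# it collapses back to sigma^2, confirming the substitution.
variance = s ** 2 * math.pi ** 2 / 3.0
assert abs(variance - sigma ** 2) < 1e-12
```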

Applications

The logistic distribution—and the S-shaped pattern of its cumulative distribution function (the logistic function) and quantile function (the logit function)—have been extensively used in many different areas.

Logistic regression

One of the most common applications is in logistic regression, which is used for modeling categorical dependent variables (e.g., yes-no choices or a choice of 3 or 4 possibilities), much as standard linear regression is used for modeling continuous variables (e.g., income or population). Specifically, logistic regression models can be phrased as latent variable models with error variables following a logistic distribution. This phrasing is common in the theory of discrete choice models, where the logistic distribution plays the same role in logistic regression as the normal distribution does in probit regression. Indeed, the logistic and normal distributions have a quite similar shape. However, the logistic distribution has heavier tails, which often increases the robustness of analyses based on it compared with using the normal distribution.
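The latent-variable phrasing can be illustrated in a few lines: if the latent utility is $t+\varepsilon$ with $\varepsilon \sim \mathrm{Logistic}(0,1)$, then the success probability is $P(\varepsilon > -t) = 1 - F(-t)$, which by the symmetry of the logistic distribution equals the logistic (sigmoid) function of $t$. A sketch, where `t` stands in for a hypothetical linear predictor $x\beta$:

```python
import math

def logistic_cdf(x, mu=0.0, s=1.0):
    """CDF of the logistic distribution."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def sigmoid(t):
    """Standard logistic (sigmoid) function used in logistic regression."""
    return 1.0 / (1.0 + math.exp(-t))

# Latent-variable view: observe y = 1 iff the latent utility t + eps is
# positive, with noise eps ~ Logistic(0, 1).  Then
#   P(y = 1) = P(eps > -t) = 1 - F(-t),
# which by symmetry equals sigmoid(t), the logistic regression probability.
for t in [-2.0, -0.3, 0.0, 1.1, 4.0]:
    p_latent = 1.0 - logistic_cdf(-t)
    assert abs(p_latent - sigmoid(t)) < 1e-12
```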

Physics

The PDF of this distribution has the same functional form as the derivative of the Fermi function. In the theory of electron properties in semiconductors and metals, this derivative sets the relative weight of the various electron energies in their contributions to electron transport. Those energy levels whose energies are closest to the distribution's "mean" (Fermi level) dominate processes such as electronic conduction, with some smearing induced by temperature. Note, however, that the pertinent probability distribution in Fermi–Dirac statistics is actually a simple Bernoulli distribution, with the probability factor given by the Fermi function.
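The correspondence can be checked by finite differences: $-\,df/dE$ of the Fermi factor should match the logistic pdf with location at the Fermi level and scale $s = kT$. A sketch with illustrative values for the Fermi level and thermal energy:

```python
import math

def fermi(E, mu, kT):
    """Fermi-Dirac occupation factor."""
    return 1.0 / (1.0 + math.exp((E - mu) / kT))

def logistic_pdf(x, mu, s):
    """Logistic density."""
    z = math.exp(-(x - mu) / s)
    return z / (s * (1.0 + z) ** 2)

mu, kT = 0.5, 0.025  # illustrative Fermi level and thermal energy (eV)
h = 1e-6             # finite-difference step
for E in [0.40, 0.48, 0.50, 0.52, 0.60]:
    minus_dfdE = -(fermi(E + h, mu, kT) - fermi(E - h, mu, kT)) / (2.0 * h)
    # -df/dE is the logistic pdf with location mu and scale s = kT.
    assert abs(minus_dfdE - logistic_pdf(E, mu, s=kT)) < 1e-4
```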

The logistic distribution arises as limit distribution of a finite-velocity damped random motion described by a telegraph process in which the random times between consecutive velocity changes have independent exponential distributions with linearly increasing parameters.

Hydrology

In hydrology the distribution of long-duration river discharge and rainfall (e.g., monthly and yearly totals, consisting of the sum of 30 and 360 daily values, respectively) is often thought to be almost normal, according to the central limit theorem. The normal distribution, however, requires numerical evaluation of its cdf. Because the logistic distribution has a closed-form cdf and is similar in shape to the normal distribution, it can be used instead. One example fits the logistic distribution to ranked October rainfalls, which are almost normally distributed, together with a 90% confidence belt based on the binomial distribution; the rainfall data are represented by plotting positions as part of a cumulative frequency analysis.

Chess ratings

The United States Chess Federation and FIDE have switched their formulas for calculating chess ratings from the normal distribution to the logistic distribution; see the article on the Elo rating system (which was originally based on the normal distribution).

Related distributions

• The logistic distribution mimics the sech distribution.
• If X ~ Logistic(μ, β) then kX + ℓ ~ Logistic(kμ + ℓ, |k|β).
• If X ~ U(0, 1) then μ + β(log(X) − log(1 − X)) ~ Logistic(μ, β).
• If $X\sim \mathrm {Gumbel} (\alpha _{X},\beta )$ and $Y\sim \mathrm {Gumbel} (\alpha _{Y},\beta )$ then $X-Y\sim \mathrm {Logistic} (\alpha _{X}-\alpha _{Y},\beta )\,$ .
• If $X$ and $Y\sim \mathrm {Gumbel} (\alpha ,\beta )$ then $X+Y\nsim \mathrm {Logistic} (2\alpha ,\beta )\,$ (The sum is not a logistic distribution). Note that $E(X+Y)=2\alpha +2\beta \gamma \neq 2\alpha =E\left(\mathrm {Logistic} (2\alpha ,\beta )\right)$ .
• If X ~ Logistic(μ, s) then exp(X) ~ LogLogistic$\left(\alpha =e^{\mu },\beta ={\frac {1}{s}}\right)$ , and exp(X) + γ ~ shifted log-logistic $\left(\alpha =e^{\mu },\beta ={\frac {1}{s}},\gamma \right)$ .
• If X ~ Exponential(1) then $\mu +\beta \log(e^{X}-1)\sim \operatorname {Logistic} (\mu ,\beta ).$
• If X, Y ~ Exponential(1) then $\mu -\beta \log \left({\frac {X}{Y}}\right)\sim \operatorname {Logistic} (\mu ,\beta ).$

Derivations

Higher-order moments

The nth-order central moment can be expressed in terms of the quantile function:

{\begin{aligned}\operatorname {E} [(X-\mu )^{n}]&=\int _{-\infty }^{\infty }(x-\mu )^{n}\,dF(x)\\&=\int _{0}^{1}{\big (}Q(p)-\mu {\big )}^{n}\,dp=s^{n}\int _{0}^{1}\left[\ln \!\left({\frac {p}{1-p}}\right)\right]^{n}\,dp.\end{aligned}}

This integral is well known and can be expressed in terms of Bernoulli numbers:

$\operatorname {E} [(X-\mu )^{n}]=s^{n}\pi ^{n}(2^{n}-2)\cdot |B_{n}|,$ where $B_{n}$ denotes the $n$th Bernoulli number. For odd $n$ the right-hand side vanishes, consistent with the symmetry of the distribution, and for $n=2$ it recovers the variance $s^{2}\pi ^{2}/3$.
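The closed form can be cross-checked against a direct numerical integration of $(x-\mu )^{n}f(x)$ for small even $n$; a sketch (the grid, cutoff, and tolerance are chosen generously for illustration):

```python
import math
from fractions import Fraction

def logistic_pdf(x, mu, s):
    """Logistic density, exponential form."""
    z = math.exp(-(x - mu) / s)
    return z / (s * (1.0 + z) ** 2)

def numeric_central_moment(n, mu=0.0, s=1.0, half_width=60.0, steps=200000):
    """Trapezoidal estimate of E[(X - mu)^n] for Logistic(mu, s)."""
    a = mu - half_width * s
    h = 2.0 * half_width * s / steps
    total = 0.0
    for i in range(steps + 1):
        x = a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (x - mu) ** n * logistic_pdf(x, mu, s)
    return total * h

# Closed form s^n * pi^n * (2^n - 2) * |B_n| with Bernoulli numbers
# B_2 = 1/6 and B_4 = -1/30.
bernoulli = {2: Fraction(1, 6), 4: Fraction(-1, 30)}
for n in (2, 4):
    closed = math.pi ** n * (2 ** n - 2) * abs(float(bernoulli[n]))
    assert abs(numeric_central_moment(n) - closed) < 1e-3
```

For $n=2$ the closed form gives $\pi ^{2}/3$, the variance of the standard logistic distribution, and for $n=4$ it gives $7\pi ^{4}/15$, consistent with the excess kurtosis of $6/5$.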