Binary entropy function

Entropy of a Bernoulli trial as a function of binary outcome probability, called the binary entropy function.

In information theory, the binary entropy function, denoted $\operatorname{H}(p)$ or $\operatorname{H}_\text{b}(p)$, is defined as the entropy of a Bernoulli process with probability $p$ of one of two values. It is a special case of $\mathrm{H}(X)$, the entropy function. Mathematically, the Bernoulli trial is modelled as a random variable $X$ that can take on only two values: 0 and 1, which are mutually exclusive and exhaustive.

If $\operatorname{Pr}(X=1) = p$, then $\operatorname{Pr}(X=0) = 1 - p$ and the entropy of $X$ (in shannons) is given by

$\operatorname{H}(X) = \operatorname{H}_\text{b}(p) = -p \log_2 p - (1 - p) \log_2 (1 - p),$

where $0 \log_2 0$ is taken to be 0. The logarithms in this formula are usually taken (as shown in the graph) to base 2. See binary logarithm.

When $p = \tfrac{1}{2}$, the binary entropy function attains its maximum value. This is the case of an unbiased coin flip.
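
A minimal Python sketch of the definition above (the helper name binary_entropy is ours, not from the article); it handles the endpoints explicitly so that $0 \log_2 0$ is treated as 0:

    import math

    def binary_entropy(p: float) -> float:
        """Binary entropy H_b(p) in shannons (bits), with 0*log2(0) taken as 0."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    print(binary_entropy(0.5))  # 1.0: the maximum, reached at the unbiased coin flip
    print(binary_entropy(0.0))  # 0.0: a certain outcome carries no uncertainty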

$\operatorname{H}_\text{b}(p)$ is distinguished from the entropy function $\mathrm{H}(X)$ in that the former takes a single real number as a parameter whereas the latter takes a distribution or random variable as a parameter. Sometimes the binary entropy function is also written as $\operatorname{H}_2(p)$. However, it is different from and should not be confused with the Rényi entropy, which is denoted as $\mathrm{H}_2(X)$.

Explanation

In terms of information theory, entropy is considered to be a measure of the uncertainty in a message. To put it intuitively, suppose $p = 0$. At this probability, the event is certain never to occur, and so there is no uncertainty at all, leading to an entropy of 0. If $p = 1$, the result is again certain, so the entropy is 0 here as well. When $p = 1/2$, the uncertainty is at a maximum; if one were to place a fair bet on the outcome in this case, there is no advantage to be gained with prior knowledge of the probabilities. In this case, the entropy is maximum at a value of 1 bit. Intermediate values fall between these cases; for instance, if $p = 3/4$, there is still a measure of uncertainty on the outcome, but one can still predict the outcome correctly more often than not, so the uncertainty measure, or entropy, is less than 1 full bit.
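
The intermediate case can be checked directly; a short Python calculation (a sketch, with the value rounded for readability) gives roughly 0.81 bits at $p = 3/4$:

    import math

    # Entropy at p = 3/4: the outcome is still uncertain, but predictable
    # more often than not, so the entropy is below 1 bit.
    p = 0.75
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    print(round(h, 4))  # 0.8113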

Derivative

The derivative of the binary entropy function may be expressed as the negative of the logit function:

$\frac{d}{dp} \operatorname{H}_\text{b}(p) = -\operatorname{logit}_2(p) = -\log_2\left(\frac{p}{1-p}\right).$
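
A small self-contained check of this identity, comparing a central finite difference of the entropy against the negative base-2 logit at an arbitrary point ($p = 0.3$, chosen only for illustration):

    import math

    def binary_entropy(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    p, h = 0.3, 1e-6
    finite_diff = (binary_entropy(p + h) - binary_entropy(p - h)) / (2 * h)
    neg_logit2 = -math.log2(p / (1 - p))
    print(finite_diff, neg_logit2)  # both ≈ 1.2224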

Taylor series

The Taylor series of the binary entropy function in a neighborhood of 1/2 is

$\operatorname{H}_\text{b}(p) = 1 - \frac{1}{2 \ln 2} \sum_{n=1}^{\infty} \frac{(1 - 2p)^{2n}}{n(2n - 1)}$

for $0 \le p \le 1$.
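
A brief numerical sketch of the series (a partial sum with an arbitrary cutoff of 50 terms, ample away from the endpoints) agrees with the closed form:

    import math

    def binary_entropy(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def taylor_binary_entropy(p, terms=50):
        # Partial sum of 1 - (1 / (2 ln 2)) * sum_{n>=1} (1-2p)^(2n) / (n(2n-1))
        s = sum((1 - 2 * p) ** (2 * n) / (n * (2 * n - 1)) for n in range(1, terms + 1))
        return 1 - s / (2 * math.log(2))

    p = 0.3
    print(binary_entropy(p))         # ≈ 0.881291
    print(taylor_binary_entropy(p))  # ≈ 0.881291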

Bounds

The following bounds hold for $0 < p < 1$:[1]

$\ln(2) \cdot \log_2(p) \cdot \log_2(1 - p) \le \operatorname{H}_\text{b}(p) \le \log_2(p) \cdot \log_2(1 - p)$

and

$4p(1 - p) \le \operatorname{H}_\text{b}(p) \le \left(4p(1 - p)\right)^{1/\ln 4},$

where $\ln$ denotes the natural logarithm.
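
As a sanity check of these bounds (a numerical sketch at a few sample probabilities, not a proof):

    import math

    def binary_entropy(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    for p in (0.1, 0.3, 0.5, 0.9):
        h = binary_entropy(p)
        prod = math.log2(p) * math.log2(1 - p)
        # First pair of bounds: ln(2)*prod <= H_b(p) <= prod
        assert math.log(2) * prod <= h <= prod
        # Second pair of bounds: 4p(1-p) <= H_b(p) <= (4p(1-p))^(1/ln 4)
        assert 4 * p * (1 - p) <= h <= (4 * p * (1 - p)) ** (1 / math.log(4))
        print(p, round(h, 4))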

References

  1. ^ Topsøe, Flemming (2001). "Bounds for entropy and divergence for distributions over a two-element set". Journal of Inequalities in Pure and Applied Mathematics. 2 (2): Paper No. 25, 13 pp.
