Inverse probability

From Wikipedia, the free encyclopedia

In probability theory, inverse probability is an old term for the probability distribution of an unobserved variable.

Today, the problem of determining an unobserved variable (by whatever method) is called inferential statistics, and the method of inverse probability (assigning a probability distribution to an unobserved variable) is called Bayesian probability. The distribution of the data given the unobserved variable is the likelihood function, which does not by itself give a probability distribution for the parameter, and the distribution of the unobserved variable given both the data and a prior distribution is the posterior distribution. The development of the field and terminology from "inverse probability" to "Bayesian probability" is described by Fienberg (2006).
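
In symbols (a standard restatement rather than a formula appearing in the original text), writing x for the observed data and θ for the unobserved variable, these terms are tied together by Bayes' theorem: the posterior is proportional to the likelihood times the prior.

```latex
\underbrace{p(\theta \mid x)}_{\text{posterior}}
  \;=\;
  \frac{\overbrace{p(x \mid \theta)}^{\text{likelihood}}\;
        \overbrace{p(\theta)}^{\text{prior}}}
       {\int p(x \mid \theta')\, p(\theta')\, d\theta'}
  \;\propto\;
  p(x \mid \theta)\, p(\theta)
```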

[Image: Ronald Fisher]

The term "inverse probability" appears in an 1837 paper of De Morgan, in reference to Laplace's method of probability (developed in a 1774 paper, which independently discovered and popularized Bayesian methods, and a 1812 book), though the term "inverse probability" does not occur in these.[1] Fisher uses the term in Fisher (1922), referring to "the fundamental paradox of inverse probability" as the source of the confusion between statistical terms that refer to the true value to be estimated, with the actual value arrived at by the estimation method, which is subject to error. Later Jeffreys uses the term in his defense of the methods of Bayes and Laplace, in Jeffreys (1939). The term "Bayesian", which displaced "inverse probability", was introduced by Ronald Fisher in 1950.[2] Inverse probability, variously interpreted, was the dominant approach to statistics until the development of frequentism in the early 20th century by Ronald Fisher, Jerzy Neyman and Egon Pearson.[3] Following the development of frequentism, the terms frequentist and Bayesian developed to contrast these approaches, and became common in the 1950s.

Details

In modern terms, given a probability distribution p(x|θ) for an observable quantity x conditional on an unobserved variable θ, the "inverse probability" is the posterior distribution p(θ|x), which depends both on the likelihood function (the inversion of the probability distribution) and a prior distribution. The distribution p(x|θ) itself is called the direct probability.
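
As a minimal numerical sketch of this relationship (not part of the original article; the binomial model and all numbers below are illustrative assumptions), the posterior p(θ|x) can be computed on a grid by multiplying the direct probability p(x|θ) by a prior and normalising:

```python
import numpy as np
from scipy.stats import binom

# Hypothetical setup: x = 7 successes observed in n = 10 trials;
# the unobserved variable theta is the success probability.
n_trials, x_obs = 10, 7
theta = np.linspace(0.001, 0.999, 999)           # grid over the parameter

prior = np.ones_like(theta)                      # flat prior p(theta)
likelihood = binom.pmf(x_obs, n_trials, theta)   # direct probability p(x | theta)

# "Inverse probability": posterior proportional to likelihood times prior,
# normalised so it integrates to one over the grid.
posterior = likelihood * prior
posterior /= posterior.sum() * (theta[1] - theta[0])

print(theta[np.argmax(posterior)])               # posterior mode, close to 7/10 = 0.7
```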

The inverse probability problem (in the 18th and 19th centuries) was the problem of estimating a parameter from experimental data in the experimental sciences, especially astronomy and biology. A simple example would be the problem of estimating the position of a star in the sky (at a certain time on a certain date) for purposes of navigation. Given the data, one must estimate the true position (probably by averaging). This problem would now be considered one of inferential statistics.
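
A short sketch of such a problem in modern terms (not from the original article; the data, the Gaussian error model, and the flat prior are all assumptions made for illustration) shows why averaging answers it: with independent Gaussian measurement errors of known spread and a flat prior on the position, the posterior mean is exactly the sample average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 noisy sightings of a star's position (in degrees),
# each equal to the true position plus Gaussian measurement error.
true_position = 41.27   # unknown in practice; used here only to simulate data
sigma = 0.05            # measurement standard deviation, assumed known
sightings = true_position + sigma * rng.standard_normal(20)

# With a flat prior and Gaussian errors, the posterior ("inverse probability")
# for the position is Normal(mean = sample average, sd = sigma / sqrt(n)).
n = sightings.size
posterior_mean = sightings.mean()
posterior_sd = sigma / np.sqrt(n)

print(f"estimated position: {posterior_mean:.3f} +/- {posterior_sd:.3f} degrees")
```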

The terms "direct probability" and "inverse probability" were in use until the middle part of the 20th century, when the terms "likelihood function" and "posterior distribution" became prevalent.

References

  1. ^ Fienberg 2006, p. 5.
  2. ^ Fienberg 2006, p. 14.
  3. ^ Fienberg 2006, 4.1 Frequentist Alternatives to Inverse Probability, pp. 7–9.
  • Fisher, R. A. (1922). "On the Mathematical Foundations of Theoretical Statistics". Philos. Trans. R. Soc. Lond. A. 222A: 309–368.
    • See reprint in Kotz, S. (1992). Breakthroughs in Statistics Volume 1. Springer-Verlag.
  • Jeffreys, Harold (1939). Theory of Probability (Third ed.). Oxford University Press.
  • Fienberg, Stephen E. (2006). "When Did Bayesian Inference Become "Bayesian"?". Bayesian Analysis. 1 (1): 1–40. doi:10.1214/06-BA101.