In probability theory and statistics, Bayes's theorem (alternatively Bayes's law or Bayes's rule) describes the probability of an event, based on prior knowledge of conditions that might be related to the event.^{[1]} For example, if the risk of developing health problems is known to increase with age, Bayes's theorem allows the risk to an individual of a known age to be assessed more accurately than simply assuming that the individual is typical of the population as a whole.
One of the many applications of Bayes's theorem is Bayesian inference, a particular approach to statistical inference. When applied, the probabilities involved in Bayes's theorem may have different probability interpretations. With Bayesian probability interpretation, the theorem expresses how a degree of belief, expressed as a probability, should rationally change to account for the availability of related evidence. Bayesian inference is fundamental to Bayesian statistics.
Bayes's theorem is named after Reverend Thomas Bayes (/beɪz/; 1701?–1761), who first used conditional probability to provide an algorithm (his Proposition 9) that uses evidence to calculate limits on an unknown parameter, published as An Essay towards solving a Problem in the Doctrine of Chances (1763). In what he called a scholium, Bayes extended his algorithm to any unknown prior cause. Independently of Bayes, Pierre-Simon Laplace in 1774, and later in his 1812 Théorie analytique des probabilités, used conditional probability to formulate the relation of an updated posterior probability from a prior probability, given evidence. Sir Harold Jeffreys put Bayes's algorithm and Laplace's formulation on an axiomatic basis, writing that Bayes's theorem "is to the theory of probability what the Pythagorean theorem is to geometry".^{[2]}
Statement of theorem
Bayes's theorem is stated mathematically as the following equation:^{[3]}

$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$

where A and B are events and P(B) ≠ 0.
 P(A | B) is a conditional probability: the likelihood of event A occurring given that B is true.
 P(B | A) is also a conditional probability: the likelihood of event B occurring given that A is true.
 P(A) and P(B) are the probabilities of observing A and B respectively; they are known as the marginal probabilities.
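Numerically, the theorem is a one-line computation. A minimal sketch in Python; the function name and the example numbers are illustrative only, not from the text:

```python
def posterior(p_b_given_a, p_a, p_b):
    """Simple form of Bayes's theorem: P(A|B) = P(B|A) P(A) / P(B)."""
    if p_b == 0:
        raise ValueError("P(B) must be non-zero")
    return p_b_given_a * p_a / p_b

# Illustrative numbers: P(B|A) = 0.8, P(A) = 0.3, P(B) = 0.5
print(posterior(0.8, 0.3, 0.5))  # 0.48
```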
Examples
Drug testing
Suppose a particular test for whether someone has been using cannabis is 90% sensitive, meaning the true positive rate (TPR) = 0.90. It therefore yields 90% true positive results (correct identification of drug use) for cannabis users.
The test is also 80% specific, meaning the true negative rate (TNR) = 0.80. It therefore correctly identifies 80% of non-users as non-users, but also generates 20% false positives, a false positive rate (FPR) = 0.20, for non-users.
Assuming 0.05 prevalence, meaning 5% of people use cannabis, what is the probability that a random person who tests positive is really a cannabis user?
The positive predictive value (PPV) of a test is the proportion of persons who are actually positive out of all those testing positive, and can be calculated from a sample as:

 PPV = True positives / Total testing positive
If sensitivity, specificity, and prevalence are known, PPV can be calculated using Bayes's theorem. Let P(User | Positive) mean "the probability that someone is a cannabis user given that they test positive," which is what is meant by PPV. We can write:

$$P(\text{User} \mid \text{Positive}) = \frac{P(\text{Positive} \mid \text{User})\, P(\text{User})}{P(\text{Positive})} = \frac{0.90 \times 0.05}{0.90 \times 0.05 + 0.20 \times 0.95} = \frac{0.045}{0.045 + 0.19} \approx 19\%.$$
The fact that P(Positive) = P(Positive | User) P(User) + P(Positive | Non-user) P(Non-user) is a direct application of the Law of Total Probability. In this case, it says that the probability that someone tests positive is the probability that a user tests positive, times the probability of being a user, plus the probability that a non-user tests positive, times the probability of being a non-user.
This is true because the classifications user and non-user form a partition of a set, namely the set of people who take the drug test. This, combined with the definition of conditional probability, results in the above statement.
Even if someone tests positive, the probability that they are a cannabis user is only 19%. This is because in this group only 5% of people are users; most positives are false positives coming from the remaining 95%.
If 1,000 people were tested:
 950 are non-users, and 190 of them give false positives (0.20 × 950)
 50 of them are users, and 45 of them give true positives (0.90 × 50)
The 1,000 people thus yield 235 positive tests, of which only 45 are genuine drug users: about 19%. See Figure 1 for an illustration using a frequency box, and note how small the pink area of true positives is compared to the blue area of false positives.
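The frequency argument above translates directly into code. A minimal sketch with the same counts (variable names are illustrative):

```python
population = 1000
prevalence, sensitivity, specificity = 0.05, 0.90, 0.80

users = population * prevalence                   # 50
non_users = population - users                    # 950
true_positives = sensitivity * users              # 45
false_positives = (1 - specificity) * non_users   # 190

ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 4))  # 0.1915 -> about 19%
```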
Sensitivity or specificity
The importance of specificity can be seen by noting that even if sensitivity is raised to 100% while specificity remains at 80%, the probability that someone testing positive is really a cannabis user rises only from 19% to 21%; but if sensitivity is held at 90% and specificity is increased to 95%, the probability rises to 49%.
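These figures can be reproduced by writing PPV as a function of the three inputs; a minimal sketch (function name illustrative):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes's theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.90, 0.80, 0.05), 2))  # 0.19  baseline
print(round(ppv(1.00, 0.80, 0.05), 2))  # 0.21  perfect sensitivity
print(round(ppv(0.90, 0.95, 0.05), 2))  # 0.49  higher specificity
```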



Cancer rate
Even if 100% of patients with pancreatic cancer have a certain symptom, someone having the same symptom does not have a 100% chance of having pancreatic cancer. Assume the incidence rate of pancreatic cancer is 1/100000, while 1/10000 healthy individuals have the same symptoms worldwide. Then the probability of having pancreatic cancer given the symptoms is only 9.1%; the other 90.9% could be "false positives" (that is, falsely said to have cancer; "positive" is a confusing term when, as here, the test gives bad news).
Based on incidence rate, the following table presents the corresponding numbers per 100,000 people.
| Symptom | Cancer: Yes | Cancer: No | Total |
|---|---|---|---|
| Yes | 1 | 10 | 11 |
| No | 0 | 99989 | 99989 |
| Total | 1 | 99999 | 100000 |
These numbers can then be used to calculate the probability of having cancer given the symptoms:

$$P(\text{Cancer} \mid \text{Symptoms}) = \frac{P(\text{Symptoms} \mid \text{Cancer})\, P(\text{Cancer})}{P(\text{Symptoms})} = \frac{1 \times 0.00001}{0.00011} = \frac{1}{11} \approx 9.1\%.$$
A more complicated example
| Machine | Defective | Flawless | Total |
|---|---|---|---|
| A | 10 | 190 | 200 |
| B | 9 | 291 | 300 |
| C | 5 | 495 | 500 |
| Total | 24 | 976 | 1000 |
A factory produces an item using three machines—A, B, and C—which account for 20%, 30%, and 50% of its output, respectively. Of the items produced by machine A, 5% are defective; similarly, 3% of machine B's items and 1% of machine C's are defective. If a randomly selected item is defective, what is the probability it was produced by machine C?
Once again, the answer can be reached without using the formula by applying the conditions to a hypothetical number of cases. For example, if the factory produces 1,000 items, 200 will be produced by Machine A, 300 by Machine B, and 500 by Machine C. Machine A will produce 5% × 200 = 10 defective items, Machine B 3% × 300 = 9, and Machine C 1% × 500 = 5, for a total of 24. Thus, the likelihood that a randomly selected defective item was produced by machine C is 5/24 (~20.83%).
This problem can also be solved using Bayes's theorem: Let X_{i} denote the event that a randomly chosen item was made by the i-th machine (for i = A, B, C). Let Y denote the event that a randomly chosen item is defective. Then, we are given the following information:

If the item was made by the first machine, then the probability that it is defective is 0.05; that is, P(Y | X_{A}) = 0.05. Overall, we have

$$P(X_A) = 0.2, \quad P(X_B) = 0.3, \quad P(X_C) = 0.5,$$

$$P(Y \mid X_A) = 0.05, \quad P(Y \mid X_B) = 0.03, \quad P(Y \mid X_C) = 0.01.$$

To answer the original question, we first find P(Y). That can be done in the following way:

$$P(Y) = \sum_i P(Y \mid X_i)\, P(X_i) = 0.05 \times 0.2 + 0.03 \times 0.3 + 0.01 \times 0.5 = 0.024.$$
Hence, 2.4% of the total output is defective.
We are given that Y has occurred, and we want to calculate the conditional probability of X_{C}. By Bayes's theorem,

$$P(X_C \mid Y) = \frac{P(Y \mid X_C)\, P(X_C)}{P(Y)} = \frac{0.01 \times 0.50}{0.024} = \frac{5}{24}.$$

Given that the item is defective, the probability that it was made by machine C is 5/24. Although machine C produces half of the total output, it produces a much smaller fraction of the defective items. Hence the knowledge that the item selected was defective enables us to replace the prior probability P(X_{C}) = 1/2 by the smaller posterior probability P(X_{C} | Y) = 5/24.
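The same pattern extends to any partition of causes. A minimal sketch reproducing the machine example (dictionary keys are illustrative):

```python
priors = {"A": 0.2, "B": 0.3, "C": 0.5}           # P(X_i): share of output
defect_rates = {"A": 0.05, "B": 0.03, "C": 0.01}  # P(Y | X_i)

# Law of total probability: P(Y) = sum over i of P(Y | X_i) P(X_i)
p_defective = sum(defect_rates[m] * priors[m] for m in priors)
print(round(p_defective, 3))   # 0.024

# Bayes's theorem: P(X_C | Y) = P(Y | X_C) P(X_C) / P(Y)
posterior_c = defect_rates["C"] * priors["C"] / p_defective
print(round(posterior_c, 4))   # 0.2083 = 5/24
```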
Interpretations
The interpretation of Bayes's rule depends on the interpretation of probability ascribed to the terms. The two main interpretations are described below. Figure 2 shows a geometric visualization similar to Figure 1. Gerd Gigerenzer and co-authors have pushed hard for teaching Bayes's rule this way, with special emphasis on teaching it to physicians.^{[4]} An example is Will Kurt's webpage, "Bayes' Theorem with Lego," later turned into the book Bayesian Statistics the Fun Way: Understanding Statistics and Probability with Star Wars, LEGO, and Rubber Ducks. Zhu and Gigerenzer found in 2006 that whereas 0% of 4th-, 5th-, and 6th-graders could solve word problems after being taught with formulas, 19%, 39%, and 53% could after being taught with frequency boxes, and that the learning was either thorough or zero.^{[5]}
Bayesian interpretation
In the Bayesian (or epistemological) interpretation, probability measures a "degree of belief". Bayes's theorem links the degree of belief in a proposition before and after accounting for evidence. For example, suppose it is believed with 50% certainty that a coin is twice as likely to land heads as tails. If the coin is flipped a number of times and the outcomes observed, that degree of belief will probably rise or fall, but might even remain the same, depending on the results. For proposition A and evidence B,
 P(A), the prior, is the initial degree of belief in A.
 P(A | B), the posterior, is the degree of belief after incorporating news that B is true.
 the quotient P(B | A)/P(B) represents the support B provides for A.
For more on the application of Bayes's theorem under the Bayesian interpretation of probability, see Bayesian inference.
Frequentist interpretation
In the frequentist interpretation, probability measures a "proportion of outcomes". For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A (the prior) and P(B) is the proportion with property B. P(B | A) is the proportion of outcomes with property B out of outcomes with property A, and P(A | B) is the proportion of those with A out of those with B (the posterior).
The role of Bayes's theorem is best visualized with tree diagrams such as Figure 3. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes's theorem links the different partitionings.
Example
An entomologist spots what might, due to the pattern on its back, be a rare subspecies of beetle. A full 98% of the members of the rare subspecies have the pattern, so P(Pattern | Rare) = 98%. Only 5% of members of the common subspecies have the pattern. The rare subspecies is 0.1% of the total population. How likely is the beetle having the pattern to be rare: what is P(Rare | Pattern)?
From the extended form of Bayes's theorem (since any beetle is either rare or common),

$$P(\text{Rare} \mid \text{Pattern}) = \frac{P(\text{Pattern} \mid \text{Rare})\, P(\text{Rare})}{P(\text{Pattern} \mid \text{Rare})\, P(\text{Rare}) + P(\text{Pattern} \mid \text{Common})\, P(\text{Common})} = \frac{0.98 \times 0.001}{0.98 \times 0.001 + 0.05 \times 0.999} \approx 1.9\%.$$
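A minimal sketch of this computation with the numbers from the example (variable names illustrative):

```python
p_rare = 0.001
p_pattern_given_rare = 0.98
p_pattern_given_common = 0.05

p_pattern = (p_pattern_given_rare * p_rare
             + p_pattern_given_common * (1 - p_rare))
p_rare_given_pattern = p_pattern_given_rare * p_rare / p_pattern
print(round(p_rare_given_pattern, 3))  # 0.019 -> about 1.9%
```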
Forms
Events
Simple form
For events A and B, provided that P(B) ≠ 0,

$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.$$
In many applications, for instance in Bayesian inference, the event B is fixed in the discussion, and we wish to consider the impact of its having been observed on our belief in various possible events A. In such a situation the denominator of the last expression, the probability of the given evidence B, is fixed; what we want to vary is A. Bayes's theorem then shows that the posterior probabilities are proportional to the numerator, so the last equation becomes:
$$P(A \mid B) \propto P(A) \cdot P(B \mid A).$$
In words, the posterior is proportional to the prior times the likelihood.^{[6]}
If events A_{1}, A_{2}, ..., are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event A, the event A itself and its complement ¬A are exclusive and exhaustive. Denoting the constant of proportionality by c, we have

$$P(A \mid B) = c \cdot P(A) \cdot P(B \mid A) \quad \text{and} \quad P(\neg A \mid B) = c \cdot P(\neg A) \cdot P(B \mid \neg A).$$

Adding these two formulas we deduce that

$$1 = c \cdot \bigl( P(B \mid A) \cdot P(A) + P(B \mid \neg A) \cdot P(\neg A) \bigr),$$

or

$$c = \frac{1}{P(B \mid A) \cdot P(A) + P(B \mid \neg A) \cdot P(\neg A)} = \frac{1}{P(B)}.$$
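This normalization step is exactly how posteriors are computed in practice when P(B) is not known directly: multiply prior by likelihood for each hypothesis, then rescale so the results sum to one. A minimal sketch with illustrative numbers (the 5%/90%/20% values echo the drug-testing example):

```python
# Unnormalized posteriors: prior times likelihood, for A and for not-A
prior_a, prior_not_a = 0.05, 0.95   # P(A), P(not A)
lik_a, lik_not_a = 0.90, 0.20       # P(B | A), P(B | not A)

unnormalized = [prior_a * lik_a, prior_not_a * lik_not_a]
c = 1 / sum(unnormalized)           # the constant c = 1 / P(B)
posterior = [c * u for u in unnormalized]
print(posterior)                    # [~0.19, ~0.81]; the two entries sum to 1
```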
Alternative form
| Proposition | Background: B | Background: ¬B (not B) | Total |
|---|---|---|---|
| A | P(B∣A)·P(A) = P(A∣B)·P(B) | P(¬B∣A)·P(A) = P(A∣¬B)·P(¬B) | P(A) |
| ¬A (not A) | P(B∣¬A)·P(¬A) = P(¬A∣B)·P(B) | P(¬B∣¬A)·P(¬A) = P(¬A∣¬B)·P(¬B) | P(¬A) = 1−P(A) |
| Total | P(B) | P(¬B) = 1−P(B) | 1 |
Another form of Bayes's theorem for two competing statements or hypotheses is:

$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B \mid A)\, P(A) + P(B \mid \neg A)\, P(\neg A)}.$$
For an epistemological interpretation:
For proposition A and evidence or background B,^{[7]}
 P(A) is the prior probability, the initial degree of belief in A.
 P(¬A) is the corresponding initial degree of belief in not-A, that A is false, where P(¬A) = 1 − P(A).
 P(B | A) is the conditional probability or likelihood, the degree of belief in B given that proposition A is true.
 P(B | ¬A) is the conditional probability or likelihood, the degree of belief in B given that proposition A is false.
 P(A | B) is the posterior probability, the probability of A after taking into account B.
Extended form
Often, for some partition {A_{j}} of the sample space, the event space is given in terms of P(A_{j}) and P(B | A_{j}). It is then useful to compute P(B) using the law of total probability:

$$P(B) = \sum_j P(B \mid A_j)\, P(A_j),$$

so that

$$P(A_i \mid B) = \frac{P(B \mid A_i)\, P(A_i)}{\sum_j P(B \mid A_j)\, P(A_j)}.$$

In the special case where A is a binary variable:

$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B \mid A)\, P(A) + P(B \mid \neg A)\, P(\neg A)}.$$
Random variables
Consider a sample space Ω generated by two random variables X and Y. In principle, Bayes's theorem applies to the events A = {X = x} and B = {Y = y}.
However, if either variable is continuous, individual events such as {X = x} have probability 0, and the terms above vanish. To remain useful, Bayes's theorem must be formulated in terms of the relevant densities (see Derivation).
Simple form
If X is continuous and Y is discrete,

$$f_{X \mid Y=y}(x) = \frac{P(Y=y \mid X=x)\, f_X(x)}{P(Y=y)},$$

where each f is a density function.

If X is discrete and Y is continuous,

$$P(X=x \mid Y=y) = \frac{f_{Y \mid X=x}(y)\, P(X=x)}{f_Y(y)}.$$

If both X and Y are continuous,

$$f_{X \mid Y=y}(x) = \frac{f_{Y \mid X=x}(y)\, f_X(x)}{f_Y(y)}.$$
Extended form
A continuous event space is often conceptualized in terms of the numerator terms. It is then useful to eliminate the denominator using the law of total probability. For f_{Y}(y), this becomes an integral:

$$f_Y(y) = \int_{-\infty}^{\infty} f_{Y \mid X=\xi}(y)\, f_X(\xi)\, d\xi.$$
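Where no closed form exists, this integral is often approximated numerically on a grid. A minimal sketch, assuming (purely for illustration, not from the text) a Beta(2,2) prior density for X on [0,1] and a binomial likelihood:

```python
import numpy as np
from math import comb

# Grid over the support of X
xs = np.linspace(0.0, 1.0, 1001)
dx = xs[1] - xs[0]
prior = 6 * xs * (1 - xs)                  # Beta(2,2) density (illustrative)

# Likelihood of observing y = 7 successes in n = 10 trials given X = x
y, n = 7, 10
likelihood = comb(n, y) * xs**y * (1 - xs)**(n - y)

# f_Y(y) = integral of f_{Y|X=xi}(y) f_X(xi) dxi, as a Riemann sum
f_y = np.sum(likelihood * prior) * dx

posterior = likelihood * prior / f_y       # f_{X|Y=y}(x) on the grid
print(round(float(np.sum(posterior) * dx), 6))  # ~1.0: the posterior density integrates to 1
```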
Bayes's rule
Bayes's theorem in odds form is:

$$O(A_1 : A_2 \mid B) = O(A_1 : A_2) \cdot \Lambda(A_1 : A_2 \mid B)$$

where

$$\Lambda(A_1 : A_2 \mid B) = \frac{P(B \mid A_1)}{P(B \mid A_2)}$$

is called the Bayes factor or likelihood ratio. The odds between two events is simply the ratio of the probabilities of the two events. Thus

$$O(A_1 : A_2) = \frac{P(A_1)}{P(A_2)}, \qquad O(A_1 : A_2 \mid B) = \frac{P(A_1 \mid B)}{P(A_2 \mid B)}.$$
Thus, the rule says that the posterior odds are the prior odds times the Bayes factor, or in other words, the posterior is proportional to the prior times the likelihood.
In the special case that A_1 = A and A_2 = ¬A, one writes O(A) = O(A : ¬A) = P(A)/(1 − P(A)) for the odds on A, and uses a similar abbreviation for the Bayes factor and for the conditional odds. The odds on A is by definition the odds for and against A. Bayes's rule can then be written in the abbreviated form

$$O(A \mid B) = O(A) \cdot \Lambda(A \mid B),$$

or, in words, the posterior odds on A equals the prior odds on A times the likelihood ratio for A given information B. In short, posterior odds equals prior odds times likelihood ratio.
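Applied to the drug-testing example above, the odds form turns the update into a single multiplication. A minimal sketch with the values from that example:

```python
prior_odds = 0.05 / 0.95          # odds of being a user: 1 to 19
bayes_factor = 0.90 / 0.20        # P(+ | user) / P(+ | non-user) = 4.5
posterior_odds = prior_odds * bayes_factor

# Convert odds o back to a probability: p = o / (1 + o)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 4))   # 0.1915, the ~19% found earlier
```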
Derivation
For events
Bayes's theorem may be derived from the definition of conditional probability:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \quad \text{if } P(B) \neq 0,$$

where P(A ∩ B) is the joint probability of both A and B being true, because

$$P(B \mid A) = \frac{P(A \cap B)}{P(A)} \implies P(A \cap B) = P(B \mid A)\, P(A) = P(A \mid B)\, P(B) \implies P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.$$
For random variables
For two continuous random variables X and Y, Bayes's theorem may be analogously derived from the definition of conditional density:

$$f_{X \mid Y=y}(x) = \frac{f_{X,Y}(x, y)}{f_Y(y)}, \qquad f_{Y \mid X=x}(y) = \frac{f_{X,Y}(x, y)}{f_X(x)}.$$

Therefore,

$$f_{X \mid Y=y}(x) = \frac{f_{Y \mid X=x}(y)\, f_X(x)}{f_Y(y)}.$$
Correspondence to other mathematical frameworks
Propositional logic
Bayes's theorem represents a generalisation of contraposition, which in propositional logic can be expressed as:

$$(A \to B) \leftrightarrow (\lnot B \to \lnot A).$$

The corresponding formula in terms of probability calculus is Bayes's theorem, which in its expanded form is expressed as:

$$P(\lnot A \mid \lnot B) = \frac{P(\lnot B \mid \lnot A)\, P(\lnot A)}{P(\lnot B \mid \lnot A)\, P(\lnot A) + P(\lnot B \mid A)\, P(A)}.$$

In the equation above the conditional probability P(¬A | ¬B) generalizes the logical statement ¬B → ¬A, i.e. in addition to assigning TRUE or FALSE we can also assign any probability to the statement. The term P(¬A) denotes the prior probability (aka. the base rate) of ¬A. Assume that P(¬A | ¬B) = 1 is equivalent to ¬B → ¬A being TRUE, and that P(¬A | ¬B) = 0 is equivalent to ¬B → ¬A being FALSE. It is then easy to see that P(¬A | ¬B) = 1 when P(B | A) = 1, i.e. when A → B is TRUE. This is because P(¬B | A) = 1 − P(B | A) = 0, so that the second term in the denominator vanishes and the fraction on the right-hand side of the equation above is equal to 1, and hence P(¬A | ¬B) = 1, which is equivalent to ¬B → ¬A being TRUE. Hence, Bayes's theorem represents a generalization of contraposition.^{[8]}
Subjective logic
Bayes's theorem represents a special case of conditional inversion in subjective logic expressed as:

$$(\omega^{A}_{X \tilde{\mid} Y}, \omega^{A}_{X \tilde{\mid} \lnot Y}) = (\omega^{A}_{Y \mid X}, \omega^{A}_{Y \mid \lnot X}) \, \widetilde{\phi} \, a_X,$$

where $\widetilde{\phi}$ denotes the operator for conditional inversion. The argument $(\omega^{A}_{Y \mid X}, \omega^{A}_{Y \mid \lnot X})$ denotes a pair of binomial conditional opinions given by source A, and the argument $a_X$ denotes the prior probability (aka. the base rate) of X. The pair of inverted conditional opinions is denoted $(\omega^{A}_{X \tilde{\mid} Y}, \omega^{A}_{X \tilde{\mid} \lnot Y})$. The conditional opinion $\omega^{A}_{X \mid Y}$ generalizes the probabilistic conditional P(X | Y), i.e. in addition to assigning a probability the source A can assign any subjective opinion to the conditional statement (X | Y). A binomial subjective opinion $\omega^{A}_{X}$ is the belief in the truth of statement X with degrees of uncertainty, as expressed by source A. Every subjective opinion has a corresponding projected probability $P(\omega^{A}_{X})$. The projected probability of opinions applied to Bayes's theorem produces a homomorphism, so that Bayes's theorem can be expressed in terms of the projected probabilities of opinions:

$$P(\omega^{A}_{X \tilde{\mid} Y}) = \frac{P(\omega^{A}_{Y \mid X})\, a_X}{P(\omega^{A}_{Y \mid X})\, a_X + P(\omega^{A}_{Y \mid \lnot X})\, a_{\lnot X}}.$$

Hence, the subjective Bayes's theorem represents a generalization of Bayes's theorem.^{[9]}
Generalizations
Conditioned version
A conditioned version of Bayes's theorem^{[10]} results from the addition of a third event C, with P(C) > 0, on which all probabilities are conditioned:

$$P(A \mid B \cap C) = \frac{P(B \mid A \cap C)\, P(A \mid C)}{P(B \mid C)}.$$
Derivation
Using the chain rule,

$$P(A \cap B \cap C) = P(A \mid B \cap C)\, P(B \mid C)\, P(C).$$

And, on the other hand,

$$P(A \cap B \cap C) = P(B \cap A \cap C) = P(B \mid A \cap C)\, P(A \mid C)\, P(C).$$

The desired result is obtained by identifying both expressions and solving for P(A | B ∩ C).
Bayes' rule with 3 events
In the case of three events A, B, and C, it can be shown that:^{[11]}

$$P(A \mid B, C) = \frac{P(B \mid A, C)\, P(A \mid C)}{P(B \mid C)}.$$
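The rule can be checked numerically against an explicit joint distribution. A minimal sketch; the eight joint probabilities are arbitrary illustrative values that sum to 1, and the helper p is hypothetical:

```python
# Joint distribution P(A, B, C) over all eight outcomes (illustrative values)
joint = {
    (1, 1, 1): 0.10, (1, 1, 0): 0.05, (1, 0, 1): 0.20, (1, 0, 0): 0.05,
    (0, 1, 1): 0.15, (0, 1, 0): 0.10, (0, 0, 1): 0.25, (0, 0, 0): 0.10,
}

def p(a=None, b=None, c=None):
    """Probability of the given event, summing out unspecified variables."""
    return sum(v for (x, y, z), v in joint.items()
               if (a is None or x == a)
               and (b is None or y == b)
               and (c is None or z == c))

lhs = p(a=1, b=1, c=1) / p(b=1, c=1)                # P(A | B, C)
rhs = (p(a=1, b=1, c=1) / p(a=1, c=1)) \
      * (p(a=1, c=1) / p(c=1)) \
      / (p(b=1, c=1) / p(c=1))                      # P(B | A,C) P(A|C) / P(B|C)
print(abs(lhs - rhs) < 1e-12)                       # True
```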
History
Bayes's theorem was named after Thomas Bayes (1701–1761), who studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). Bayes's unpublished manuscript was significantly edited by Richard Price before it was posthumously read at the Royal Society. Price edited^{[12]} Bayes's major work "An Essay towards solving a Problem in the Doctrine of Chances" (1763), which appeared in Philosophical Transactions,^{[13]} and contains Bayes's theorem. Price wrote an introduction to the paper which provides some of the philosophical basis of Bayesian statistics. In 1765, he was elected a Fellow of the Royal Society in recognition of his work on the legacy of Bayes.^{[14]}^{[15]}
The French mathematician Pierre-Simon Laplace reproduced and extended Bayes's results in 1774, apparently unaware of Bayes's work.^{[note 1]}^{[16]} The Bayesian interpretation of probability was developed mainly by Laplace.^{[17]}
Stephen Stigler used a Bayesian argument to conclude that Bayes's theorem was discovered by Nicholas Saunderson, a blind English mathematician, some time before Bayes;^{[18]}^{[19]} that interpretation, however, has been disputed.^{[20]} Martyn Hooper^{[21]} and Sharon McGrayne^{[22]} have argued that Richard Price's contribution was substantial:
By modern standards, we should refer to the Bayes–Price rule. Price discovered Bayes's work, recognized its importance, corrected it, contributed to the article, and found a use for it. The modern convention of employing Bayes's name alone is unfair but so entrenched that anything else makes little sense.^{[22]}
Use in Genetic Prediction and Testing
In genetics, Bayes's theorem can be used to calculate the probability of an individual having a specific genotype. Many people seek to approximate their chances of being affected by a genetic disease or their likelihood of being a carrier for a recessive gene of interest. A Bayesian analysis can be done based on family history or genetic testing, in order to predict whether an individual will develop a disease or pass one on to their children. Genetic testing and prediction is a common practice among couples who plan to have children but are concerned that they may both be carriers for a disease, especially within communities with low genetic variance.^{[citation needed]}
The first step in Bayesian analysis for genetics is to propose mutually exclusive hypotheses: for a specific allele, an individual either is or is not a carrier. Next, four probabilities are calculated: Prior Probability (the likelihood of each hypothesis considering information such as family history or predictions based on Mendelian Inheritance), Conditional Probability (of a certain outcome), Joint Probability (product of the first two), and Posterior Probability (a weighted product calculated by dividing the Joint Probability for each hypothesis by the sum of both joint probabilities). This type of analysis can be done based purely on family history of a condition or in concert with genetic testing.^{[citation needed]}
Using pedigree to calculate probabilities
| Hypothesis | Hypothesis 1: Patient is a carrier | Hypothesis 2: Patient is not a carrier |
|---|---|---|
| Prior Probability | 1/2 | 1/2 |
| Conditional Probability that all four offspring will be unaffected | (1/2) · (1/2) · (1/2) · (1/2) = 1/16 | About 1 |
| Joint Probability | (1/2) · (1/16) = 1/32 | (1/2) · 1 = 1/2 |
| Posterior Probability | (1/32) / (1/32 + 1/2) = 1/17 | (1/2) / (1/32 + 1/2) = 16/17 |
Example of a Bayesian analysis table for a female individual's risk for a disease based on the knowledge that the disease is present in her siblings but not in her parents or any of her four children. Based solely on the status of the subject's siblings and parents, she is equally likely to be a carrier as to be a non-carrier (this probability is the Prior Probability). However, the probability that the subject's four sons would all be unaffected is 1/16 (½·½·½·½) if she is a carrier, and about 1 if she is a non-carrier (this is the Conditional Probability). The Joint Probability reconciles these two predictions by multiplying them together. The last line (the Posterior Probability) is calculated by dividing the Joint Probability for each hypothesis by the sum of both joint probabilities.^{[23]}
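The arithmetic in this table is easy to reproduce in code; a minimal sketch (dictionary keys are illustrative):

```python
# Two mutually exclusive hypotheses about the subject
prior = {"carrier": 1/2, "non-carrier": 1/2}

# P(all four children unaffected | hypothesis)
conditional = {"carrier": (1/2) ** 4, "non-carrier": 1.0}

# Joint = prior * conditional; posterior = joint / sum of joints
joint = {h: prior[h] * conditional[h] for h in prior}
total = sum(joint.values())
posterior = {h: joint[h] / total for h in joint}
print(posterior)  # carrier: 1/17 ~ 0.0588, non-carrier: 16/17 ~ 0.9412
```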
Using genetic test results
Prenatal genetic testing, while still a controversial practice, can detect around 90% of known disease alleles in parents that can lead to carrier or affected status in their child. Cystic fibrosis is a heritable disease caused by an autosomal recessive mutation on the CFTR gene,^{[24]} located on the q arm of chromosome 7.^{[25]}
Below is a Bayesian analysis of a female patient with a family history of cystic fibrosis (CF) who has tested negative for CF, demonstrating how this method can be used to determine her risk of having a child born with CF:

Because the patient is unaffected, she is either homozygous for the wild-type allele or heterozygous. To establish prior probabilities, a Punnett square is used, based on the knowledge that neither parent was affected by the disease but both could have been carriers:
| | Father: W (homozygous for the wild type) | Father: M (heterozygous, a CF carrier) |
|---|---|---|
| Mother: W (homozygous for the wild type) | WW | MW |
| Mother: M (heterozygous, a CF carrier) | MW | MM (affected by cystic fibrosis) |
Given that the patient is unaffected, there are only three possibilities. Within these three, there are two scenarios in which the patient carries the mutant allele. Thus the prior probabilities are ⅔ and ⅓.
Next, the patient undergoes genetic testing and tests negative for cystic fibrosis. This test has a 90% detection rate, so the conditional probabilities of a negative test are 1/10 and 1. Finally, the joint and posterior probabilities are calculated as before.
| Hypothesis | Hypothesis 1: Patient is a carrier | Hypothesis 2: Patient is not a carrier |
|---|---|---|
| Prior Probability | 2/3 | 1/3 |
| Conditional Probability of a negative test | 1/10 | 1 |
| Joint Probability | 1/15 | 1/3 |
| Posterior Probability | 1/6 | 5/6 |
After carrying out the same analysis on the patient's male partner (with a negative test result), the chance of their child being affected is equal to the product of the parents' respective posterior probabilities for being carriers times the probability that two carriers will produce an affected offspring (¼).
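A sketch of the combined computation, assuming (for illustration) that the partner has the same 2/3 prior and the same negative result on a 90%-sensitive test; the text does not specify his pedigree:

```python
def carrier_posterior(prior_carrier, p_neg_given_carrier=0.1):
    """Posterior P(carrier | negative test), with P(negative | non-carrier) = 1."""
    joint_carrier = prior_carrier * p_neg_given_carrier
    joint_non_carrier = (1 - prior_carrier) * 1.0
    return joint_carrier / (joint_carrier + joint_non_carrier)

mother = carrier_posterior(2/3)           # 1/6, from the table above
father = carrier_posterior(2/3)           # assumed same pedigree and test
p_child_affected = mother * father * 1/4  # two carriers -> 1/4 affected
print(p_child_affected)                   # 1/144, about 0.0069
```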
Genetic testing done in parallel with other risk factor identification
Bayesian analysis can be done using phenotypic information associated with a genetic condition, and when combined with genetic testing this analysis becomes much more complicated. Cystic fibrosis, for example, can be identified in a fetus through an ultrasound looking for an echogenic bowel, meaning one that appears brighter than normal on a scan. This is not a foolproof test, as an echogenic bowel can be present in a perfectly healthy fetus. Parental genetic testing is very influential in this case, where a phenotypic facet can be overly influential in the probability calculation. In the case of a fetus with an echogenic bowel, with a mother who has been tested and is known to be a CF carrier, the posterior probability that the fetus actually has the disease is very high (0.64). However, once the father has tested negative for CF, the posterior probability drops significantly (to 0.16).^{[23]}
Risk factor calculation is a powerful tool in genetic counseling and reproductive planning, but it cannot be treated as the only important factor to consider. As above, incomplete testing can yield a falsely high probability of carrier status, and testing can be financially inaccessible or unfeasible when a parent is not present.
Notes
 ^ Laplace refined Bayes's theorem over a period of decades:
 ^ Laplace announced his independent discovery of Bayes's theorem in: Laplace (1774) "Mémoire sur la probabilité des causes par les événements," "Mémoires de l'Académie royale des Sciences de Paris (Savants étrangers)," 4: 621–656. Reprinted in: Laplace, "Oeuvres complètes" (Paris, France: Gauthier-Villars et fils, 1841), vol. 8, pp. 27–65. Available online at: Gallica. Bayes's theorem appears on p. 29.
 ^ Laplace presented a refinement of Bayes's theorem in: Laplace (read: 1783 / published: 1785) "Mémoire sur les approximations des formules qui sont fonctions de très grands nombres," "Mémoires de l'Académie royale des Sciences de Paris," 423–467. Reprinted in: Laplace, "Oeuvres complètes" (Paris, France: Gauthier-Villars et fils, 1844), vol. 10, pp. 295–338. Available online at: Gallica. Bayes's theorem is stated on page 301.
 See also: Laplace, "Essai philosophique sur les probabilités" (Paris, France: Mme. Ve. Courcier [Madame veuve (i.e., widow) Courcier], 1814), page 10. English translation: Pierre Simon, Marquis de Laplace with F. W. Truscott and F. L. Emory, trans., "A Philosophical Essay on Probabilities" (New York, New York: John Wiley & Sons, 1902), page 15.
References
 ^ Joyce, James (2003), "Bayes' Theorem", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Spring 2019 ed.), Metaphysics Research Lab, Stanford University, retrieved 2020-01-17
 ^ Jeffreys, Harold (1973). Scientific Inference (3rd ed.). Cambridge University Press. p. 31. ISBN 9780521180788.
 ^ Stuart, A.; Ord, K. (1994), Kendall's Advanced Theory of Statistics: Volume I—Distribution Theory, Edward Arnold, §8.7
 ^ Gigerenzer, Gerd; Hoffrage, Ulrich (1995). "How to improve Bayesian reasoning without instruction: Frequency formats". Psychological Review. 102 (4): 684–704. CiteSeerX 10.1.1.128.3201. doi:10.1037/0033-295X.102.4.684.
 ^ Zhu, Liqi; Gigerenzer, Gerd (January 2006). "Children can solve Bayesian problems: the role of representation in mental computation". Cognition. 98 (3): 287–308. doi:10.1016/j.cognition.2004.12.003. hdl:11858/00001M00000024FEFDA. PMID 16399266.
 ^ Lee, Peter M. (2012). "Chapter 1". Bayesian Statistics. Wiley. ISBN 9781118332573.
 ^ "Bayes' Theorem: Introduction". Trinity University. Archived from the original on 21 August 2004. Retrieved 5 August 2014.
 ^ Audun Jøsang, 2016, Subjective Logic; A formalism for Reasoning Under Uncertainty. Springer, Cham, ISBN 9783319423371
 ^ Audun Jøsang, 2016, Generalising Bayes' Theorem in Subjective Logic. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2016), Baden-Baden, September 2016
 ^ Koller, D.; Friedman, N. (2009). Probabilistic Graphical Models. Massachusetts: MIT Press. p. 1208. ISBN 9780262013192. Archived from the original on 2014-04-27.
 ^ Graham Kemp (https://math.stackexchange.com/users/135106/grahamkemp), Bayes' rule with 3 variables, URL (version: 2015-05-14): https://math.stackexchange.com/q/1281558
 ^ Allen, Richard (1999). David Hartley on Human Nature. SUNY Press. pp. 243–4. ISBN 9780791494516. Retrieved 16 June 2013.
 ^ Bayes, Thomas & Price, Richard (1763). "An Essay towards solving a Problem in the Doctrine of Chances. By the late Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, A. M. F. R. S." (PDF). Philosophical Transactions of the Royal Society of London. 53: 370–418. doi:10.1098/rstl.1763.0053. Archived from the original (PDF) on 2011-04-10. Retrieved 2003-12-27.
 ^ Holland, pp. 46–7.
 ^ Price, Richard (1991). Price: Political Writings. Cambridge University Press. p. xxiii. ISBN 9780521409698. Retrieved 16 June 2013.
 ^ Daston, Lorraine (1988). Classical Probability in the Enlightenment. Princeton Univ Press. p. 268. ISBN 0691084971.
 ^ Stigler, Stephen M. (1986). "Inverse Probability". The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press. pp. 99–138. ISBN 9780674403413.
 ^ Stigler, Stephen M. (1983). "Who Discovered Bayes' Theorem?". The American Statistician. 37 (4): 290–296. doi:10.1080/00031305.1983.10483122.
 ^ De Veaux, Richard; Velleman, Paul; Bock, David (2016). Stats: Data and Models (4th ed.). Pearson. pp. 380–381. ISBN 9780321986498.
 ^ Edwards, A. W. F. (1986). "Is the Reference in Hartley (1749) to Bayesian Inference?". The American Statistician. 40 (2): 109–110. doi:10.1080/00031305.1986.10475370.
 ^ Hooper, Martyn (2013). "Richard Price, Bayes' theorem, and God". Significance. 10 (1): 36–39. doi:10.1111/j.1740-9713.2013.00638.x. S2CID 153704746.
 ^ ^{a} ^{b} McGrayne, S. B. (2011). The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines & Emerged Triumphant from Two Centuries of Controversy. Yale University Press. ISBN 9780300188226.
 ^ ^{a} ^{b} Ogino, Shuji; Wilson, Robert B; Gold, Bert; Hawley, Pamela; Grody, Wayne W (October 2004). "Bayesian analysis for cystic fibrosis risks in prenatal and carrier screening". Genetics in Medicine. 6 (5): 439–449. doi:10.1097/01.GIM.0000139511.83336.8F. PMID 15371910.
 ^ "Types of CFTR Mutations". Cystic Fibrosis Foundation, www.cff.org/WhatisCF/Genetics/TypesofCFTRMutations/.
 ^ "CFTR Gene – Genetics Home Reference". U.S. National Library of Medicine, National Institutes of Health, ghr.nlm.nih.gov/gene/CFTR#location.
Further reading
 Grunau, Hans-Christoph (24 January 2014). "Preface Issue 3/4-2013". Jahresbericht der Deutschen Mathematiker-Vereinigung. 115 (3–4): 127–128. doi:10.1365/s13291-013-0077-z.
 Gelman, A, Carlin, JB, Stern, HS, and Rubin, DB (2003), "Bayesian Data Analysis," Second Edition, CRC Press.
 Grinstead, CM and Snell, JL (1997), "Introduction to Probability (2nd edition)," American Mathematical Society (free pdf available).
 "Bayes formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
 McGrayne, SB (2011). The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines & Emerged Triumphant from Two Centuries of Controversy. Yale University Press. ISBN 9780300188226.
 Laplace, Pierre Simon (1986). "Memoir on the Probability of the Causes of Events". Statistical Science. 1 (3): 364–378. doi:10.1214/ss/1177013621. JSTOR 2245476.
 Lee, Peter M (2012), "Bayesian Statistics: An Introduction," 4th edition. Wiley. ISBN 9781118332573.
 Puga JL, Krzywinski M, Altman N (31 March 2015). "Bayes' theorem". Nature Methods. 12 (4): 277–278. doi:10.1038/nmeth.3335. PMID 26005726.
 Rosenthal, Jeffrey S (2005), "Struck by Lightning: The Curious World of Probabilities". HarperCollins. (Granta, 2008. ISBN 9781862079960).
 Stigler, Stephen M. (August 1986). "Laplace's 1774 Memoir on Inverse Probability". Statistical Science. 1 (3): 359–363. doi:10.1214/ss/1177013620.
 Stone, JV (2013), "Bayes' Rule: A Tutorial Introduction to Bayesian Analysis", Sebtel Press, England (chapter 1 available as a free download).
 Bayesian Reasoning for Intelligent People. An introduction and tutorial to the use of Bayes' theorem in statistics and cognitive science.
 Morris, Dan (2016), "Bayes' Theorem Examples: A Visual Introduction For Beginners", Blue Windmill, ISBN 9781549761744 (first 6 chapters available free). A short tutorial on how to understand problem scenarios and find P(B), P(A), and P(B | A).
External links
 Bayes' theorem at the Encyclopædia Britannica
 The Theory That Would Not Die by Sharon Bertsch McGrayne New York Times Book Review by John Allen Paulos on 5 August 2011
 Visual explanation of Bayes using trees (video)
 Bayes' frequentist interpretation explained visually (video)
 Earliest Known Uses of Some of the Words of Mathematics (B). Contains origins of "Bayesian", "Bayes' Theorem", "Bayes Estimate/Risk/Solution", "Empirical Bayes", and "Bayes Factor".
 Weisstein, Eric W. "Bayes' Theorem". MathWorld.
 Bayes' theorem at PlanetMath.org.
 Bayes Theorem and the Folly of Prediction
 A tutorial on probability and Bayes' theorem devised for Oxford University psychology students
 An Intuitive Explanation of Bayes' Theorem by Eliezer S. Yudkowsky
 Online demonstrator of the subjective Bayes' theorem