Beta negative binomial distribution

From Wikipedia, the free encyclopedia

Beta negative binomial
Parameters: $\alpha > 0$ shape (real), $\beta > 0$ shape (real), $r > 0$, the number of successes until the experiment is stopped (integer, but can be extended to real)
Support: $k \in \{0, 1, 2, 3, \ldots\}$
PMF: $\frac{\Gamma(r+k)}{k!\,\Gamma(r)}\,\frac{B(\alpha+r,\beta+k)}{B(\alpha,\beta)}$
Mean: $\frac{r\beta}{\alpha-1}$ if $\alpha > 1$, otherwise undefined
Variance: $\frac{r\beta(r+\alpha-1)(\beta+\alpha-1)}{(\alpha-2)(\alpha-1)^2}$ if $\alpha > 2$, otherwise undefined
Skewness: defined for $\alpha > 3$
MGF: undefined
CF: $\frac{\Gamma(\alpha+r)\Gamma(\alpha+\beta)}{\Gamma(\alpha+r+\beta)\Gamma(\alpha)}\,{}_2F_1(r,\beta;\alpha+r+\beta;e^{it})$, where $B$ is the beta function and ${}_2F_1$ is the Gauss hypergeometric function

In probability theory, a beta negative binomial distribution is the probability distribution of a discrete random variable X equal to the number of failures needed to get r successes in a sequence of independent Bernoulli trials where the probability p of success on each trial is constant within any given experiment but is itself a random variable following a beta distribution, varying between different experiments. Thus the distribution is a compound probability distribution.

This distribution has also been called both the inverse Markov-Pólya distribution and the generalized Waring distribution.[1] A shifted form of the distribution has been called the beta-Pascal distribution.[1]

If the parameters of the beta distribution are α and β, and if

$X \mid p \sim \mathrm{NB}(r, p),$

where

$p \sim \mathrm{B}(\alpha, \beta),$

then the marginal distribution of X is a beta negative binomial distribution:

$X \sim \mathrm{BNB}(r, \alpha, \beta).$

In the above, NB(r, p) is the negative binomial distribution and B(α, β) is the beta distribution.
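This compounding construction can be checked numerically. The following is a minimal stdlib-only Monte Carlo sketch: draw p from a beta distribution, then count failures before the r-th success in Bernoulli(p) trials. The parameter values are arbitrary, chosen only so the mean rβ/(α − 1) exists.

```python
import random

# Monte Carlo sketch of the compounding construction (stdlib only):
# draw p ~ Beta(alpha, beta), then count failures before the r-th
# success in Bernoulli(p) trials; the count is BNB(r, alpha, beta).
def bnb_sample(r, alpha, beta, rng=random):
    p = rng.betavariate(alpha, beta)  # success probability for this experiment
    successes = failures = 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

random.seed(0)
r, alpha, beta = 3, 5.0, 2.0
draws = [bnb_sample(r, alpha, beta) for _ in range(200_000)]
mean = sum(draws) / len(draws)
print(mean)  # should land near r*beta/(alpha-1) = 1.5
```

With 200,000 draws the sample mean should sit within a few hundredths of the analytic value 1.5.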

YouTube Encyclopedic

  • Introduction to the Negative Binomial Distribution
  • Negative Binomial Distribution
  • GLM in R - Negative binomial regression v Poisson regression
  • Negative Binomial - a member of the Natural Exponential Family
  • Binomial distribution | Probability and Statistics | Khan Academy
Transcription

Let's take a look at the negative binomial distribution, another important discrete probability distribution. Let's take a look at a simple example to start. A coin is tossed repeatedly until heads comes up for the sixth time. What is the probability this happens on the 15th toss? Well, we could calculate this probability using the negative binomial distribution.

Before we look at calculating probabilities using the negative binomial distribution, let's look at how it relates to a couple of other important discrete probability distributions. The geometric distribution is the distribution of the number of trials needed to get the first success in repeated independent Bernoulli trials. The negative binomial distribution generalizes this: it is the distribution of the number of trials needed to get the rth success in repeated independent Bernoulli trials. So if we're interested in the number of trials to get the second success, then r would be equal to 2, and if we're interested in the number of trials to get the 12th success, then r would be equal to 12; r simply depends on the problem at hand.

A random variable that has a negative binomial distribution can sometimes be confused for one that has a binomial distribution, and that confusion can cause some problems, so let's look at the differences here. The binomial distribution is the distribution of the number of successes (the number of successes is our random variable X) in a fixed number of independent Bernoulli trials, and that fixed number of trials we typically call n. But in the negative binomial distribution, the number of successes is the fixed number, which we call r, and the number of trials needed to get that number of successes is the random variable X.

The negative binomial distribution can be defined a little differently. For instance, sometimes it's described as the distribution of the number of failures needed to get that fixed number of successes. But we're going to use the definition here: the probability distribution of the number of trials needed to get a fixed number of successes in repeated independent Bernoulli trials.

Let's break down that notion of independent Bernoulli trials a little further. Suppose we have independent trials, and each trial results in one of two possible mutually exclusive outcomes (which we label success and failure). The probability of success on any given trial is p, and this stays constant from trial to trial. The probability of failure is simply 1 − p, and X is a random variable representing the trial number of the rth success.

In order for the rth success to occur on the xth trial, we need a couple of events to occur. First of all, in the first x − 1 trials we need to have r − 1 successes, and we can calculate the probability of that using the binomial formula. But we also need the xth trial to be a success, and that has probability p. To calculate the probability that the rth success occurs on the xth trial, we simply multiply these two probabilities together, because the trials are assumed to be independent. So the probability that the random variable X takes on the value x is the product of those two probabilities, and then we have our probability mass function for the negative binomial distribution. We need to list out what values X can take on: x = r (the smallest possible value, since if we need r successes we're going to need at least r trials), then r + 1, and so on off to infinity. There's no upper bound.

It can be shown that the mean of this probability distribution is simply r/p and the variance is r(1 − p)/p².

Let's look at an example. A person conducting telephone surveys must get 3 more completed surveys before their job is finished. On each randomly dialed number there's a 9% chance of reaching an adult who will complete the survey. (This is close to reality for some types of surveys.) What is the probability the 3rd completed survey occurs on the 10th call? Here we're dialing random numbers, and knowing the outcome of one randomly dialed call tells us nothing about the outcome of another, so these trials are independent. On any individual call, we either get the survey completed or we don't, so we have the success/failure aspect on any individual trial. And we are interested in the probability of getting the 3rd success on the 10th trial. So what we want to know is the probability that the random variable X, representing the trial number of the third success, takes on the value 10, and the conditions of a negative binomial distribution are satisfied. We have a 9% chance of completing the survey on any one individual call, so p is equal to 0.09, and we're interested in the trial number of the third success, so r is 3. Plugging into our formula for the negative binomial distribution, the probability that X takes on the value 10 is (10 − 1 choose 3 − 1) times 0.09 raised to the third power times (1 − 0.09) raised to the power 10 − 3. And this is, to 5 decimal places, 0.01356.

Here I've plotted the probability distribution of the number of calls required to get the 3rd success, and the probability we just calculated for 10 calls is right about there. Note that the smallest value here is 3: if we need 3 successes, we're going to need at least 3 trials. Over on this side I've truncated the plot at 120; the values the random variable can take on go off to infinity, but the probabilities get pretty small, so I chopped it off there for visual purposes. The mean number of calls required to get that 3rd success is r/p, which is 3/0.09, and that works out to 33 and a third. So on average we're going to have to make about 33 calls before we can go home for the day or move on to something else.
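The survey calculation from the transcript is easy to reproduce. The sketch below uses the trial-number parametrization of the negative binomial PMF described above: the probability that the rth success occurs on trial x is C(x − 1, r − 1) p^r (1 − p)^(x − r).

```python
from math import comb

# Check of the worked survey example from the transcript: probability the
# 3rd completed survey (success) occurs on the 10th call, with p = 0.09.
def neg_binom_trials_pmf(x, r, p):
    """P(r-th success occurs on trial x), trial-number parametrization."""
    return comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

prob = neg_binom_trials_pmf(10, 3, 0.09)
print(round(prob, 5))  # 0.01356, matching the transcript
print(3 / 0.09)        # mean r/p, about 33.3 calls
```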

Definition

If r is an integer, then the PMF can be written in terms of the beta function:

$f(k\mid\alpha,\beta,r)=\binom{r+k-1}{k}\frac{B(\alpha+r,\beta+k)}{B(\alpha,\beta)}.$

More generally, the PMF can be written

$f(k\mid\alpha,\beta,r)=\frac{\Gamma(r+k)}{k!\,\Gamma(r)}\frac{B(\alpha+r,\beta+k)}{B(\alpha,\beta)}.$

PMF expressed with Gamma

Using the properties of the beta function, the PMF with integer r can be rewritten as

$f(k\mid\alpha,\beta,r)=\frac{(r+k-1)!}{k!\,(r-1)!}\frac{\Gamma(\alpha+r)\Gamma(\beta+k)\Gamma(\alpha+\beta)}{\Gamma(\alpha+r+\beta+k)\Gamma(\alpha)\Gamma(\beta)}.$

More generally, the PMF can be written as

$f(k\mid\alpha,\beta,r)=\frac{\Gamma(r+k)}{k!\,\Gamma(r)}\frac{\Gamma(\alpha+r)\Gamma(\beta+k)\Gamma(\alpha+\beta)}{\Gamma(\alpha+r+\beta+k)\Gamma(\alpha)\Gamma(\beta)}.$
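The gamma-function form is convenient to evaluate in log-space. Here is a sketch using only the standard library's `math.lgamma`; the parameter values are arbitrary, and the sanity checks are that the PMF sums to 1 and reproduces the mean rβ/(α − 1).

```python
from math import lgamma, exp

# Evaluate the general PMF above in log-space with math.lgamma for
# numerical stability (the beta functions are expanded into gammas
# exactly as in the text).
def bnb_pmf(k, r, alpha, beta):
    log_p = (lgamma(r + k) - lgamma(k + 1) - lgamma(r)
             + lgamma(alpha + r) + lgamma(beta + k) + lgamma(alpha + beta)
             - lgamma(alpha + r + beta + k) - lgamma(alpha) - lgamma(beta))
    return exp(log_p)

r, alpha, beta = 4, 6.0, 3.0
total = sum(bnb_pmf(k, r, alpha, beta) for k in range(2000))
mean = sum(k * bnb_pmf(k, r, alpha, beta) for k in range(2000))
print(total)  # should be very close to 1 (the tail beyond 2000 is tiny)
print(mean)   # should be very close to r*beta/(alpha-1) = 2.4
```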

PMF expressed with the rising Pochhammer symbol

The PMF is often also presented in terms of the rising Pochhammer symbol $x^{(k)} = x(x+1)\cdots(x+k-1)$:

$f(k\mid\alpha,\beta,r)=\frac{r^{(k)}\,\beta^{(k)}}{k!\,(\alpha+\beta+r)^{(k)}}\frac{B(\alpha+r,\beta)}{B(\alpha,\beta)}.$
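The Pochhammer form and the gamma-function form are algebraically identical, since $x^{(k)} = \Gamma(x+k)/\Gamma(x)$. The following self-contained sketch (arbitrary parameter values) checks the two forms agree term by term:

```python
from math import lgamma, exp, factorial

# Consistency sketch: the rising-Pochhammer form of the PMF should agree
# with the gamma-function form term by term.
def rising(x, k):
    """Rising factorial x^(k) = x (x+1) ... (x+k-1)."""
    out = 1.0
    for i in range(k):
        out *= x + i
    return out

def lbeta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def pmf_pochhammer(k, r, alpha, beta):
    return (rising(r, k) * rising(beta, k)
            / (factorial(k) * rising(alpha + beta + r, k))
            * exp(lbeta(alpha + r, beta) - lbeta(alpha, beta)))

def pmf_gamma(k, r, alpha, beta):
    return exp(lgamma(r + k) - lgamma(k + 1) - lgamma(r)
               + lbeta(alpha + r, beta + k) - lbeta(alpha, beta))

for k in range(10):
    a, b = pmf_pochhammer(k, 3, 4.0, 2.0), pmf_gamma(k, 3, 4.0, 2.0)
    assert abs(a - b) < 1e-12 * max(1.0, a)
```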

Properties

The beta negative binomial distribution contains the beta geometric distribution as a special case when r = 1. It can therefore approximate the geometric distribution arbitrarily well. It also approximates the negative binomial distribution arbitrarily well for large α. It can therefore approximate the Poisson distribution arbitrarily well for large α, β, and r.

By Stirling's approximation to the beta function, it can easily be shown that for large k

$f(k\mid\alpha,\beta,r)\sim\frac{\Gamma(\alpha+r)}{\Gamma(r)\,B(\alpha,\beta)}\frac{k^{r-1}}{(\beta+k)^{r+\alpha}},$

which implies that the beta negative binomial distribution is heavy tailed.
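The power-law tail can be seen numerically. For a tail behaving like $k^{-(\alpha+1)}$, doubling k should multiply the PMF by roughly $2^{-(\alpha+1)}$; the sketch below checks this at a large k (the parameter values are arbitrary):

```python
from math import lgamma, exp

# Numeric illustration of the heavy-tail claim: for a power-law tail
# f(k) ~ C * k**-(alpha+1), doubling k should multiply the PMF by
# roughly 2**-(alpha+1).
def bnb_log_pmf(k, r, alpha, beta):
    return (lgamma(r + k) - lgamma(k + 1) - lgamma(r)
            + lgamma(alpha + r) + lgamma(beta + k) + lgamma(alpha + beta)
            - lgamma(alpha + r + beta + k) - lgamma(alpha) - lgamma(beta))

r, alpha, beta = 2, 3.0, 1.5
k = 10_000
ratio = exp(bnb_log_pmf(2 * k, r, alpha, beta) - bnb_log_pmf(k, r, alpha, beta))
print(ratio, 2.0 ** -(alpha + 1))  # the two numbers should be close
```

A light-tailed distribution such as the Poisson would show a ratio collapsing toward zero instead of settling at a constant.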

Notes

  1. ^ a b Johnson et al. (1993)


This page was last edited on 31 January 2018, at 22:59
This page is based on a Wikipedia article. The text is available under the CC BY-SA 3.0 Unported License; non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company and has no affiliation with the Wikimedia Foundation.