In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. This sequence can be used to approximate the distribution (e.g. to generate a histogram) or to compute an integral (e.g. an expected value). Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods (e.g. adaptive rejection sampling) that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods.
History
The algorithm was named after Nicholas Metropolis, who authored the 1953 paper Equation of State Calculations by Fast Computing Machines together with Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller and Edward Teller. This paper proposed the algorithm for the case of symmetrical proposal distributions, and W. K. Hastings extended it to the more general case in 1970.^{[1]}
Some controversy exists with regard to credit for development of the algorithm. Metropolis had coined the term "Monte Carlo" in an earlier paper with Stanisław Ulam, was familiar with the computational aspects of the method, and led the group in the Theoretical Division that designed and built the MANIAC I computer used in the experiments in 1952. However, prior to 2003, there was no detailed account of the algorithm's development. Then, shortly before his death, Marshall Rosenbluth attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics".^{[2]} Further historical clarification is made by Gubernatis in a 2005 journal article^{[3]} recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time.
This contradicts an account by Edward Teller, who states in his memoirs that the five authors of the 1953 paper worked together for "days (and nights)".^{[4]} In contrast, the detailed account by Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of statistical mechanics and take ensemble averages instead of following detailed kinematics". This, says Rosenbluth, started him thinking about the generalized Monte Carlo approach, a topic which he says he had discussed often with von Neumann. Arianna recounted (to Gubernatis in 2003) that Augusta Teller started the computer work, but that Arianna herself took it over and wrote the code from scratch. In an oral history recorded shortly before his death,^{[5]} Rosenbluth again credits Teller with posing the original problem, himself with solving it, and Arianna with programming the computer. In terms of reputation there is little reason to question Rosenbluth's account. In a biographical memoir of Rosenbluth, Freeman Dyson writes
Many times I came to Rosenbluth, asking him a question [...] and receiving an answer in two minutes. Then it would usually take me a week of hard work to understand in detail why Rosenbluth's answer was right. He had an amazing ability to see through a complicated physical situation and reach the right answer by physical arguments. Enrico Fermi was the only other physicist I have known who was equal to Rosenbluth in his intuitive grasp of physics.^{[6]}
Intuition
The Metropolis–Hastings algorithm can draw samples from any probability distribution P(x), provided that the value of a function f(x) proportional to the density of P can be calculated. The requirement that f(x) must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because calculating the necessary normalization factor is often extremely difficult in practice.
The Metropolis–Hastings algorithm works by generating a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution P(x). These sample values are produced iteratively, with the distribution of the next sample being dependent only on the current sample value (thus making the sequence of samples into a Markov chain). Specifically, at each iteration, the algorithm picks a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted (in which case the candidate value is used in the next iteration) or rejected (in which case the candidate value is discarded, and the current value is reused in the next iteration). The probability of acceptance is determined by comparing the values of the function f(x) of the current and candidate sample values with respect to the desired distribution P(x).
For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below.
Metropolis algorithm (symmetric proposal distribution)
Let f(x) be a function that is proportional to the desired probability distribution P(x) (a.k.a. a target distribution).
 Initialization: Choose an arbitrary point x_0 to be the first sample, and choose an arbitrary probability density g(x|y) (sometimes written Q(x|y)) that suggests a candidate for the next sample value x, given the previous sample value y. For the Metropolis algorithm, g must be symmetric; in other words, it must satisfy g(x|y) = g(y|x). A usual choice is to let g(x|y) be a Gaussian distribution centered at y, so that points closer to y are more likely to be visited next, making the sequence of samples into a random walk. The function g is referred to as the proposal density or jumping distribution.
 For each iteration t:
 Generate: Generate a candidate x' for the next sample by picking from the distribution g(x'|x_t).
 Calculate: Calculate the acceptance ratio α = f(x')/f(x_t), which will be used to decide whether to accept or reject the candidate. Because f is proportional to the density of P, we have that α = f(x')/f(x_t) = P(x')/P(x_t).
 Accept or Reject:
 Generate a uniform random number u on [0, 1].
 If u ≤ α, accept the candidate by setting x_{t+1} = x'.
 If u > α, reject the candidate and set x_{t+1} = x_t instead.
This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. Note that the acceptance ratio α indicates how probable the new proposed sample is with respect to the current sample, according to the distribution P(x). If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of P(x)), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of P(x), while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution P(x).
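As a concrete illustration, the loop above can be written in a few lines of Python. This is a minimal sketch: the unnormalized standard-normal target and the unit-width Gaussian proposal are arbitrary choices for the example, not part of the algorithm itself.

```python
import math
import random

random.seed(0)  # for reproducibility of this sketch

def metropolis(f, x0, sigma, n_samples):
    """Draw a chain of samples from a density proportional to f,
    using a symmetric Gaussian proposal of width sigma."""
    x = x0
    samples = []
    for _ in range(n_samples):
        x_new = random.gauss(x, sigma)   # candidate drawn from g(x'|x_t)
        alpha = f(x_new) / f(x)          # acceptance ratio f(x')/f(x_t)
        if random.random() <= alpha:     # accept with probability min(1, alpha)
            x = x_new
        samples.append(x)                # a rejected move repeats the current state
    return samples

# Unnormalized standard normal: the constant 1/sqrt(2*pi) is deliberately omitted,
# since only ratios of f enter the acceptance test.
f = lambda x: math.exp(-0.5 * x * x)
chain = metropolis(f, x0=0.0, sigma=1.0, n_samples=50_000)
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
```

With enough iterations the sample mean and variance approach those of the target (0 and 1 here), even though f was never normalized.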
Compared with an algorithm like adaptive rejection sampling^{[7]} that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
 The samples are correlated. Even though over the long term they do correctly follow P(x), a set of nearby samples will be correlated with each other and not correctly reflect the distribution. This means that if we want a set of independent samples, we have to throw away the majority of samples and only take every nth sample, for some value of n (typically determined by examining the autocorrelation between adjacent samples). Autocorrelation can be reduced by increasing the jumping width (the average size of a jump, which is related to the variance of the jumping distribution), but this will also increase the likelihood of rejection of the proposed jump. Too large or too small a jumping size will lead to a slow-mixing Markov chain, i.e. a highly correlated set of samples, so that a very large number of samples will be needed to get a reasonable estimate of any desired property of the distribution.
 Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a burn-in period is typically necessary,^{[8]} where an initial number of samples (e.g. the first 1,000 or so) are thrown away.
On the other hand, most simple rejection sampling methods suffer from the "curse of dimensionality", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, does not have this problem to such a degree, and these methods are thus often the only solutions available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from hierarchical Bayesian models and other high-dimensional statistical models used nowadays in many disciplines.
In multivariate distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the right jumping distribution to use can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as Gibbs sampling, involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. This is especially applicable when the multivariate distribution is composed of a set of individual random variables in which each variable is conditioned on only a small number of other variables, as is the case in most typical hierarchical models. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the adaptive rejection sampling methods,^{[7]}^{[9]}^{[10]}^{[11]} the adaptive rejection Metropolis sampling algorithm^{[12]} or its improvements^{[13]}^{[14]} (see matlab code), a simple one-dimensional Metropolis–Hastings step, or slice sampling.
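For contrast with the all-at-once Metropolis–Hastings update, the following is a minimal Gibbs sampler for an illustrative bivariate normal with correlation rho; this target is chosen only because both of its full conditionals are themselves one-dimensional Gaussians that can be sampled directly.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def gibbs_bivariate_normal(n_samples, rho=0.5):
    """Sample a standard bivariate normal with correlation rho by
    alternately drawing each coordinate from its full conditional."""
    x, y = 0.0, 0.0
    cond_sd = (1.0 - rho * rho) ** 0.5      # sd of x|y, and of y|x
    out = []
    for _ in range(n_samples):
        x = random.gauss(rho * y, cond_sd)  # x | y ~ N(rho*y, 1 - rho^2)
        y = random.gauss(rho * x, cond_sd)  # y | x ~ N(rho*x, 1 - rho^2)
        out.append((x, y))
    return out

pts = gibbs_bivariate_normal(20_000)
xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((a - mx) * (b - my) for a, b in pts) / len(pts)
vx = sum((a - mx) ** 2 for a in xs) / len(xs)
vy = sum((b - my) ** 2 for b in ys) / len(ys)
corr = cov / (vx * vy) ** 0.5  # empirical correlation, should approach rho
```

Note that no accept/reject step is needed: each conditional draw is exact, which is precisely the appeal of Gibbs sampling when the conditionals are tractable.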
Formal derivation
The purpose of the Metropolis–Hastings algorithm is to generate a collection of states according to a desired distribution P(x). To accomplish this, the algorithm uses a Markov process, which asymptotically reaches a unique stationary distribution π(x) such that π(x) = P(x).^{[15]}
A Markov process is uniquely defined by its transition probabilities P(x'|x), the probability of transitioning from any given state x to any other given state x'. It has a unique stationary distribution π(x) when the following two conditions are met:^{[15]}
 existence of stationary distribution: there must exist a stationary distribution π(x). A sufficient but not necessary condition is detailed balance, which requires that each transition x → x' is reversible: for every pair of states x, x', the probability of being in state x and transitioning to state x' must be equal to the probability of being in state x' and transitioning to state x, π(x)P(x'|x) = π(x')P(x|x').
 uniqueness of stationary distribution: the stationary distribution must be unique. This is guaranteed by ergodicity of the Markov process, which requires that every state must (1) be aperiodic—the system does not return to the same state at fixed intervals; and (2) be positive recurrent—the expected number of steps for returning to the same state is finite.
The Metropolis–Hastings algorithm involves designing a Markov process (by constructing transition probabilities) which fulfills the two above conditions, such that its stationary distribution π(x) is chosen to be P(x). The derivation of the algorithm starts with the condition of detailed balance:

P(x'|x) P(x) = P(x|x') P(x'),

which is rewritten as

P(x'|x) / P(x|x') = P(x') / P(x).
The approach is to separate the transition into two sub-steps: the proposal and the acceptance-rejection. The proposal distribution g(x'|x) is the conditional probability of proposing a state x' given x, and the acceptance distribution A(x', x) is the probability to accept the proposed state x'. The transition probability can be written as the product of them:

P(x'|x) = g(x'|x) A(x', x).
Inserting this relation into the previous equation, we have

A(x', x) / A(x, x') = (P(x') / P(x)) (g(x|x') / g(x'|x)).
The next step in the derivation is to choose an acceptance ratio that fulfils the condition above. One common choice is the Metropolis choice:

A(x', x) = min(1, (P(x') / P(x)) (g(x|x') / g(x'|x))).

For this Metropolis acceptance ratio A, either A(x', x) = 1 or A(x, x') = 1 and, either way, the condition is satisfied.
The Metropolis–Hastings algorithm thus consists of the following:
 Initialise
 Pick an initial state x_0;
 Set t = 0;
 Iterate
 Generate: randomly generate a candidate state x' according to g(x'|x_t);
 Calculate: calculate the acceptance probability A(x', x_t) = min(1, (P(x')/P(x_t)) (g(x_t|x')/g(x'|x_t)));
 Accept or Reject:
 generate a uniform random number u ∈ [0, 1];
 if u ≤ A(x', x_t), accept the new state and set x_{t+1} = x';
 if u > A(x', x_t), reject the new state, and copy the old state forward: x_{t+1} = x_t;
 Increment: set t = t + 1;
Provided that specified conditions are met, the empirical distribution of saved states x_0, …, x_T will approach P(x). The number of iterations (T) required to effectively estimate P(x) depends on a number of factors, including the relationship between P(x) and the proposal distribution and the desired accuracy of estimation.^{[16]} For distributions on discrete state spaces, it has to be of the order of the autocorrelation time of the Markov process.^{[17]}
It is important to notice that it is not clear, in a general problem, which proposal distribution one should use or how many iterations are necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem at hand.
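The listing above can be sketched in Python, including the correction factor g(x_t|x')/g(x'|x_t) for an asymmetric proposal. The exponential target and the multiplicative log-normal proposal are illustrative choices; for that particular proposal the density ratio happens to reduce to x'/x_t.

```python
import math
import random

random.seed(0)  # for reproducibility of this sketch

def metropolis_hastings(f, propose, g_ratio, x0, n_samples):
    """General Metropolis–Hastings: accept candidate x' with probability
    min(1, f(x')/f(x) * g(x|x')/g(x'|x))."""
    x = x0
    samples = []
    for _ in range(n_samples):
        x_new = propose(x)
        a = (f(x_new) / f(x)) * g_ratio(x, x_new)
        if random.random() <= a:
            x = x_new
        samples.append(x)
    return samples

# Target: Exp(1) density on x > 0, unnormalized.
f = lambda x: math.exp(-x) if x > 0 else 0.0
# Asymmetric multiplicative proposal: x' = x * exp(N(0, 0.8^2)),
# i.e. a log-normal density centered (in log space) at the current state.
propose = lambda x: x * math.exp(random.gauss(0.0, 0.8))
# For this log-normal proposal, g(x|x') / g(x'|x) simplifies to x'/x.
g_ratio = lambda x, x_new: x_new / x

chain = metropolis_hastings(f, propose, g_ratio, x0=1.0, n_samples=50_000)
mean = sum(chain) / len(chain)  # should approach E[X] = 1 for Exp(1)
```

Omitting g_ratio here would bias the chain toward small x: the multiplicative proposal is not symmetric, which is exactly the situation Hastings' generalization handles.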
Use in numerical integration
A common use of the Metropolis–Hastings algorithm is to compute an integral. Specifically, consider a space Ω and a probability distribution P(x) over Ω, x ∈ Ω. Metropolis–Hastings can estimate an integral of the form

I = ∫_Ω A(x) P(x) dx,
where A(x) is an arbitrary function of interest. For example, consider a statistic E(x) and its probability distribution P(E), which is a marginal distribution. Suppose that the goal is to estimate P(E) for E on the tail of P(E). Formally, P(E) can be written as

P(E) = ∫_Ω P(E|x) P(x) dx

and, thus, estimating P(E) can be accomplished by estimating the expected value of the indicator function A_E(x) ≡ 1_E(x), which is 1 when E(x) ∈ [E, E + ΔE] and zero otherwise. Because E is on the tail of P(E), the probability to draw a state x with E(x) on the tail of P(E) is proportional to P(E), which is small by definition. The Metropolis–Hastings algorithm can be used here to sample (rare) states more likely and thus increase the number of samples used to estimate P(E) on the tails. This can be done e.g. by using a sampling distribution π(x) that favors those states (e.g. π(x) ∝ e^{aE(x)} with a > 0).
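As a small worked example of this use, a Metropolis chain can estimate a tail probability of a standard normal as the expectation of an indicator function over the chain. The target and the threshold are illustrative choices for this sketch.

```python
import math
import random

random.seed(0)  # for reproducibility of this sketch

# Metropolis chain targeting the standard normal (unnormalized).
x = 0.0
chain = []
for _ in range(200_000):
    x_new = random.gauss(x, 1.0)
    # Acceptance ratio f(x')/f(x) = exp(-(x'^2 - x^2)/2).
    if random.random() <= math.exp(-0.5 * (x_new * x_new - x * x)):
        x = x_new
    chain.append(x)

# Estimate P(X > 2) = E[1{X > 2}] by the fraction of chain samples above 2.
tail = sum(1 for v in chain if v > 2.0) / len(chain)
# The exact value is 1 - Phi(2), roughly 0.0228.
```

Because X > 2 is a rare event, this plain estimator is noisy; the biased sampling distribution described above (reweighting by e^{aE(x)}) is one way to concentrate samples in the tail.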
Step-by-step instructions
Suppose the most recent value sampled is x_t. To follow the Metropolis–Hastings algorithm, we next draw a new proposal state x' with probability density g(x'|x_t) and calculate a value

a = a_1 a_2,

where

a_1 = P(x') / P(x_t)

is the probability (e.g., Bayesian posterior) ratio between the proposed sample x' and the previous sample x_t, and

a_2 = g(x_t|x') / g(x'|x_t)

is the ratio of the proposal density in two directions (from x_t to x' and vice versa). This is equal to 1 if the proposal density is symmetric. Then the new state x_{t+1} is chosen according to the following rules: if a ≥ 1, then x_{t+1} = x'; otherwise, x_{t+1} = x' with probability a and x_{t+1} = x_t with probability 1 − a.
The Markov chain is started from an arbitrary initial value x_0, and the algorithm is run for many iterations until this initial state is "forgotten". These samples, which are discarded, are known as burn-in. The remaining set of accepted values of x represents a sample from the distribution P(x).
The algorithm works best if the proposal density matches the shape of the target distribution P(x), from which direct sampling is difficult, that is g(x'|x_t) ≈ P(x'). If a Gaussian proposal density g is used, its variance parameter σ² has to be tuned during the burn-in period. This is usually done by calculating the acceptance rate, which is the fraction of proposed samples that is accepted in a window of the last N samples. The desired acceptance rate depends on the target distribution; however, it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is approximately 50%, decreasing to approximately 23% for an N-dimensional Gaussian target distribution.^{[18]}
If σ² is too small, the chain will mix slowly (i.e., the acceptance rate will be high, but successive samples will move around the space slowly, and the chain will converge only slowly to P(x)). On the other hand, if σ² is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so a_1 will be very small, and again the chain will converge very slowly. One typically tunes the proposal distribution so that the algorithm accepts on the order of 30% of all samples, in line with the theoretical estimates mentioned in the previous paragraph.
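One simple way to implement such tuning during burn-in is to adjust σ after each window of proposals so that the observed acceptance rate moves toward a chosen target. The multiplicative update rule below is a common heuristic and an assumption of this sketch, not something prescribed by the theory cited above.

```python
import math
import random

random.seed(0)  # for reproducibility of this sketch

def tune_proposal_width(f, x0, target_rate=0.3, window=200, n_windows=25):
    """Adapt the Gaussian proposal width sigma during burn-in so that the
    acceptance rate measured over each window approaches target_rate."""
    x, sigma, rate = x0, 1.0, 0.0
    for _ in range(n_windows):
        accepted = 0
        for _ in range(window):
            x_new = random.gauss(x, sigma)
            if random.random() <= f(x_new) / f(x):
                x = x_new
                accepted += 1
        rate = accepted / window
        # Grow sigma when accepting too often, shrink it when accepting too rarely;
        # the exp() keeps sigma strictly positive.
        sigma *= math.exp(rate - target_rate)
    return x, sigma, rate

f = lambda x: math.exp(-0.5 * x * x)  # unnormalized standard normal target
x_last, sigma, rate = tune_proposal_width(f, x0=0.0)
```

Adaptation like this should only run during burn-in: continuing to change the proposal while collecting samples can, in general, break the stationarity argument of the derivation above.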
See also
 Detailed balance
 Genetic algorithms
 Gibbs sampling
 Mean field particle methods
 Metropolisadjusted Langevin algorithm
 Metropolis light transport
 Multipletry Metropolis
 Parallel tempering
 Preconditioned Crank–Nicolson algorithm
 Sequential Monte Carlo
 Simulated annealing
References
 ^ Hastings, W.K. (1970). "Monte Carlo Sampling Methods Using Markov Chains and Their Applications". Biometrika. 57 (1): 97–109. Bibcode:1970Bimka..57...97H. doi:10.1093/biomet/57.1.97. JSTOR 2334940. Zbl 0219.65008.
 ^ M.N. Rosenbluth (2003). "Genesis of the Monte Carlo Algorithm for Statistical Mechanics". AIP Conference Proceedings. 690: 22. doi:10.1063/1.1632112.
 ^ J.E. Gubernatis (2005). "Marshall Rosenbluth and the Metropolis Algorithm". Physics of Plasmas. 12 (5): 057303. Bibcode:2005PhPl...12e7303G. doi:10.1063/1.1887186.
 ^ Teller, Edward. Memoirs: A TwentiethCentury Journey in Science and Politics. Perseus Publishing, 2001, p. 328
 ^ Rosenbluth, Marshall. "Oral History Transcript". American Institute of Physics
 ^ F. Dyson (2006). "Marshall N. Rosenbluth". Proceedings of the American Philosophical Society. 250: 404.
 ^ ^{a} ^{b} Gilks, W. R.; Wild, P. (1992-01-01). "Adaptive Rejection Sampling for Gibbs Sampling". Journal of the Royal Statistical Society. Series C (Applied Statistics). 41 (2): 337–348. doi:10.2307/2347565. JSTOR 2347565.
 ^ Gelman, Andrew; et al. (2004). Bayesian Data Analysis (2nd ed.). Boca Raton, Fla.: Chapman & Hall/CRC. ISBN 9781584883883. OCLC 51991499.
 ^ Görür, Dilan; Teh, Yee Whye (2011-01-01). "Concave-Convex Adaptive Rejection Sampling". Journal of Computational and Graphical Statistics. 20 (3): 670–691. doi:10.1198/jcgs.2011.09058. ISSN 1061-8600.
 ^ Hörmann, Wolfgang (1995-06-01). "A Rejection Technique for Sampling from T-concave Distributions". ACM Trans. Math. Softw. 21 (2): 182–193. CiteSeerX 10.1.1.56.6055. doi:10.1145/203082.203089. ISSN 0098-3500.
 ^ Martino, Luca; Míguez, Joaquín (2010-08-25). "A generalization of the adaptive rejection sampling algorithm". Statistics and Computing. 21 (4): 633–647. doi:10.1007/s11222-010-9197-9. hdl:10016/16624. ISSN 0960-3174.
 ^ Gilks, W. R.; Best, N. G.; Tan, K. K. C. (1995-01-01). "Adaptive Rejection Metropolis Sampling within Gibbs Sampling". Journal of the Royal Statistical Society. Series C (Applied Statistics). 44 (4): 455–472. doi:10.2307/2986138. JSTOR 2986138.
 ^ Martino, L.; Read, J.; Luengo, D. (2015-06-01). "Independent Doubly Adaptive Rejection Metropolis Sampling Within Gibbs Sampling". IEEE Transactions on Signal Processing. 63 (12): 3123–3138. arXiv:1205.5494. Bibcode:2015ITSP...63.3123M. doi:10.1109/TSP.2015.2420537. ISSN 1053-587X.
 ^ Meyer, Renate; Cai, Bo; Perron, François (2008-03-15). "Adaptive rejection Metropolis sampling using Lagrange interpolation polynomials of degree 2". Computational Statistics & Data Analysis. 52 (7): 3408–3423. doi:10.1016/j.csda.2008.01.005.
 ^ ^{a} ^{b} Robert, Christian; Casella, George (2004). Monte Carlo Statistical Methods. Springer. ISBN 9780387212395.
 ^ Raftery, Adrian E., and Steven Lewis. "How Many Iterations in the Gibbs Sampler?." In Bayesian Statistics 4. 1992.
 ^ Newman, M. E. J.; Barkema, G. T. (1999). Monte Carlo Methods in Statistical Physics. USA: Oxford University Press. ISBN 9780198517979.
 ^ Roberts, G.O.; Gelman, A.; Gilks, W.R. (1997). "Weak convergence and optimal scaling of random walk Metropolis algorithms". Ann. Appl. Probab. 7 (1): 110–120. CiteSeerX 10.1.1.717.2582. doi:10.1214/aoap/1034625254.
Further reading
 Bernd A. Berg. Markov Chain Monte Carlo Simulations and Their Statistical Analysis. Singapore, World Scientific, 2004.
 Siddhartha Chib and Edward Greenberg: "Understanding the Metropolis–Hastings Algorithm". American Statistician, 49(4), 327–335, 1995
 David D. L. Minh and Do Le Minh. "Understanding the Hastings Algorithm." Communications in Statistics – Simulation and Computation, 44:2, 332–349, 2015
 Bolstad, William M. (2010) Understanding Computational Bayesian Statistics, John Wiley & Sons ISBN 0470046090
External links
 Metropolis–Hastings algorithm on xβ
 Matlab implementation of Random-Walk Metropolis
 R implementation of Random-Walk Metropolis
 IA2RMS is a Matlab code of the Independent Doubly Adaptive Rejection Metropolis Sampling method for drawing from the full-conditional densities within a Gibbs sampler.
 unbiased Metropolis sampling: a simple Visual C++ project which showcases numerical integration using Metropolis sampling without burn-in samples and without bias. Uses an idea from the Ph.D. thesis of Eric Veach, "Robust Monte Carlo Methods for Light Transport Simulation".