In mathematics, the Gibbs measure, named after Josiah Willard Gibbs, is a probability measure frequently seen in many problems of probability theory and statistical mechanics. It is a generalization of the canonical ensemble to infinite systems. The canonical ensemble gives the probability of the system X being in state x (equivalently, of the random variable X having value x) as

$$P(X = x) = \frac{e^{-\beta E(x)}}{Z(\beta)}.$$
Here, E(x) is a function from the space of states to the real numbers; in physics applications, E(x) is interpreted as the energy of the configuration x. The parameter β is a free parameter; in physics, it is the inverse temperature. The normalizing constant Z(β) is the partition function. However, in infinite systems, the total energy is no longer a finite number and cannot be used in the traditional construction of the probability distribution of a canonical ensemble. Traditional approaches in statistical physics studied the limit of intensive properties as the size of a finite system approaches infinity (the thermodynamic limit). When the energy function can be written as a sum of terms that each involve only variables from a finite subsystem, the notion of a Gibbs measure provides an alternative approach. Gibbs measures were proposed by probability theorists such as Dobrushin, Lanford, and Ruelle and provided a framework to directly study infinite systems, instead of taking the limit of finite systems.
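For a finite state space, the canonical-ensemble formula above can be computed directly. A minimal sketch in plain Python (the three energy levels are illustrative, not from any particular system):

```python
import math

def gibbs_distribution(energies, beta):
    """Canonical-ensemble probabilities p(x) = exp(-beta * E(x)) / Z(beta)."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)  # the partition function Z(beta)
    return [w / z for w in weights]

# Toy three-state system with energies 0, 1, 2 (arbitrary units).
probs = gibbs_distribution([0.0, 1.0, 2.0], beta=1.0)
# Lower-energy states are exponentially more likely, and the probabilities sum to 1.
```

Increasing `beta` (lowering the temperature) concentrates the distribution on the low-energy states; `beta = 0` gives the uniform distribution.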
A measure is a Gibbs measure if the conditional probabilities it induces on each finite subsystem satisfy a consistency condition: if all degrees of freedom outside the finite subsystem are frozen, the canonical ensemble for the subsystem subject to these boundary conditions matches the probabilities in the Gibbs measure conditional on the frozen degrees of freedom.
The Hammersley–Clifford theorem implies that any probability measure that satisfies a Markov property is a Gibbs measure for an appropriate choice of (locally defined) energy function. Therefore, Gibbs measures appear in many problems outside of physics, such as Hopfield networks, Markov networks, Markov logic networks, and bounded rational potential games in game theory and economics. A Gibbs measure in a system with local (finite-range) interactions maximizes the entropy density for a given expected energy density; or, equivalently, it minimizes the free energy density.
The Gibbs measure of an infinite system is not necessarily unique, in contrast to the canonical ensemble of a finite system, which is unique. The existence of more than one Gibbs measure is associated with statistical phenomena such as symmetry breaking and phase coexistence.
Statistical physics
The set of Gibbs measures on a system is always convex,^{[1]} so there is either a unique Gibbs measure (in which case the system is said to be "ergodic"), or there are infinitely many (and the system is called "non-ergodic"). In the non-ergodic case, the Gibbs measures can be expressed as the set of convex combinations of a much smaller number of special Gibbs measures known as "pure states" (not to be confused with the related but distinct notion of pure states in quantum mechanics). In physical applications, the Hamiltonian (the energy function) usually has some sense of locality, and the pure states have the cluster decomposition property that "far-separated subsystems" are independent. In practice, physically realistic systems are found in one of these pure states.
If the Hamiltonian possesses a symmetry, then a unique (i.e. ergodic) Gibbs measure will necessarily be invariant under the symmetry. But in the case of multiple (i.e. non-ergodic) Gibbs measures, the pure states are typically not invariant under the Hamiltonian's symmetry. For example, in the infinite ferromagnetic Ising model below the critical temperature, there are two pure states, the "mostly-up" and "mostly-down" states, which are interchanged under the model's symmetry.
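The first statement can be checked exactly in finite volume, where the Gibbs measure is unique and therefore must inherit the Hamiltonian's spin-flip symmetry. A brute-force sketch on a small Ising ring (toy parameters):

```python
import itertools
import math

def energy(config, J=1.0):
    """Nearest-neighbour Ising energy on a 1D ring of len(config) spins."""
    n = len(config)
    return -J * sum(config[i] * config[(i + 1) % n] for i in range(n))

n, beta = 4, 0.5
configs = list(itertools.product([-1, 1], repeat=n))
weights = {c: math.exp(-beta * energy(c)) for c in configs}
z = sum(weights.values())
prob = {c: w / z for c, w in weights.items()}

# The finite-volume Gibbs measure is unique, so it inherits the spin-flip
# symmetry of the Hamiltonian: P(sigma) == P(-sigma) for every configuration.
for c in configs:
    flipped = tuple(-s for s in c)
    assert abs(prob[c] - prob[flipped]) < 1e-12
```

Symmetry breaking (a "mostly-up" measure distinct from the "mostly-down" one) can therefore only occur in the infinite-volume limit.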
Markov property
An example of the Markov property can be seen in the Gibbs measure of the Ising model. The probability for a given spin σ_{k} to be in state s could, in principle, depend on the states of all other spins in the system. Thus, we may write the probability as

$$P(\sigma_k = s \mid \sigma_j,\ j \neq k).$$
However, in an Ising model with only finite-range interactions (for example, nearest-neighbor interactions), we actually have

$$P(\sigma_k = s \mid \sigma_j,\ j \neq k) = P(\sigma_k = s \mid \sigma_j,\ j \in N_k),$$
where N_{k} is a neighborhood of the site k. That is, the probability at site k depends only on the spins in a finite neighborhood. This last equation is in the form of a local Markov property. Measures with this property are sometimes called Markov random fields. More strongly, the converse is also true: any positive probability distribution (nonzero density everywhere) having the Markov property can be represented as a Gibbs measure for an appropriate energy function.^{[2]} This is the Hammersley–Clifford theorem.
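The local Markov property can be verified exactly on a small chain by brute-force enumeration: the conditional law of σ_{k} given all other spins is the same for any two outside configurations that agree on the neighbourhood. A sketch for a five-spin free-boundary Ising chain (toy parameters):

```python
import itertools
import math

def energy(config, J=1.0):
    """Nearest-neighbour Ising chain with free boundary."""
    return -J * sum(config[i] * config[i + 1] for i in range(len(config) - 1))

n, beta, k = 5, 0.7, 2
configs = list(itertools.product([-1, 1], repeat=n))
z = sum(math.exp(-beta * energy(c)) for c in configs)
prob = {c: math.exp(-beta * energy(c)) / z for c in configs}

def cond_given_rest(rest, s):
    """P(sigma_k = s | all other spins fixed to the tuple `rest`)."""
    full = rest[:k] + (s,) + rest[k:]
    other = rest[:k] + (-s,) + rest[k:]
    return prob[full] / (prob[full] + prob[other])

# The conditional depends only on the neighbours sigma_{k-1} and sigma_{k+1}:
# any two "rest" configurations agreeing on them give identical conditionals.
for rest1 in itertools.product([-1, 1], repeat=n - 1):
    for rest2 in itertools.product([-1, 1], repeat=n - 1):
        if rest1[k - 1] == rest2[k - 1] and rest1[k] == rest2[k]:
            assert abs(cond_given_rest(rest1, 1) - cond_given_rest(rest2, 1)) < 1e-12
```

(In `rest`, a tuple of the four spins other than site k, the neighbours of site k land at indices k-1 and k.)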
Formal definition on lattices
What follows is a formal definition for the special case of a random field on a lattice. The idea of a Gibbs measure is, however, much more general than this.
The definition of a Gibbs random field on a lattice requires some terminology:
 The lattice: A countable set $\mathbb{L}$.
 The single-spin space: A probability space $(S, \mathcal{S}, \lambda)$.
 The configuration space: $(\Omega, \mathcal{F})$, where $\Omega = S^{\mathbb{L}}$ and $\mathcal{F} = \mathcal{S}^{\mathbb{L}}$.
 Given a configuration ω ∈ Ω and a subset $\Lambda \subset \mathbb{L}$, the restriction of ω to Λ is $\omega_\Lambda = (\omega(t))_{t \in \Lambda}$. If $\Lambda_1 \cap \Lambda_2 = \emptyset$ and $\Lambda_1 \cup \Lambda_2 = \mathbb{L}$, then the configuration $\omega_{\Lambda_1} \omega_{\Lambda_2}$ is the configuration whose restrictions to Λ_{1} and Λ_{2} are $\omega_{\Lambda_1}$ and $\omega_{\Lambda_2}$, respectively.
 The set $\mathcal{L}$ of all finite subsets of $\mathbb{L}$.
 For each subset $\Lambda \subset \mathbb{L}$, $\mathcal{F}_\Lambda$ is the σ-algebra generated by the family of functions $(\sigma(t))_{t \in \Lambda}$, where $\sigma(t)(\omega) = \omega(t)$. The union of these σ-algebras as $\Lambda$ varies over $\mathcal{L}$ is the algebra of cylinder sets on the lattice.
 The potential: A family $\Phi = (\Phi_A)_{A \in \mathcal{L}}$ of functions Φ_{A} : Ω → R such that
 For each $A \in \mathcal{L}$, $\Phi_A$ is $\mathcal{F}_A$-measurable, meaning it depends only on the restriction $\omega_A$ (and does so measurably).
 For all $\Lambda \in \mathcal{L}$ and ω ∈ Ω, the following series exists: $$H_\Lambda^\Phi(\omega) = \sum_{A \in \mathcal{L},\, A \cap \Lambda \neq \emptyset} \Phi_A(\omega).$$
We interpret Φ_{A} as the contribution to the total energy (the Hamiltonian) associated to the interaction among all the points of the finite set A, and $H_\Lambda^\Phi(\omega)$ as the contribution to the total energy of all the finite sets A that meet Λ. Note that the total energy is typically infinite, but when "localized" to each $\Lambda \in \mathcal{L}$ in this way it may be finite.
 The Hamiltonian in $\Lambda \in \mathcal{L}$ with boundary conditions $\bar\omega$, for the potential Φ, is defined by $$H_\Lambda^\Phi(\omega \mid \bar\omega) = H_\Lambda^\Phi\left(\omega_\Lambda \bar\omega_{\Lambda^c}\right),$$ where $\Lambda^c = \mathbb{L} \setminus \Lambda$.
 The partition function in $\Lambda$ with boundary conditions $\bar\omega$ and inverse temperature β > 0 (for the potential Φ and λ) is defined by $$Z_\Lambda^\Phi(\bar\omega) = \int \lambda^\Lambda(\mathrm{d}\omega)\, \exp\left(-\beta H_\Lambda^\Phi(\omega \mid \bar\omega)\right),$$ where $$\lambda^\Lambda(\mathrm{d}\omega) = \prod_{t \in \Lambda} \lambda(\mathrm{d}\omega(t))$$ is the product measure.
 A potential Φ is λ-admissible if $Z_\Lambda^\Phi(\bar\omega)$ is finite for all $\Lambda \in \mathcal{L}$, $\bar\omega \in \Omega$ and β > 0.
 A probability measure μ on $(\Omega, \mathcal{F})$ is a Gibbs measure for a λ-admissible potential Φ if it satisfies the Dobrushin–Lanford–Ruelle (DLR) equation $$\int \mu(\mathrm{d}\bar\omega)\, Z_\Lambda^\Phi(\bar\omega)^{-1} \int \lambda^\Lambda(\mathrm{d}\omega)\, \exp\left(-\beta H_\Lambda^\Phi(\omega \mid \bar\omega)\right) 1_A\left(\omega_\Lambda \bar\omega_{\Lambda^c}\right) = \mu(A),$$ for all $A \in \mathcal{F}$ and $\Lambda \in \mathcal{L}$.
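In finite volume the DLR consistency can be verified by direct enumeration: conditioning the global Gibbs measure on the spins outside a window Λ reproduces the canonical ensemble in Λ with those frozen spins as boundary condition. A toy check on a five-site Ising chain (illustrative parameters):

```python
import itertools
import math

J, beta, n = 1.0, 0.8, 5
# The window Lambda is the interior {1, 2, 3}; sites 0 and 4 are its boundary.
lam_size = 3

def H(config):
    """Nearest-neighbour Ising energy on a free-boundary chain."""
    return -J * sum(config[i] * config[i + 1] for i in range(n - 1))

configs = list(itertools.product([-1, 1], repeat=n))
z = sum(math.exp(-beta * H(c)) for c in configs)
mu = {c: math.exp(-beta * H(c)) / z for c in configs}

def glue(inside, boundary):
    """Configuration with restrictions `inside` on Lambda and `boundary` outside."""
    return (boundary[0],) + inside + (boundary[1],)

for boundary in itertools.product([-1, 1], repeat=2):
    # Canonical ensemble in Lambda subject to the frozen boundary spins.
    z_lam = sum(math.exp(-beta * H(glue(ins, boundary)))
                for ins in itertools.product([-1, 1], repeat=lam_size))
    for ins in itertools.product([-1, 1], repeat=lam_size):
        spec = math.exp(-beta * H(glue(ins, boundary))) / z_lam
        cond = mu[glue(ins, boundary)] / sum(
            mu[glue(other, boundary)]
            for other in itertools.product([-1, 1], repeat=lam_size))
        assert abs(spec - cond) < 1e-12  # DLR consistency in finite volume
```

For a finite system this holds automatically; the content of the DLR equation is that it is taken as the defining property of a Gibbs measure on the infinite lattice, where no global partition function exists.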
An example
To help understand the above definitions, here are the corresponding quantities in the important example of the Ising model with nearestneighbor interactions (coupling constant J) and a magnetic field (h), on Z^{d}:
 The lattice is simply $\mathbb{L} = \mathbf{Z}^d$.
 The single-spin space is S = {−1, 1}.
 The potential is given by $$\Phi_A(\omega) = \begin{cases} -J\,\omega(t_1)\omega(t_2) & \text{if } A = \{t_1, t_2\} \text{ with } \|t_1 - t_2\|_1 = 1, \\ -h\,\omega(t) & \text{if } A = \{t\}, \\ 0 & \text{otherwise}. \end{cases}$$
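This nearest-neighbour potential translates directly into code. A sketch for d = 2, with a configuration stored as a dict from lattice sites to spins (parameter values are illustrative):

```python
J, h = 1.0, 0.5  # coupling constant and magnetic field (illustrative values)

def phi(A, omega):
    """Potential Phi_A: nonzero only for singletons and nearest-neighbour pairs."""
    A = sorted(A)
    if len(A) == 1:
        return -h * omega[A[0]]                   # magnetic-field term
    if len(A) == 2:
        (x1, y1), (x2, y2) = A
        if abs(x1 - x2) + abs(y1 - y2) == 1:      # L1-distance 1: nearest neighbours
            return -J * omega[A[0]] * omega[A[1]]  # interaction term
    return 0.0                                    # all other finite sets contribute 0

# A configuration restricted to a 2x2 window of Z^2.
omega = {(0, 0): 1, (0, 1): 1, (1, 0): -1, (1, 1): 1}
assert phi([(0, 0), (0, 1)], omega) == -1.0  # aligned neighbouring pair: -J
assert phi([(1, 0)], omega) == 0.5           # singleton: -h * (-1)
```

Summing `phi(A, omega)` over all finite sets A meeting a window Λ gives exactly the localized Hamiltonian $H_\Lambda^\Phi$ of the formal definition.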
See also
 Boltzmann distribution
 Exponential family
 Gibbs algorithm
 Gibbs sampling
 Interacting particle system
 Potential game
 Softmax
 Stochastic cellular automata
References
 ^ http://www.stat.yale.edu/~pollard/Courses/606.spring06/handouts/Gibbs1.pdf
 ^ Kindermann, Ross; Snell, J. Laurie (1980). Markov Random Fields and Their Applications. American Mathematical Society. ISBN 0-8218-5001-6.
Further reading
 Georgii, H.O. (2011) [1988]. Gibbs Measures and Phase Transitions (2nd ed.). Berlin: de Gruyter. ISBN 9783110250299.
 Friedli, S.; Velenik, Y. (2017). Statistical Mechanics of Lattice Systems: a Concrete Mathematical Introduction. Cambridge: Cambridge University Press. ISBN 9781107184824.