
Concentration of measure

From Wikipedia, the free encyclopedia

In mathematics, concentration of measure (about a median) is a principle that is applied in measure theory, probability and combinatorics, and has consequences for other fields such as Banach space theory. Informally, it states that "A random variable that depends in a Lipschitz way on many independent variables (but not too much on any of them) is essentially constant".[1]

The concentration of measure phenomenon was put forth in the early 1970s by Vitali Milman in his works on the local theory of Banach spaces, extending an idea going back to the work of Paul Lévy.[2][3] It was further developed in the works of Milman and Gromov, Maurey, Pisier, Schechtman, Talagrand, Ledoux, and others.


The general setting

Let $(X, d)$ be a metric space with a measure $\mu$ on the Borel sets with $\mu(X) = 1$. Let

$$\alpha(\epsilon) = \sup \left\{ \mu(X \setminus A_\epsilon) : A \text{ Borel}, \; \mu(A) \geq 1/2 \right\},$$

where

$$A_\epsilon = \left\{ x : d(x, A) < \epsilon \right\}$$

is the $\epsilon$-extension (also called the $\epsilon$-fattening in the context of the Hausdorff distance) of a set $A$.

The function $\alpha(\cdot)$ is called the concentration rate of the space $X$. The following equivalent definition has many applications:

$$\alpha(\epsilon) = \sup \left\{ \mu\{ F \geq \operatorname{Med} F + \epsilon \} \right\},$$

where the supremum is over all 1-Lipschitz functions $F : X \to \mathbb{R}$, and the median (or Lévy mean) $\operatorname{Med} F$ is defined by the inequalities

$$\mu\{ F \geq \operatorname{Med} F \} \geq 1/2, \qquad \mu\{ F \leq \operatorname{Med} F \} \geq 1/2.$$

Informally, the space $X$ exhibits a concentration phenomenon if $\alpha(\epsilon)$ decays very fast as $\epsilon$ grows. More formally, a family of metric measure spaces $(X_n, d_n, \mu_n)$ is called a Lévy family if the corresponding concentration rates $\alpha_n$ satisfy

$$\alpha_n(\epsilon) \to 0 \quad \text{for every } \epsilon > 0,$$

and a normal Lévy family if

$$\alpha_n(\epsilon) \leq C e^{-c n \epsilon^2}$$

for some constants $C, c > 0$. For examples see below.
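The functional form of the definition can be checked numerically. The following Monte-Carlo sketch (illustrative only, not part of the original article; the function names and sample sizes are our own, and NumPy is assumed) samples uniform points on the sphere $S^{n-1}$ and estimates $\mu\{F \geq \operatorname{Med} F + \epsilon\}$ for the 1-Lipschitz coordinate function $F(x) = x_1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_mass(n, eps, samples=10_000):
    """Estimate mu{F >= Med F + eps} for F(x) = x[0] on the unit
    sphere S^(n-1) with its uniform probability measure mu."""
    x = rng.standard_normal((samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform points on S^(n-1)
    f = x[:, 0]                                    # a 1-Lipschitz function
    med = np.median(f)                             # empirical Levy mean
    return float(np.mean(f >= med + eps))

for n in (10, 100, 1000):
    print(n, tail_mass(n, eps=0.2))  # tail shrinks rapidly as n grows
```

For fixed $\epsilon$ the estimated tail collapses as $n$ grows, in line with the normal Lévy behaviour defined above.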

Concentration on the sphere

The first example goes back to Paul Lévy. According to the spherical isoperimetric inequality, among all subsets $A$ of the sphere $S^n$ with prescribed spherical measure $\sigma_n(A)$, the spherical cap

$$\left\{ x \in S^n : \operatorname{dist}(x, x_0) \leq R \right\},$$

for suitable $R$, has the smallest $\epsilon$-extension $A_\epsilon$ (for any $\epsilon > 0$).

Applying this to sets of measure $\sigma_n(A) = 1/2$ (where $\sigma_n(S^n) = 1$), one can deduce the following concentration inequality:

$$\sigma_n(A_\epsilon) \geq 1 - C e^{-c n \epsilon^2},$$

where $C, c > 0$ are universal constants. Therefore the spheres $(S^n)_{n \geq 1}$ meet the definition above of a normal Lévy family.
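This inequality can also be probed by simulation. In the sketch below (our own construction, assuming NumPy; not from the article), $A$ is the half-sphere $\{x \in S^{n-1} : x_1 \leq 0\}$, whose geodesic $\epsilon$-extension is $A_\epsilon = \{x_1 < \sin \epsilon\}$, and the measure left outside $A_\epsilon$ is estimated by sampling:

```python
import numpy as np

rng = np.random.default_rng(1)

def mass_outside_extension(n, eps, samples=10_000):
    """Estimate sigma(complement of A_eps) on S^(n-1), where A is the
    half-sphere {x1 <= 0}; its geodesic eps-extension is {x1 < sin(eps)}."""
    x = rng.standard_normal((samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform points on S^(n-1)
    return float(np.mean(x[:, 0] >= np.sin(eps)))

for n in (10, 100, 500):
    print(n, mass_outside_extension(n, eps=0.3))  # shrinks quickly with n
```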

Vitali Milman applied this fact to several problems in the local theory of Banach spaces, in particular, to give a new proof of Dvoretzky's theorem.

Concentration of measure in physics

All classical statistical physics is based on concentration of measure phenomena: the fundamental idea ('theorem') about the equivalence of ensembles in the thermodynamic limit (Gibbs, 1902[4] and Einstein, 1902–1904[5][6][7]) is exactly the thin-shell concentration theorem. For each mechanical system, consider the phase space equipped with the invariant Liouville measure (the phase volume) and a conserved energy E. The microcanonical ensemble is just the invariant distribution over the surface of constant energy E, obtained by Gibbs as the limit of distributions in phase space with constant density in thin layers between the surfaces of states with energy E and with energy E + ΔE. The canonical ensemble is given by the probability density $\rho = \exp\big((F - E)/(kT)\big)$ in the phase space (with respect to the phase volume), where the quantities F = const and T = const are defined by the conditions of probability normalisation and the given expectation of energy.

When the number of particles is large, the difference between the average values of the macroscopic variables for the canonical and microcanonical ensembles tends to zero, and their fluctuations can be explicitly evaluated. These results were proven rigorously, under some regularity conditions on the energy function E, by Khinchin (1943).[8] The simplest particular case, when E is a sum of squares, was known in detail before Khinchin and Lévy, and even before Gibbs and Einstein: this is the Maxwell–Boltzmann distribution of the particle energy in an ideal gas.
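The sum-of-squares case is easy to simulate. In this illustrative sketch (our own, assuming NumPy; the "momenta" are i.i.d. standard normals), the energy per particle $E/N = \frac{1}{N}\sum_i p_i^2$ has relative fluctuation $\sqrt{2/N}$, so the energy concentrates on a thin shell as the particle number grows:

```python
import numpy as np

rng = np.random.default_rng(2)

def relative_energy_spread(n_particles, samples=5_000):
    """For E = sum of p_i^2 with i.i.d. standard normal momenta p_i,
    return std(E/n) / mean(E/n): the thin-shell fluctuation scale."""
    p = rng.standard_normal((samples, n_particles))
    e_per_particle = (p ** 2).sum(axis=1) / n_particles
    return float(e_per_particle.std() / e_per_particle.mean())

for n in (10, 100, 2000):
    print(n, relative_energy_spread(n))  # close to sqrt(2/n)
```

As N grows the relative spread vanishes, which is the concentration underlying the equivalence of ensembles.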

The microcanonical ensemble is very natural from the naïve physical point of view: it is just the natural equidistribution on the isoenergetic hypersurface. The canonical ensemble is very useful because of an important property: if a system consists of two non-interacting subsystems, i.e. if the energy E is the sum $E(x_1, x_2) = E_1(x_1) + E_2(x_2)$, where $x_1, x_2$ are the states of the subsystems, then the equilibrium states of the subsystems are independent, and the equilibrium distribution of the system is the product of the equilibrium distributions of the subsystems with the same T. The equivalence of these ensembles is the cornerstone of the mechanical foundations of thermodynamics.


References

  1. ^ Talagrand, Michel (1996). "A New Look at Independence". Annals of Probability. 24 (1): 1–34. doi:10.1214/aop/1042644705.
  2. ^ "The concentration of measure, ubiquitous in the probability theory and statistical mechanics, was brought to geometry (starting from Banach spaces) by Vitali Milman, following the earlier work by Paul Lévy" - M. Gromov, Spaces and questions, GAFA 2000 (Tel Aviv, 1999), Geom. Funct. Anal. 2000, Special Volume, Part I, 118–161.
  3. ^ "The idea of concentration of measure (which was discovered by V.Milman) is arguably one of the great ideas of analysis in our times. While its impact on Probability is only a small part of the whole picture, this impact should not be ignored." - M. Talagrand, A new look at independence, Ann. Probab. 24 (1996), no. 1, 1–34.
  4. ^ Gibbs, Josiah Willard (1902). Elementary Principles in Statistical Mechanics (PDF). New York, NY: Charles Scribner's Sons.
  5. ^ Einstein, Albert (1902). "Kinetische Theorie des Wärmegleichgewichtes und des zweiten Hauptsatzes der Thermodynamik [Kinetic Theory of Thermal Equilibrium and of the Second Law of Thermodynamics]" (PDF). Annalen der Physik. Series 4. 9: 417–433. doi:10.1002/andp.19023141007. Retrieved January 21, 2020.
  6. ^ Einstein, Albert (1904). "Eine Theorie der Grundlagen der Thermodynamik [A Theory of the Foundations of Thermodynamics]" (PDF). Annalen der Physik. Series 4. 11: 417–433. Retrieved January 21, 2020.
  7. ^ Einstein, Albert (1904). "Allgemeine molekulare Theorie der Wärme [On the General Molecular Theory of Heat]" (PDF). Annalen der Physik. Series 4. 14: 354–362. doi:10.1002/andp.19043190707. Retrieved January 21, 2020.
  8. ^ Khinchin, Aleksandr Y. (1949). Mathematical foundations of statistical mechanics [English translation from the Russian edition, Moscow, Leningrad, 1943]. New York, NY: Courier Corporation. Retrieved January 21, 2020.


This page was last edited on 13 January 2024, at 18:46
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company and has no affiliation with Wikimedia Foundation.