In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of N, its subsequence x_{1}, ..., x_{N} has a low discrepancy.
Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of an equidistributed sequence. Specific definitions of discrepancy differ regarding the choice of B (hyperspheres, hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value).
Low-discrepancy sequences are also called quasirandom or subrandom sequences, due to their common use as a replacement for uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables, and in certain applications, such as the quasi-Monte Carlo method, their lower discrepancy is an important advantage.
Applications
Subrandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly. They have an advantage over purely deterministic methods in that deterministic methods only give high accuracy when the number of data points is fixed in advance, whereas with subrandom sequences the accuracy typically improves continually as more data points are added, with full reuse of the existing points. On the other hand, subrandom point sets can have a significantly lower discrepancy for a given number of points than purely random sequences.
Two useful applications are in finding the characteristic function of a probability density function, and in finding the derivative function of a deterministic function with a small amount of noise. Subrandom numbers allow higher-order moments to be calculated to high accuracy very quickly.
Applications that do not involve sorting include finding the mean, standard deviation, skewness and kurtosis of a statistical distribution, and finding the integral and global maxima and minima of difficult deterministic functions. Subrandom numbers can also be used to provide starting points for deterministic algorithms that only work locally, such as Newton–Raphson iteration.
Subrandom numbers can also be combined with search algorithms. A binary tree Quicksort-style algorithm ought to work exceptionally well because subrandom numbers flatten the tree far better than random numbers, and the flatter the tree, the faster the sorting. With a search algorithm, subrandom numbers can be used to find the mode, median, confidence intervals and cumulative distribution of a statistical distribution, and all local minima and all solutions of deterministic functions.
Low-discrepancy sequences in numerical integration
At least three methods of numerical integration can be phrased as follows. Given a set {x_{1}, ..., x_{N}} in the interval [0,1], approximate the integral of a function f as the average of the function evaluated at those points:

$$\int_0^1 f(u)\,du \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i).$$
If the points are chosen as x_{i} = i/N, this is the rectangle rule. If the points are chosen to be randomly (or pseudorandomly) distributed, this is the Monte Carlo method. If the points are chosen as elements of a low-discrepancy sequence, this is the quasi-Monte Carlo method. A remarkable result, the Koksma–Hlawka inequality (stated below), shows that the error of such a method can be bounded by the product of two terms, one of which depends only on f, while the other is the discrepancy of the set {x_{1}, ..., x_{N}}.
It is convenient to construct the set {x_{1}, ..., x_{N}} in such a way that if a set with N+1 elements is constructed, the previous N elements need not be recomputed. The rectangle rule uses a point set which has low discrepancy, but in general its elements must be recomputed if N is increased. Elements need not be recomputed in the random Monte Carlo method if N is increased, but the point sets do not have minimal discrepancy. By using low-discrepancy sequences we aim for low discrepancy with no need for recomputation, although a sequence extended incrementally in this way cannot match the discrepancy of the best point set constructed for a fixed N.
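To make the three rules concrete, here is a small Python sketch (the function names and the test integrand x² are ours, chosen only for illustration) that estimates ∫₀¹ x² dx = 1/3 using midpoint, pseudorandom, and low-discrepancy nodes, the latter taken from the golden-ratio additive recurrence described later in this article:

```python
import random

def estimate(points, f):
    """Average of f over the given nodes; the common core of all three rules."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x  # test integrand with exact integral 1/3
N = 1000

rectangle = [(i + 0.5) / N for i in range(N)]        # rectangle (midpoint) rule
monte_carlo = [random.random() for _ in range(N)]    # pseudorandom nodes
phi = (5 ** 0.5 - 1) / 2                             # fractional part of the golden ratio
quasi = [(i * phi) % 1.0 for i in range(1, N + 1)]   # low-discrepancy nodes

for name, nodes in (("rectangle", rectangle),
                    ("Monte Carlo", monte_carlo),
                    ("quasi-Monte Carlo", quasi)):
    print(name, abs(estimate(nodes, f) - 1.0 / 3.0))
```

For a smooth one-dimensional integrand like this, the quasi-Monte Carlo error typically decays roughly like 1/N, versus 1/√N for plain Monte Carlo.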
Definition of discrepancy
The discrepancy of a set P = {x_{1}, ..., x_{N}} is defined, using Niederreiter's notation, as

$$D_N(P) = \sup_{B \in J} \left| \frac{A(B;P)}{N} - \lambda_s(B) \right|,$$
where λ_{s} is the s-dimensional Lebesgue measure, A(B;P) is the number of points in P that fall into B, and J is the set of s-dimensional intervals or boxes of the form

$$\prod_{i=1}^{s} [a_i, b_i) = \{ x \in \mathbf{R}^s : a_i \le x_i < b_i \},$$

where 0 ≤ a_{i} < b_{i} ≤ 1.
The star-discrepancy D^{*}_{N}(P) is defined similarly, except that the supremum is taken over the set J^{*} of rectangular boxes of the form

$$\prod_{i=1}^{s} [0, u_i),$$

where u_{i} is in the half-open interval [0, 1).
The two are related by

$$D^{*}_{N} \le D_{N} \le 2^{s} D^{*}_{N}.$$
Note: With these definitions, discrepancy represents the worst-case or maximum point density deviation of a uniform set. However, other error measures are also meaningful, leading to other definitions and variation measures. For instance, L2 discrepancy or modified centered L2 discrepancy are also used intensively to compare the quality of uniform point sets. Both are much easier to calculate for large N and s.
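For intuition, in one dimension the star discrepancy can be computed exactly from the sorted points via the classical closed form D^{*}_{N} = 1/(2N) + max_{i} |x_{(i)} − (2i−1)/(2N)|; the following Python sketch (function name ours) implements it:

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a finite 1-D point set in [0, 1).

    Uses the classical closed form
        D*_N = 1/(2N) + max_i |x_(i) - (2i-1)/(2N)|,
    where x_(1) <= ... <= x_(N) are the sorted points.
    """
    xs = sorted(points)
    n = len(xs)
    return 1.0 / (2 * n) + max(abs(x - (2 * i + 1) / (2 * n))
                               for i, x in enumerate(xs))

# The midpoint set attains the minimal possible value 1/(2N):
print(star_discrepancy_1d([(i + 0.5) / 10 for i in range(10)]))  # 0.05
```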
The Koksma–Hlawka inequality
Let Ī^{s} be the s-dimensional unit cube, Ī^{s} = [0, 1] × ... × [0, 1]. Let f have bounded variation V(f) on Ī^{s} in the sense of Hardy and Krause. Then for any x_{1}, ..., x_{N} in I^{s} = [0, 1) × ... × [0, 1),

$$\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\bar I^s} f(u)\,du \right| \le V(f)\, D_N^{*}(x_1, \ldots, x_N).$$
The Koksma–Hlawka inequality is sharp in the following sense: For any point set {x_{1},...,x_{N}} in I^{s} and any ε > 0, there is a function f with bounded variation and V(f) = 1 such that

$$\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\bar I^s} f(u)\,du \right| > D_N^{*}(x_1, \ldots, x_N) - \varepsilon.$$
Therefore, the quality of a numerical integration rule depends only on the discrepancy D^{*}_{N}(x_{1},...,x_{N}).
The formula of Hlawka–Zaremba
Let D = {1, 2, ..., d}. For ∅ ≠ u ⊆ D we write

$$dx_u := \prod_{j \in u} dx_j$$

and denote by (x_u, 1) the point obtained from x by replacing the coordinates not in u by 1. Then

$$\frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\bar I^d} f(u)\,du = \sum_{\emptyset \ne u \subseteq D} (-1)^{|u|} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u, 1)\, \frac{\partial^{|u|}}{\partial x_u} f(x_u, 1)\, dx_u,$$

where

$$\operatorname{disc}(z) = \frac{1}{N} \sum_{i=1}^{N} \prod_{j=1}^{d} \mathbf{1}_{[0, z_j)}(x_{i,j}) - \prod_{j=1}^{d} z_j$$

is the discrepancy function.
The L2 version of the Koksma–Hlawka inequality
Applying the Cauchy–Schwarz inequality for integrals and sums to the Hlawka–Zaremba identity, we obtain an L2 version of the Koksma–Hlawka inequality:

$$\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\bar I^d} f(u)\,du \right| \le \|f\|_{d}\, \operatorname{disc}_{d}(\{x_i\}),$$

where

$$\operatorname{disc}_{d}(\{x_i\}) = \left( \sum_{\emptyset \ne u \subseteq D} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u, 1)^2 \, dx_u \right)^{1/2}$$

and

$$\|f\|_{d} = \left( \sum_{u \subseteq D} \int_{[0,1]^{|u|}} \left| \frac{\partial^{|u|}}{\partial x_u} f(x_u, 1) \right|^2 dx_u \right)^{1/2}.$$
L2 discrepancy is of high practical importance because fast explicit calculations are possible for a given point set. This makes it easy to create point-set optimizers that use L2 discrepancy as the criterion.
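One such explicit calculation is Warnock's formula for the plain L2 star discrepancy, which costs O(s·N²) operations. The following Python sketch (function name ours) assumes the points lie in [0,1)^{s}:

```python
from math import prod

def l2_star_discrepancy(points):
    """L2 star discrepancy of a point set in [0,1)^s via Warnock's formula:

    (D*_{N,2})^2 = 3^(-s)
                   - (2/N)   * sum_i   prod_k (1 - x_ik^2)/2
                   + (1/N^2) * sum_i,j prod_k (1 - max(x_ik, x_jk))
    """
    n, s = len(points), len(points[0])
    term1 = 3.0 ** -s
    term2 = (2.0 / n) * sum(prod((1 - x * x) / 2 for x in p) for p in points)
    term3 = sum(
        prod(1 - max(a, b) for a, b in zip(p, q))
        for p in points for q in points
    ) / n ** 2
    return (term1 - term2 + term3) ** 0.5

print(l2_star_discrepancy([[0.25, 0.75], [0.75, 0.25]]))
```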
The Erdős–Turán–Koksma inequality
It is computationally hard to find the exact value of the discrepancy of large point sets. The Erdős–Turán–Koksma inequality provides an upper bound.
Let x_{1},...,x_{N} be points in I^{s} and H be an arbitrary positive integer. Then

$$D_N^{*}(x_1, \ldots, x_N) \le \left( \frac{3}{2} \right)^{s} \left( \frac{2}{H+1} + \sum_{0 < \|h\|_\infty \le H} \frac{1}{r(h)} \left| \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i \langle h, x_n \rangle} \right| \right),$$

where

$$r(h) = \prod_{i=1}^{s} \max\{1, |h_i|\} \qquad \text{for } h = (h_1, \ldots, h_s) \in \mathbb{Z}^s.$$
The main conjectures
Conjecture 1. There is a constant c_{s} depending only on the dimension s, such that

$$D_N^{*}(x_1, \ldots, x_N) \ge c_s \frac{(\ln N)^{s-1}}{N}$$
for any finite point set {x_{1},...,x_{N}}.
Conjecture 2. There is a constant c'_{s} depending only on s, such that

$$D_N^{*}(x_1, \ldots, x_N) \ge c'_s \frac{(\ln N)^{s}}{N}$$

for infinitely many N, for any infinite sequence x_{1},x_{2},x_{3},....
These conjectures are equivalent. They have been proved for s ≤ 2 by W. M. Schmidt. In higher dimensions, the corresponding problem is still open. The bestknown lower bounds are due to Michael Lacey and collaborators.
Lower bounds
Let s = 1. Then

$$D_N^{*}(x_1, \ldots, x_N) \ge \frac{1}{2N}$$

for any finite point set {x_{1}, ..., x_{N}}.
Let s = 2. W. M. Schmidt proved that for any finite point set {x_{1}, ..., x_{N}},

$$D_N^{*}(x_1, \ldots, x_N) \ge C \frac{\log N}{N},$$

where

$$C = \max_{a \ge 3} \frac{1}{16} \cdot \frac{a-2}{a \log a} = 0.023335\ldots$$
For arbitrary dimensions s > 1, K. F. Roth proved that

$$D_N^{*}(x_1, \ldots, x_N) \ge \frac{1}{2^{4s}} \cdot \frac{1}{((s-1)\log 2)^{\frac{s-1}{2}}} \cdot \frac{(\log N)^{\frac{s-1}{2}}}{N}$$

for any finite point set {x_{1}, ..., x_{N}}. This bound is the best known for s > 3.
Construction of lowdiscrepancy sequences
Because any distribution of random numbers can be mapped onto a uniform distribution, and subrandom numbers are mapped in the same way, this article only concerns generation of subrandom numbers on a multidimensional uniform distribution.
Constructions of sequences are known such that

$$D_N^{*}(x_1, \ldots, x_N) \le C \frac{(\ln N)^{s}}{N},$$

where C is a certain constant depending on the sequence. By Conjecture 2, these sequences are believed to have the best possible order of convergence. Examples below are the van der Corput sequence, the Halton sequences, and the Sobol sequences. One general limitation is that construction methods can usually only guarantee the order of convergence. In practice, low discrepancy is achieved only when N is large enough, and for large s this minimum N can be very large. This means that running a Monte Carlo analysis with, for example, s = 20 variables and N = 1000 points from a low-discrepancy sequence generator may offer only a very minor accuracy improvement.
Random numbers
Sequences of subrandom numbers can be generated from random numbers by imposing a negative correlation on those random numbers. One way to do this is to start with a set of random numbers r_{i} on [0, 0.5) and construct subrandom numbers s_{i} which are uniform on [0, 1) using

$$s_i = r_i \ \text{for } i \text{ odd}, \qquad s_i = 0.5 + r_i \ \text{for } i \text{ even}.$$
A second way to do it with the starting random numbers is to construct a random walk with offset 0.5, as in

$$s_i = (s_{i-1} + 0.5 + r_i) \bmod 1.$$

That is, take the previous subrandom number, add 0.5 and the random number, and take the result modulo 1.
For more than one dimension, Latin squares of the appropriate dimension can be used to provide offsets to ensure that the whole domain is covered evenly.
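Both one-dimensional constructions above take only a few lines of Python; the function names are ours, and r_{i} is drawn uniformly from [0, 0.5) as described:

```python
import random

def alternating(n):
    """First construction: r_i on [0, 0.5), shifted by 0.5 on even steps."""
    out = []
    for i in range(1, n + 1):
        r = random.uniform(0.0, 0.5)
        out.append(r if i % 2 == 1 else 0.5 + r)
    return out

def offset_walk(n, s0=0.0):
    """Second construction: random walk s_i = (s_{i-1} + 0.5 + r_i) mod 1."""
    out, s = [], s0
    for _ in range(n):
        s = (s + 0.5 + random.uniform(0.0, 0.5)) % 1.0
        out.append(s)
    return out
```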
Additive recurrence
For any irrational α, the sequence

$$s_n = \{ s_0 + n \alpha \},$$

where {x} denotes the fractional part of x, has discrepancy tending to 0. (Note that the sequence can be defined recursively by s_{n+1} = (s_{n} + α) mod 1.) A good value of α gives lower discrepancy than a sequence of independent uniform random numbers.
The discrepancy can be bounded by the approximation exponent of α. If the approximation exponent is μ, then for any ε > 0, the following bound holds:^{[1]}

$$D_N\big((s_n)\big) = O_\varepsilon\left( N^{-1/(\mu - 1) + \varepsilon} \right).$$

By the Thue–Siegel–Roth theorem, the approximation exponent of any irrational algebraic number is 2, giving a bound of N^{−1+ε} above.
The value of α with lowest discrepancy is the fractional part of the golden ratio:^{[2]}

$$\alpha = \frac{\sqrt{5} - 1}{2} \approx 0.618034.$$

Another value that is nearly as good is the fractional part of the silver ratio:

$$\alpha = \sqrt{2} - 1 \approx 0.414214.$$

(Adding any integer to α leaves the sequence unchanged, so the golden and silver ratios themselves may equally be used.)
In more than one dimension, separate subrandom numbers are needed for each dimension. In higher dimensions, one set of values that can be used is the square roots of primes from two up, all taken modulo 1:

$$\alpha = \left( \sqrt{2},\ \sqrt{3},\ \sqrt{5},\ \sqrt{7},\ \sqrt{11},\ \ldots \right)$$
The recurrence relation above is similar to the recurrence relation used by a linear congruential generator, a poor-quality pseudorandom number generator:^{[3]}

$$s_n = (a\, s_{n-1} + c) \bmod m$$
For the low-discrepancy additive recurrence above, a and m are chosen to be 1. Note, however, that this will not generate independent random numbers, so it should not be used for purposes requiring independence. The list of pseudorandom number generators lists methods for generating independent pseudorandom numbers. Note: in few dimensions the additive recurrence leads to uniform sets of good quality, but for larger s (such as s > 8) other point-set generators can offer much lower discrepancies.
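A minimal Python sketch of the additive recurrence in several dimensions, using the square roots of primes suggested above as the irrational constants (the function name and parameters are ours):

```python
from math import sqrt

def additive_recurrence(n, alphas, s0=0.0):
    """n points of the recurrence s_k = (s_0 + k * alpha) mod 1,
    with one coordinate per irrational offset in `alphas`."""
    return [[(s0 + k * a) % 1.0 for a in alphas] for k in range(1, n + 1)]

# One dimension with the golden ratio ...
golden = additive_recurrence(100, [(1 + sqrt(5)) / 2])
# ... and three dimensions with square roots of primes, as suggested above.
points_3d = additive_recurrence(100, [sqrt(2), sqrt(3), sqrt(5)])
```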
van der Corput sequence
Let

$$n = \sum_{k=0}^{L-1} d_k(n)\, b^k$$

be the b-ary representation of the positive integer n ≥ 1, i.e. 0 ≤ d_{k}(n) < b. Set

$$g_b(n) = \sum_{k=0}^{L-1} d_k(n)\, b^{-k-1}.$$

Then there is a constant C depending only on b such that (g_{b}(n))_{n ≥ 1} satisfies

$$D^{*}_{N}\big(g_b(1), \ldots, g_b(N)\big) \le C \frac{\log N}{N},$$
where D^{*}_{N} is the star discrepancy.
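The function g_{b} simply mirrors the base-b digits of n about the radix point, which leads to a short implementation; this Python sketch (function name ours) is one common way to write it:

```python
def van_der_corput(n, base=2):
    """Radical inverse g_b(n): reflect the base-b digits of n about the point."""
    result, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        result += digit / denom
    return result

# First few elements in base 2:
print([van_der_corput(n) for n in range(1, 5)])  # [0.5, 0.25, 0.75, 0.125]
```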
Halton sequence
The Halton sequence is a natural generalization of the van der Corput sequence to higher dimensions. Let s be an arbitrary dimension and b_{1}, ..., b_{s} be arbitrary pairwise coprime integers greater than 1. Define

$$x(n) = \big( g_{b_1}(n), \ldots, g_{b_s}(n) \big).$$

Then there is a constant C depending only on b_{1}, ..., b_{s}, such that {x(n)}_{n≥1} is an s-dimensional sequence with

$$D^{*}_{N}\big(x(1), \ldots, x(N)\big) \le C \frac{(\log N)^{s}}{N}.$$
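Reusing the van der Corput sketch above, a Halton generator needs only one radical inverse per dimension; the bases (2, 3, 5) below are the usual first three primes, though any pairwise coprime bases work:

```python
def halton(n_points, bases=(2, 3, 5)):
    """First n_points elements of the Halton sequence for the given coprime bases.
    Uses van_der_corput() from the sketch in the van der Corput section."""
    return [[van_der_corput(n, b) for b in bases]
            for n in range(1, n_points + 1)]
```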
Hammersley set
Let b_{1},...,b_{s−1} be pairwise coprime positive integers greater than 1. For given s and N, the s-dimensional Hammersley set of size N is defined by^{[4]}

$$x(n) = \left( g_{b_1}(n), \ldots, g_{b_{s-1}}(n), \frac{n}{N} \right)$$

for n = 1, ..., N. Then

$$D^{*}_{N}\big(x(1), \ldots, x(N)\big) \le C \frac{(\log N)^{s-1}}{N},$$
where C is a constant depending only on b_{1}, ..., b_{s−1}. Note: the formulas show that the Hammersley set is essentially a Halton sequence, except that we get one more dimension for free by adding a linear sweep. This is only possible if N is known up front. A linear set is also the set with the lowest possible one-dimensional discrepancy in general. Unfortunately, for higher dimensions, no such "discrepancy record sets" are known. For s = 2, most low-discrepancy point-set generators deliver at least near-optimum discrepancies.
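The Hammersley set is then the same construction with the final coordinate replaced by the linear sweep n/N; a sketch (again reusing van_der_corput, and requiring N fixed in advance):

```python
def hammersley(n_points, bases=(2, 3)):
    """s-dimensional Hammersley set: len(bases) radical-inverse coordinates
    plus the linear sweep n/N as the last coordinate (here s = 3)."""
    return [[van_der_corput(n, b) for b in bases] + [n / n_points]
            for n in range(1, n_points + 1)]
```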
Sobol sequence
The Antonov–Saleev variant of the Sobol sequence generates numbers between zero and one directly as binary fractions of length w, from a set of w special binary fractions V_{i}, i = 1, 2, ..., w, called direction numbers. The bits of the Gray code of i, G(i), are used to select direction numbers. To obtain the Sobol sequence value s_{i}, take the exclusive or of the binary value of the Gray code of i with the appropriate direction number. The number of dimensions required affects the choice of V_{i}.
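The following one-dimensional Python sketch illustrates only the Gray-code update; its trivial direction numbers V_{k} = 2^{−(k+1)} are our placeholder (with this choice the output is just the base-2 van der Corput points visited in Gray-code order), whereas real Sobol generators derive a separate set of direction numbers per dimension from primitive polynomials:

```python
W = 32  # word length: outputs are binary fractions of length W

def sobol_1d(n_points, direction=None):
    """1-D Sobol-style points via the Antonov-Saleev Gray-code update.

    x_n = x_{n-1} XOR V_c, where c is the position of the lowest zero bit
    of n-1 (the bit in which the Gray codes G(n-1) and G(n) differ).
    """
    if direction is None:
        direction = [1 << (W - 1 - k) for k in range(W)]  # placeholder V_k = 2^-(k+1)
    x, out = 0, []
    for n in range(n_points):
        c = (~n & (n + 1)).bit_length() - 1  # index of lowest zero bit of n
        x ^= direction[c]
        out.append(x / 2.0 ** W)
    return out

print(sobol_1d(4))  # [0.5, 0.75, 0.25, 0.375]
```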
Poisson disk sampling
Poisson disk sampling is popular in video games for rapidly placing objects in a way that appears random-looking but guarantees that every two points are separated by at least the specified minimum distance.^{[5]} This does not guarantee low discrepancy (as, e.g., Sobol does), but it does yield a significantly lower discrepancy than pure random sampling.
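The simplest, though inefficient, way to generate such a set is rejection ("dart throwing"): propose uniform random candidates and discard any that fall within the minimum distance of an accepted point. This Python sketch is ours; production code usually uses Bridson's grid-accelerated algorithm instead:

```python
import random

def poisson_disk_dart_throwing(radius, n_target, max_tries=10000):
    """Rejection ('dart throwing') Poisson disk sampling on the unit square."""
    points, tries = [], 0
    while len(points) < n_target and tries < max_tries:
        tries += 1
        candidate = (random.random(), random.random())
        # Accept only if at least `radius` away from every accepted point.
        if all((candidate[0] - p[0]) ** 2 + (candidate[1] - p[1]) ** 2
               >= radius ** 2 for p in points):
            points.append(candidate)
    return points
```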
Graphical examples
The points plotted below are the first 100, 1000, and 10000 elements in a sequence of the Sobol' type. For comparison, 10000 elements of a sequence of pseudorandom points are also shown. The lowdiscrepancy sequence was generated by TOMS algorithm 659.^{[6]} An implementation of the algorithm in Fortran is available from Netlib.
References
 ^ Kuipers and Niederreiter, 2005, p. 123
 ^ "Subrandom numbers", http://mollwollfumble.blogspot.com/
 ^ Donald E. Knuth The Art of Computer Programming Vol. 2, Ch. 3
 ^ Hammersley, J. M.; Handscomb, D. C. (1964). Monte Carlo Methods. doi:10.1007/9789400958197.
 ^ Herman Tulleken. "Poisson Disk Sampling". Dev.Mag Issue 21, March 2008.
 ^ P. Bratley and B. L. Fox, "Algorithm 659: Implementing Sobol's Quasirandom Sequence Generator", ACM Transactions on Mathematical Software, vol. 14, no. 1, pp. 88–100
 Josef Dick and Friedrich Pillichshammer, Digital Nets and Sequences. Discrepancy Theory and QuasiMonte Carlo Integration, Cambridge University Press, Cambridge, 2010, ISBN 9780521191593
 Kuipers, L.; Niederreiter, H. (2005), Uniform distribution of sequences, Dover Publications, ISBN 0486450198
 Harald Niederreiter. Random Number Generation and QuasiMonte Carlo Methods. Society for Industrial and Applied Mathematics, 1992. ISBN 0898712955
 Michael Drmota and Robert F. Tichy, Sequences, discrepancies and applications, Lecture Notes in Math., 1651, Springer, Berlin, 1997, ISBN 3540626069
 William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling. Numerical Recipes in C. Cambridge, UK: Cambridge University Press, second edition 1992. ISBN 0521431085 (see Section 7.7 for a less technical discussion of lowdiscrepancy sequences)
External links
 Collected Algorithms of the ACM (See algorithms 647, 659, and 738.)
 QuasiRandom Sequences from the GNU Scientific Library
 Quasirandom sampling subject to constraints at FinancialMathematics.Com
 C++ generator of Sobol sequence