Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
In estimation theory, two approaches are generally considered:^{[1]}
 The probabilistic approach (described in this article) assumes that the measured data is random, with a probability distribution dependent on the parameters of interest.
 The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
Examples
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
Basics
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector,
$$\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}.$$
Secondly, there are M parameters
$$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix}$$
whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:
$$p(\mathbf{x} \mid \boldsymbol{\theta}).$$
It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability
$$\pi(\boldsymbol{\theta}).$$
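When such a prior $\pi(\boldsymbol{\theta})$ is available, Bayes' rule (stated here for completeness) gives the posterior density from which Bayesian estimators such as MMSE and MAP are derived:
$$p(\boldsymbol{\theta} \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \boldsymbol{\theta})\, \pi(\boldsymbol{\theta})}{\int p(\mathbf{x} \mid \boldsymbol{\theta}')\, \pi(\boldsymbol{\theta}')\, \mathrm{d}\boldsymbol{\theta}'}.$$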
After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted $\hat{\boldsymbol{\theta}}$, where the "hat" indicates the estimate.
One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters
$$\mathbf{e} = \hat{\boldsymbol{\theta}} - \boldsymbol{\theta}$$
as the basis for optimality. This error term is then squared, and the expected value of the squared error is minimized for the MMSE estimator:
$$\hat{\boldsymbol{\theta}}_{\mathrm{MMSE}} = \arg\min_{\hat{\boldsymbol{\theta}}} \mathrm{E}\left[(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})^T (\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})\right].$$
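As a quick numerical illustration, the expected squared error of a candidate estimator can be approximated by Monte Carlo. This is a minimal sketch; the Gaussian model, the true parameter value, and the choice of the sample mean as the estimator are illustrative assumptions, not part of the text above.

```python
import numpy as np

# Minimal sketch: approximate the MSE of an estimator by Monte Carlo.
# theta_true, sigma, N, and the sample-mean estimator are illustrative
# assumptions chosen for this demo.
rng = np.random.default_rng(0)
theta_true = 3.0            # "unknown" parameter (known here, for simulation)
sigma, N, trials = 1.0, 25, 10_000

estimates = np.empty(trials)
for t in range(trials):
    x = theta_true + sigma * rng.standard_normal(N)  # noisy measurements
    estimates[t] = x.mean()                          # candidate estimator

mse = np.mean((estimates - theta_true) ** 2)  # E[(theta_hat - theta)^2]
print(f"Monte Carlo MSE: {mse:.4f}  (theory for the mean: {sigma**2 / N:.4f})")
```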
Estimators
Commonly used estimators (estimation methods) and topics related to them include:
 Maximum likelihood estimators
 Bayes estimators
 Method of moments estimators
 Cramér–Rao bound
 Least squares
 Minimum mean squared error (MMSE), also known as Bayes least squared error (BLSE)
 Maximum a posteriori (MAP)
 Minimum variance unbiased estimator (MVUE)
 Nonlinear system identification
 Best linear unbiased estimator (BLUE)
 Unbiased estimators — see estimator bias.
 Particle filter
 Markov chain Monte Carlo (MCMC)
 Kalman filter, and its various derivatives
 Wiener filter
Examples
Unknown constant in additive white Gaussian noise
Consider a received discrete signal, $x[n]$, of $N$ independent samples that consists of an unknown constant $A$ with additive white Gaussian noise (AWGN) $w[n]$ with zero mean and known variance $\sigma^2$ (i.e., $\mathcal{N}(0, \sigma^2)$). Since the variance is known, the only unknown parameter is $A$.
The model for the signal is then
$$x[n] = A + w[n], \quad n = 0, 1, \dots, N-1.$$
Two possible (of many) estimators for the parameter $A$ are:
 $\hat{A}_1 = x[0]$
 $\hat{A}_2 = \frac{1}{N} \sum_{n=0}^{N-1} x[n]$, which is the sample mean
Both of these estimators have a mean of $A$, which can be shown through taking the expected value of each estimator
$$\mathrm{E}\left[\hat{A}_1\right] = \mathrm{E}\left[x[0]\right] = A$$
and
$$\mathrm{E}\left[\hat{A}_2\right] = \mathrm{E}\left[\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N} \sum_{n=0}^{N-1} \mathrm{E}\left[x[n]\right] = \frac{1}{N} \left[N A\right] = A.$$
At this point, these two estimators would appear to perform the same. However, the difference between them becomes apparent when comparing the variances:
$$\mathrm{var}\left(\hat{A}_1\right) = \mathrm{var}\left(x[0]\right) = \sigma^2$$
and
$$\mathrm{var}\left(\hat{A}_2\right) = \mathrm{var}\left(\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right) \overset{\text{independence}}{=} \frac{1}{N^2} \sum_{n=0}^{N-1} \mathrm{var}\left(x[n]\right) = \frac{1}{N^2} \left[N \sigma^2\right] = \frac{\sigma^2}{N}.$$
It would seem that the sample mean is a better estimator, since its variance is lower for every N > 1.
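The comparison is easy to reproduce numerically. The following sketch, with illustrative values for A, σ, and N, simulates the model $x[n] = A + w[n]$ over many trials and estimates the variance of both estimators:

```python
import numpy as np

# Sketch comparing the two estimators above; A, sigma, N are illustrative.
# Each trial draws x[n] = A + w[n]; both estimators are applied per trial
# and their empirical variances compared to theory.
rng = np.random.default_rng(1)
A, sigma, N, trials = 5.0, 2.0, 50, 20_000

x = A + sigma * rng.standard_normal((trials, N))
A1 = x[:, 0]         # first-sample estimator
A2 = x.mean(axis=1)  # sample-mean estimator

print(f"var(A1) ~ {A1.var():.3f}  (theory: {sigma**2:.3f})")
print(f"var(A2) ~ {A2.var():.3f}  (theory: {sigma**2 / N:.3f})")
```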
Maximum likelihood
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample $w[n]$ is
$$p(w[n]) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{w[n]^2}{2\sigma^2}\right)$$
and the probability of $x[n]$ becomes ($x[n]$ can be thought of as being distributed $\mathcal{N}(A, \sigma^2)$)
$$p(x[n]) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x[n]-A)^2}{2\sigma^2}\right).$$
By independence, the probability of $\mathbf{x}$ becomes
$$p(\mathbf{x}; A) = \prod_{n=0}^{N-1} p(x[n]) = \frac{1}{\left(\sigma\sqrt{2\pi}\right)^{N}} \exp\left(-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n]-A)^2\right).$$
Taking the natural logarithm of the pdf
$$\ln p(\mathbf{x}; A) = -N \ln\left(\sigma\sqrt{2\pi}\right) - \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n]-A)^2,$$
the maximum likelihood estimator is
$$\hat{A} = \arg\max_A \ln p(\mathbf{x}; A).$$
Taking the first derivative of the log-likelihood function
$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} (x[n]-A) = \frac{1}{\sigma^2} \left[\sum_{n=0}^{N-1} x[n] - N A\right]$$
and setting it to zero
$$0 = \frac{1}{\sigma^2} \left[\sum_{n=0}^{N-1} x[n] - N \hat{A}\right]$$
results in the maximum likelihood estimator
$$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n],$$
which is simply the sample mean. From this example, it was found that the sample mean is the maximum likelihood estimator for $N$ samples of a fixed, unknown parameter corrupted by AWGN.
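As a sanity check (a sketch with illustrative parameter values), the log-likelihood can be evaluated on a grid of candidate values of A; its maximizer should agree with the sample mean up to the grid resolution:

```python
import numpy as np

# Sketch: numerically maximize the Gaussian log-likelihood over a grid
# of candidate A values; the peak should match the sample mean derived
# in closed form above. A, sigma, N are illustrative assumptions.
rng = np.random.default_rng(2)
A, sigma, N = 5.0, 2.0, 100
x = A + sigma * rng.standard_normal(N)

candidates = np.linspace(x.min(), x.max(), 10_001)
# Log-likelihood up to an additive constant: -(1/(2 sigma^2)) sum (x[n]-A)^2
loglik = -((x[:, None] - candidates[None, :]) ** 2).sum(axis=0) / (2 * sigma**2)

print(f"grid argmax:  {candidates[loglik.argmax()]:.4f}")  # matches to grid step
print(f"sample mean:  {x.mean():.4f}")
```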
Cramér–Rao lower bound
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number
$$\mathcal{I}(A) = \mathrm{E}\left[\left(\frac{\partial}{\partial A} \ln p(\mathbf{x}; A)\right)^2\right] = -\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right]$$
and copying from above
$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \left[\sum_{n=0}^{N-1} x[n] - N A\right].$$
Taking the second derivative
$$\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2}(-N) = -\frac{N}{\sigma^2},$$
finding the negative expected value is trivial since it is now a deterministic constant:
$$-\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right] = \frac{N}{\sigma^2}.$$
Finally, putting the Fisher information into
$$\mathrm{var}\left(\hat{A}\right) \geq \frac{1}{\mathcal{I}(A)}$$
results in
$$\mathrm{var}\left(\hat{A}\right) \geq \frac{\sigma^2}{N}.$$
Comparing this to the variance of the sample mean (determined previously) shows that the sample mean is equal to the Cramér–Rao lower bound for all values of $N$ and $\sigma^2$. In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
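A short simulation (again with illustrative values of A and σ) shows the empirical variance of the sample mean tracking the bound $\sigma^2/N$ across several sample sizes:

```python
import numpy as np

# Sketch: empirical variance of the sample mean vs. the CRLB sigma^2/N.
# A and sigma are illustrative assumptions.
rng = np.random.default_rng(3)
A, sigma, trials = 5.0, 2.0, 50_000

for N in (1, 10, 100):
    means = (A + sigma * rng.standard_normal((trials, N))).mean(axis=1)
    print(f"N={N:>3}: var={means.var():.4f}  CRLB={sigma**2 / N:.4f}")
```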
Maximum of a uniform distribution
One of the simplest nontrivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution $1, 2, \dots, N$ with unknown maximum $N$, the UMVU estimator for the maximum is given by
$$\hat{N} = \frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1,$$
where m is the sample maximum and k is the sample size, sampling without replacement.^{[2]}^{[3]} This problem is commonly known as the German tank problem, due to application of maximum estimation to estimates of German tank production during World War II.
The formula may be understood intuitively as:
 "The sample maximum plus the average gap between observations in the sample",
the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.^{[note 1]}
This has a variance of^{[2]}
$$\mathrm{var}\left(\hat{N}\right) = \frac{1}{k} \frac{(N-k)(N+1)}{k+2} \approx \frac{N^2}{k^2} \text{ for small samples } k \ll N,$$
so a standard deviation of approximately $N/k$, the (population) average size of a gap between samples; compare $\frac{m}{k}$ above. This can be seen as a very simple case of maximum spacing estimation.
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
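The following sketch (with an assumed true maximum N and sample size k, both illustrative) draws serial numbers without replacement and compares the biased sample maximum with the UMVU estimator $m + m/k - 1$:

```python
import numpy as np

# Sketch of the German tank problem: compare the biased MLE (sample
# maximum) with the UMVU estimator m + m/k - 1. N_true and k are
# illustrative assumptions; sampling is without replacement.
rng = np.random.default_rng(4)
N_true, k, trials = 1000, 15, 20_000

mle, umvu = np.empty(trials), np.empty(trials)
for t in range(trials):
    # draw k distinct serial numbers from 1..N_true, take the maximum
    m = rng.choice(N_true, size=k, replace=False).max() + 1
    mle[t] = m
    umvu[t] = m + m / k - 1

print(f"MLE  mean: {mle.mean():8.1f}  (biased low; true N = {N_true})")
print(f"UMVU mean: {umvu.mean():8.1f}  (approximately unbiased)")
```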
Applications
Numerous fields require the use of estimation theory. Some of these fields include (but are by no means limited to):
 Interpretation of scientific experiments
 Signal processing
 Clinical trials
 Opinion polls
 Quality control
 Telecommunications
 Project management
 Software engineering
 Control theory (in particular Adaptive control)
 Network intrusion detection system
 Orbit determination
Measured data are likely to be subject to noise or uncertainty and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.
See also
 Best linear unbiased estimator (BLUE)
 Chebyshev center
 Completeness (statistics)
 Cramér–Rao bound
 Detection theory
 Efficiency (statistics)
 Estimator, Estimator bias
 Expectation-maximization algorithm (EM algorithm)
 Fermi problem
 Grey box model
 Information theory
 Kalman filter
 Least-squares spectral analysis
 Markov chain Monte Carlo (MCMC)
 Matched filter
 Maximum a posteriori (MAP)
 Maximum likelihood
 Maximum entropy spectral estimation
 Method of moments, generalized method of moments
 Minimum mean squared error (MMSE)
 Minimum variance unbiased estimator (MVUE)
 Nonlinear system identification
 Nuisance parameter
 Parametric equation
 Pareto principle
 Particle filter
 Rao–Blackwell theorem
 Rule of three (statistics)
 Spectral density, Spectral density estimation
 Statistical signal processing
 Sufficiency (statistics)
 Wiener filter
Notes
 ^ The sample maximum is never more than the population maximum, but can be less, hence it is a biased estimator: it will tend to underestimate the population maximum.
References
Citations
 ^ Walter, E.; Pronzato, L. (1997). Identification of Parametric Models from Experimental Data. London, England: Springer-Verlag.
 ^ ^{a} ^{b} Johnson, Roger (1994), "Estimating the Size of a Population", Teaching Statistics, 16 (2 (Summer)): 50, doi:10.1111/j.1467-9639.1994.tb00688.x
 ^ Johnson, Roger (2006), "Estimating the Size of a Population", Getting the Best from Teaching Statistics, archived from the original (PDF) on November 20, 2008
Sources
 Theory of Point Estimation by E.L. Lehmann and G. Casella. (ISBN 0387985026)
 Systems Cost Engineering by Dale Shermon. (ISBN 9780566088612)
 Mathematical Statistics and Data Analysis by John Rice. (ISBN 0534209343)
 Fundamentals of Statistical Signal Processing: Estimation Theory by Steven M. Kay (ISBN 0133457117)
 An Introduction to Signal Detection and Estimation by H. Vincent Poor (ISBN 0387941738)
 Detection, Estimation, and Modulation Theory, Part 1 by Harry L. Van Trees (ISBN 0471095176)
 Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches by Dan Simon
 Ali H. Sayed, Adaptive Filters, Wiley, NJ, 2008, ISBN 9780470253885.
 Ali H. Sayed, Fundamentals of Adaptive Filtering, Wiley, NJ, 2003, ISBN 0471461261.
 Thomas Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, PrenticeHall, NJ, 2000, ISBN 9780130224644.
 Babak Hassibi, Ali H. Sayed, and Thomas Kailath, Indefinite Quadratic Estimation and Control: A Unified Approach to H2 and H∞ Theories, Society for Industrial & Applied Mathematics (SIAM), PA, 1999, ISBN 9780898714111.
 V.G.Voinov, M.S.Nikulin, "Unbiased estimators and their applications. Vol.1: Univariate case", Kluwer Academic Publishers, 1993, ISBN 0792323823.
 V.G.Voinov, M.S.Nikulin, "Unbiased estimators and their applications. Vol.2: Multivariate case", Kluwer Academic Publishers, 1996, ISBN 0792339398.