Nonparametric regression


Nonparametric regression is a category of regression analysis in which the predictor does not take a predetermined form but is constructed according to information derived from the data. That is, no parametric form is assumed for the relationship between predictors and dependent variable. Nonparametric regression requires larger sample sizes than regression based on parametric models because the data must supply the model structure as well as the model estimates.

Definition

In nonparametric regression, we have random variables X and Y and assume the following relationship:

    E[Y | X = x] = m(x),

where m(x) is some deterministic function. Linear regression is a restricted case of nonparametric regression in which m(x) is assumed to be affine. Some authors use a slightly stronger assumption of additive noise:

    Y = m(X) + U,

where the random variable U is the 'noise term', with mean 0. Without the assumption that m belongs to a specific parametric family of functions, it is impossible to obtain an unbiased estimate of m; however, most estimators are consistent under suitable conditions.
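
As a concrete illustration (not part of the article), the Python sketch below simulates data from the additive-noise model Y = m(X) + U for a hypothetical m and recovers m with a simple local average; the sample size, noise level, grid, and window width are all illustrative assumptions.

    # Minimal sketch: simulate Y = m(X) + U and estimate m(x) by local averaging.
    # The true function m, the noise level, and the window width h are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.uniform(0.0, 1.0, n)            # predictor X
    m = lambda t: np.sin(2 * np.pi * t)     # hypothetical true regression function
    y = m(x) + rng.normal(0.0, 0.3, n)      # additive noise U with mean 0

    # Estimate m on a grid by averaging the responses of nearby observations.
    grid = np.linspace(0.0, 1.0, 21)
    h = 0.05                                # window half-width (bandwidth-like tuning parameter)
    m_hat = np.array([y[np.abs(x - g) < h].mean() for g in grid])

As the sample size grows and the window shrinks appropriately, such local averages become consistent estimators of m, in line with the remark above.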

List of general-purpose nonparametric regression algorithms

This is a non-exhaustive list of nonparametric models for regression.

Examples

Gaussian process regression or Kriging

In Gaussian process regression, also known as Kriging, a Gaussian prior is assumed for the regression curve. The errors are assumed to have a multivariate normal distribution, and the regression curve is estimated by its posterior mode. The Gaussian prior may depend on unknown hyperparameters, which are usually estimated via empirical Bayes. The hyperparameters typically specify a prior covariance kernel. If the kernel itself is also to be inferred nonparametrically from the data, the critical filter can be used.

Smoothing splines have an interpretation as the posterior mode of a Gaussian process regression.
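
As a hedged illustration (not the article's own material), the Python sketch below computes the posterior mean of a Gaussian process regression under an assumed squared-exponential (RBF) covariance kernel with fixed hyperparameters; in practice the length scale, signal variance, and noise variance would typically be chosen by empirical Bayes.

    # Minimal sketch: Gaussian process regression posterior mean with an RBF kernel.
    # Kernel choice and hyperparameter values are illustrative assumptions.
    import numpy as np

    def rbf_kernel(a, b, length_scale=0.2, signal_var=1.0):
        # Squared-exponential covariance between 1-D input arrays a and b.
        d = a[:, None] - b[None, :]
        return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

    def gp_posterior_mean(x_train, y_train, x_test, noise_var=0.1):
        # Posterior mean of the regression curve: K_s (K + noise * I)^{-1} y.
        K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
        K_s = rbf_kernel(x_test, x_train)
        return K_s @ np.linalg.solve(K, y_train)

With a Gaussian likelihood and Gaussian prior, the posterior over the curve is again Gaussian, so the posterior mode mentioned above coincides with the posterior mean computed here.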

Kernel regression

Example of a curve (red line) fit to a small data set (black points) with nonparametric regression using a Gaussian kernel smoother. The pink shaded area illustrates the kernel function applied to obtain an estimate of y for a given value of x. The kernel function defines the weight given to each data point in producing the estimate for a target point.

Kernel regression estimates the continuous dependent variable from a limited set of data points by convolving the data points' locations with a kernel function—approximately speaking, the kernel function specifies how to "blur" the influence of the data points so that their values can be used to predict the value for nearby locations.
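
A minimal sketch of this idea, often called the Nadaraya–Watson estimator, is given below in Python with a Gaussian kernel as in the figure; the bandwidth h is an assumed tuning parameter that controls how strongly the data are "blurred".

    # Minimal sketch: Gaussian-kernel regression (Nadaraya-Watson form).
    # The bandwidth h is an assumed tuning parameter.
    import numpy as np

    def kernel_regression(x_train, y_train, x_query, h=0.1):
        # Weight each observation by a Gaussian in its distance to the query point,
        # then return the weighted average of the observed responses.
        w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
        return (w * y_train).sum(axis=1) / w.sum(axis=1)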

Regression trees

Decision tree learning algorithms can be applied to learn to predict a dependent variable from data.[2] Although the original Classification And Regression Tree (CART) formulation applied only to predicting univariate data, the framework can be used to predict multivariate data, including time series.[3]
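
As an illustrative sketch (the library and settings are assumptions, not part of the article), the Python snippet below fits a CART-style regression tree to simulated univariate data with scikit-learn's DecisionTreeRegressor, producing a piecewise-constant estimate of the regression function.

    # Minimal sketch: a CART-style regression tree via scikit-learn (assumed dependency).
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, (200, 1))                   # single predictor, shape (n, 1)
    y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0.0, 0.2, 200)

    tree = DecisionTreeRegressor(max_depth=3).fit(X, y)   # depth limit is an illustrative choice
    y_hat = tree.predict(X)                               # piecewise-constant predictions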

References

  1. ^ Cherkassky, Vladimir; Mulier, Filip (1994). Cheeseman, P.; Oldford, R. W. (eds.). "Statistical and neural network techniques for nonparametric regression". Selecting Models from Data. Lecture Notes in Statistics. New York, NY: Springer: 383–392. doi:10.1007/978-1-4612-2660-4_39. ISBN 978-1-4612-2660-4.
  2. ^ Breiman, Leo; Friedman, J. H.; Olshen, R. A.; Stone, C. J. (1984). Classification and regression trees. Monterey, CA: Wadsworth & Brooks/Cole Advanced Books & Software. ISBN 978-0-412-04841-8.
  3. ^ Segal, M.R. (1992). "Tree-structured methods for longitudinal data". Journal of the American Statistical Association. 87 (418). American Statistical Association, Taylor & Francis: 407–418. doi:10.2307/2290271. JSTOR 2290271.
