Probably approximately correct learning


In computational learning theory, probably approximately correct (PAC) learning is a framework for mathematical analysis of machine learning. It was proposed in 1984 by Leslie Valiant.[1]

In this framework, the learner receives samples and must select a generalization function (called the hypothesis) from a certain class of possible functions. The goal is that, with high probability (the "probably" part), the selected function will have low generalization error (the "approximately correct" part). The learner must be able to learn the concept given any arbitrary approximation ratio, probability of success, or distribution of the samples.
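Stated as a formula (a standard rendering of this requirement, not quoted from the article): for every target concept c, every distribution D over the examples, and every ε, δ ∈ (0, 1), the learner must, from a sample of polynomially many examples drawn from D, output a hypothesis h satisfying

  Pr[ err_D(h) ≤ ε ] ≥ 1 − δ,   where   err_D(h) = Pr_{x ∼ D}[ h(x) ≠ c(x) ].

Here ε is the "approximately correct" parameter, δ is the "probably" parameter, and the outer probability is taken over the random draw of the sample.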

The model was later extended to treat noise (misclassified samples).


An important innovation of the PAC framework is the introduction of computational complexity theory concepts to machine learning. In particular, the learner is expected to find efficient functions (time and space requirements bounded by a polynomial in the example size), and the learner itself must implement an efficient procedure (requiring a number of examples bounded by a polynomial in the concept size, modified by the approximation and likelihood bounds).
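As an illustration of such a polynomial bound (a standard result for finite concept classes, given here only as a sketch and not taken from this article): any learner that outputs a hypothesis consistent with its sample PAC-learns a finite class C as soon as the sample size m satisfies

  m ≥ (1/ε) · ( ln|C| + ln(1/δ) ),

which is polynomial in 1/ε, 1/δ, and ln|C|, matching the requirement that the number of examples grow only polynomially in the relevant parameters.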


Definitions and terminology

In order to define what it means for something to be PAC-learnable, we first have to introduce some terminology.[2]

For the following definitions, two examples will be used. The first is the problem of character recognition given an array of bits encoding a binary-valued image. The other is the problem of finding an interval that will correctly classify points within the interval as positive and points outside of the interval as negative.

Let X be a set called the instance space or the encoding of all the samples. In the character recognition problem, the instance space is X = {0,1}^n. In the interval problem the instance space, X, is the set of all bounded intervals in ℝ, where ℝ denotes the set of all real numbers.

A concept is a subset c ⊆ X. One concept is the set of all patterns of bits in X = {0,1}^n that encode a picture of the letter "P". An example concept from the second example is the set of open intervals (a, b) that contain only the positive points. A concept class C is a collection of concepts over X. This could be the set of all subsets of the array of bits that are skeletonized 4-connected (width of the font is 1).

Let EX(c, D) be a procedure that draws an example, x, using a probability distribution D and gives the correct label c(x), that is 1 if x ∈ c and 0 otherwise.
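As a concrete illustration (not part of the article), the oracle for the interval example might look like the following Python sketch; the target interval, the uniform distribution on [0, 1], and the name ex_oracle are hypothetical choices made only for illustration.

  import random

  # Hypothetical target concept c for the interval example: the open interval (0.2, 0.7).
  # Both the interval and the distribution D (uniform on [0, 1]) are made up for illustration.
  TARGET = (0.2, 0.7)

  def ex_oracle():
      """Sketch of EX(c, D): draw x according to D and return the pair (x, c(x))."""
      x = random.uniform(0.0, 1.0)                 # x ~ D
      return x, int(TARGET[0] < x < TARGET[1])     # c(x) = 1 iff x lies in the target interval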

Now, given 0 < ε < 1 and 0 < δ < 1, assume there is an algorithm A and a polynomial p in 1/ε, 1/δ (and other relevant parameters of the class C) such that, given a sample of size p drawn according to EX(c, D), then, with probability of at least 1 − δ, A outputs a hypothesis h that has average error less than or equal to ε on X with the same distribution D. Further, if the above statement for algorithm A is true for every concept c ∈ C, for every distribution D over X, and for all 0 < ε, δ < 1, then C is (efficiently) PAC learnable (or distribution-free PAC learnable). We can also say that A is a PAC learning algorithm for C.
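Continuing the interval example, a minimal Python sketch of such an algorithm (an illustration, not the article's) is to output the tightest interval containing the positive examples. The target concept, distribution, sample size, and Monte Carlo error estimate are all arbitrary choices, and the oracle is repeated so the snippet runs on its own.

  import random

  TARGET = (0.2, 0.7)                              # hypothetical target concept, as in the sketch above

  def ex_oracle():
      x = random.uniform(0.0, 1.0)                 # x ~ D (uniform on [0, 1])
      return x, int(TARGET[0] < x < TARGET[1])     # correct label c(x)

  def learn_interval(m):
      """Return the tightest interval containing the positive examples in a sample of size m."""
      sample = [ex_oracle() for _ in range(m)]
      positives = [x for x, y in sample if y == 1]
      if not positives:                            # no positive example seen: predict an empty interval
          return (0.0, 0.0)
      return (min(positives), max(positives))

  def estimated_error(h, trials=100000):
      """Monte Carlo estimate of err_D(h) = Pr_{x ~ D}[h(x) != c(x)]."""
      mistakes = 0
      for _ in range(trials):
          x, y = ex_oracle()
          mistakes += int((h[0] <= x <= h[1]) != bool(y))
      return mistakes / trials

  h = learn_interval(m=500)                        # sample size chosen ad hoc
  print("hypothesis:", h, "estimated error:", estimated_error(h))

With enough samples the tightest-fit interval misclassifies only a thin sliver near each endpoint of the target, which is why its error can be driven below any ε with probability at least 1 − δ.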

Equivalence

Under some regularity conditions, the following conditions are equivalent (a quantitative form of the link between items 1 and 2 is sketched after the list):[3]

  1. The concept class C is PAC learnable.
  2. The VC dimension of C is finite.
  3. C is a uniformly Glivenko–Cantelli class.
  4. C is compressible in the sense of Littlestone and Warmuth.
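As a quantitative counterpart to the first two items (a standard bound in the spirit of reference [3], given here only as an illustration): if C has VC dimension d, then in the realizable setting a sample of size

  m = O( (1/ε) · ( d · ln(1/ε) + ln(1/δ) ) )

suffices to PAC-learn C, so finiteness of the VC dimension translates directly into a polynomial bound on the number of examples required.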


References

  1. ^ Valiant, L. (1984). "A Theory of the Learnable". Communications of the ACM. 27 (11): 1134–1142.
  2. ^ Kearns, M.; Vazirani, U. (1994). An Introduction to Computational Learning Theory. MIT Press. pp. 1–12.
  3. ^ Blumer, Anselm; Ehrenfeucht, Andrzej; Haussler, David; Warmuth, Manfred (October 1989). "Learnability and the Vapnik–Chervonenkis Dimension". Journal of the Association for Computing Machinery. 36 (4): 929–965. doi:10.1145/76359.76371. S2CID 1138467.
