
Approximation error

From Wikipedia, the free encyclopedia

Graph of a function (blue) with its linear approximation (red) at a = 0. The approximation error is the gap between the curves, and it increases for x values further from 0.

The approximation error in a data value is the discrepancy between an exact value and some approximation to it. This error can be expressed as an absolute error (the numerical amount of the discrepancy) or as a relative error (the absolute error divided by the data value).

An approximation error can occur for a variety of reasons, among them the finite precision of machine arithmetic or measurement error (e.g. the actual length of a piece of paper is 4.53 cm, but the ruler only allows you to estimate it to the nearest 0.1 cm, so you measure it as 4.5 cm).

In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates the extent to which errors in the input of the algorithm will lead to large errors in the output; a numerically stable algorithm does not magnify small input errors into large output errors, whereas an unstable one may.[1]
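As an illustration of (in)stability (a standard textbook example, not taken from this article), consider evaluating (1 − cos x)/x² for small x. The direct formula subtracts two nearly equal numbers and loses accuracy, while an algebraically equivalent rewrite stays close to the true limit of 1/2:

```python
import math

def one_minus_cos_naive(x):
    # Direct formula: for small x, 1 - cos(x) subtracts two nearly
    # equal numbers (catastrophic cancellation), destroying accuracy.
    return (1.0 - math.cos(x)) / x**2

def one_minus_cos_stable(x):
    # Equivalent rewrite via the half-angle identity
    # 1 - cos(x) = 2 * sin(x/2)**2, which avoids the subtraction.
    return 2.0 * math.sin(x / 2.0)**2 / x**2

x = 1e-8
print(one_minus_cos_naive(x))   # far from the true limit 0.5
print(one_minus_cos_stable(x))  # close to 0.5
```

Both functions compute the same mathematical quantity; only the order of floating-point operations differs, which is exactly what numerical stability is about.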

YouTube Encyclopedic

  • Approximate Error
  • APPROXIMATION AND ERRORS|ABSOLUTE ERROR|RELATIVE ERROR| PERCENTAGE ERROR WORKED EXAMPLES
  • Approximation and Errors - Form 3 Mathematics EasyElimu

Transcription

In this segment we're going to talk about approximate errors. The reason we need to talk about approximate errors is that we won't have the luxury of knowing what the true values are, so somehow we have to calculate what the approximate errors are. So let's first define what approximate error is, and then we will talk about how to use it in an example. The approximate error, denoted by Ea, is defined as the present approximation minus the previous approximation. When you are using numerical methods you won't have the privilege of knowing the exact value, otherwise you wouldn't be using numerical methods, so there has to be some mechanism of comparing a present approximation with a previous one to be able to gauge how much error you are getting. Only then will you be able to tell people how good or bad your answer is, and you will have to depend on the numerical results themselves. So let's take an example that makes clear what we mean by approximate errors; there are many, many examples you could choose.
Suppose somebody says: I want you to use this approximate formula to calculate the derivative of a function. We know that delta x has to approach 0 if we want the exact value, but if we want an approximate value, we have to choose delta x to be a finite number. They want you to use this for a particular function, f of x equals 7 e to the power 0.5 x, and to calculate the derivative of the function at 2, first with delta x equal to 0.3 and then with delta x equal to 0.15. That is what gives you the previous approximation and the present approximation: if you have no knowledge of the exact value of the derivative of this function, and you are totally depending on a formula like this one, you have to choose some value of delta x. Here we choose delta x equal to 0.3 for our previous approximation, then halve the step size from 0.3 to 0.15 and calculate f prime of 2 with that step size, which gives us the present approximation and allows us to find Ea, our approximate error. I will show you one of the delta xs and you can do the other one yourself. f prime of 2 is approximately equal to the value of the function at 2 plus delta x, minus the value of the function at 2, divided by delta x. Since in the first case delta x is 0.3, f prime of 2 is approximately f at 2.3 minus f at 2, divided by 0.3.
Substituting the values of the function at those points, we get 7 e to the power 0.5 times 2.3, minus 7 e to the power 0.5 times 2, divided by 0.3, and this number turns out to be 10.265. That is the value of the derivative you get at x equal to 2 using a step size of 0.3. Repeating the whole process for delta x equal to 0.15, f prime of 2 is approximately the value of the function at 2.15 minus the value of the function at 2, divided by 0.15, and this number turns out to be 9.8799. So you get a different number for the approximation of the derivative when you choose a different step size. This is what allows us to judge how much error we are getting, because the approximate error is the present approximation minus the previous approximation (some people call the present approximation the current approximation; it's the same thing). The present approximation is what we got using delta x equal to 0.15, and the previous approximation is what we got using delta x equal to 0.3; subtracting the two gives -0.38474, and that's the approximate error. So that's how you calculate approximate errors, and that is a way to judge how much error you have in a calculation without knowing the true values, because you're not going to know the true values in numerical methods. And that is the end of this segment.
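The transcript's worked example can be reproduced in a few lines (a sketch; the function and step sizes are the ones quoted above):

```python
import math

def f(x):
    # The function from the example: f(x) = 7 * e^(0.5 x)
    return 7.0 * math.exp(0.5 * x)

def forward_difference(f, x, dx):
    # Approximate derivative: f'(x) ≈ (f(x + dx) - f(x)) / dx
    return (f(x + dx) - f(x)) / dx

previous = forward_difference(f, 2.0, 0.30)  # step size 0.3
present = forward_difference(f, 2.0, 0.15)   # step size halved to 0.15
approx_error = present - previous            # Ea = present - previous

print(previous)      # ≈ 10.2646
print(present)       # ≈ 9.8799
print(approx_error)  # ≈ -0.38474
```

No exact derivative is used anywhere; the error estimate comes entirely from comparing the two numerical results, which is the point of the example.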

Formal definition

Given some value v, we say that vapprox approximates v with absolute error ε>0 if [2][3]

|v − vapprox| ≤ ε,

where the vertical bars denote the absolute value.

We say that vapprox approximates v with relative error η>0 if

|v − vapprox| ≤ η |v|.

If v ≠ 0, then

|v − vapprox| / |v| ≤ η, i.e. |1 − vapprox/v| ≤ η.

The percent error (an expression of the relative error) is [3]

δ = η × 100%.

An error bound is an upper limit on the relative or absolute size of an approximation error.[4]

Examples

Best rational approximants for π (green circle), e (blue diamond), ϕ (pink oblong), (√3)/2 (grey hexagon), 1/√2 (red octagon) and 1/√3 (orange triangle) calculated from their continued fraction expansions, plotted as slopes y/x with errors from their true values (black dashes)  

As an example, if the exact value is 50 and the approximation is 49.9, then the absolute error is 0.1 and the relative error is 0.1/50 = 0.002 = 0.2%. As a practical example, suppose a volume of 6 mL is measured as 5 mL: the correct reading being 6 mL, the absolute error is 1 mL and the percent error is 1/6 ≈ 16.7%, rounded.

The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 is, in most applications, much worse than approximating the number 1,000,000 with an absolute error of 3; in the first case the relative error is 0.003 while in the second it is only 0.000003.
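The arithmetic above can be sketched as a pair of small helper functions (hypothetical names, not from any particular library):

```python
def absolute_error(true_value, approx):
    # Numerical size of the discrepancy.
    return abs(true_value - approx)

def relative_error(true_value, approx):
    # Absolute error divided by the magnitude of the true value.
    # Undefined when the true value is zero.
    if true_value == 0:
        raise ValueError("relative error is undefined for a true value of zero")
    return absolute_error(true_value, approx) / abs(true_value)

# Example from the text: exact value 50, approximation 49.9
print(absolute_error(50, 49.9))   # 0.1 (up to float rounding)
print(relative_error(50, 49.9))   # 0.002, i.e. 0.2%

# Same absolute error of 3, very different relative errors
print(relative_error(1_000, 997))          # 0.003
print(relative_error(1_000_000, 999_997))  # 0.000003
```

The last two calls show why relative error is the natural way to compare approximations of numbers of widely differing size.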

There are two features of relative error that should be kept in mind. First, relative error is undefined when the true value is zero, since the true value appears in the denominator. Second, relative error only makes sense when measured on a ratio scale (i.e. a scale with a true, meaningful zero); otherwise it is sensitive to the measurement units. For example, when an absolute error in a temperature measurement given in the Celsius scale is 1 °C and the true value is 2 °C, the relative error is 0.5. But if the exact same approximation is made with the Kelvin scale, a 1 K absolute error with the same true value of 275.15 K = 2 °C gives a relative error of 3.63×10−3.
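The temperature example can be checked directly (a sketch using the same numbers as the text):

```python
def relative_error(true_value, approx):
    return abs(true_value - approx) / abs(true_value)

# The same physical 1-degree measurement error, expressed on two scales:
true_c, measured_c = 2.0, 3.0          # Celsius: arbitrary zero (interval scale)
true_k, measured_k = 275.15, 276.15    # Kelvin: true zero (ratio scale)

print(relative_error(true_c, measured_c))  # 0.5
print(relative_error(true_k, measured_k))  # ≈ 3.63e-3
```

Nothing physical changed between the two computations; only the placement of the zero did, which is why the relative error is only meaningful on a ratio scale.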

Comparison

Statements about relative errors are sensitive to addition of constants, but not to multiplication by constants. For absolute errors, the opposite is true: they are sensitive to multiplication by constants, but not to addition of constants.[5]: 34

Polynomial-time approximation of real numbers

We say that a real value v is polynomially computable with absolute error from an input if, for every rational number ε>0, it is possible to compute a rational number vapprox that approximates v with absolute error ε, in time polynomial in the size of the input and the encoding size of ε (which is O(log(1/ε))). Analogously, v is polynomially computable with relative error if, for every rational number η>0, it is possible to compute a rational number vapprox that approximates v with relative error η, in time polynomial in the size of the input and the encoding size of η.

If v is polynomially computable with relative error (by some algorithm called REL), then it is also polynomially computable with absolute error. Proof. Let ε>0 be the desired absolute error. First, use REL with relative error η=1/2 to find a rational number r1 such that |v-r1| ≤ |v|/2, and hence |v| ≤ 2|r1|. If r1=0, then v=0 and we are done. Since REL is polynomial, the encoding length of r1 is polynomial in the input. Now, run REL again with relative error η=ε/(2|r1|). This yields a rational number r2 that satisfies |v-r2| ≤ ε|v| / (2|r1|) ≤ ε, so it has absolute error ε as wished.[5]: 34
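The two-call construction in the proof can be sketched as follows, with a hypothetical oracle `rel(eta)` standing in for REL (assumed to return a rational r with |v − r| ≤ η|v|):

```python
from fractions import Fraction

def abs_from_rel(rel, eps):
    # First call: relative error 1/2 gives |v - r1| <= |v|/2, hence |v| <= 2|r1|.
    r1 = rel(Fraction(1, 2))
    if r1 == 0:
        return Fraction(0)  # then v = 0 exactly, and we are done
    # Second call: relative error eps / (2|r1|) turns the relative bound
    # into an absolute one: |v - r2| <= eps * |v| / (2|r1|) <= eps.
    r2 = rel(eps / (2 * abs(r1)))
    return r2

# Toy oracle for a fixed hidden value v = 22/7 (illustrative stand-in
# for a real REL algorithm; it stays within the promised error bound).
v = Fraction(22, 7)
def rel(eta):
    return v * (1 + eta / 2)

approx = abs_from_rel(rel, Fraction(1, 1000))
print(approx, abs(v - approx) <= Fraction(1, 1000))
```

Exact rational arithmetic (`fractions.Fraction`) mirrors the proof's setting, where all approximations are rational numbers.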

The reverse implication is usually not true. But, if we assume that some positive lower bound on |v| can be computed in polynomial time, e.g. |v| > b > 0, and v is polynomially computable with absolute error (by some algorithm called ABS), then it is also polynomially computable with relative error, since we can simply call ABS with absolute error ε = η b.

An algorithm that, for every rational number η>0, computes a rational number vapprox that approximates v with relative error η, in time polynomial in the size of the input and 1/η (rather than log(1/η)), is called an FPTAS.

Instruments

In most indicating instruments, the accuracy is guaranteed to a certain percentage of full-scale reading. The limits of these deviations from the specified values are known as limiting errors or guarantee errors.[6]

Generalizations

The definitions can be extended to the case when v and vapprox are n-dimensional vectors, by replacing the absolute value with an n-norm.[7]

References

  1. ^ Weisstein, Eric W. "Numerical Stability". mathworld.wolfram.com. Retrieved 2023-06-11.
  2. ^ Weisstein, Eric W. "Absolute Error". mathworld.wolfram.com. Retrieved 2023-06-11.
  3. ^ a b "Absolute and Relative Error | Calculus II". courses.lumenlearning.com. Retrieved 2023-06-11.
  4. ^ "Approximation and Error Bounds". www.math.wpi.edu. Retrieved 2023-06-11.
  5. ^ a b Grötschel, Martin; Lovász, László; Schrijver, Alexander (1993), Geometric algorithms and combinatorial optimization, Algorithms and Combinatorics, vol. 2 (2nd ed.), Springer-Verlag, Berlin, doi:10.1007/978-3-642-78240-4, ISBN 978-3-642-78242-8, MR 1261419
  6. ^ Helfrick, Albert D. (2005). Modern Electronic Instrumentation and Measurement Techniques. p. 16. ISBN 81-297-0731-4.
  7. ^ Golub, Gene; Charles F. Van Loan (1996). Matrix Computations – Third Edition. Baltimore: The Johns Hopkins University Press. p. 53. ISBN 0-8018-5413-X.

This page was last edited on 27 March 2024, at 11:20
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company and has no affiliation with Wikimedia Foundation.