Monotonic function

From Wikipedia, the free encyclopedia

Figure 1. A monotonically increasing function.
Figure 2. A monotonically decreasing function.
Figure 3. A function that is not monotonic.

In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order.[1][2][3] This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.

YouTube Encyclopedic

  • Monotonicity Theorem
  • Monotonic Functions ☆ Math Concepts Lecture
  • Introduction to increasing, decreasing, positive or negative intervals | Algebra I | Khan Academy
  • Intro to Monotonic and Bounded Sequences, Ex 1
  • Increasing, decreasing and not monotonic sequences (KristaKingMath)

Transcription

Welcome back. Well, I've been requested to do several problems by our friend [? Akosh, ?] so I thought I would keep doing them. I'm skipping around a little bit, because I think if I did all of them, it would just generate too many videos. But I encourage all of you to let me know if you feel there's something I might have missed. Anyway, I'm just going through some of the problems he gave. So this one -- and I actually almost find it funny, because they use such formal language for something that's actually a pretty intuitive concept. They say: use the monotonicity theorem to find where the given function is increasing and where it is decreasing. Monotonicity theorem. It makes it sound very serious. Well, all the monotonicity theorem says -- at least if I'm remembering things properly -- is that when the derivative is positive, when f prime of x is greater than 0, your function is increasing; and when the derivative is negative, your function is decreasing. And why am I a little disparaging of this? Well, what's the derivative? The derivative is the slope. And you learned in Algebra 1 that if the slope is positive, the graph of the line is increasing, and if the slope is negative, the graph of the line is decreasing. The only thing that's different now is that this function isn't necessarily a line; it could be a curve. But having a positive slope means that the value of the function is increasing: for every change in x, there's a positive change in y. And similarly, a negative slope says that for every change in x, no matter how small, there's a negative change in y. So there's nothing fancy here.

But let's see, they gave some examples. They want us to use this theorem to find where the given function is increasing and where it is decreasing. The first function they give is f of x is equal to 3x plus 3. What's the derivative of this? f prime of x is equal to 3. For any value of x this is positive, right? f prime of x is positive for all values of x. So, using the monotonicity theorem, this thing is increasing for all values of x. And you could have done that in the 9th grade. How? Well, you would have said, this is in slope-intercept form. The slope is 3, so this is an increasing function. The function will look something like this: the y-intercept is 3, and the slope is 3, so it looks something like that. And so it's increasing from minus infinity to positive infinity -- over that whole interval, if you wanted to be fancy. But this is really just a lot of fancy terminology for, I think, something fairly straightforward.

Anyway, the next one looks a little bit more interesting. H of z is equal to z to the fourth over 4, minus -- and I hope I'm reading this right -- 4 z to the third over 6. So let's see if we can figure out when this function is increasing and when it is decreasing. What's the derivative? H prime of z is equal to 4z to the third over 4 -- the 4s cancel out -- so that's z to the third; minus 3 times 4, which is 12, z squared over 6, which is 2z squared. So H prime of z is z cubed minus 2z squared. We just have to figure out when this function is greater than 0 and when it is less than 0. And to figure that out, we really just have to break out our algebra toolkit and ask, first of all, when does this function equal 0? So let's set z cubed minus 2z squared equal to 0. We can factor out a z squared, and then we have z squared times z minus 2 is equal to 0.

So we know that either z squared is equal to 0, or z minus 2 is equal to 0. So the points at which the derivative is 0 are H prime of 0 equals 0 and H prime of 2 equals 0 -- and that's just our Algebra 2. Now we just have to figure out what happens in the intervals below 0, between 0 and 2, and above 2. So when we're below 0, let's just take a point. Let's say H prime of negative 1 is equal to what? That is negative 1 to the third, which is negative 1, minus 2 times negative 1 squared. Well, negative 1 squared is just positive 1, so that's minus 2. So it equals minus 3. OK, let's take a point in between 0 and 2. Well, 1 is in between them, so let's take H prime of 1: that's 1 minus 2 -- let's make sure I'm doing that right, because it's 1 squared -- so H prime of 1 equals negative 1. And then what happens when we go to z values greater than 2? Let's try 3. H prime of 3 is equal to what? That's 27 minus 2 times 9, which is 18, so 27 minus 18 equals 9. So it's positive.

So, what do we know? In all fairness, this is kind of an interesting problem, because the derivative is negative on both sides of 0. If we were to draw the derivative, what are the interesting points? There's 0 -- that's where it touches the axis -- and then we had 2. And I'm going to draw the derivative, not the actual function, because with the monotonicity theorem we care about whether the derivative is positive or negative. So what this tells us is that the derivative was negative when we're to the left of 0, because we just took a test point there, and we know it's 0 right at 0. But then what happened? It's not like it became positive after that; it went back negative again when we tried the point 1. So it probably does something like this: it comes up and touches 0 -- the derivative probably has a bit of a maximum point there or something -- but it never goes above right there, and then it goes up at 2, where it becomes greater than 0. This is the derivative, remember; that's why I was confusing myself for a second. But from the monotonicity theorem, what do we care about? We care about the intervals where the derivative is positive and where it is negative. The derivative is positive for all z greater than 2, so we can say, using the monotonicity theorem, that this function is increasing when z is greater than 2. And then we can say that the function is decreasing -- or we could say flat or decreasing, because the slope of the function is 0 right at those points, and remember this is the graph of the derivative -- when z is less than 2. You normally do consider flat to be monotonic in one direction, so monotonic increasing would still include something that kind of flattens out. Well anyway, I'm pushing ten minutes, and I don't think I have time for the next problem, which I might do in the next video. But hopefully you found that a little bit helpful, and not too confusing. I will see you in the next video.
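
The sign analysis in the worked example above can be checked symbolically. Below is a minimal sketch in Python (assuming the sympy library is available); the function H and the test points are taken directly from the transcript.

    # Sketch: verify the sign analysis of H(z) = z^4/4 - 4z^3/6 (requires sympy).
    import sympy as sp

    z = sp.symbols('z')
    h = z**4 / 4 - 4 * z**3 / 6           # H(z) from the worked example
    hp = sp.diff(h, z)                    # H'(z) = z^3 - 2z^2

    print(hp)                             # z**3 - 2*z**2
    print(sp.solve(sp.Eq(hp, 0), z))      # critical points: [0, 2]

    # Sign of the derivative at the same test points used in the video.
    for test in (-1, 1, 3):
        print(test, hp.subs(z, test))     # -3, -1, 9: decreasing, decreasing, increasing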

Monotonicity in calculus and analysis

In calculus, a function f defined on a subset of the real numbers with real values is called monotonic if and only if it is either entirely non-increasing or entirely non-decreasing.[2] That is, as per Fig. 1, a function that increases monotonically does not have to be exclusively increasing; it simply must not decrease.

A function is called monotonically increasing (also increasing or non-decreasing[3]) if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order (see Figure 1). Likewise, a function is called monotonically decreasing (also decreasing or non-increasing[3]) if, whenever x ≤ y, then f(x) ≥ f(y), so it reverses the order (see Figure 2).

If the order ≤ in the definition of monotonicity is replaced by the strict order <, then one obtains a stronger requirement. A function with this property is called strictly increasing.[3] Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing.[3] Functions that are strictly increasing or decreasing are one-to-one (because for x not equal to y, either x < y or x > y and so, by monotonicity, either f(x) < f(y) or f(x) > f(y), thus f(x) is not equal to f(y)).
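
As a concrete illustration of these definitions, the following minimal Python sketch (the helper names and sample grid are illustrative, not from the article) tests the non-decreasing and strictly increasing conditions on consecutive sampled points; sampling gives evidence, not a proof, for a function on the reals.

    # Sketch: test the non-strict and strict increasing conditions on sampled points.
    def is_non_decreasing(values):
        # f(x) <= f(y) whenever x <= y, checked on consecutive samples
        return all(a <= b for a, b in zip(values, values[1:]))

    def is_strictly_increasing(values):
        # f(x) < f(y) whenever x < y, checked on consecutive samples
        return all(a < b for a, b in zip(values, values[1:]))

    xs = [i / 10 for i in range(-20, 21)]               # sample grid on [-2, 2]
    f = lambda x: x ** 3                                # strictly increasing
    g = lambda x: max(x, 0.0)                           # non-decreasing, flat for x <= 0

    print(is_strictly_increasing([f(x) for x in xs]))   # True
    print(is_non_decreasing([g(x) for x in xs]))        # True
    print(is_strictly_increasing([g(x) for x in xs]))   # False (flat part)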

If it is not clear that "increasing" and "decreasing" are taken to include the possibility of repeating the same value at successive arguments, one may use the terms weakly increasing and weakly decreasing to stress this possibility.

The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the function of figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.

A function f is said to be absolutely monotonic over an interval (a, b) if the derivatives of all orders of f are nonnegative or all nonpositive at all points on the interval.

Monotonic transformation

The term monotonic transformation (or monotone transformation) can also cause some confusion because it refers to a transformation by a strictly increasing function. This is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform (see also monotone preferences).[4] In this context, what we are calling a "monotonic transformation" is, more accurately, called a "positive monotonic transformation", in order to distinguish it from a "negative monotonic transformation", which reverses the order of the numbers.[5]
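
To make the economics usage concrete, the sketch below (Python; the bundle names and utility numbers are made up for illustration) applies a positive monotonic transformation to a utility function and checks that the ranking of bundles is unchanged, while a negative monotonic transformation reverses it.

    # Sketch: a positive monotonic transformation preserves the ranking induced by a
    # utility function; a negative one reverses it. All numbers are illustrative.
    import math

    utilities = {"bundle_a": 1.0, "bundle_b": 4.0, "bundle_c": 9.0}

    positive = lambda u: math.log(u) + 7       # strictly increasing for u > 0
    negative = lambda u: -2 * u                # strictly decreasing

    rank = lambda table: sorted(table, key=table.get)

    print(rank(utilities))                                        # ['bundle_a', 'bundle_b', 'bundle_c']
    print(rank({k: positive(u) for k, u in utilities.items()}))   # same order
    print(rank({k: negative(u) for k, u in utilities.items()}))   # reversed order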

Some basic applications and results

The following properties are true for a monotonic function f : R → R:

  • f has limits from the right and from the left at every point of its domain;
  • f has a limit at positive or negative infinity (x → ±∞) of either a real number, ∞, or −∞;
  • f can only have jump discontinuities;
  • f can only have countably many discontinuities in its domain. The discontinuities, however, do not necessarily consist of isolated points and may even be dense in an interval (a, b).

These properties are the reason why monotonic functions are useful in technical work in analysis. Two facts about these functions are:

  • if f is a monotonic function defined on an interval [a, b], then f is differentiable almost everywhere on [a, b]; i.e. the set of numbers x in [a, b] such that f is not differentiable at x has Lebesgue measure zero. In addition, this result cannot be improved to countable: see Cantor function.
  • if f is a monotonic function defined on an interval [a, b], then f is Riemann integrable.

An important application of monotonic functions is in probability theory. If X is a random variable, its cumulative distribution function F_X(x) = Prob(X ≤ x) is a monotonically increasing function.
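
A quick numerical illustration of this fact (a Python sketch; the sample data are arbitrary): the empirical cumulative distribution function built from any sample is a non-decreasing step function.

    # Sketch: an empirical CDF is monotonically increasing (non-decreasing).
    sample = [2.3, -1.0, 0.7, 2.3, 5.9, -0.2]            # arbitrary observations

    def ecdf(data, x):
        # F(x) = fraction of observations <= x
        return sum(1 for s in data if s <= x) / len(data)

    xs = [x / 2 for x in range(-6, 14)]                  # evaluation grid from -3 to 6.5
    values = [ecdf(sample, x) for x in xs]
    print(all(a <= b for a, b in zip(values, values[1:])))   # True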

A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreasing.

When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f, then there is an inverse function on T for f.
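
The existence of an inverse on the range can also be exploited numerically: for a strictly increasing continuous function, bisection recovers the preimage of any value in the range. A minimal Python sketch (the example function, bracket, and tolerance are illustrative choices):

    # Sketch: invert a strictly increasing continuous function on [lo, hi] by bisection.
    def inverse(f, y, lo, hi, tol=1e-10):
        # assumes f is strictly increasing on [lo, hi] and f(lo) <= y <= f(hi)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid) < y:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    f = lambda x: x ** 3 + x             # strictly increasing on all of R
    print(inverse(f, 10.0, 0.0, 3.0))    # ~2.0, since f(2) = 8 + 2 = 10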

Monotonicity in topology

A map f : X → Y is said to be monotone if each of its fibers is connected; i.e., for each element y in Y, the (possibly empty) set f⁻¹(y) is connected.

Monotonicity in functional analysis

In functional analysis on a topological vector space X, a (possibly non-linear) operator T : X → X∗ is said to be a monotone operator if

    (Tu − Tv, u − v) ≥ 0 for all u, v in X.

Kachurovskii's theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
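
A small numerical sanity check of this connection (a sketch using NumPy; the convex function and the random test points are illustrative): the gradient of the convex function f(x) = ‖x‖² satisfies the monotonicity inequality (Tu − Tv, u − v) ≥ 0 at sampled pairs of points.

    # Sketch: the gradient of the convex function f(x) = ||x||^2 is a monotone
    # operator: <grad f(u) - grad f(v), u - v> = 2*||u - v||^2 >= 0.
    import numpy as np

    grad_f = lambda x: 2.0 * x                    # gradient of ||x||^2

    rng = np.random.default_rng(0)
    ok = True
    for _ in range(1000):
        u, v = rng.normal(size=3), rng.normal(size=3)
        ok = ok and np.dot(grad_f(u) - grad_f(v), u - v) >= 0.0
    print(ok)                                     # True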

A subset G of X × X∗ is said to be a monotone set if for every pair [u1, w1] and [u2, w2] in G,

    (w1 − w2, u1 − u2) ≥ 0.

G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph of a monotone operator is a monotone set. A monotone operator is said to be maximal monotone if its graph is a maximal monotone set.

Monotonicity in order theory

Order theory deals with arbitrary partially ordered sets and preordered sets in addition to real numbers. The above definition of monotonicity is relevant in these cases as well. However, the terms "increasing" and "decreasing" are avoided, since their conventional pictorial representation does not apply to orders that are not total. Furthermore, the strict relations < and > are of little use in many non-total orders and hence no additional terminology is introduced for them.

A monotone function is also called isotone, or order-preserving. The dual notion is often called antitone, anti-monotone, or order-reversing. Hence, an antitone function f satisfies the property

x ≤ y implies f(x) ≥ f(y),

for all x and y in its domain. The composite of two monotone mappings is also monotone.

A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain of f is a lattice, then f must be constant.

Monotone functions are central in order theory. They appear in most articles on the subject and examples from special applications are found in these places. Some notable special monotone functions are order embeddings (functions for which x ≤ y if and only if f(x) ≤ f(y)) and order isomorphisms (surjective order embeddings).
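
For a small finite illustration of isotone and antitone maps (a Python sketch; the poset of subsets of {1, 2, 3} under inclusion is an arbitrary choice): the size of a subset is order-preserving, while complementation is order-reversing.

    # Sketch: on subsets of {1, 2, 3} ordered by inclusion, len() is isotone
    # (order-preserving) and complementation is antitone (order-reversing).
    from itertools import combinations

    universe = frozenset({1, 2, 3})
    subsets = [frozenset(c) for r in range(4) for c in combinations(universe, r)]

    isotone = all(len(a) <= len(b)
                  for a in subsets for b in subsets if a <= b)
    antitone = all((universe - b) <= (universe - a)
                   for a in subsets for b in subsets if a <= b)
    print(isotone, antitone)    # True True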

Monotonicity in the context of search algorithms

In the context of search algorithms monotonicity (also called consistency) is a condition applied to heuristic functions. A heuristic h(n) is monotonic if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n':

    h(n) ≤ c(n, a, n') + h(n').

This is a form of triangle inequality, with n, n', and the goal G_n closest to n. Because every monotonic heuristic is also admissible, monotonicity is a stricter requirement than admissibility. Some heuristic algorithms such as A* can be proven optimal provided that the heuristic they use is monotonic.[6]
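
The consistency condition can be checked edge by edge on a concrete graph. Below is a minimal Python sketch; the graph, step costs, and heuristic values are made up for illustration and are not from the article.

    # Sketch: check h(n) <= c(n, n') + h(n') for every edge of a small, made-up graph.
    edges = {                      # (n, n') -> step cost c(n, n')
        ("S", "A"): 2, ("S", "B"): 5,
        ("A", "B"): 2, ("A", "G"): 6,
        ("B", "G"): 3,
    }
    h = {"S": 6, "A": 5, "B": 3, "G": 0}     # goal node G has h(G) = 0

    consistent = all(h[n] <= cost + h[succ] for (n, succ), cost in edges.items())
    print(consistent)              # True for these illustrative values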

Boolean functions

In Boolean algebra, a monotonic function is one such that for all ai and bi in {0, 1}, if a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn (i.e. the Cartesian product {0, 1}^n is ordered coordinatewise), then f(a1, ..., an) ≤ f(b1, ..., bn). In other words, a Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. Graphically, this means that a Boolean function is monotonic when in its Hasse diagram (dual of its Venn diagram), there is no 1 connected to a higher 0.

The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs (which may appear more than once) using only the operators and and or (in particular not is forbidden). For instance "at least two of a,b,c hold" is a monotonic function of a,b,c, since it can be written for instance as ((a and b) or (a and c) or (b and c)).

The number of such functions on n variables is known as the Dedekind number of n.
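
Brute-force enumeration makes both claims tangible for small n: one can test whether a given function is monotone by comparing outputs on coordinatewise-ordered inputs, and count all monotone functions to recover the first few Dedekind numbers. A Python sketch (the majority function is the example given above; the counting loop is an illustrative brute force, feasible only for very small n):

    # Sketch: test a Boolean function for monotonicity coordinatewise, and count all
    # monotone Boolean functions of n variables (Dedekind numbers 2, 3, 6, 20, ...).
    from itertools import product

    def is_monotone(f, n):
        points = list(product((0, 1), repeat=n))
        return all(f(*a) <= f(*b)
                   for a in points for b in points
                   if all(x <= y for x, y in zip(a, b)))

    # "at least two of a, b, c hold" = (a and b) or (a and c) or (b and c)
    majority = lambda a, b, c: (a & b) | (a & c) | (b & c)
    print(is_monotone(majority, 3))          # True

    def dedekind(n):
        points = list(product((0, 1), repeat=n))
        count = 0
        for outputs in product((0, 1), repeat=2 ** n):    # every Boolean function
            table = dict(zip(points, outputs))
            if all(table[a] <= table[b]
                   for a in points for b in points
                   if all(x <= y for x, y in zip(a, b))):
                count += 1
        return count

    print([dedekind(n) for n in range(4)])   # [2, 3, 6, 20]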

Notes

  1. Clapham, Christopher; Nicholson, James (2014). Oxford Concise Dictionary of Mathematics (5th ed.). Oxford University Press.
  2. Stover, Christopher. "Monotonic Function". Wolfram MathWorld. Retrieved 2018-01-29.
  3. "Monotone function". Encyclopedia of Mathematics. Retrieved 2018-01-29.
  4. See the section on Cardinal Versus Ordinal Utility in Simon & Blume (1994).
  5. Varian, Hal R. (2010). Intermediate Microeconomics (8th ed.). W. W. Norton & Company. p. 56. ISBN 9780393934243.
  6. Conditions for optimality: Admissibility and consistency, pp. 94–95 (Russell & Norvig 2010).

Bibliography

  • Bartle, Robert G. (1976). The elements of real analysis (second ed.).
  • Grätzer, George (1971). Lattice theory: first concepts and distributive lattices. ISBN 0-7167-0442-0.
  • Pemberton, Malcolm; Rau, Nicholas (2001). Mathematics for economists: an introductory textbook. Manchester University Press. ISBN 0-7190-3341-1.
  • Renardy, Michael & Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. p. 356. ISBN 0-387-00444-0.
  • Riesz, Frigyes & Béla Szőkefalvi-Nagy (1990). Functional Analysis. Courier Dover Publications. ISBN 978-0-486-66289-3.
  • Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.
  • Simon, Carl P.; Blume, Lawrence (April 1994). Mathematics for Economists (first ed.). ISBN 978-0-393-95733-4. (Definition 9.31)
