
Cauchy–Schwarz inequality

From Wikipedia, the free encyclopedia

The Cauchy–Schwarz inequality (also called Cauchy–Bunyakovsky–Schwarz inequality)[1][2][3][4] is an upper bound on the absolute value of the inner product between two vectors in an inner product space in terms of the product of the vector norms. It is considered one of the most important and widely used inequalities in mathematics.[5]

The inequality for sums was published by Augustin-Louis Cauchy (1821). The corresponding inequality for integrals was published by Viktor Bunyakovsky (1859)[2] and Hermann Schwarz (1888). Schwarz gave the modern proof of the integral version.[5]

YouTube Encyclopedic

  • Proof of the Cauchy-Schwarz inequality | Vectors and spaces | Linear Algebra | Khan Academy
  • Visual Cauchy-Schwarz Inequality
  • Basic Cauchy-Schwarz Inequality - Linear Algebra Made Easy (2016)
  • 05 | DU | Cauchy Schwartz Example | Linear Algebra | GE-2
  • Cauchy Schwarz Proof

Transcription

Let's say that I have two nonzero vectors. Let's say the first vector is x, the second vector is y. They're both in the set Rn and they're nonzero. It turns out that the absolute value of their-- let me do it in a different color. This color's nice. The absolute value of their dot product of the two vectors-- and remember, this is just a scalar quantity-- is less than or equal to the product of their lengths. And we've defined the dot product and we've defined lengths already. It's less than or equal to the product of their lengths and just to push it even further, the only time that this is equal, so the dot product of the two vectors is only going to be equal to the lengths of this-- the equal and the less than or equal apply only in the situation-- let me write that down-- where one of these vectors is a scalar multiple of the other. Or they're collinear. You know, one's just kind of the longer or shorter version of the other one. So only in the situation where let's just say x is equal to some scalar multiple of y. These inequalities or I guess the equality of this inequality, this is called the Cauchy-Schwarz Inequality. So let's prove it because you can't take something like this just at face value. You shouldn't just accept that. So let me just construct a somewhat artificial function. Let me construct some function of-- that's a function of some variables, some scalar t. Let me define p of t to be equal to the length of the vector t times the vector-- some scalar t times the vector y minus the vector x. It's the length of this vector. This is going to be a vector now. That squared. Now before I move forward I want to make one little point here. If I take the length of any vector, I'll do it here. Let's say I take the length of some vector v. I want you to accept that this is going to be a positive number, or it's at least greater than or equal to 0. Because this is just going to be each of its terms squared. v2 squared all the way to vn squared. All of these are real numbers. When you square a real number, you get something greater than or equal to 0. When you sum them up, you're going to have something greater than or equal to 0. And you take the square root of it, the principal square root, the positive square root, you're going to have something greater than or equal to 0. So the length of any real vector is going to be greater than or equal to 0. So this is the length of a real vector. So this is going to be greater than or equal to 0. Now, in the previous video, I think it was two videos ago, I also showed that the magnitude or the length of a vector squared can also be rewritten as the dot product of that vector with itself. So let's rewrite this vector that way. The length of this vector squared is equal to the dot product of that vector with itself. So it's ty minus x dot ty minus x. In the last video, I showed you that you can treat a multiplication or you can treat the dot product very similar to regular multiplication when it comes to the associative, distributive and commutative properties. So when you multiplied these, you know, you could kind of view this as multiplying these two binomials. You can do it the same way as you would just multiply two regular algebraic binomials. You're essentially just using the distributive property. But remember, this isn't just regular multiplication. This is the dot product we're doing. This is vector multiplication or one version of vector multiplication. So if we distribute it out, this will become ty dot ty. 
So let me write that out. That'll be ty dot ty. And then we'll get a minus-- let me do it this way. Then we get the minus x times this ty. Instead of saying times, I should be very careful to say dot. So minus x dot ty. And then you have this ty times this minus x. So then you have minus ty dot x. And then finally, you have the x's dot with each other. And you can view them as minus 1x dot minus 1x. You could say plus minus 1x. I could just view this as plus minus 1 or plus minus 1. So this is minus 1x dot minus 1x. So let's see. So this is what my whole expression simplified to or expanded to. I can't really call this a simplification. But we can use the fact that this is commutative and associative to rewrite this expression right here. This is equal to y dot y times t squared. t is just a scalar. Minus-- and actually, this is 2. These two things are equivalent. They're just rearrangements of the same thing and we saw that the dot product is associative. So this is just equal to 2 times x dot y times t. And I should do that in maybe a different color. So these two terms result in that term right there. And then if you just rearrange these you have a minus 1 times a minus 1. They cancel out, so those will become plus and you're just left with plus x dot x. And I should do that in a different color as well. I'll do that in an orange color. So those terms end up with that term. Then of course, that term results in that term. And remember, all I did is I rewrote this thing and said, look. This has got to be greater than or equal to 0. So I could rewrite that here. This thing is still just the same thing. I've just rewritten it. So this is all going to be greater than or equal to 0. Now let's make a little bit of a substitution just to clean up our expression a little bit. And we'll later back substitute into this. Let's define this as a. Let's define this piece right here as b. So the whole thing minus 2x dot y. I'll leave the t there. And let's define this or let me just define this right here as c. X dot x as c. So then, what does our expression become? It becomes a times t squared minus-- I want to be careful with the colors-- b times t plus c. And of course, we know that it's going to be greater than or equal to 0. It's the same thing as this up here, greater than or equal to 0. I could write p of t here. Now this is greater than or equal to 0 for any t that I put in here. For any real t that I put in there. Let me evaluate our function at b over 2a. And I can definitely do this because what was a? I just have to make sure I'm not dividing by 0 any place. So a was this vector dotted with itself. And we said this was a nonzero vector. So this is the square of its length. It's a nonzero vector, so some of these terms up here would end up becoming positively when you take its length. So this thing right here is nonzero. This is a nonzero vector. Then 2 times the dot product with itself is also going to be nonzero. So we can do this. We don't worry about dividing by 0, whatever else. But what will this be equal to? This'll be equal to-- and I'll just stick to the green. It takes too long to keep switching between colors. This is equal to a times this expression squared. So it's b squared over 4a squared. I just squared 2a to get the 4a squared. Minus b times this. So b times-- this is just regular multiplication. b times b over 2a. Just write regular multiplication there. Plus c. And we know all of that is greater than or equal to 0. Now if we simplify this a little bit, what do we get? 
Well this a cancels out with this exponent there and you end up with a b squared right there. So we get b squared over 4a minus b squared over 2a. That's that term over there. Plus c is greater than or equal to 0. Let me rewrite this. If I multiply the numerator and denominator of this by 2, what do I get? I get 2b squared over 4a. And the whole reason I did that is to get a common denominator here. So what do you get? You get b squared over 4a minus 2b squared over 4a. So what do these two terms simplify to? Well the numerator is b squared minus 2b squared. So that just becomes minus b squared over 4a plus c is greater than or equal to 0. These two terms add up to this one right here. Now if we add this to both sides of the equation, we get c is greater than or equal to b squared over 4a. It was a negative on the left-hand side. If I add it to both sides it's going to be a positive on the right-hand side. We're approaching something that looks like an inequality, so let's back substitute our original substitutions to see what we have now. So where was my original substitutions that I made? It was right here. And actually, just to simplify more, let me multiply both sides by 4a. I said a, not only is it nonzero, it's going to be positive. This is the square of its length. And I already showed you that the length of any real vector's going to be positive. And the reason why I'm taking great pains to show that a is positive is because if I multiply both sides of it I don't want to change the inequality sign. So let me multiply both sides of this by a before I substitute. So we get 4ac is greater than or equal to b squared. There you go. And remember, I took great pains. I just said a is definitely a positive number because it is essentially the square of the length. y dot y is the square of the length of y, and that's a positive value. It has to be positive. We're dealing with real vectors. Now let's back substitute this. So 4 times a, 4 times y dot y. y dot y is also-- I might as well just write it there. y dot y is the same thing as the magnitude of y squared. That's y dot y. This is a. y dot y, I showed you that in the previous video. Times c. c is x dot x. Well x dot x is the same thing as the length of vector x squared. So this was c. So 4 times a times c is going to be greater than or equal to b squared. Now what was b? b was this thing here. So b squared would be 2 times x dot y squared. So we've gotten to this result so far. And so what can we do with this? Oh sorry, and this whole thing is squared. This whole thing right here is b. So let's see if we can simplify this. So we get-- let me switch to a different color. 4 times the length of y squared times the length of x squared is greater than or equal to-- if we squared this quantity right here, we get 4 times x dot y. 4 times x dot y times x dot y. Actually, even better, let me just write it like this. Let me just write 4 times x dot y squared. Now we can divide both sides by 4. That won't change our inequality. So that just cancels out there. And now let's take the square root of both sides of this equation. So the square roots of both sides of this equation-- these are positive values, so the square root of this side is the square root of each of its terms. That's just an exponent property. So if you take the square root of both sides you get the length of y times the length of x is greater than or equal to the square root of this. And we're going to take the positive square root. 
We're going to take the positive square root on both sides of this equation. That keeps us from having to mess with anything on the inequality or anything like that. So the positive square root is going to be the absolute value of x dot y. And I want to be very careful to say this is the absolute value because it's possible that this thing right here is a negative value. But when you square it, you want to be careful that when you take the square root of it that you stay a positive value. Because otherwise when we take the principal square root, we might mess with the inequality. We're taking the positive square root, which will be-- so if you take the absolute value, you're ensuring that it's going to be positive. But this is our result. The absolute value of the dot product of our vectors is less than or equal to the product of the two vectors' lengths. So we got our Cauchy-Schwarz inequality. Now the last thing I said is look, what happens if x is equal to some scalar multiple of y? Well in that case, what's the absolute value? The absolute value of x dot y? Well that equals-- that equals what? If we make the substitution that equals the absolute value of c times y. That's just x dot y, which is equal to just from the associative property. It's equal to the absolute value of c times-- we want to make sure our absolute value, keep everything positive. y dot y. Well this is just equal to c times the magnitude of y-- the length of y squared. Well that just is equal to the magnitude of c times-- or the absolute value of our scalar c times our length of y. Well this right here, I can rewrite this. I mean you can prove this to yourself if you don't believe it, but this-- we could put the c inside of the magnitude and that could be a good exercise for you to prove. But it's pretty straightforward. You just do the definition of length. And you multiply it by c. This is equal to the magnitude of cy times-- let me say the length of cy times the length of y. I've lost my vector notation someplace over here. There you go. Now, this is x. So this is equal to the length of x times the length of y. So I showed you kind of the second part of the Cauchy-Schwarz Inequality that this is only equal to each other if one of them is a scalar multiple of the other. If you're a little uncomfortable with some of these steps I took, it might be a good exercise to actually prove it. For example, to prove that the absolute value of c times the length of the vector y is the same thing as the length of c times y. Anyway, hopefully you found this pretty useful. The Cauchy-Schwarz Inequality we'll use a lot when we prove other results in linear algebra. And in a future video, I'll give you a little more intuition about why this makes a lot of sense relative to the dot product.

Statement of the inequality

The Cauchy–Schwarz inequality states that for all vectors $u$ and $v$ of an inner product space

    $|\langle u, v \rangle|^2 \leq \langle u, u \rangle \cdot \langle v, v \rangle$    (1)

where $\langle \cdot, \cdot \rangle$ is the inner product. Examples of inner products include the real and complex dot product; see the examples in inner product. Every inner product gives rise to a Euclidean norm, called the canonical or induced norm, where the norm of a vector $u$ is denoted and defined by

    $\|u\| := \sqrt{\langle u, u \rangle},$

where $\langle u, u \rangle$ is always a non-negative real number (even if the inner product is complex-valued). By taking the square root of both sides of the above inequality, the Cauchy–Schwarz inequality can be written in its more familiar form in terms of the norm:[6][7]

    $|\langle u, v \rangle| \leq \|u\| \, \|v\|$    (2)

Moreover, the two sides are equal if and only if $u$ and $v$ are linearly dependent.[8][9][10]
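As a quick sanity check, both forms (1) and (2) can be verified numerically for the ordinary dot product on $\mathbb{R}^n$. The following is a minimal sketch in plain Python (the helper names dot and norm are illustrative, not from the article):

    import random
    import math

    def dot(u, v):
        """Standard dot product on R^n."""
        return sum(ui * vi for ui, vi in zip(u, v))

    def norm(u):
        """Norm induced by the dot product."""
        return math.sqrt(dot(u, u))

    random.seed(0)
    for _ in range(1000):
        n = random.randint(1, 6)
        u = [random.uniform(-10, 10) for _ in range(n)]
        v = [random.uniform(-10, 10) for _ in range(n)]
        # Form (1): |<u, v>|^2 <= <u, u> * <v, v>
        assert dot(u, v) ** 2 <= dot(u, u) * dot(v, v) + 1e-9
        # Form (2): |<u, v>| <= ||u|| * ||v||
        assert abs(dot(u, v)) <= norm(u) * norm(v) + 1e-9
    print("Cauchy-Schwarz holds for all sampled vectors")

The small tolerance only guards against floating-point round-off; the mathematical inequality itself is exact.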

Special cases

Sedrakyan's lemma - Positive real numbers

Sedrakyan's inequality, also called Bergström's inequality, Engel's form, the T2 lemma, or Titu's lemma, states that for real numbers $u_1, u_2, \ldots, u_n$ and positive real numbers $v_1, v_2, \ldots, v_n$:

    $\frac{(u_1 + u_2 + \cdots + u_n)^2}{v_1 + v_2 + \cdots + v_n} \leq \frac{u_1^2}{v_1} + \frac{u_2^2}{v_2} + \cdots + \frac{u_n^2}{v_n}.$

It is a direct consequence of the Cauchy–Schwarz inequality, obtained by using the dot product on $\mathbb{R}^n$ upon substituting $u_i' = u_i / \sqrt{v_i}$ and $v_i' = \sqrt{v_i}$. This form is especially helpful when the inequality involves fractions where the numerator is a perfect square.
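For concreteness, here is a small numerical sketch of Sedrakyan's form in plain Python (illustrative only; the function names are hypothetical):

    import random

    def engel_lhs(u, v):
        """Left-hand side: (u1 + ... + un)^2 / (v1 + ... + vn)."""
        return sum(u) ** 2 / sum(v)

    def engel_rhs(u, v):
        """Right-hand side: u1^2/v1 + ... + un^2/vn."""
        return sum(ui ** 2 / vi for ui, vi in zip(u, v))

    random.seed(1)
    for _ in range(1000):
        n = random.randint(1, 8)
        u = [random.uniform(-5, 5) for _ in range(n)]   # arbitrary reals
        v = [random.uniform(0.1, 5) for _ in range(n)]  # positive reals
        assert engel_lhs(u, v) <= engel_rhs(u, v) + 1e-9
    print("Sedrakyan's lemma holds for all sampled cases")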

R2 - The plane

Cauchy-Schwarz inequality in a unit circle of the Euclidean plane

The real vector space $\mathbb{R}^2$ denotes the 2-dimensional plane. It is also the 2-dimensional Euclidean space where the inner product is the dot product. If $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$ then the Cauchy–Schwarz inequality becomes:

    $\langle \mathbf{u}, \mathbf{v} \rangle^2 = (\|\mathbf{u}\| \|\mathbf{v}\| \cos \theta)^2 \leq \|\mathbf{u}\|^2 \|\mathbf{v}\|^2,$

where $\theta$ is the angle between $\mathbf{u}$ and $\mathbf{v}$.

The form above is perhaps the easiest in which to understand the inequality, since the square of the cosine can be at most 1, which occurs when the vectors are in the same or opposite directions. It can also be restated in terms of the vector coordinates $u_1$, $u_2$, $v_1$, and $v_2$ as

    $(u_1 v_1 + u_2 v_2)^2 \leq (u_1^2 + u_2^2)(v_1^2 + v_2^2),$

where equality holds if and only if the vector $(u_1, u_2)$ is in the same or opposite direction as the vector $(v_1, v_2)$, or if one of them is the zero vector.

Rn - n-dimensional Euclidean space

In Euclidean space $\mathbb{R}^n$ with the standard inner product, which is the dot product, the Cauchy–Schwarz inequality becomes:

    $\left( \sum_{i=1}^{n} u_i v_i \right)^2 \leq \left( \sum_{i=1}^{n} u_i^2 \right) \left( \sum_{i=1}^{n} v_i^2 \right).$

The Cauchy–Schwarz inequality can be proved using only elementary algebra in this case by observing that the difference of the right and the left hand side is

    $\frac{1}{2} \sum_{i,j=1}^{n} (u_i v_j - u_j v_i)^2 \geq 0,$

or by considering the following quadratic polynomial in $x$:

    $(u_1 x + v_1)^2 + \cdots + (u_n x + v_n)^2 = \left( \sum_{i} u_i^2 \right) x^2 + 2 \left( \sum_{i} u_i v_i \right) x + \sum_{i} v_i^2.$

Since the latter polynomial is nonnegative, it has at most one real root, hence its discriminant is less than or equal to zero. That is,

    $\left( \sum_{i} u_i v_i \right)^2 - \left( \sum_{i} u_i^2 \right) \left( \sum_{i} v_i^2 \right) \leq 0.$
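The elementary-algebra argument can be checked directly. The sketch below (plain Python, illustrative only) confirms that the difference of the two sides equals the half-sum of squares above, which is manifestly non-negative:

    import random

    random.seed(2)
    for _ in range(500):
        n = random.randint(2, 6)
        u = [random.uniform(-3, 3) for _ in range(n)]
        v = [random.uniform(-3, 3) for _ in range(n)]
        rhs = sum(x * x for x in u) * sum(x * x for x in v)
        lhs = sum(ui * vi for ui, vi in zip(u, v)) ** 2
        # Lagrange identity: RHS - LHS = (1/2) * sum_{i,j} (u_i v_j - u_j v_i)^2
        lagrange = 0.5 * sum((u[i] * v[j] - u[j] * v[i]) ** 2
                             for i in range(n) for j in range(n))
        assert abs((rhs - lhs) - lagrange) < 1e-9
        assert lhs <= rhs + 1e-9
    print("Lagrange identity and Cauchy-Schwarz verified on samples")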

Cn - n-dimensional Complex space

If $\mathbf{u}, \mathbf{v} \in \mathbb{C}^n$ with $\mathbf{u} = (u_1, \ldots, u_n)$ and $\mathbf{v} = (v_1, \ldots, v_n)$ (where $u_1, \ldots, u_n \in \mathbb{C}$ and $v_1, \ldots, v_n \in \mathbb{C}$) and if the inner product on the vector space $\mathbb{C}^n$ is the canonical complex inner product (defined by $\langle \mathbf{u}, \mathbf{v} \rangle := u_1 \overline{v_1} + \cdots + u_n \overline{v_n},$ where the bar notation is used for complex conjugation), then the inequality may be restated more explicitly as follows:

    $|\langle \mathbf{u}, \mathbf{v} \rangle|^2 \leq \langle \mathbf{u}, \mathbf{u} \rangle \, \langle \mathbf{v}, \mathbf{v} \rangle.$

That is,

    $\left| u_1 \overline{v_1} + \cdots + u_n \overline{v_n} \right|^2 \leq \left( |u_1|^2 + \cdots + |u_n|^2 \right) \left( |v_1|^2 + \cdots + |v_n|^2 \right).$
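A short numerical sketch of the complex form, using Python's built-in complex arithmetic (illustrative only):

    import random

    def cdot(u, v):
        """Canonical complex inner product: sum of u_k * conj(v_k)."""
        return sum(uk * vk.conjugate() for uk, vk in zip(u, v))

    def rand_complex():
        return complex(random.uniform(-4, 4), random.uniform(-4, 4))

    random.seed(3)
    for _ in range(1000):
        n = random.randint(1, 6)
        u = [rand_complex() for _ in range(n)]
        v = [rand_complex() for _ in range(n)]
        lhs = abs(cdot(u, v)) ** 2
        rhs = sum(abs(x) ** 2 for x in u) * sum(abs(x) ** 2 for x in v)
        assert lhs <= rhs + 1e-9
    print("Complex Cauchy-Schwarz verified on samples")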

L2

For the inner product space of square-integrable complex-valued functions, the following inequality holds:

    $\left| \int_{\mathbb{R}^n} f(x) \overline{g(x)} \, dx \right|^2 \leq \int_{\mathbb{R}^n} |f(x)|^2 \, dx \int_{\mathbb{R}^n} |g(x)|^2 \, dx.$

The Hölder inequality is a generalization of this.
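Because Riemann sums of the two sides are themselves instances of the finite-dimensional inequality, the L2 form can be illustrated with a simple discretization. The sketch below is plain Python; the functions f and g are arbitrary examples chosen for the illustration, not taken from the article:

    import cmath

    N = 10_000                     # number of sample points on [0, 1]
    dx = 1.0 / N
    xs = [(k + 0.5) * dx for k in range(N)]

    f = [cmath.exp(1j * 3 * x) * x for x in xs]   # example f(x) = x * e^{3ix}
    g = [complex(x * x, -x) for x in xs]          # example g(x) = x^2 - ix

    inner = sum(fk * gk.conjugate() for fk, gk in zip(f, g)) * dx
    norm_f_sq = sum(abs(fk) ** 2 for fk in f) * dx
    norm_g_sq = sum(abs(gk) ** 2 for gk in g) * dx

    print(abs(inner) ** 2, "<=", norm_f_sq * norm_g_sq)
    assert abs(inner) ** 2 <= norm_f_sq * norm_g_sq + 1e-9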

Applications

Analysis

In any inner product space, the triangle inequality is a consequence of the Cauchy–Schwarz inequality, as is now shown:

    $\|u + v\|^2 = \langle u + v, u + v \rangle = \|u\|^2 + \langle u, v \rangle + \langle v, u \rangle + \|v\|^2 = \|u\|^2 + 2 \operatorname{Re} \langle u, v \rangle + \|v\|^2 \leq \|u\|^2 + 2 |\langle u, v \rangle| + \|v\|^2 \leq \|u\|^2 + 2 \|u\| \|v\| + \|v\|^2 = (\|u\| + \|v\|)^2.$

Taking square roots gives the triangle inequality:

    $\|u + v\| \leq \|u\| + \|v\|.$
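Each step of the chain above can be checked numerically. The sketch below (plain Python with complex vectors, illustrative only) verifies the intermediate identity and the resulting triangle inequality:

    import math
    import random

    def cdot(u, v):
        return sum(uk * vk.conjugate() for uk, vk in zip(u, v))

    def norm(u):
        return math.sqrt(cdot(u, u).real)

    random.seed(4)
    for _ in range(1000):
        n = random.randint(1, 5)
        u = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(n)]
        v = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(n)]
        w = [uk + vk for uk, vk in zip(u, v)]
        # ||u + v||^2 = ||u||^2 + 2 Re<u, v> + ||v||^2 <= (||u|| + ||v||)^2
        lhs = norm(w) ** 2
        middle = norm(u) ** 2 + 2 * cdot(u, v).real + norm(v) ** 2
        assert abs(lhs - middle) < 1e-9
        assert norm(w) <= norm(u) + norm(v) + 1e-9
    print("Triangle inequality verified on samples")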

The Cauchy–Schwarz inequality is used to prove that the inner product is a continuous function with respect to the topology induced by the inner product itself.[11][12]

Geometry

The Cauchy–Schwarz inequality allows one to extend the notion of "angle between two vectors" to any real inner-product space by defining:[13][14]

    $\cos \theta_{uv} = \frac{\langle u, v \rangle}{\|u\| \|v\|}.$

The Cauchy–Schwarz inequality proves that this definition is sensible, by showing that the right-hand side lies in the interval [−1, 1] and justifies the notion that (real) Hilbert spaces are simply generalizations of the Euclidean space. It can also be used to define an angle in complex inner-product spaces, by taking the absolute value or the real part of the right-hand side,[15][16] as is done when extracting a metric from quantum fidelity.
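In code, the definition above translates directly into an angle function for real vectors. The following is a minimal sketch in plain Python; the clamp to [−1, 1] guards only against floating-point round-off, since the Cauchy–Schwarz inequality already guarantees the quotient lies in that interval exactly:

    import math

    def angle(u, v):
        """Angle between two nonzero real vectors, in radians."""
        dot = sum(ui * vi for ui, vi in zip(u, v))
        norm_u = math.sqrt(sum(ui * ui for ui in u))
        norm_v = math.sqrt(sum(vi * vi for vi in v))
        cos_theta = dot / (norm_u * norm_v)
        # Cauchy-Schwarz guarantees |cos_theta| <= 1; clamp only for round-off.
        cos_theta = max(-1.0, min(1.0, cos_theta))
        return math.acos(cos_theta)

    print(angle([1, 0, 0], [1, 1, 0]))      # ~0.785 rad (45 degrees)
    print(angle([1, 2, 3], [-1, -2, -3]))   # ~3.1416 rad (opposite directions)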

Probability theory

Let $X$ and $Y$ be random variables, then the covariance inequality[17][18] is given by:

    $\operatorname{Var}(Y) \geq \frac{\operatorname{Cov}(Y, X)^2}{\operatorname{Var}(X)}.$

After defining an inner product on the set of random variables using the expectation of their product,

    $\langle X, Y \rangle := \operatorname{E}(XY),$

the Cauchy–Schwarz inequality becomes

    $|\operatorname{E}(XY)|^2 \leq \operatorname{E}(X^2) \operatorname{E}(Y^2).$

To prove the covariance inequality using the Cauchy–Schwarz inequality, let $\mu = \operatorname{E}(X)$ and $\nu = \operatorname{E}(Y),$ then

    $|\operatorname{Cov}(X, Y)|^2 = |\operatorname{E}((X - \mu)(Y - \nu))|^2 = |\langle X - \mu, Y - \nu \rangle|^2 \leq \langle X - \mu, X - \mu \rangle \, \langle Y - \nu, Y - \nu \rangle = \operatorname{E}((X - \mu)^2) \, \operatorname{E}((Y - \nu)^2) = \operatorname{Var}(X) \operatorname{Var}(Y),$

where $\operatorname{Var}$ denotes variance and $\operatorname{Cov}$ denotes covariance.
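A quick empirical check of the covariance inequality with sample statistics (plain Python, illustrative only; the helper names mean and cov are hypothetical):

    import random

    def mean(xs):
        return sum(xs) / len(xs)

    def cov(xs, ys):
        mx, my = mean(xs), mean(ys)
        return mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

    random.seed(5)
    n = 100_000
    x = [random.gauss(0, 1) for _ in range(n)]
    # Y depends on X plus independent noise, so Cov(X, Y) is nonzero.
    y = [2 * xi + random.gauss(0, 3) for xi in x]

    var_x, var_y, cov_xy = cov(x, x), cov(y, y), cov(x, y)
    print(cov_xy ** 2, "<=", var_x * var_y)
    assert cov_xy ** 2 <= var_x * var_y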

Proofs

There are many different proofs[19] of the Cauchy–Schwarz inequality other than those given below.[5][7] When consulting other sources, there are often two sources of confusion. First, some authors define ⟨⋅,⋅⟩ to be linear in the second argument rather than the first. Second, some proofs are only valid when the field is $\mathbb{R}$ and not $\mathbb{C}$.[20]

This section gives two proofs of the following theorem:

Cauchy–Schwarz inequality — Let $u$ and $v$ be arbitrary vectors in an inner product space over the scalar field $\mathbb{F},$ where $\mathbb{F}$ is the field of real numbers $\mathbb{R}$ or complex numbers $\mathbb{C}.$ Then

    $|\langle u, v \rangle| \leq \|u\| \|v\|$    (Cauchy–Schwarz Inequality)

with equality holding in the Cauchy–Schwarz Inequality if and only if $u$ and $v$ are linearly dependent.

Moreover, if $|\langle u, v \rangle| = \|u\| \|v\|$ and $v \neq 0,$ then $u = \dfrac{\langle u, v \rangle}{\|v\|^2} v.$

In both of the proofs given below, the proof in the trivial case where at least one of the vectors is zero (or equivalently, in the case where $\|u\| \|v\| = 0$) is the same. It is presented immediately below only once to reduce repetition. It also includes the easy part of the proof of the Equality Characterization given above; that is, it proves that if $u$ and $v$ are linearly dependent then $|\langle u, v \rangle| = \|u\| \|v\|.$

Proof of the trivial parts: Case where a vector is $\mathbf{0}$ and also one direction of the Equality Characterization

By definition, $u$ and $v$ are linearly dependent if and only if one is a scalar multiple of the other. If $u = c v$ where $c$ is some scalar then

    $|\langle u, v \rangle| = |\langle c v, v \rangle| = |c| \, |\langle v, v \rangle| = |c| \|v\| \|v\| = \|c v\| \|v\| = \|u\| \|v\|,$

which shows that equality holds in the Cauchy–Schwarz Inequality. The case where $v = c u$ for some scalar $c$ follows from the previous case:

    $|\langle u, v \rangle| = |\langle v, u \rangle| = \|v\| \|u\|.$

In particular, if at least one of $u$ and $v$ is the zero vector then $u$ and $v$ are necessarily linearly dependent (for example, if $u = \mathbf{0}$ then $u = c v$ where $c = 0$), so the above computation shows that the Cauchy-Schwarz inequality holds in this case.

Consequently, the Cauchy–Schwarz inequality needs to be proven only for non-zero vectors, and only the non-trivial direction of the Equality Characterization must be shown.

Proof via the Pythagorean theorem

The special case of $v = \mathbf{0}$ was proven above, so it is henceforth assumed that $v \neq \mathbf{0}.$ Let

    $z := u - \frac{\langle u, v \rangle}{\langle v, v \rangle} v.$

It follows from the linearity of the inner product in its first argument that:

    $\langle z, v \rangle = \left\langle u - \frac{\langle u, v \rangle}{\langle v, v \rangle} v, \, v \right\rangle = \langle u, v \rangle - \frac{\langle u, v \rangle}{\langle v, v \rangle} \langle v, v \rangle = 0.$

Therefore, $z$ is a vector orthogonal to the vector $v$ (indeed, $z$ is the projection of $u$ onto the plane orthogonal to $v$). We can thus apply the Pythagorean theorem to

    $u = \frac{\langle u, v \rangle}{\langle v, v \rangle} v + z,$

which gives

    $\|u\|^2 = \left| \frac{\langle u, v \rangle}{\langle v, v \rangle} \right|^2 \|v\|^2 + \|z\|^2 = \frac{|\langle u, v \rangle|^2}{\|v\|^2} + \|z\|^2 \geq \frac{|\langle u, v \rangle|^2}{\|v\|^2}.$

The Cauchy–Schwarz inequality follows by multiplying by $\|v\|^2$ and then taking the square root. Moreover, if the relation $\geq$ in the above expression is actually an equality, then $\|z\|^2 = 0$ and hence $z = \mathbf{0};$ the definition of $z$ then establishes a relation of linear dependence between $u$ and $v.$ The converse was proved at the beginning of this section, so the proof is complete.
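The construction in this proof can be mirrored numerically. The sketch below (plain Python with complex vectors, illustrative only) builds z, confirms it is orthogonal to v, and checks the resulting Pythagorean decomposition of the squared norm of u:

    import random

    def cdot(u, v):
        """Inner product, linear in the first argument."""
        return sum(uk * vk.conjugate() for uk, vk in zip(u, v))

    random.seed(6)
    for _ in range(500):
        n = random.randint(1, 5)
        u = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(n)]
        v = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(n)]
        if abs(cdot(v, v)) < 1e-12:
            continue
        c = cdot(u, v) / cdot(v, v)          # coefficient of the projection onto v
        z = [uk - c * vk for uk, vk in zip(u, v)]
        assert abs(cdot(z, v)) < 1e-9        # z is orthogonal to v
        # Pythagoras: ||u||^2 = |c|^2 ||v||^2 + ||z||^2 >= |<u,v>|^2 / ||v||^2
        lhs = cdot(u, u).real
        rhs = abs(c) ** 2 * cdot(v, v).real + cdot(z, z).real
        assert abs(lhs - rhs) < 1e-9
        assert lhs + 1e-9 >= abs(cdot(u, v)) ** 2 / cdot(v, v).real
    print("Projection argument verified on samples")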

Proof by analyzing a quadratic

Consider an arbitrary pair of vectors $u, v$. Define the function $p : \mathbb{R} \to \mathbb{R}$ by $p(t) = \langle t \alpha u + v, \, t \alpha u + v \rangle,$ where $\alpha$ is a complex number satisfying $|\alpha| = 1$ and $\alpha \langle u, v \rangle = |\langle u, v \rangle|$. Such an $\alpha$ exists since if $\langle u, v \rangle = 0$ then $\alpha$ can be taken to be $1.$

Since the inner product is positive-definite, $p(t)$ only takes non-negative real values. On the other hand, $p(t)$ can be expanded using the bilinearity of the inner product:

    $p(t) = \|u\|^2 t^2 + 2 |\langle u, v \rangle| \, t + \|v\|^2.$

Thus, $p$ is a polynomial of degree $2$ (unless $u = \mathbf{0},$ which is a case that was checked earlier). Since the sign of $p$ does not change, the discriminant of this polynomial must be non-positive:

    $\Delta = 4 \left( |\langle u, v \rangle|^2 - \|u\|^2 \|v\|^2 \right) \leq 0.$
The conclusion follows.[21]

For the equality case, notice that $\Delta = 0$ happens if and only if $p(t) = (t \|u\| + \|v\|)^2.$ If $t_0 = -\|v\| / \|u\|,$ then $p(t_0) = \langle t_0 \alpha u + v, \, t_0 \alpha u + v \rangle = 0,$ and hence $v = -t_0 \alpha u.$
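The quadratic argument can also be traced numerically. The sketch below (plain Python, illustrative only) builds the phase factor alpha, expands p(t), and checks that the polynomial is non-negative and its discriminant non-positive for random complex vectors:

    import random

    def cdot(u, v):
        return sum(uk * vk.conjugate() for uk, vk in zip(u, v))

    random.seed(7)
    for _ in range(500):
        n = random.randint(1, 5)
        u = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(n)]
        v = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(n)]
        ip = cdot(u, v)
        alpha = 1 if ip == 0 else abs(ip) / ip    # |alpha| = 1, alpha*<u,v> = |<u,v>|
        # p(t) = ||u||^2 t^2 + 2|<u,v>| t + ||v||^2
        a = cdot(u, u).real
        b = 2 * abs(ip)
        c = cdot(v, v).real
        for t in [-3.0, -1.0, 0.0, 0.5, 2.0]:
            w = [t * alpha * uk + vk for uk, vk in zip(u, v)]
            p_t = cdot(w, w).real
            assert abs(p_t - (a * t * t + b * t + c)) < 1e-9
            assert p_t >= -1e-9
        assert b * b - 4 * a * c <= 1e-9          # non-positive discriminant
    print("Quadratic-proof argument verified on samples")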

Generalizations

Various generalizations of the Cauchy–Schwarz inequality exist. Hölder's inequality generalizes it to $L^p$ norms. More generally, it can be interpreted as a special case of the definition of the norm of a linear operator on a Banach space (namely, when the space is a Hilbert space). Further generalizations are in the context of operator theory, e.g. for operator-convex functions and operator algebras, where the domain and/or range are replaced by a C*-algebra or W*-algebra.

An inner product can be used to define a positive linear functional. For example, given a Hilbert space $L^2(m),$ $m$ being a finite measure, the standard inner product gives rise to a positive functional $\varphi$ by $\varphi(g) = \langle g, 1 \rangle.$ Conversely, every positive linear functional $\varphi$ on $L^2(m)$ can be used to define an inner product $\langle f, g \rangle_\varphi := \varphi(g^* f),$ where $g^*$ is the pointwise complex conjugate of $g.$ In this language, the Cauchy–Schwarz inequality becomes[22]

    $|\varphi(g^* f)|^2 \leq \varphi(f^* f) \, \varphi(g^* g),$

which extends verbatim to positive functionals on C*-algebras:

Cauchy–Schwarz inequality for positive functionals on C*-algebras[23][24] — If $\varphi$ is a positive linear functional on a C*-algebra $A,$ then for all $a, b \in A,$ $|\varphi(b^* a)|^2 \leq \varphi(b^* b) \, \varphi(a^* a).$

The next two theorems are further examples in operator algebra.

Kadison–Schwarz inequality[25][26] (named after Richard Kadison) — If $\varphi$ is a unital positive map, then for every normal element $a$ in its domain, we have $\varphi(a^* a) \geq \varphi(a^*) \varphi(a)$ and $\varphi(a^* a) \geq \varphi(a) \varphi(a^*).$

This extends the fact $\varphi(a^* a) \cdot 1 \geq \varphi(a)^* \varphi(a) = |\varphi(a)|^2$ when $\varphi$ is a linear functional. The case when $a$ is self-adjoint, that is, $a = a^*,$ is sometimes known as Kadison's inequality.

Cauchy–Schwarz inequality (modified Schwarz inequality for 2-positive maps[27]) — For a 2-positive map $\varphi$ between C*-algebras, for all $a, b$ in its domain,

    $\varphi(a)^* \varphi(a) \leq \|\varphi(1)\| \, \varphi(a^* a),$ and

    $\|\varphi(a^* b)\|^2 \leq \|\varphi(a^* a)\| \cdot \|\varphi(b^* b)\|.$

Another generalization is a refinement obtained by interpolating between both sides of the Cauchy–Schwarz inequality:

Callebaut's Inequality[28] — For reals $0 \leq s \leq t \leq 1$ and positive reals $a_1, \ldots, a_n, b_1, \ldots, b_n,$

    $\left( \sum_{i=1}^{n} a_i b_i \right)^2 \leq \left( \sum_{i=1}^{n} a_i^{1+s} b_i^{1-s} \right) \left( \sum_{i=1}^{n} a_i^{1-s} b_i^{1+s} \right) \leq \left( \sum_{i=1}^{n} a_i^{1+t} b_i^{1-t} \right) \left( \sum_{i=1}^{n} a_i^{1-t} b_i^{1+t} \right) \leq \left( \sum_{i=1}^{n} a_i^2 \right) \left( \sum_{i=1}^{n} b_i^2 \right).$

This theorem can be deduced from Hölder's inequality.[29] There are also non-commutative versions for operators and tensor products of matrices.[30]

Several matrix versions of the Cauchy–Schwarz inequality and Kantorovich inequality are applied to linear regression models.[31][32]

Notes

Citations

  1. ^ O'Connor, J.J.; Robertson, E.F. "Hermann Amandus Schwarz". University of St Andrews, Scotland.
  2. ^ a b Bityutskov, V. I. (2001) [1994], "Bunyakovskii inequality", Encyclopedia of Mathematics, EMS Press
  3. ^ Ćurgus, Branko. "Cauchy-Bunyakovsky-Schwarz inequality". Department of Mathematics. Western Washington University.
  4. ^ Joyce, David E. "Cauchy's inequality" (PDF). Department of Mathematics and Computer Science. Clark University. Archived (PDF) from the original on 2022-10-09.
  5. ^ a b c Steele, J. Michael (2004). The Cauchy–Schwarz Master Class: an Introduction to the Art of Mathematical Inequalities. The Mathematical Association of America. p. 1. ISBN 978-0521546775. ...there is no doubt that this is one of the most widely used and most important inequalities in all of mathematics.
  6. ^ Strang, Gilbert (19 July 2005). "3.2". Linear Algebra and its Applications (4th ed.). Stamford, CT: Cengage Learning. pp. 154–155. ISBN 978-0030105678.
  7. ^ a b Hunter, John K.; Nachtergaele, Bruno (2001). Applied Analysis. World Scientific. ISBN 981-02-4191-7.
  8. ^ Bachmann, George; Narici, Lawrence; Beckenstein, Edward (2012-12-06). Fourier and Wavelet Analysis. Springer Science & Business Media. p. 14. ISBN 9781461205050.
  9. ^ Hassani, Sadri (1999). Mathematical Physics: A Modern Introduction to Its Foundations. Springer. p. 29. ISBN 0-387-98579-4. Equality holds iff <c|c>=0 or |c>=0. From the definition of |c>, we conclude that |a> and |b> must be proportional.
  10. ^ Axler, Sheldon (2015). Linear Algebra Done Right, 3rd Ed. Springer International Publishing. p. 172. ISBN 978-3-319-11079-0. This inequality is an equality if and only if one of u, v is a scalar multiple of the other.
  11. ^ Bachman, George; Narici, Lawrence (2012-09-26). Functional Analysis. Courier Corporation. p. 141. ISBN 9780486136554.
  12. ^ Swartz, Charles (1994-02-21). Measure, Integration and Function Spaces. World Scientific. p. 236. ISBN 9789814502511.
  13. ^ Ricardo, Henry (2009-10-21). A Modern Introduction to Linear Algebra. CRC Press. p. 18. ISBN 9781439894613.
  14. ^ Banerjee, Sudipto; Roy, Anindya (2014-06-06). Linear Algebra and Matrix Analysis for Statistics. CRC Press. p. 181. ISBN 9781482248241.
  15. ^ Valenza, Robert J. (2012-12-06). Linear Algebra: An Introduction to Abstract Mathematics. Springer Science & Business Media. p. 146. ISBN 9781461209010.
  16. ^ Constantin, Adrian (2016-05-21). Fourier Analysis with Applications. Cambridge University Press. p. 74. ISBN 9781107044104.
  17. ^ Mukhopadhyay, Nitis (2000-03-22). Probability and Statistical Inference. CRC Press. p. 150. ISBN 9780824703790.
  18. ^ Keener, Robert W. (2010-09-08). Theoretical Statistics: Topics for a Core Course. Springer Science & Business Media. p. 71. ISBN 9780387938394.
  19. ^ Wu, Hui-Hua; Wu, Shanhe (April 2009). "Various proofs of the Cauchy-Schwarz inequality" (PDF). Octogon Mathematical Magazine. 17 (1): 221–229. ISBN 978-973-88255-5-0. ISSN 1222-5657. Archived (PDF) from the original on 2022-10-09. Retrieved 18 May 2016.
  20. ^ Aliprantis, Charalambos D.; Border, Kim C. (2007-05-02). Infinite Dimensional Analysis: A Hitchhiker's Guide. Springer Science & Business Media. ISBN 9783540326960.
  21. ^ Rudin, Walter (1987) [1966]. Real and Complex Analysis (3rd ed.). New York: McGraw-Hill. ISBN 0070542341.
  22. ^ Faria, Edson de; Melo, Welington de (2010-08-12). Mathematical Aspects of Quantum Field Theory. Cambridge University Press. p. 273. ISBN 9781139489805.
  23. ^ Lin, Huaxin (2001-01-01). An Introduction to the Classification of Amenable C*-algebras. World Scientific. p. 27. ISBN 9789812799883.
  24. ^ Arveson, W. (2012-12-06). An Invitation to C*-Algebras. Springer Science & Business Media. p. 28. ISBN 9781461263715.
  25. ^ Størmer, Erling (2012-12-13). Positive Linear Maps of Operator Algebras. Springer Monographs in Mathematics. Springer Science & Business Media. ISBN 9783642343698.
  26. ^ Kadison, Richard V. (1952-01-01). "A Generalized Schwarz Inequality and Algebraic Invariants for Operator Algebras". Annals of Mathematics. 56 (3): 494–503. doi:10.2307/1969657. JSTOR 1969657.
  27. ^ Paulsen, Vern (2002). Completely Bounded Maps and Operator Algebras. Cambridge Studies in Advanced Mathematics. Vol. 78. Cambridge University Press. p. 40. ISBN 9780521816694.
  28. ^ Callebaut, D.K. (1965). "Generalization of the Cauchy–Schwarz inequality". J. Math. Anal. Appl. 12 (3): 491–494. doi:10.1016/0022-247X(65)90016-8.
  29. ^ Callebaut's inequality. Entry in the AoPS Wiki.
  30. ^ Moslehian, M.S.; Matharu, J.S.; Aujla, J.S. (2011). "Non-commutative Callebaut inequality". Linear Algebra and Its Applications. 436 (9): 3347–3353. arXiv:1112.3003. doi:10.1016/j.laa.2011.11.024. S2CID 119592971.
  31. ^ Liu, Shuangzhe; Neudecker, Heinz (1999). "A survey of Cauchy-Schwarz and Kantorovich-type matrix inequalities". Statistical Papers. 40: 55–73. doi:10.1007/BF02927110. S2CID 122719088.
  32. ^ Liu, Shuangzhe; Trenkler, Götz; Kollo, Tõnu; von Rosen, Dietrich; Baksalary, Oskar Maria (2023). "Professor Heinz Neudecker and matrix differential calculus". Statistical Papers. doi:10.1007/s00362-023-01499-w. S2CID 263661094.
