Orthonormal basis


In mathematics, particularly linear algebra, an orthonormal basis for an inner product space $V$ with finite dimension is a basis for $V$ whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other.[1][2][3] For example, the standard basis for a Euclidean space $\mathbb{R}^n$ is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for $\mathbb{R}^n$ arises in this fashion.

For a general inner product space $V$, an orthonormal basis can be used to define normalized orthogonal coordinates on $V$. Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of a finite-dimensional inner product space to the study of $\mathbb{R}^n$ under the dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary basis using the Gram–Schmidt process.
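As a concrete illustration of the Gram–Schmidt process, the following sketch (Python with NumPy; the function name gram_schmidt and the example vectors are illustrative choices, not part of the article) orthonormalizes an arbitrary basis of $\mathbb{R}^3$:

    import numpy as np

    def gram_schmidt(vectors):
        # Classical Gram-Schmidt: orthonormalize a list of linearly independent vectors.
        basis = []
        for v in vectors:
            w = np.array(v, dtype=float)
            # Subtract the projections onto the previously accepted basis vectors.
            for b in basis:
                w = w - np.dot(w, b) * b
            # Normalize to unit length.
            basis.append(w / np.linalg.norm(w))
        return basis

    # Example: an arbitrary basis of R^3 turned into an orthonormal one.
    e1, e2, e3 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
    print(np.dot(e1, e2), np.dot(e1, e3), np.dot(e2, e3))              # all approximately 0
    print(np.linalg.norm(e1), np.linalg.norm(e2), np.linalg.norm(e3))  # all approximately 1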

In functional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional) inner product spaces.[4] Given a pre-Hilbert space $H$, an orthonormal basis for $H$ is an orthonormal set of vectors with the property that every vector in $H$ can be written as an infinite linear combination of the vectors in the basis. In this case, the orthonormal basis is sometimes called a Hilbert basis for $H$. Note that an orthonormal basis in this sense is not generally a Hamel basis, since infinite linear combinations are required.[5] Specifically, the linear span of the basis must be dense in $H$, although not necessarily the entire space.

If we go on to Hilbert spaces, a non-orthonormal set of vectors having the same linear span as an orthonormal basis may not be a basis at all. For instance, any square-integrable function on the interval $[-1, 1]$ can be expressed (almost everywhere) as an infinite sum of Legendre polynomials (an orthonormal basis), but not necessarily as an infinite sum of the monomials $x^n$.
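The Legendre example can be checked numerically. The sketch below (Python with NumPy; the scaling factor $\sqrt{(2n+1)/2}$, which makes the standard Legendre polynomials orthonormal on $[-1, 1]$, and the quadrature order are assumptions of the example) verifies that the inner products of the first few normalized Legendre polynomials approximate the Kronecker delta:

    import numpy as np
    from numpy.polynomial import legendre

    # Gauss-Legendre quadrature nodes and weights on [-1, 1].
    x, w = legendre.leggauss(50)

    def normalized_legendre(n, x):
        # P_n scaled so that its L^2 norm on [-1, 1] equals 1.
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0
        return np.sqrt((2 * n + 1) / 2) * legendre.legval(x, coeffs)

    # Inner products <P_m, P_n> should be 1 when m == n and 0 otherwise.
    for m in range(3):
        for n in range(3):
            inner = np.sum(w * normalized_legendre(m, x) * normalized_legendre(n, x))
            print(m, n, round(inner, 6))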

A different generalisation is to pseudo-inner product spaces, finite-dimensional vector spaces $M$ equipped with a non-degenerate symmetric bilinear form known as the metric tensor. In such a basis, the metric takes the form $\operatorname{diag}(+1, \ldots, +1, -1, \ldots, -1)$ with $p$ positive ones and $q$ negative ones.
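For instance, with the convention used above (positive entries listed first, which is one of two sign conventions in common use) and $(p, q) = (1, 3)$, the metric of Minkowski space takes the matrix form

$$\eta = \operatorname{diag}(+1, -1, -1, -1) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.$$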

YouTube Encyclopedic

  • Introduction to orthonormal bases | Linear Algebra | Khan Academy
  • Linear Algebra: Orthonormal Basis
  • Gram Schmidt Method, Orthogonal and Orthonormal Basis Example
Transcription

Let's say I've got me a set of vectors. So let me call my set B. And let's say I have the vectors v1, v2, all the way through vk. Now let's say this isn't just any set of vectors. There's some interesting things about these vectors. The first thing is that all of these guys have length of 1. So we could say the length of vector vi is equal to 1 for i is equal to-- well we could say between 1 and k or i is equal to 1, 2, all the way to k. All of these guys have length equal 1. Or another way to say it is that the squares of their lengths are 1. The square of the length of vi is equal to 1. Or vi dot vi is equal to 1 for i is any of these guys. Any i can be 1, 2, 3, all the way to k. So that's the first interesting thing about it. Let me write it in regular words. All the vectors in B have length 1. Or another way to say it is that they've all been normalized. Or they're all unit vectors. Normalized vectors are vectors whose lengths you've made 1. You've turned them into unit vectors. They have all been normalized. So that's the first interesting thing about my set, B.

And then the next interesting thing about my set B is that all of the vectors are orthogonal to each other. So if you dot a vector with itself, you get length 1. But if you take a vector and dot it with any other vector-- if you take vi and you were to dot it with vj. So if you took v2 and dotted it with v1, it's going to be equal to 0 for i does not equal j. All of these guys are orthogonal. Let me write that down. All of the vectors are orthogonal to each other. And of course they're not orthogonal to themselves because they all have length 1. So if you take the dot product with itself, you get 1. If you take a dot product with some other guy in your set you're going to get 0. Maybe I can write it this way. vi dot vj for all the members of the set is going to be equal to 0 for i does not equal j. And then if these guys are the same vector-- I'm dotting with myself-- I'm going to have length 1. So it would equal length 1 for i is equal to j. So I've got a special set. All of these guys have length 1 and they're all orthogonal with each other. They're normalized and they're all orthogonal. And we have a special word for this. This is called an orthonormal set. So B is an orthonormal set. Normal for normalized. Everything is orthogonal. They're all orthogonal relative to each other. And everything has been normalized. Everything has length 1.

Now, the first interesting thing about an orthonormal set is that it's also going to be a linearly independent set. So if B is orthonormal, B is also going to be linearly independent. And how can I show that to you? Well let's assume that it isn't linearly independent. Let me take vi, let me take vj that are members of my set. And let's assume that i does not equal j. Now, we already know that it's an orthonormal set. So vi dot vj is going to be equal to 0. They are orthogonal. These are two vectors in my set. Now, let's assume that they are linearly dependent. I want to prove that they are linearly independent and the way I'm going to prove that is by assuming they are linearly dependent and then arriving at a contradiction. So let's assume that vi and vj are linearly dependent. Well then that means that I can represent one of these guys as a scalar multiple of the other. And I can pick either way.
So let's just say, for the sake of argument, that I can represent vi-- let's say that vi is equal to some scalar c times vj. That's what linear dependency means. That one of them can be represented as a scalar multiple of the other. Well if this is true, then I can just substitute this back in for vi. And what do I get? I get c times vj-- which is just another way of writing vi because I assumed linear dependence. That dot vj has got to be equal to 0. This guy was vi. This is vj. They are orthogonal to each other. But this right here is just equal to c times vj dot vj which is just equal to c times the length of vj squared. And that has to equal 0. They are orthogonal so that has to equal 0. Which implies that the length of vj has to be equal to 0. If we assume that this is some non-zero multiple, and this has to be some non-zero multiple-- I should have written it there-- c does not equal 0. Why does this have to be a non-zero multiple? Because these were both non-zero vectors. This is a non-zero vector. So this guy can't be 0. This guy has length 1. So if this is a non-zero vector, there's no way that I can just put a 0 here. Because if I put a 0 then I would get a 0 vector. So c can't be 0. So if c isn't 0, then this guy right here has to be 0. And so we get that the length of vj is 0. Which we know is false. The length of vj is 1. This is an orthonormal set. The lengths of all of the members of B are 1. So we reach a contradiction. This is our contradiction. vj is not the 0 vector. It has length 1. Contradiction. So if you have a bunch of vectors that are orthogonal and they're non-zero, they have to be linearly independent. Which is pretty interesting.

So if I have this set, this orthonormal set right here, it's also a set of linearly independent vectors, so it can be a basis for a subspace. So let's say that B is the basis for some subspace, V. Or we could say that V is equal to the span of v1, v2, all the way to vk. Then we'd call B-- if it was just a set, we'd call it an orthonormal set, but it can be an orthonormal basis when it spans some subspace. So we can say that B is an orthonormal basis for V.

Now everything I've done is very abstract, but let me do some quick examples for you. Just so you understand what an orthonormal basis looks like with real numbers. So let's say I have two vectors. Let's say I have the vector v1-- say we're dealing in R3, so it's 1/3, 2/3, and 2/3. And let's say I have another vector, v2, that is equal to 2/3, 1/3, and minus 2/3. And let's say that B is the set of v1 and v2. So the first question is, what are the lengths of these guys? So let's take the length. The length of v1 squared is just v1 dot v1. Which is just 1/3 squared, which is just 1/9. Plus 2/3 squared, which is 4/9. Plus 2/3 squared, which is 4/9. Which is equal to 1. So if the length squared is 1, then that tells us that the length of our first vector is equal to 1. If the square of the length is 1, you take the square root, so the length is 1. What about vector 2? Well the length of vector 2 squared is equal to v2 dot v2. Which is equal to-- let's see, 2/3 squared is 4/9-- plus 1/3 squared is 1/9. Plus 2/3 squared is 4/9. So that is 9/9, which is equal to 1. Which tells us that the length of v2, the length of vector v2, is equal to 1. So we know that these guys are definitely normalized. We can call this a normalized set. But is it an orthonormal set? Are these guys orthogonal to each other? And to test that out we just take their dot product.
So v1 dot v2 is equal to 1/3 times 2/3, which is 2/9. Plus 2/3 times 1/3, which is 2/9. Plus 2/3 times minus 2/3, which is minus 4/9. 2 plus 2 minus 4 is 0. So it equals 0. So these guys are indeed orthogonal. So B is an orthonormal set. And if I have some subspace, let's say that V is equal to the span of v1 and v2, then we can say that B is the basis for V, or we could say that B is an orthonormal basis for V.
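The arithmetic in the transcript can be checked directly. A minimal sketch (Python with NumPy, using the two vectors from the example above):

    import numpy as np

    v1 = np.array([1/3, 2/3, 2/3])
    v2 = np.array([2/3, 1/3, -2/3])

    print(np.dot(v1, v1))  # approximately 1 -> v1 is a unit vector
    print(np.dot(v2, v2))  # approximately 1 -> v2 is a unit vector
    print(np.dot(v1, v2))  # approximately 0 -> v1 and v2 are orthogonal, so {v1, v2} is an orthonormal set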

Examples

  • For $\mathbb{R}^3$, the set of vectors $\{e_1 = (1, 0, 0),\; e_2 = (0, 1, 0),\; e_3 = (0, 0, 1)\}$ is called the standard basis and forms an orthonormal basis of $\mathbb{R}^3$ with respect to the standard dot product. Note that both the standard basis and standard dot product rely on viewing $\mathbb{R}^3$ as the Cartesian product $\mathbb{R} \times \mathbb{R} \times \mathbb{R}$.
    Proof: A straightforward computation shows that the inner products of these vectors equal zero, $\langle e_1, e_2 \rangle = \langle e_1, e_3 \rangle = \langle e_2, e_3 \rangle = 0$, and that each of their magnitudes equals one, $\lVert e_1 \rVert = \lVert e_2 \rVert = \lVert e_3 \rVert = 1$. This means that $\{e_1, e_2, e_3\}$ is an orthonormal set. All vectors $(x, y, z) \in \mathbb{R}^3$ can be expressed as a sum of the basis vectors scaled, $(x, y, z) = x e_1 + y e_2 + z e_3$,
    so $\{e_1, e_2, e_3\}$ spans $\mathbb{R}^3$ and hence must be a basis. It may also be shown that the standard basis rotated about an axis through the origin or reflected in a plane through the origin also forms an orthonormal basis of $\mathbb{R}^3$.
  • For $\mathbb{R}^n$, the standard basis and inner product are similarly defined. Any other orthonormal basis is related to the standard basis by an orthogonal transformation in the group O(n).
  • For pseudo-Euclidean space $\mathbb{R}^{p,q}$, an orthogonal basis $\{e_\mu\}$ with metric $\eta$ instead satisfies $\eta(e_\mu, e_\nu) = 0$ if $\mu \neq \nu$, $\eta(e_\mu, e_\mu) = +1$ if $1 \leq \mu \leq p$, and $\eta(e_\mu, e_\mu) = -1$ if $p + 1 \leq \mu \leq p + q$. Any two orthonormal bases are related by a pseudo-orthogonal transformation. In the case $(p, q) = (1, 3)$, these are Lorentz transformations.
  • The set $\{f_n : n \in \mathbb{Z}\}$ with $f_n(x) = \exp(2 \pi i n x)$, where $\exp$ denotes the exponential function, forms an orthonormal basis of the space of functions on $[0, 1]$ with finite Lebesgue integrals, $L^2([0, 1])$, with respect to the 2-norm. This is fundamental to the study of Fourier series (a numerical check of this orthonormality appears after this list).
  • The set $\{e_b : b \in B\}$ with $e_b(c) = 1$ if $b = c$ and $e_b(c) = 0$ otherwise forms an orthonormal basis of $\ell^2(B)$.
  • Eigenfunctions of a Sturm–Liouville eigenproblem.
  • The column vectors of an orthogonal matrix form an orthonormal set.
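As referenced in the complex-exponential example above, orthonormality of the functions $f_n$ can be checked numerically. A sketch (Python with NumPy; the grid size and the use of a uniform Riemann sum on $[0, 1)$, which reproduces the integral exactly for these integrands when $|m - n|$ is smaller than the number of sample points, are assumptions of the example):

    import numpy as np

    N = 1024                 # number of sample points on [0, 1)
    x = np.arange(N) / N

    def f(n, x):
        # f_n(x) = exp(2*pi*i*n*x), the complex exponential from the example above.
        return np.exp(2j * np.pi * n * x)

    # <f_m, f_n> = integral of f_m * conj(f_n) over [0, 1]; should be 1 if m == n, else 0.
    for m in range(-2, 3):
        for n in range(-2, 3):
            inner = np.mean(f(m, x) * np.conj(f(n, x)))
            print(m, n, np.round(inner, 6))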

Basic formula

If $B$ is an orthogonal basis of $H$, then every element $x \in H$ may be written as
$$x = \sum_{b \in B} \frac{\langle x, b \rangle}{\lVert b \rVert^2}\, b.$$

When $B$ is orthonormal, this simplifies to
$$x = \sum_{b \in B} \langle x, b \rangle\, b$$

and the square of the norm of $x$ can be given by
$$\lVert x \rVert^2 = \sum_{b \in B} |\langle x, b \rangle|^2.$$

Even if $B$ is uncountable, only countably many terms in this sum will be non-zero, and the expression is therefore well-defined. This sum is also called the Fourier expansion of $x$, and the formula is usually known as Parseval's identity.

If $B$ is an orthonormal basis of $H$, then $H$ is isomorphic to $\ell^2(B)$ in the following sense: there exists a bijective linear map $\Phi : H \to \ell^2(B)$ such that
$$\langle \Phi(x), \Phi(y) \rangle = \langle x, y \rangle \quad \text{for all } x, y \in H.$$
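The expansion and Parseval's identity can be illustrated numerically in a finite-dimensional case. A sketch (Python with NumPy; obtaining the orthonormal basis from a QR decomposition of a random matrix, and the random seed, are assumptions of the example rather than anything in the article):

    import numpy as np

    rng = np.random.default_rng(0)

    # Columns of Q form an orthonormal basis of R^5 (QR decomposition of a random matrix).
    Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
    x = rng.standard_normal(5)

    # Fourier coefficients <x, b> for each basis vector b (the columns of Q).
    coeffs = Q.T @ x

    # x is recovered as the sum of <x, b> * b ...
    x_reconstructed = Q @ coeffs
    print(np.allclose(x, x_reconstructed))                # True

    # ... and Parseval's identity: ||x||^2 equals the sum of squared coefficients.
    print(np.allclose(np.dot(x, x), np.sum(coeffs**2)))   # True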

Incomplete orthogonal sets

Given a Hilbert space $H$ and a set $S$ of mutually orthogonal vectors in $H$, we can take the smallest closed linear subspace $V$ of $H$ containing $S$. Then $S$ will be an orthogonal basis of $V$, which may of course be smaller than $H$ itself, being an incomplete orthogonal set, or be $H$ when it is a complete orthogonal set.
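In finite dimensions the same picture can be made concrete. A sketch (Python with NumPy; the two orthonormal vectors and the test vector are illustrative choices): the set spans only a proper subspace $V$ of $\mathbb{R}^3$, and the residual after projecting onto $V$ is orthogonal to every vector of the set.

    import numpy as np

    # An incomplete orthonormal set S in R^3: it spans only a 2-dimensional subspace V.
    s1 = np.array([1.0, 0.0, 0.0])
    s2 = np.array([0.0, 1.0, 0.0])

    x = np.array([3.0, -2.0, 5.0])

    # Orthogonal projection of x onto V = span(S): sum of <x, s> s over s in S.
    proj = x @ s1 * s1 + x @ s2 * s2
    residual = x - proj

    # The residual is orthogonal to every vector of S, so S is an orthogonal basis of V
    # but not of the whole space R^3.
    print(proj)                                  # [ 3. -2.  0.]
    print(residual @ s1, residual @ s2)          # 0.0 0.0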

Existence

Using Zorn's lemma and the Gram–Schmidt process (or more simply well-ordering and transfinite recursion), one can show that every Hilbert space admits an orthonormal basis;[6] furthermore, any two orthonormal bases of the same space have the same cardinality (this can be proven in a manner akin to that of the proof of the usual dimension theorem for vector spaces, with separate cases depending on whether the larger basis candidate is countable or not). A Hilbert space is separable if and only if it admits a countable orthonormal basis. (One can prove this last statement without using the axiom of choice.)

Choice of basis as a choice of isomorphism

For concreteness we discuss orthonormal bases for a real, $n$-dimensional vector space $V$ with a positive definite symmetric bilinear form $\phi = \langle \cdot, \cdot \rangle$.

One way to view an orthonormal basis with respect to $\phi$ is as a set of vectors $\mathcal{B} = \{e_i\}$, which allow us to write $v = v^i e_i$ for $v \in V$, and $v^i \in \mathbb{R}$ or $(v^i) \in \mathbb{R}^n$. With respect to this basis, the components of $\phi$ are particularly simple: $\phi(e_i, e_j) = \delta_{ij}$.

We can now view the basis as a map $\psi_{\mathcal{B}} : V \to \mathbb{R}^n$ which is an isomorphism of inner product spaces: to make this more explicit we can write
$$\psi_{\mathcal{B}} : (V, \phi) \to (\mathbb{R}^n, \delta_{ij}).$$

Explicitly we can write $(\psi_{\mathcal{B}}(v))^i = e^i(v) = \phi(e_i, v)$, where $e^i$ is the dual basis element to $e_i$.

The inverse is a component map
$$C_{\mathcal{B}} : \mathbb{R}^n \to V, \quad (v^i) \mapsto \sum_{i=1}^{n} v^i e_i.$$

These definitions make it manifest that there is a bijection
$$\{\text{orthonormal bases of } (V, \phi)\} \leftrightarrow \{\text{isomorphisms of inner product spaces } \mathbb{R}^n \to V\}.$$

The space of isomorphisms admits actions of orthogonal groups at either the $V$ side or the $\mathbb{R}^n$ side. For concreteness we fix the isomorphisms to point in the direction $\mathbb{R}^n \to V$, and consider the space of such maps, $\operatorname{Iso}(\mathbb{R}^n \to V)$.

This space admits a left action by the group of isometries of $V$, that is, $R \in \operatorname{GL}(V)$ such that $\phi(\cdot, \cdot) = \phi(R \cdot, R \cdot)$, with the action given by composition: $R * C = R \circ C$.

This space also admits a right action by the group of isometries of $\mathbb{R}^n$, that is, $R_{ij} \in \operatorname{O}(n) \subset \operatorname{Mat}_{n \times n}(\mathbb{R})$, with the action again given by composition: $C * R_{ij} = C \circ R_{ij}$.
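A finite-dimensional sketch of this correspondence (Python with NumPy; the particular positive definite matrix $G$ representing $\phi$ and the construction of one orthonormal basis from a Cholesky factor are assumptions of the example):

    import numpy as np

    # A positive definite symmetric bilinear form phi on R^3, represented by the matrix G:
    # phi(u, v) = u^T G v.
    G = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

    def phi(u, v):
        return u @ G @ v

    # One orthonormal basis with respect to phi: the columns of B = (L^T)^{-1},
    # where G = L L^T is the Cholesky factorization. Then B^T G B = I.
    L = np.linalg.cholesky(G)
    B = np.linalg.inv(L.T)
    print(np.allclose(B.T @ G @ B, np.eye(3)))    # True: phi(e_i, e_j) = delta_ij

    # The component map C_B : R^n -> V sends coordinates c to B @ c, and it is an
    # isometry: phi(B c, B d) equals the standard dot product c . d.
    rng = np.random.default_rng(1)
    c, d = rng.standard_normal(3), rng.standard_normal(3)
    print(np.allclose(phi(B @ c, B @ d), c @ d))  # True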

As a principal homogeneous space

The set of orthonormal bases for $\mathbb{R}^n$ with the standard inner product is a principal homogeneous space or G-torsor for the orthogonal group $G = \operatorname{O}(n)$, and is called the Stiefel manifold $V_n(\mathbb{R}^n)$ of orthonormal $n$-frames.[7]

In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given the space of orthonormal bases, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthonormal basis to any other orthonormal basis.
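A small numerical illustration of this correspondence (Python with NumPy; generating the two orthonormal bases from QR decompositions of random matrices, and the seed, are assumptions of the example): given orthonormal bases $B_1$ and $B_2$ of $\mathbb{R}^3$ stored as matrix columns, the unique orthogonal map taking $B_1$ to $B_2$ is $R = B_2 B_1^{\mathsf{T}}$.

    import numpy as np

    rng = np.random.default_rng(2)

    # Two orthonormal bases of R^3, stored as the columns of B1 and B2.
    B1, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    B2, _ = np.linalg.qr(rng.standard_normal((3, 3)))

    # The unique orthogonal map sending the basis B1 to the basis B2.
    R = B2 @ B1.T
    print(np.allclose(R.T @ R, np.eye(3)))  # True: R is orthogonal
    print(np.allclose(R @ B1, B2))          # True: R carries B1 onto B2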

The other Stiefel manifolds $V_k(\mathbb{R}^n)$ for $k < n$ of incomplete orthonormal bases (orthonormal $k$-frames) are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any $k$-frame can be taken to any other $k$-frame by an orthogonal map, but this map is not uniquely determined.

  • The set of orthonormal bases for $\mathbb{R}^n$ with the standard inner product is a G-torsor for $G = \operatorname{O}(n)$.
  • The set of orthonormal bases for $\mathbb{C}^n$ with the standard Hermitian inner product is a G-torsor for $G = \operatorname{U}(n)$.
  • The set of orthonormal bases for $\mathbb{R}^{p,q}$ is a G-torsor for $G = \operatorname{O}(p, q)$.
  • The set of right-handed orthonormal bases for $\mathbb{R}^n$ is a G-torsor for $G = \operatorname{SO}(n)$.

References

  1. ^ Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
  2. ^ Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
  3. ^ Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
  4. ^ Rudin, Walter (1987). Real and Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1.
  5. ^ Roman 2008, p. 218, ch. 9.
  6. ^ Rynne, Bryan; Youngson, M. A. Linear Functional Analysis. p. 79.
  7. ^ "CU Faculty". engfac.cooper.edu. Retrieved 2021-04-15.
