From Wikipedia, the free encyclopedia

In mathematics, a linear form (also known as a linear functional,[1] a one-form, or a covector) is a linear map[nb 1] from a vector space to its field of scalars (often, the real numbers or the complex numbers).

If V is a vector space over a field k, the set of all linear functionals from V to k is itself a vector space over k with addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space, when a topological dual space is also considered. It is often denoted Hom(V, k),[2] or, when the field k is understood, V*;[3] other notations are also used, such as V′,[4][5] or V∨.[2] When vectors are represented by column vectors (as is common when a basis is fixed), then linear functionals are represented as row vectors, and their values on specific vectors are given by matrix products (with the row vector on the left).
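The row-vector picture can be made concrete with a few lines of code. The following is a minimal sketch in Python with NumPy (the functionals and the test vector are made-up values, not taken from the article) showing linear functionals on R^3 as row vectors, together with the pointwise addition and scalar multiplication that make the dual space a vector space.

    import numpy as np

    # Two linear functionals on R^3, each represented by a row vector: f(x) = a @ x.
    a = np.array([2.0, -1.0, 0.5])   # hypothetical functional f
    b = np.array([1.0,  3.0, 1.0])   # hypothetical functional g
    x = np.array([4.0,  2.0, 6.0])   # a vector of R^3 (a 1-D array here)

    f = lambda v: a @ v
    g = lambda v: b @ v

    # Pointwise addition and scalar multiplication of functionals correspond to
    # adding and scaling their row vectors.
    h_row = 3.0 * a + b              # row vector of the functional 3f + g
    print(3.0 * f(x) + g(x))         # (3f + g)(x) computed pointwise: 43.0
    print(h_row @ x)                 # same value from the combined row vector: 43.0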

YouTube Encyclopedic

  • Linear transformations | Matrix transformations | Linear Algebra | Khan Academy
  • Linear Transformations on Vector Spaces
  • Linear subspaces | Vectors and spaces | Linear Algebra | Khan Academy
  • VECTOR SPACES - LINEAR ALGEBRA
  • Lecture - 2 Introduction to linear vector spaces

Transcription

You now know what a transformation is, so let's introduce a special kind of transformation called a linear transformation. It only makes sense that we have something called a linear transformation because we're studying linear algebra. We already had linear combinations so we might as well have a linear transformation. And a linear transformation, by definition, is a transformation-- which we know is just a function. We could say it's from the set rn to rm -- It might be obvious in the next video why I'm being a little bit particular about that, although they are just arbitrary letters -- where the following two things have to be true. So something is a linear transformation if and only if the following thing is true. Let's say that we have two vectors. Say vector a and let's say vector b, are both members of rn. So they're both in our domain. So then this is a linear transformation if and only if I take the transformation of the sum of our two vectors. If I add them up first, that's equivalent to taking the transformation of each of the vectors and then summing them. That's my first condition for this to be a linear transformation. And the second one is, if I take the transformation of any scaled up version of a vector -- so let me just multiply vector a times some scalar or some real number c. If this is a linear transformation then this should be equal to c times the transformation of a. That seems pretty straightforward. Let's see if we can apply these rules to figure out if some actual transformations are linear or not. So let me define a transformation. Let's say that I have the transformation T. Part of my definition I'm going to tell you, it maps from r2 to r2. So if you give it a 2-tuple, right? Its domain is 2-tuple. So you give it an x1 and an x2 let's say it maps to, so this will be equal to, or it's associated with x1 plus x2. And then let's just say the second tuple is 3 times x1. Or we could have written this more in vector form. This is kind of our tuple form. We could have written it -- and it's good to see all the different notations that you might encounter -- you could write it a transformation of some vector x, where the vector looks like this, x1, x2. Let me put a bracket there. It equals some new vector, x1 plus x2. And then the second component of the new vector would be 3x1. That's a completely legitimate way to express our transformation. And a third way, which I never see, but to me it kind of captures the essence of what a transformation is. It's just a mapping or it's just a function. We could say that the transformation is a mapping from any vector in r2 that looks like this: x1, x2, to-- and I'll do this notation-- a vector that looks like this. x1 plus x2 and then 3x1. All of these statements are equivalent. But our whole point of writing this is to figure out whether T is linearly independent. Sorry, not linearly independent. Whether it's a linear transformation. I was so obsessed with linear independence for so many videos, it's hard to get it out of my brain in this one. Whether it's a linear transformation. So let's test our two conditions. I have them up here. So let's take T of, let's say I have two vectors a and b. They're members of r2. So let me write it. A is equal to a1, a2, and b is equal to b1, b2. Sorry that's not a vector. I have to make sure that those are scalars. These are the components of a vector. And b2. So what is a1 plus b? Sorry, what is vector a plus vector b? Brain's malfunctioning. All right. 
Well, you just add up their components. This is the definition of vector addition. So it's a1 plus b1. Add up the first components. And the second component is just the sum of each of the vector's second components. a2 plus b2. Nothing new here. But what is the transformation of this vector? So the transformation of vector a plus vector b, we could write it like this. That would be the same thing as the transformation of this vector, which is just a1 plus b1 and a2 plus b2. Which we know it equals a vector. It equals this vector. Or what we do is for the first component here, we add up the two components on this side. So the first component here is going to be these two guys added up. So it's a1 plus a2 plus b1 plus b2. And then the second component by our transformation or function definition is just 3 times the first component in our domain, I guess you could say. So it's 3 times the first one. So it's going to be 3 times this first guy. So it's 3a1 plus 3b1. Fair enough. Now what is the transformation individually of a and b? So the transformation of a is equal to the transformation of a -- let me write it this way -- is equal to the transformation of a1 a2 in brackets. That's another way of writing vector a. And what is that equal to? That's our definition of our transformation right up here, so this is going to be equal to the vector a1 plus a2 and then 3 times a1. It just comes straight out of the definition. I essentially just replaced an x with a's. By the same argument, what is the transformation of our vector b? Well, it's just going to be the same thing with the a's replaced by the b's. So the transformation of our vector b is going to be -- b is just b1 b2 -- so it's going to be b1 plus b2. And then the second component in the transformation will be 3 times b1. Now, what is the transformation of vector a plus the transformation of vector b? Well, it's this vector plus that vector. And what is that equal to? Well, this is just pure vector addition so we just add up their components. So it's a1 plus a2 plus b1 plus b2. That's just that component plus that component. The second component is 3a1 and we're going to add it to that second component. So it's 3a1 plus 3b1. Now, we just showed you that if I take the transformations separately of each of the vectors and then add them up, I get the exact same thing as if I took the vectors and added them up first and then took the transformation. So we've met our first criteria. That the transformation of the sum of the vectors is the same thing as the sum of the transformations. Now let's see if this works with a random scalar. So we know what the transformation of a looks like. What does ca look like, first of all? I guess that's a good place to start. c times our vector a is going to be equal to c times a1. And then c times a2. That's our definition of scalar multiplication times a vector. So what's our transformation -- let me go to a new color. What is our -- let me do a color I haven't used in a long time, white. What is our transformation of ca going to be? Well, that's the same thing as our transformation of ca1, ca2 which is equal to a new vector, where the first term -- let's go to our definition -- is you sum the first and second components. And then the second term is 3 times the first component. So our first term you sum them. So it's going to be ca1 plus ca2. And then our second term is 3 times our first term, so it's 3ca1. Now, what is this equal to? This is the same thing. We can view it as factoring out the c. 
This is the same thing as c times the vector a1 plus a2. And then the second component is 3a1. But this thing right here, we already saw. This is the same thing as the transformation of a. So just like that, you see that the transformation of c times our vector a, for any vector a in r2 -- anything in r2 can be represented this way -- is the same thing as c times the transformation of a. So we've met our second condition, that when you -- well, I just stated it, so I don't have to restate it. So we meet both conditions, which tells us that this is a linear transformation. And you might be thinking, OK, Sal, fair enough. How do I know that all transformations aren't linear transformations? Show me something that won't work. And here I'll do a very simple example. Let me define my transformation. Well, I'll do it from r2 to r2 just to kind of compare the two. I could have done it from r to r if I wanted a simpler example. But I'm going to define my transformation. Let's say, my transformation of the vector x1, x2. Let's say it is equal to x1 squared and then 0, just like that. Let me see if this is a linear transformation. So the first question is, what's my transformation of a vector a? So my transformation of a vector a-- where a is just the same a that I did before-- it would look like this. It would look like a1 squared and then a 0. Now, what would be my transformation if I took c times a? Well, this is the same thing as c times a1 and c times a2. And by our transformation definition -- sorry, the transformation of c times this thing right here, because I'm taking the transformation on both sides. And by our transformation definition this will just be equal to a new vector that would be in our codomain, where the first term is just the first term of our input squared. So it's ca1 squared. And the second term is 0. What is this equal to? Let me switch colors. This is equal to c squared a1 squared and this is equal to 0. Now, if we can assume that c does not equal 0, this would be equal to what? Actually, it doesn't even matter. We don't even have to make that assumption. So this is the same thing. This is equal to c squared times the vector a1 squared 0. Which is equal to what? This expression right here is a transformation of a. So this is equal to c squared times the transformation of a. Let me do it in the same color. So what I've just shown you is, if I take the transformation of a vector being multiplied by a scalar quantity first, that that's equal to -- for this T, for this transformation that I've defined right here -- c squared times the transformation of a. And clearly this statement right here, or this choice of transformation, conflicts with this requirement for a linear transformation. If I have a c here I should see a c here. But in our case, I have a c here and I have a c squared here. So clearly this negates that statement. So this is not a linear transformation. And just to get a gut feel if you're just looking at something, whether it's going to be a linear transformation or not, if the transformation just involves linear combinations of the different components of the inputs, you're probably dealing with a linear transformation. If you start seeing things where the components start getting multiplied by each other or you start seeing squares or exponents, you're probably not dealing with a linear transformation. 
And then there are some functions that might be in a bit of a grey area, but it tends to be that just linear combinations are going to lead to a linear transformation. But hopefully that gives you a good sense of things. And this leads up to what I think is one of the neatest outcomes, in the next video.
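To connect the transcript's argument to a quick numeric check, here is a small Python/NumPy sketch (the test vectors and the scalar are arbitrary choices, not from the video). It spot-checks the two conditions for the transformation T(x1, x2) = (x1 + x2, 3 x1) and shows how the non-example S(x1, x2) = (x1^2, 0) fails the scaling condition. A passing spot check does not prove linearity, but a failing one does disprove it.

    import numpy as np

    def T(x):
        # The transformation from the transcript: (x1, x2) -> (x1 + x2, 3*x1).
        return np.array([x[0] + x[1], 3 * x[0]])

    def S(x):
        # The non-example: (x1, x2) -> (x1**2, 0).
        return np.array([x[0] ** 2, 0.0])

    a = np.array([1.0, 2.0])    # arbitrary test vectors
    b = np.array([-3.0, 5.0])
    c = 4.0                     # arbitrary scalar

    print(np.allclose(T(a + b), T(a) + T(b)))   # True: additivity holds
    print(np.allclose(T(c * a), c * T(a)))      # True: homogeneity holds
    print(np.allclose(S(c * a), c * S(a)))      # False: S(c*a) = c**2 * S(a)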

Examples

The constant zero function, mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is surjective (that is, its range is all of k).

  • Indexing into a vector: The second element of a three-vector is given by the one-form [0, 1, 0]. That is, the second element of (x, y, z) is [0, 1, 0] · (x, y, z) = y.
  • Mean: The mean element of an n-vector is given by the one-form [1/n, 1/n, …, 1/n]. That is, mean(v) = [1/n, 1/n, …, 1/n] · v.
  • Sampling: Sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location.
  • Net present value of a net cash flow R(t) is given by the one-form w(t) = (1 + i)^−t, where i is the discount rate. That is, NPV(R) = ⟨w, R⟩ = ∑_{t ≥ 0} R(t) (1 + i)^−t. (A numeric sketch of several of these one-forms follows this list.)
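As promised above, here is a short numeric sketch (Python with NumPy; the vector, cash flow, and discount rate are invented illustration values) of the indexing, mean, and net-present-value examples, each realized as a row vector acting by the dot product.

    import numpy as np

    v = np.array([3.0, 7.0, 2.0])          # an arbitrary three-vector

    # Indexing: the one-form [0, 1, 0] picks out the second element.
    e2 = np.array([0.0, 1.0, 0.0])
    print(e2 @ v)                          # 7.0

    # Mean: the one-form [1/n, ..., 1/n] returns the mean of an n-vector.
    n = len(v)
    mean_form = np.full(n, 1.0 / n)
    print(mean_form @ v, v.mean())         # 4.0  4.0

    # Net present value: with discount rate i, the one-form has entries (1 + i)**(-t).
    cash_flow = np.array([-100.0, 60.0, 60.0])   # hypothetical flows at t = 0, 1, 2
    rate = 0.05
    npv_form = (1.0 + rate) ** -np.arange(len(cash_flow))
    print(npv_form @ cash_flow)            # about 11.56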

Linear functionals in R^n

Suppose that vectors in the real coordinate space R^n are represented as column vectors x = [x_1, …, x_n]^T.

For each row vector a = [a_1, …, a_n] there is a linear functional f_a defined by

f_a(x) = a_1 x_1 + ⋯ + a_n x_n,

and each linear functional on R^n can be expressed in this form.

This can be interpreted as either the matrix product or the dot product of the row vector a and the column vector x: f_a(x) = a · x = a x.
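The claim that every linear functional on R^n arises from a unique row vector can be checked directly: evaluating the functional on the standard basis vectors recovers that row vector. The sketch below (Python with NumPy; the functional f is a hypothetical example, not from the text) does exactly this.

    import numpy as np

    # A hypothetical linear functional on R^4, given as an opaque function.
    def f(x):
        return 2.0 * x[0] - x[1] + 0.5 * x[3]

    n = 4
    # Recover the row vector a with f(x) = a @ x by evaluating f on the standard basis.
    a = np.array([f(e) for e in np.eye(n)])
    print(a)                        # [ 2.  -1.   0.   0.5]

    x = np.array([1.0, 2.0, 3.0, 4.0])
    print(f(x), a @ x)              # 2.0 2.0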

Trace of a square matrix

The trace tr(A) of a square matrix A is the sum of all elements on its main diagonal. Matrices can be multiplied by scalars and two matrices of the same dimension can be added together; these operations make a vector space from the set of all n × n matrices. The trace is a linear functional on this space because tr(sA) = s tr(A) and tr(A + B) = tr(A) + tr(B) for all scalars s and all n × n matrices A and B.
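The two identities can be spot-checked numerically. Below is a minimal Python/NumPy sketch using randomly generated 3 × 3 matrices and an arbitrary scalar (all made up for illustration).

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))     # arbitrary 3x3 matrices
    B = rng.standard_normal((3, 3))
    s = 2.5                             # arbitrary scalar

    # tr(sA) = s*tr(A) and tr(A + B) = tr(A) + tr(B).
    print(np.isclose(np.trace(s * A), s * np.trace(A)))            # True
    print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))  # True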

(Definite) Integration

Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral

I(f) = ∫_a^b f(x) dx

is a linear functional from the vector space C[a, b] of continuous functions on the interval [a, b] to the real numbers. The linearity of I follows from the standard facts about the integral:

I(f + g) = ∫_a^b (f(x) + g(x)) dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx = I(f) + I(g)
I(αf) = ∫_a^b α f(x) dx = α ∫_a^b f(x) dx = α I(f).
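A discretized version of I makes the linearity visible in code. The sketch below (Python with NumPy) approximates the integral over [0, 1] by a Riemann sum; the functions f, g and the scalar are arbitrary choices, and since the Riemann sum is itself a linear functional of the sampled values, the checks pass up to floating-point error.

    import numpy as np

    xs = np.linspace(0.0, 1.0, 10001)
    dx = xs[1] - xs[0]

    def I(func):
        # Riemann-sum approximation of the integral of func over [0, 1].
        return np.sum(func(xs)) * dx

    f = np.sin                     # two arbitrary continuous functions
    g = lambda x: x ** 2
    alpha = 3.0

    print(np.isclose(I(lambda x: f(x) + g(x)), I(f) + I(g)))    # True
    print(np.isclose(I(lambda x: alpha * f(x)), alpha * I(f)))  # True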

Evaluation

Let P_n denote the vector space of real-valued polynomial functions of degree ≤ n defined on an interval [a, b]. If c ∈ [a, b], then let ev_c : P_n → R be the evaluation functional ev_c(f) = f(c).

The mapping f ↦ f(c) is linear since

(f + g)(c) = f(c) + g(c) and (αf)(c) = α f(c).

If x_0, …, x_n are n + 1 distinct points in [a, b], then the evaluation functionals ev_{x_i}, i = 0, …, n, form a basis of the dual space of P_n (Lax (1996) proves this last fact using Lagrange interpolation).
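The basis claim can be made computational: written against the monomial basis 1, x, …, x^n, the evaluation functionals are the rows of a Vandermonde matrix, which is invertible precisely when the points are distinct. Here is a small Python/NumPy sketch with n = 3 and four arbitrarily chosen points.

    import numpy as np

    n = 3                                    # work in P_3 (degree <= 3)
    pts = np.array([-1.0, 0.0, 0.5, 2.0])    # n + 1 distinct points

    # In the basis 1, x, x^2, x^3 the evaluation functionals ev_{x_i} are the
    # rows of the Vandermonde matrix V[i, j] = x_i**j.
    V = np.vander(pts, n + 1, increasing=True)
    print(np.linalg.matrix_rank(V))          # 4: the ev_{x_i} are independent, hence a basis

    # Consequently a polynomial in P_3 is determined by its values at the points.
    coeffs = np.array([1.0, -2.0, 0.0, 3.0])          # hypothetical p(x) = 1 - 2x + 3x^3
    values = V @ coeffs                               # (p(x_0), ..., p(x_3))
    print(np.allclose(np.linalg.solve(V, values), coeffs))   # True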

Non-example

A function f having the equation of a line f(x) = a x + b with b ≠ 0 (for example, f(x) = 1 + 2x) is not a linear functional on R, since it is not linear.[nb 2] It is, however, affine-linear.
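A two-line check makes the failure explicit; the function below is the f(x) = 1 + 2x mentioned above.

    # The affine map from the non-example: f(x) = 1 + 2x.
    f = lambda x: 1 + 2 * x

    print(f(1 + 1))        # 5
    print(f(1) + f(1))     # 6, so additivity fails
    print(f(0))            # 1, but a linear functional must send 0 to 0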

Visualization

Geometric interpretation of a 1-form α as a stack of hyperplanes of constant value, each corresponding to those vectors that α maps to a given scalar value shown next to it along with the "sense" of increase. The zero plane is through the origin.

In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes. This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by Misner, Thorne & Wheeler (1973).

Applications

Application to quadrature

If x_0, …, x_n are n + 1 distinct points in [a, b], then the evaluation functionals ev_{x_i} defined above form a basis of the dual space of P_n, the space of polynomials of degree ≤ n. The integration functional I is also a linear functional on P_n, and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficients a_0, …, a_n for which

I(f) = ∫_a^b f(x) dx = ∑_{i=0}^{n} a_i f(x_i)

for all f ∈ P_n. This forms the foundation of the theory of numerical quadrature.[6]
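The coefficients a_i can be computed by imposing exactness on the monomials. The sketch below (Python with NumPy; the interval [0, 1] and the points 0, 1/2, 1 are illustrative choices) solves the resulting linear system for n = 2 and recovers the familiar Simpson weights.

    import numpy as np

    # Quadrature on P_2 over [0, 1] using the points 0, 1/2, 1.
    pts = np.array([0.0, 0.5, 1.0])
    n = len(pts) - 1

    # Solve for weights a_i with  integral_0^1 x^k dx = sum_i a_i * pts[i]**k,  k = 0..n.
    V = np.vander(pts, n + 1, increasing=True).T   # row k holds pts[i]**k
    moments = 1.0 / np.arange(1, n + 2)            # exact monomial integrals 1/(k+1)
    weights = np.linalg.solve(V, moments)
    print(weights)                                 # [0.1666..., 0.6666..., 0.1666...]: Simpson's rule

    # The resulting functional integrates every polynomial of degree <= 2 exactly.
    p = lambda x: 3 * x ** 2 - x + 2               # hypothetical test polynomial
    print(weights @ p(pts))                        # 2.5, equal to integral_0^1 p(x) dx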

In quantum mechanics

Linear functionals are particularly important in quantum mechanics. Quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information see bra–ket notation.

Distributions

In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.

Dual vectors and bilinear forms

Linear functionals (1-forms) α, β and their sum σ and vectors u, v, w, in 3d Euclidean space. The number of (1-form) hyperplanes intersected by a vector equals the inner product.[7]

Every non-degenerate bilinear form on a finite-dimensional vector space V induces an isomorphism V → V* : v ↦ v* such that

v*(w) := ⟨v, w⟩ for all w ∈ V,

where the bilinear form on V is denoted ⟨·, ·⟩ (for instance, in Euclidean space, ⟨v, w⟩ = v · w is the dot product of v and w).

The inverse isomorphism is V* → V : v* ↦ v, where v is the unique element of V such that

⟨v, w⟩ = v*(w)

for all w ∈ V.

The above defined vector v* ∈ V* is said to be the dual vector of v ∈ V.

In an infinite dimensional Hilbert space, analogous results hold by the Riesz representation theorem. There is a mapping V → V* from V into its continuous dual space V*.
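In the finite-dimensional case the isomorphism is easy to compute. The sketch below (Python with NumPy) uses a made-up positive-definite Gram matrix M to define a non-degenerate bilinear form on R^3 and finds the dual vector of a functional given by an arbitrary row vector a: it is the solution v of M v = a.

    import numpy as np

    # A non-degenerate (here positive-definite) bilinear form <v, w> = v^T M w on R^3.
    M = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])       # hypothetical Gram matrix

    def form(v, w):
        return v @ M @ w

    # A linear functional f(w) = a . w given by an arbitrary row vector a.
    a = np.array([1.0, -2.0, 4.0])

    # Its dual vector v satisfies <v, w> = f(w) for all w, i.e. M v = a.
    v_dual = np.linalg.solve(M, a)

    w = np.array([0.7, 3.0, -1.0])        # arbitrary test vector
    print(np.isclose(form(v_dual, w), a @ w))   # True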

Relationship to bases

Basis of the dual space

Let the vector space V have a basis e_1, e_2, …, e_n, not necessarily orthogonal. Then the dual space V* has a basis ω^1, ω^2, …, ω^n, called the dual basis, defined by the special property that

ω^i(e_j) = 1 if i = j, and 0 otherwise.

Or, more succinctly,

ω^i(e_j) = δ^i_j,

where δ is the Kronecker delta. Here the superscripts of the basis functionals are not exponents but are instead contravariant indices.

A linear functional u belonging to the dual space V* can be expressed as a linear combination of basis functionals, with coefficients ("components") u_i,

u = ∑_{i=1}^{n} u_i ω^i.

Then, applying the functional u to a basis vector e_j yields

u(e_j) = ∑_{i=1}^{n} u_i ω^i(e_j) = ∑_i u_i δ^i_j = u_j,

due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals. Then u(e_j) = u_j.

So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector.
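In coordinates this is a one-line computation: if the basis vectors e_j are the columns of a matrix E, the dual basis functionals (as row vectors) are the rows of the inverse of E. The sketch below (Python with NumPy; the basis and the functional u are invented examples) extracts the components u_i = u(e_i) and reassembles u from them.

    import numpy as np

    # A (non-orthogonal) basis e_1, e_2, e_3 of R^3, stored as the columns of E.
    E = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

    # The dual basis functionals w^1, w^2, w^3, written as row vectors, are the rows
    # of E^{-1}: row i applied to column j of E gives the Kronecker delta.
    W = np.linalg.inv(E)
    print(np.allclose(W @ E, np.eye(3)))     # True: w^i(e_j) = delta^i_j

    # An arbitrary functional u (as a row vector) has components u_i = u(e_i),
    # and it is recovered as the combination sum_i u_i * w^i.
    u = np.array([3.0, -1.0, 2.0])
    components = u @ E                       # (u(e_1), u(e_2), u(e_3))
    print(np.allclose(components @ W, u))    # True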

The dual basis and inner product

When the space V carries an inner product, then it is possible to write explicitly a formula for the dual basis of a given basis. Let V have (not necessarily orthogonal) basis e_1, …, e_n. In three dimensions (n = 3), the dual basis can be written explicitly

ω^i(v) = ( (1/2) ∑_{j=1}^{3} ∑_{k=1}^{3} ε^{ijk} ⟨e_j × e_k, v⟩ ) / ⟨e_1, e_2 × e_3⟩,

for i = 1, 2, 3, where ε is the Levi-Civita symbol and ⟨·, ·⟩ the inner product (or dot product) on V.

In higher dimensions, this generalizes as follows

ω^i(v) = ⋆(e_1 ∧ ⋯ ∧ e_{i−1} ∧ v ∧ e_{i+1} ∧ ⋯ ∧ e_n) / ⋆(e_1 ∧ ⋯ ∧ e_n),

where ⋆ is the Hodge star operator.
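In three dimensions the Levi-Civita expression reduces to cyclic cross products divided by the scalar triple product, which is easy to verify numerically. The sketch below (Python with NumPy; the basis vectors are arbitrary) builds the dual basis this way and checks the defining property against the original basis.

    import numpy as np

    # A non-orthogonal basis of R^3 with the standard dot product.
    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([1.0, 1.0, 0.0])
    e3 = np.array([0.0, 1.0, 2.0])

    vol = e1 @ np.cross(e2, e3)     # e_1 . (e_2 x e_3), nonzero because this is a basis

    # The Levi-Civita formula reduces to these cyclic cross products.
    w1 = np.cross(e2, e3) / vol
    w2 = np.cross(e3, e1) / vol
    w3 = np.cross(e1, e2) / vol

    W = np.vstack([w1, w2, w3])
    E = np.column_stack([e1, e2, e3])
    print(np.allclose(W @ E, np.eye(3)))    # True: w^i(e_j) = delta^i_j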

Over a ring

Modules over a ring are generalizations of vector spaces, which removes the restriction that coefficients belong to a field. Given a module M over a ring R, a linear form on M is a linear map from M to R, where the latter is considered as a module over itself. The space of linear forms is always denoted Hom_R(M, R), whether R is a field or not. It is a right R-module if M is a left R-module.

The existence of "enough" linear forms on a module is equivalent to projectivity.[8]

Dual Basis Lemma — An R-module M is projective if and only if there exists a subset A ⊆ M and linear forms {f_a : a ∈ A} such that, for every x ∈ M, only finitely many f_a(x) are nonzero, and x = ∑_{a ∈ A} f_a(x) a.

Change of field

Suppose that X is a vector space over C. Restricting scalar multiplication to R gives rise to a real vector space[9] X_R called the realification of X. Any vector space X over C is also a vector space over R, endowed with a complex structure; that is, there exists a real vector subspace X_R such that we can (formally) write X = X_R ⊕ X_R i as R-vector spaces.

Real versus complex linear functionals

Every linear functional on X is complex-valued while every linear functional on X_R is real-valued. If dim X ≠ 0, then a linear functional on either one of X or X_R is non-trivial (meaning not identically 0) if and only if it is surjective (because if φ(x) ≠ 0 then for any scalar s, φ((s / φ(x)) x) = s), where the image of a linear functional on X is C while the image of a linear functional on X_R is R. Consequently, the only function on X that is both a linear functional on X and a linear functional on X_R is the trivial functional; in other words, X^# ∩ X_R^# = {0}, where ·^# denotes the space's algebraic dual space. However, every C-linear functional on X is an R-linear operator (meaning that it is additive and homogeneous over R), but unless it is identically 0, it is not an R-linear functional on X because its range (which is C) is 2-dimensional over R. Conversely, a non-zero R-linear functional has range too small to be a C-linear functional as well.

Real and imaginary parts

If φ is a linear functional on X, then denote its real part by φ_R := Re φ and its imaginary part by φ_i := Im φ, so that φ = φ_R + i φ_i. Then φ_R : X → R and φ_i : X → R are linear functionals on X_R. The fact that z = Re z − i Re(i z) for all complex numbers z implies that for all x ∈ X,[9]

φ(x) = φ_R(x) − i φ_R(i x)

and consequently, that φ_i(x) = −φ_R(i x) and φ_R(x) = φ_i(i x).[10]

The assignment φ ↦ φ_R defines a bijective[10] R-linear operator X^# → X_R^# whose inverse is the map L_• : X_R^# → X^# defined by the assignment g ↦ L_g that sends g : X_R → R to the linear functional L_g : X → C defined by

L_g(x) := g(x) − i g(i x) for all x ∈ X.

The real part of L_g is g, and the bijection L_• : X_R^# → X^# is an R-linear operator, meaning that L_{g+h} = L_g + L_h and L_{r g} = r L_g for all r ∈ R and g, h ∈ X_R^#.[10] Similarly for the imaginary part, the assignment φ ↦ φ_i induces an R-linear bijection X^# → X_R^# whose inverse is the map X_R^# → X^# defined by sending I to the linear functional on X defined by x ↦ I(i x) + i I(x).

This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray),[11] and can be generalized to arbitrary finite extensions of a field in the natural way. It has many important consequences, some of which will now be described.
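The reconstruction formulas are easy to check numerically. The sketch below (Python with NumPy) uses a made-up C-linear functional on C^2 and a random test vector, and verifies that the functional is recovered from its real part alone and from its imaginary part alone.

    import numpy as np

    # A hypothetical C-linear functional on C^2: phi(x) = a . x (no conjugation).
    a = np.array([2.0 - 1.0j, 0.5 + 3.0j])
    phi = lambda x: a @ x

    rng = np.random.default_rng(1)
    x = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # random test vector

    phi_r = lambda v: phi(v).real    # the real part, an R-linear functional
    phi_i = lambda v: phi(v).imag    # the imaginary part

    # phi(x) = phi_r(x) - i*phi_r(i*x)
    print(np.isclose(phi(x), phi_r(x) - 1j * phi_r(1j * x)))    # True
    # phi(x) = phi_i(i*x) + i*phi_i(x)
    print(np.isclose(phi(x), phi_i(1j * x) + 1j * phi_i(x)))    # True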

Properties and relationships

Suppose φ : X → C is a linear functional on X with real part φ_R := Re φ and imaginary part φ_i := Im φ.

Then φ = 0 if and only if φ_R = 0, if and only if φ_i = 0.

Assume that X is a topological vector space. Then φ is continuous if and only if its real part φ_R is continuous, if and only if φ's imaginary part φ_i is continuous. That is, either all three of φ, φ_R, and φ_i are continuous or none are continuous. This remains true if the word "continuous" is replaced with the word "bounded". In particular, φ ∈ X′ if and only if φ_R ∈ X_R′, where the prime denotes the space's continuous dual space.[9]

Let B ⊆ X. If u B ⊆ B for all scalars u of unit length (meaning |u| = 1) then[proof 1][12]

sup_{b ∈ B} |φ(b)| = sup_{b ∈ B} |φ_R(b)|.

Similarly, if φ_i := Im φ denotes the imaginary part of φ, then i B ⊆ B implies sup_{b ∈ B} |φ_R(b)| = sup_{b ∈ B} |φ_i(b)|.
If X is a normed space with norm ‖·‖ and if B is the closed unit ball, then the supremums above are the operator norms (defined in the usual way) of φ, φ_R, and φ_i, so that ‖φ‖ = ‖φ_R‖ = ‖φ_i‖.[12]
This conclusion extends to the analogous statement for polars of balanced sets in general topological vector spaces.
If X is a complex Hilbert space with a (complex) inner product ⟨·, ·⟩ that is antilinear in its first coordinate (and linear in the second), then X_R becomes a real Hilbert space when endowed with the real part of ⟨·, ·⟩. Explicitly, this real inner product on X_R is defined by ⟨x, y⟩_R := Re ⟨x, y⟩ for all x, y ∈ X, and it induces the same norm on X as ⟨·, ·⟩ because sqrt(⟨x, x⟩_R) = sqrt(⟨x, x⟩) for all vectors x. Applying the Riesz representation theorem to φ ∈ X′ (resp. to φ_R ∈ X_R′) guarantees the existence of a unique vector f_φ ∈ X (resp. f_{φ_R} ∈ X_R) such that φ(x) = ⟨f_φ, x⟩ (resp. φ_R(x) = ⟨f_{φ_R}, x⟩_R) for all vectors x. The theorem also guarantees that ‖f_φ‖ = ‖φ‖_{X′} and ‖f_{φ_R}‖ = ‖φ_R‖_{X_R′}. It is readily verified that f_φ = f_{φ_R}. Now ‖f_φ‖ = ‖f_{φ_R}‖ and the previous equalities imply that ‖φ‖_{X′} = ‖φ_R‖_{X_R′}, which is the same conclusion that was reached above.
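For a functional given by a vector on C^n with the standard norm, the equality of operator norms can be seen directly: the real part, viewed as a functional on R^{2n}, is represented by a real vector of the same Euclidean length. The sketch below (Python with NumPy; the functional is an invented example) checks this for phi(x) = a . x on C^2.

    import numpy as np

    # A hypothetical functional phi(x) = a . x on C^2 with the standard norm.
    a = np.array([1.0 + 2.0j, -0.5 + 1.5j])

    # Operator norm of phi over the complex unit ball is the Euclidean norm of a.
    norm_phi = np.linalg.norm(a)

    # Its real part, viewed as an R-linear functional on R^4 via x = u + i*v:
    #   Re(a . x) = Re(a) . u - Im(a) . v,  so it is represented by this real vector.
    a_real = np.concatenate([a.real, -a.imag])
    norm_phi_r = np.linalg.norm(a_real)

    print(np.isclose(norm_phi, norm_phi_r))    # True: the operator norms agree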

In infinite dimensions

Below, all vector spaces are over either the real numbers R or the complex numbers C.

If is a topological vector space, the space of continuous linear functionals — the continuous dual — is often simply called the dual space. If is a Banach space, then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual space. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is a proper subspace of the algebraic dual.

A linear functional f on a (not necessarily locally convex) topological vector space X is continuous if and only if there exists a continuous seminorm p on X such that |f| ≤ p.[13]

Characterizing closed subspaces

Continuous linear functionals have nice properties for analysis: a linear functional is continuous if and only if its kernel is closed,[14] and a non-trivial continuous linear functional is an open map, even if the (topological) vector space is not complete.[15]

Hyperplanes and maximal subspaces

A vector subspace M of X is called maximal if M ⊊ X (meaning M ⊆ X and M ≠ X) and there does not exist a vector subspace N of X such that M ⊊ N ⊊ X. A vector subspace M of X is maximal if and only if it is the kernel of some non-trivial linear functional on X (that is, M = ker f for some linear functional f on X that is not identically 0). An affine hyperplane in X is a translate of a maximal vector subspace. By linearity, a subset H of X is an affine hyperplane if and only if there exists some non-trivial linear functional f on X such that H = f^{−1}(1) = {x ∈ X : f(x) = 1}.[11] If f is a linear functional and s ≠ 0 is a scalar, then f^{−1}(s) = s f^{−1}(1) = (s^{−1} f)^{−1}(1). This equality can be used to relate different level sets of f. Moreover, if f ≠ 0, then the kernel of f can be reconstructed from the affine hyperplane H := f^{−1}(1) by ker f = H − h for any h ∈ H.

Relationships between multiple linear functionals

Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem.

Theorem[16][17] — If f, g_1, …, g_n are linear functionals on X, then the following are equivalent:

  1. f can be written as a linear combination of g_1, …, g_n; that is, there exist scalars s_1, …, s_n such that f = s_1 g_1 + ⋯ + s_n g_n;
  2. ker g_1 ∩ ⋯ ∩ ker g_n ⊆ ker f;
  3. there exists a real number r ≥ 0 such that |f(x)| ≤ r max{|g_1(x)|, …, |g_n(x)|} for all x ∈ X.

If f is a non-trivial linear functional on X with kernel N, x ∈ X satisfies f(x) = 1, and U is a balanced subset of X, then N ∩ (x + U) = ∅ if and only if |f(u)| < 1 for all u ∈ U.[15]
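The equivalences in the theorem above are straightforward to test in finite dimensions, where functionals are row vectors. The sketch below (Python with NumPy; g1, g2 and the combination are made-up values) verifies condition 2 by computing the common kernel as a null space, and recovers the scalars of condition 1 by least squares.

    import numpy as np

    # Functionals on R^4 given by row vectors (a hypothetical example).
    g1 = np.array([1.0, 0.0, 2.0, 0.0])
    g2 = np.array([0.0, 1.0, -1.0, 0.0])
    f = 2.0 * g1 - 3.0 * g2            # f is, by construction, a combination of g1 and g2

    # Condition 2: the common kernel of g1, g2 lies in the kernel of f.
    G = np.vstack([g1, g2])
    _, _, Vt = np.linalg.svd(G)
    null_basis = Vt[np.linalg.matrix_rank(G):]       # rows spanning the common kernel
    print(np.allclose(null_basis @ f, 0.0))          # True: f vanishes on the common kernel

    # Condition 1: recover the scalars with f = s1*g1 + s2*g2 by least squares.
    coeffs = np.linalg.lstsq(G.T, f, rcond=None)[0]
    print(coeffs)                                    # approximately [ 2. -3.]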

Hahn–Banach theorem

Any (algebraic) linear functional on a vector subspace can be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all of R. However, this extension cannot always be done while keeping the linear functional continuous. The Hahn–Banach family of theorems gives conditions under which this extension can be done. For example,

Hahn–Banach dominated extension theorem[18] (Rudin 1991, Th. 3.2) — If p : X → R is a sublinear function, and f : M → R is a linear functional on a linear subspace M ⊆ X which is dominated by p on M, then there exists a linear extension F : X → R of f to the whole space X that is dominated by p, i.e., there exists a linear functional F such that

F(x) = f(x) for all x ∈ M, and
F(x) ≤ p(x) for all x ∈ X.

Equicontinuity of families of linear functionals

Let X be a topological vector space (TVS) with continuous dual space X′.

For any subset H of X′, the following are equivalent:[19]

  1. H is equicontinuous;
  2. H is contained in the polar of some neighborhood of 0 in X;
  3. the (pre)polar of H is a neighborhood of 0 in X.

If H is an equicontinuous subset of X′, then the following sets are also equicontinuous: the weak-* closure, the balanced hull, the convex hull, and the convex balanced hull.[19] Moreover, Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of X′ is weak-* compact (and thus that every equicontinuous subset is weak-* relatively compact).[20][19]

See also

Notes

Footnotes

  1. ^ In some texts the roles are reversed and vectors are defined as linear maps from covectors to scalars
  2. ^ For instance, with f(x) = 1 + 2x, f(1 + 1) = 5 while f(1) + f(1) = 6, so f(1 + 1) ≠ f(1) + f(1).

Proofs

  1. ^ It is true if φ = 0, so assume otherwise. Since |φ_R(x)| ≤ |φ(x)| for every x ∈ X, it follows that sup_{b ∈ B} |φ_R(b)| ≤ sup_{b ∈ B} |φ(b)|. If b ∈ B, then let r_b ≥ 0 and u_b be scalars such that φ(b) = r_b u_b and |u_b| = 1, where if r_b = 0 then take u_b := 1. Then |φ(b)| = r_b, and because r_b = φ(b / u_b) is a real number, φ_R(b / u_b) = φ(b / u_b) = r_b. By assumption b / u_b ∈ B, so |φ(b)| = r_b ≤ sup_{c ∈ B} |φ_R(c)|. Since b ∈ B was arbitrary, it follows that sup_{b ∈ B} |φ(b)| ≤ sup_{b ∈ B} |φ_R(b)|, and the two supremums are therefore equal.

References

  1. ^ Axler (2015) p. 101, §3.92
  2. ^ a b Tu (2011) p. 19, §3.1
  3. ^ Katznelson & Katznelson (2008) p. 37, §2.1.3
  4. ^ Axler (2015) p. 101, §3.94
  5. ^ Halmos (1974) p. 20, §13
  6. ^ Lax 1996
  7. ^ Misner, Thorne & Wheeler (1973) p. 57
  8. ^ Clark, Pete L. Commutative Algebra (PDF). Unpublished. Lemma 3.12.
  9. ^ a b c Rudin 1991, p. 57.
  10. ^ a b c Narici & Beckenstein 2011, pp. 9–11.
  11. ^ a b Narici & Beckenstein 2011, pp. 10–11.
  12. ^ a b Narici & Beckenstein 2011, pp. 126–128.
  13. ^ Narici & Beckenstein 2011, p. 126.
  14. ^ Rudin 1991, Theorem 1.18
  15. ^ a b Narici & Beckenstein 2011, p. 128.
  16. ^ Rudin 1991, pp. 63–64.
  17. ^ Narici & Beckenstein 2011, pp. 1–18.
  18. ^ Narici & Beckenstein 2011, pp. 177–220.
  19. ^ a b c Narici & Beckenstein 2011, pp. 225–273.
  20. ^ Schaefer & Wolff 1999, Corollary 4.3.

Bibliography
