Linear combination

From Wikipedia, the free encyclopedia

In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants).[1][2][3] The concept of linear combinations is central to linear algebra and related fields of mathematics. Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article.

Definition

Suppose that K is a field (for example, the real numbers) and V is a vector space over K. As usual, we call elements of V vectors and call elements of K scalars. If v1,...,vn are vectors and a1,...,an are scalars, then the linear combination of those vectors with those scalars as coefficients is

a1v1 + a2v2 + a3v3 + ... + anvn.
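
As an illustrative aside (not part of the article), the definition translates directly into code. Here is a minimal Python sketch, assuming K = R and vectors represented as equal-length tuples of numbers:

```python
def linear_combination(coefficients, vectors):
    """Return a1*v1 + ... + an*vn for scalars a_i and equal-length vectors v_i."""
    if len(coefficients) != len(vectors):
        raise ValueError("need one coefficient per vector")
    if not vectors:
        # The empty combination is the zero vector by convention (see below),
        # but with this representation we cannot infer its dimension.
        raise ValueError("supply at least one vector")
    result = [0.0] * len(vectors[0])
    for a, v in zip(coefficients, vectors):
        for i, component in enumerate(v):
            result[i] += a * component
    return tuple(result)

# Example: 2*(1, 0, 0) + (-3)*(0, 1, 0) = (2, -3, 0)
print(linear_combination([2, -3], [(1, 0, 0), (0, 1, 0)]))
```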

There is some ambiguity in the use of the term "linear combination" as to whether it refers to the expression or to its value. In most cases the value is emphasized, as in the assertion "the set of all linear combinations of v1,...,vn always forms a subspace". However, one could also say "two different linear combinations can have the same value", in which case the expression must have been meant. The subtle difference between these uses is the essence of the notion of linear dependence: a family F of vectors is linearly independent precisely if any linear combination of the vectors in F (as value) is uniquely so (as expression). In any case, even when viewed as expressions, all that matters about a linear combination is the coefficient of each vi; trivial modifications such as permuting the terms or adding terms with zero coefficient do not give distinct linear combinations.

In a given situation, K and V may be specified explicitly, or they may be obvious from context. In that case, we often speak of a linear combination of the vectors v1,...,vn, with the coefficients unspecified (except that they must belong to K). Or, if S is a subset of V, we may speak of a linear combination of vectors in S, where both the coefficients and the vectors are unspecified, except that the vectors must belong to the set S (and the coefficients must belong to K). Finally, we may speak simply of a linear combination, where nothing is specified (except that the vectors must belong to V and the coefficients must belong to K); in this case one is probably referring to the expression, since every vector in V is certainly the value of some linear combination.

Note that by definition, a linear combination involves only finitely many vectors (except as described in Generalizations below). However, the set S that the vectors are taken from (if one is mentioned) can still be infinite; each individual linear combination will only involve finitely many vectors. Also, there is no reason that n cannot be zero; in that case, we declare by convention that the result of the linear combination is the zero vector in V.

Examples and counterexamples

Euclidean vectors

Let the field K be the set R of real numbers, and let the vector space V be the Euclidean space R^3. Consider the vectors e1 = (1,0,0), e2 = (0,1,0) and e3 = (0,0,1). Then any vector in R^3 is a linear combination of e1, e2 and e3.

To see that this is so, take an arbitrary vector (a1,a2,a3) in R^3, and write:

(a1,a2,a3) = a1(1,0,0) + a2(0,1,0) + a3(0,0,1) = a1e1 + a2e2 + a3e3.
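
A quick numerical illustration of this decomposition (an aside, assuming NumPy is available):

```python
import numpy as np

e1, e2, e3 = np.eye(3)          # e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1)
a = np.array([4.0, -2.0, 7.0])  # an arbitrary vector (a1, a2, a3)

# Its coordinates are exactly the coefficients of the linear combination.
assert np.allclose(a[0] * e1 + a[1] * e2 + a[2] * e3, a)
```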

Functions

Let K be the set C of all complex numbers, and let V be the set C_C(R) of all continuous functions from the real line R to the complex plane C. Consider the vectors (functions) f and g defined by f(t) := e^(it) and g(t) := e^(−it). (Here, e is the base of the natural logarithm, about 2.71828..., and i is the imaginary unit, a square root of −1.) Some linear combinations of f and g are:

cos t = (1/2)e^(it) + (1/2)e^(−it)
2 sin t = (−i)e^(it) + (i)e^(−it)

On the other hand, the constant function 3 is not a linear combination of f and g. To see this, suppose that 3 could be written as a linear combination of e^(it) and e^(−it). This means that there would exist complex scalars a and b such that ae^(it) + be^(−it) = 3 for all real numbers t. Setting t = 0 gives the equation a + b = 3, while setting t = π gives ae^(iπ) + be^(−iπ) = −(a + b) = 3, that is, a + b = −3; clearly this cannot happen. See Euler's identity.
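
The same argument can be checked numerically; the following sketch (an illustration, not from the article) uses Python's cmath module:

```python
import cmath

def f(t):  # f(t) = e^(it)
    return cmath.exp(1j * t)

def g(t):  # g(t) = e^(-it)
    return cmath.exp(-1j * t)

# Euler's formula: cos t = (1/2) f(t) + (1/2) g(t)
t = 0.7
assert abs(0.5 * f(t) + 0.5 * g(t) - cmath.cos(t)) < 1e-12

# The contradiction above: if a*f(t) + b*g(t) = 3 for all t, then
#   t = 0  forces a + b = 3    (since f(0) = g(0) = 1)
#   t = pi forces a + b = -3   (since f(pi) = g(pi) = -1)
print(f(0) + g(0))                # (2+0j)
print(f(cmath.pi) + g(cmath.pi))  # approximately (-2+0j)
```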

Polynomials

Let K be R, C, or any field, and let V be the set P of all polynomials with coefficients taken from the field K. Consider the vectors (polynomials) p1 := 1, p2 := x + 1, and p3 := x^2 + x + 1.

Is the polynomial x^2 − 1 a linear combination of p1, p2, and p3? To find out, consider an arbitrary linear combination of these vectors and try to see when it equals the desired vector x^2 − 1. Picking arbitrary coefficients a1, a2, and a3, we want

a1(1) + a2(x + 1) + a3(x^2 + x + 1) = x^2 − 1.

Multiplying the polynomials out, this means

a1 + (a2x + a2) + (a3x^2 + a3x + a3) = x^2 − 1,

and collecting like powers of x, we get

a3x^2 + (a2 + a3)x + (a1 + a2 + a3) = 1x^2 + 0x + (−1).

Two polynomials are equal if and only if their corresponding coefficients are equal, so we can conclude

a3 = 1,  a2 + a3 = 0,  a1 + a2 + a3 = −1.

This system of linear equations can easily be solved. The first equation simply says that a3 is 1. Knowing that, we can solve the second equation for a2, which comes out to −1. Finally, the last equation tells us that a1 is also −1. Therefore, the only possible way to get a linear combination is with these coefficients. Indeed,

x^2 − 1 = −1 − (x + 1) + (x^2 + x + 1) = −p1 − p2 + p3,

so x^2 − 1 is a linear combination of p1, p2, and p3.

On the other hand, what about the polynomial x^3 − 1? If we try to make this vector a linear combination of p1, p2, and p3, then following the same process as before, we'll get the equation

a3x^2 + (a2 + a3)x + (a1 + a2 + a3) = x^3 − 1.

However, when we set corresponding coefficients equal in this case, the equation for x^3 is

0 = 1,

which is always false. Therefore, there is no way for this to work, and x^3 − 1 is not a linear combination of p1, p2, and p3.
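
For concreteness, the same two computations can be done in code by identifying each polynomial with its coefficient vector; this is an illustrative sketch (assuming NumPy), not part of the article:

```python
import numpy as np

# Columns are p1 = 1, p2 = x + 1, p3 = x^2 + x + 1,
# written as coefficient vectors (constant, x, x^2).
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

target = np.array([-1.0, 0.0, 1.0])   # x^2 - 1
a1, a2, a3 = np.linalg.solve(P, target)
print(a1, a2, a3)                     # -1.0 -1.0 1.0, i.e. x^2 - 1 = -p1 - p2 + p3

# x^3 - 1 is out of reach: its coefficient vector has an x^3 entry,
# while every column of P has none, so the x^3 equation reads 0 = 1.
```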

The linear span

Take an arbitrary field K, an arbitrary vector space V, and let v1,...,vn be vectors in V. It is interesting to consider the set of all linear combinations of these vectors. This set is called the linear span (or just span) of the vectors, say S = {v1,...,vn}. We write the span of S as span(S) or sp(S):

span(S) = {a1v1 + ... + anvn : a1,...,an ∈ K}.
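
Computationally (an aside, not from the article), membership in the span of finitely many vectors over R can be tested with least squares, assuming NumPy:

```python
import numpy as np

def in_span(w, vectors, tol=1e-10):
    """Is w a linear combination of the given vectors (over R)?"""
    A = np.array(vectors, dtype=float).T   # columns are the spanning vectors
    w = np.array(w, dtype=float)
    coeffs = np.linalg.lstsq(A, w, rcond=None)[0]
    return bool(np.linalg.norm(A @ coeffs - w) < tol)

print(in_span((3, 0), [(1, 2), (0, 3)]))           # True: 3*(1,2) - 2*(0,3) = (3,0)
print(in_span((0, 0, 1), [(1, 0, 0), (0, 1, 0)]))  # False
```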

Linear independence

For some sets of vectors v1,...,vn, a single vector can be written in two different ways as a linear combination of them:

v = a1v1 + ... + anvn = b1v1 + ... + bnvn, where (a1,...,an) ≠ (b1,...,bn).

Equivalently, by subtracting these (setting ci := ai − bi), a non-trivial combination is zero:

0 = c1v1 + ... + cnvn, where the coefficients ci are not all zero.

If that is possible, then v1,...,vn are called linearly dependent; otherwise, they are linearly independent. Similarly, we can speak of linear dependence or independence of an arbitrary set S of vectors.

If S is linearly independent and the span of S equals V, then S is a basis for V.
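
As a companion to the span test above, linear independence of finitely many vectors in R^n can be checked by a rank computation (illustrative sketch, assuming NumPy):

```python
import numpy as np

def is_linearly_independent(vectors):
    """True iff no non-trivial linear combination of the vectors is zero."""
    A = np.array(vectors, dtype=float).T   # columns are the vectors
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_linearly_independent([(1, 2), (0, 3)]))   # True: a basis for R^2
print(is_linearly_independent([(1, 2), (2, 4)]))   # False: (2,4) = 2*(1,2)
```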

Affine, conical, and convex combinations

By restricting the coefficients used in linear combinations, one can define the related concepts of affine combination, conical combination, and convex combination, and the associated notions of sets closed under these operations.

Type of combination | Restrictions on coefficients | Name of set | Model space
Linear combination | no restrictions | Vector subspace | K^n
Affine combination | a1 + ... + an = 1 | Affine subspace | Affine hyperplane
Conical combination | each ai ≥ 0 | Convex cone | Quadrant or octant
Convex combination | each ai ≥ 0 and a1 + ... + an = 1 | Convex set | Simplex
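
A small sketch (illustrative, not from the article) classifying a coefficient list according to the restrictions in the table above:

```python
def combination_kind(coefficients, tol=1e-12):
    """Classify a coefficient list per the table: convex, affine, conical, or linear."""
    affine = abs(sum(coefficients) - 1.0) < tol   # coefficients sum to 1
    conical = all(c >= 0 for c in coefficients)   # all coefficients non-negative
    if affine and conical:
        return "convex"
    if affine:
        return "affine"
    if conical:
        return "conical"
    return "linear (no restrictions met)"

print(combination_kind([0.25, 0.75]))  # convex
print(combination_kind([2.0, -1.0]))   # affine: sums to 1 but has a negative weight
print(combination_kind([1.5, 2.0]))    # conical: non-negative, sums to 3.5
```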

Because these are more restricted operations, more subsets will be closed under them, so affine subsets, convex cones, and convex sets are generalizations of vector subspaces: a vector subspace is also an affine subspace, a convex cone, and a convex set, but a convex set need not be a vector subspace, an affine subspace, or a convex cone.

These concepts often arise when one can take certain linear combinations of objects, but not any: for example, probability distributions are closed under convex combination (they form a convex set), but not conical or affine combinations (or linear), and positive measures are closed under conical combination but not affine or linear – hence one defines signed measures as the linear closure.

Linear and affine combinations can be defined over any field (or ring), but conical and convex combination require a notion of "positive", and hence can only be defined over an ordered field (or ordered ring), generally the real numbers.

If one allows only scalar multiplication, not addition, one obtains a (not necessarily convex) cone; one often restricts the definition to only allowing multiplication by positive scalars.

All of these concepts are usually defined as subsets of an ambient vector space (except for affine spaces, which are also considered as "vector spaces forgetting the origin"), rather than being axiomatized independently.

Operad theory

More abstractly, in the language of operad theory, one can consider vector spaces to be algebras over the operad R^∞ (the infinite direct sum, so only finitely many terms are non-zero; this corresponds to only taking finite sums), which parametrizes linear combinations: the vector (2, 3, −5, 0, ...), for instance, corresponds to the linear combination 2v1 + 3v2 − 5v3 + 0v4 + ... . Similarly, one can consider affine combinations, conical combinations, and convex combinations to correspond to the sub-operads where the terms sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by R^n or the standard simplex being model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here sub-operads correspond to more restricted operations and thus more general theories.

From this point of view, we can think of linear combinations as the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations.

The basic operations of addition and scalar multiplication, together with the existence of an additive identity and additive inverses, cannot be combined in any more complicated way than the generic linear combination: the basic operations are a generating set for the operad of all linear combinations.

Ultimately, this fact lies at the heart of the usefulness of linear combinations in the study of vector spaces.

Generalizations

If V is a topological vector space, then there may be a way to make sense of certain infinite linear combinations, using the topology of V. For example, we might be able to speak of a1v1 + a2v2 + a3v3 + ..., going on forever. Such infinite linear combinations do not always make sense; we call them convergent when they do. Allowing more linear combinations in this case can also lead to a different concept of span, linear independence, and basis. The articles on the various flavours of topological vector spaces go into more detail about these.
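
For instance (an illustrative sketch, not from the article), in R^2 with coefficients a_k = 1/2^k and vectors v_k = (cos k, sin k), the partial sums converge because the coefficients shrink geometrically:

```python
import math

x = y = 0.0
for k in range(60):
    a_k = 0.5 ** k          # coefficients of the infinite combination
    x += a_k * math.cos(k)  # v_k = (cos k, sin k)
    y += a_k * math.sin(k)
print(x, y)  # the partial sums have converged to the limit at double precision
```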

If K is a commutative ring instead of a field, then everything that has been said above about linear combinations generalizes to this case without change. The only difference is that we call such a space V a module instead of a vector space. If K is a noncommutative ring, then the concept still generalizes, with one caveat: since modules over noncommutative rings come in left and right versions, our linear combinations may also come in either of these versions, whatever is appropriate for the given module. This is simply a matter of doing scalar multiplication on the correct side.

A more complicated twist comes when V is a bimodule over two rings, K_L and K_R. In that case, the most general linear combination looks like

a1v1b1 + ... + anvnbn,

where a1,...,an belong to K_L, b1,...,bn belong to K_R, and v1,...,vn belong to V.

Application

An important application of linear combinations is to wave functions in quantum mechanics.

References

  1. ^ Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
  2. ^ Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
  3. ^ Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
