
Row and column spaces

From Wikipedia, the free encyclopedia

The row vectors of a matrix. The row space of this matrix is the vector space spanned by the row vectors.
The column vectors of a matrix. The column space of this matrix is the vector space spanned by the column vectors.

In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.

Let K be a field. The column space of an m × n matrix with components from K is a linear subspace of the m-space Km. The dimension of the column space is called the rank of the matrix and is at most min(m, n).[1] A definition for matrices over a ring is also possible.

The row space is defined similarly.

The row space and the column space of a matrix A are sometimes denoted as C(AT) and C(A) respectively.[2]

This article considers matrices of real numbers. The row and column spaces are subspaces of the real spaces Rn and Rm, respectively.[3]


Overview

Let A be an m-by-n matrix. Then

  1. rank(A) = dim(rowsp(A)) = dim(colsp(A)),[4]
  2. rank(A) = number of pivots in any echelon form of A,
  3. rank(A) = the maximum number of linearly independent rows or columns of A.[5]
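These characterizations can be checked numerically. The sketch below (not from the source; `rank_via_pivots` is a hypothetical helper) counts pivots during Gaussian elimination using exact fraction arithmetic, and confirms that the row rank equals the column rank:

```python
from fractions import Fraction

def rank_via_pivots(rows):
    """Count pivots in an echelon form obtained by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        # find a pivot row at or below position `rank`
        pivot = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # eliminate the entries below the pivot
        for r in range(rank + 1, len(m)):
            factor = m[r][col] / m[rank][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

A = [[1, 0, 2], [0, 1, 0]]        # a small assumed example
At = [list(c) for c in zip(*A)]   # transpose of A
# property 1: dim(rowsp(A)) = dim(colsp(A)) = rank(A)
assert rank_via_pivots(A) == rank_via_pivots(At) == 2
```

Exact rational arithmetic is used so that the pivot test `!= 0` is not disturbed by floating-point round-off.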

If one considers the matrix as a linear transformation from Kn to Km, then the column space of the matrix equals the image of this linear transformation.

The column space of a matrix A is the set of all linear combinations of the columns in A. If A = [a1 ... an], then colsp(A) = span({a1, ..., an}).

The concept of row space generalizes to matrices over C, the field of complex numbers, or over any field.

Intuitively, given a matrix A, the action of the matrix A on a vector x will return a linear combination of the columns of A weighted by the coordinates of x as coefficients. Another way to look at this is that it will (1) first project x into the row space of A, (2) perform an invertible transformation, and (3) place the resulting vector y in the column space of A. Thus the result y = Ax must reside in the column space of A. See singular value decomposition for more details on this second interpretation.[clarification needed]

Example

Given a matrix J:

the rows are the vectors r1, r2, r3, r4. Consequently, the row space of J is the subspace of R5 spanned by { r1, r2, r3, r4 }. Since these four row vectors are linearly independent, the row space is 4-dimensional. Moreover, in this case it can be seen that they are all orthogonal to the vector n = [6, −1, 4, −4, 0], so it can be deduced that the row space consists of all vectors in R5 that are orthogonal to n.

Column space

Definition

Let K be a field of scalars. Let A be an m × n matrix, with column vectors v1, v2, ..., vn. A linear combination of these vectors is any vector of the form
    c1v1 + c2v2 + ... + cnvn
where c1, c2, ..., cn are scalars. The set of all possible linear combinations of v1, ..., vn is called the column space of A. That is, the column space of A is the span of the vectors v1, ..., vn.

Any linear combination of the column vectors of a matrix A can be written as the product of A with a column vector:
    c1v1 + c2v2 + ... + cnvn = A [c1, c2, ..., cn]T
Therefore, the column space of A consists of all possible products Ax, for x ∈ Kn. This is the same as the image (or range) of the corresponding matrix transformation.
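As a small illustrative sketch (the matrix and vector here are assumed examples, not from the article), the product Ax can be checked against the column combination x1v1 + x2v2:

```python
def matvec(A, x):
    """Standard matrix-vector product, computed row by row."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 0], [0, 1], [2, 0]]   # columns v1 = (1, 0, 2), v2 = (0, 1, 0)
x = [3, 5]

# Ax as a linear combination of the columns of A: x1*v1 + x2*v2
cols = list(zip(*A))
combo = [sum(x[j] * cols[j][i] for j in range(len(cols))) for i in range(len(A))]

assert matvec(A, x) == combo == [3, 5, 6]
```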

Example

If

    A = [ 1  0
          0  1
          2  0 ],

then the column vectors are v1 = [1, 0, 2]T and v2 = [0, 1, 0]T. A linear combination of v1 and v2 is any vector of the form

    c1[1, 0, 2]T + c2[0, 1, 0]T = [c1, c2, 2c1]T

The set of all such vectors is the column space of A. In this case, the column space is precisely the set of vectors (x, y, z) ∈ R3 satisfying the equation z = 2x (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).
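The plane condition z = 2x can be verified directly; a minimal sketch, where `in_column_space` is a hypothetical membership helper:

```python
v1, v2 = (1, 0, 2), (0, 1, 0)

def in_column_space(p):
    """(x, y, z) lies in span{v1, v2} exactly when z = 2x."""
    x, y, z = p
    return z == 2 * x

# every combination c1*v1 + c2*v2 satisfies the plane equation
for c1, c2 in [(1, 0), (0, 1), (3, -2), (-5, 7)]:
    p = tuple(c1 * a + c2 * b for a, b in zip(v1, v2))
    assert in_column_space(p)

assert not in_column_space((1, 0, 3))  # z != 2x, so not in the column space
```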

Basis

The columns of A span the column space, but they may not form a basis if the column vectors are not linearly independent. Fortunately, elementary row operations do not affect the dependence relations between the column vectors. This makes it possible to use row reduction to find a basis for the column space.

For example, consider the matrix

The columns of this matrix span the column space, but they may not be linearly independent, in which case some subset of them will form a basis. To find this basis, we reduce A to reduced row echelon form:

[6]

At this point, it is clear that the first, second, and fourth columns are linearly independent, while the third column is a linear combination of the first two. (Specifically, v3 = −2v1 + v2.) Therefore, the first, second, and fourth columns of the original matrix are a basis for the column space:

Note that the independent columns of the reduced row echelon form are precisely the columns with pivots. This makes it possible to determine which columns are linearly independent by reducing only to echelon form.

The above algorithm can be used in general to find the dependence relations between any set of vectors, and to pick out a basis from any spanning set. Also finding a basis for the column space of A is equivalent to finding a basis for the row space of the transpose matrix AT.
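The procedure can be sketched in code. Since the article's example matrix did not survive extraction, the 4 × 4 matrix below is an assumed stand-in consistent with the dependence relation v3 = −2v1 + v2 stated above; `pivot_columns` is a hypothetical helper:

```python
from fractions import Fraction

def pivot_columns(rows):
    """Return indices of pivot columns found during Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, rank = [], 0
    for col in range(len(m[0])):
        r = next((i for i in range(rank, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue  # no pivot in this column
        m[rank], m[r] = m[r], m[rank]
        for i in range(rank + 1, len(m)):
            f = m[i][col] / m[rank][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[rank])]
        pivots.append(col)
        rank += 1
    return pivots

A = [[1, 3, 1, 4],
     [2, 7, 3, 9],
     [1, 5, 3, 1],
     [1, 2, 0, 8]]
cols = list(zip(*A))
assert pivot_columns(A) == [0, 1, 3]   # columns 1, 2, 4 form a basis
# the dependence relation v3 = -2*v1 + v2
assert all(c3 == -2 * c1 + c2 for c1, c2, c3 in zip(cols[0], cols[1], cols[2]))
```

The pivot indices select which columns of the original matrix to keep as a basis for the column space.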

To find the basis in a practical setting (e.g., for large matrices), the singular-value decomposition is typically used.

Dimension

The dimension of the column space is called the rank of the matrix. The rank is equal to the number of pivots in the reduced row echelon form, and is the maximum number of linearly independent columns that can be chosen from the matrix. For example, the 4 × 4 matrix in the example above has rank three.

Because the column space is the image of the corresponding matrix transformation, the rank of a matrix is the same as the dimension of the image. For example, the transformation described by the matrix above maps all of R4 to some three-dimensional subspace.

The nullity of a matrix is the dimension of the null space, and is equal to the number of columns in the reduced row echelon form that do not have pivots.[7] The rank and nullity of a matrix A with n columns are related by the equation:
    rank(A) + nullity(A) = n
This is known as the rank–nullity theorem.
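A quick numeric check of the theorem (a sketch; the 4 × 4 matrix is an assumed example consistent with the rank-three computation described above):

```python
from fractions import Fraction

def rank(rows):
    """Rank = number of pivots found during Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        p = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 3, 1, 4], [2, 7, 3, 9], [1, 5, 3, 1], [1, 2, 0, 8]]  # assumed example
n = 4
nullity = n - rank(A)                 # rank-nullity theorem
assert rank(A) + nullity == n and nullity == 1

# a nonzero null-space vector witnessing nullity >= 1 (from v3 = -2*v1 + v2)
x = [-2, 1, -1, 0]
assert all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)
```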

Relation to the left null space

The left null space of A is the set of all vectors x such that xTA = 0T. It is the same as the null space of the transpose of A. The product of the matrix AT and the vector x can be written in terms of the dot product of vectors:
    ATx = [v1 · x, v2 · x, ..., vn · x]T
because the row vectors of AT are the transposes of the column vectors vk of A. Thus ATx = 0 if and only if x is orthogonal (perpendicular) to each of the column vectors of A.

It follows that the left null space (the null space of AT) is the orthogonal complement to the column space of A.
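This orthogonality can be checked on the 3 × 2 example from the column-space section above (a sketch; the left-null-space vector below was found by hand from xTA = 0):

```python
A = [[1, 0], [0, 1], [2, 0]]   # column vectors v1 = (1, 0, 2), v2 = (0, 1, 0)

# x = (2, 0, -1) satisfies x^T A = 0, so it lies in the left null space
x = [2, 0, -1]
xTA = [sum(xi * A[i][j] for i, xi in enumerate(x)) for j in range(2)]
assert xTA == [0, 0]

# and it is orthogonal to every column of A, hence to the whole column space
for col in zip(*A):
    assert sum(xi * c for xi, c in zip(x, col)) == 0
```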

For a matrix A, the column space, row space, null space, and left null space are sometimes referred to as the four fundamental subspaces.

For matrices over a ring

Similarly, the column space (sometimes disambiguated as right column space) can be defined for matrices over a ring K as

    v1c1 + v2c2 + ... + vncn

for any c1, ..., cn ∈ K, with the vector m-space replaced by a "right free module", which changes the order of scalar multiplication of the vector vk and the scalar ck, so that it is written in the unusual order vector–scalar.[8]

Row space

Definition

Let K be a field of scalars. Let A be an m × n matrix, with row vectors r1, r2, ..., rm. A linear combination of these vectors is any vector of the form
    c1r1 + c2r2 + ... + cmrm
where c1, c2, ..., cm are scalars. The set of all possible linear combinations of r1, ..., rm is called the row space of A. That is, the row space of A is the span of the vectors r1, ..., rm.

For example, if

    A = [ 1  0  2
          0  1  0 ],

then the row vectors are r1 = [1, 0, 2] and r2 = [0, 1, 0]. A linear combination of r1 and r2 is any vector of the form

    c1[1, 0, 2] + c2[0, 1, 0] = [c1, c2, 2c1]

The set of all such vectors is the row space of A. In this case, the row space is precisely the set of vectors (x, y, z) ∈ K3 satisfying the equation z = 2x (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).

For a matrix that represents a homogeneous system of linear equations, the row space consists of all linear equations that follow from those in the system.

The column space of A is equal to the row space of AT.

Basis

The row space is not affected by elementary row operations. This makes it possible to use row reduction to find a basis for the row space.

For example, consider the matrix

The rows of this matrix span the row space, but they may not be linearly independent, in which case the rows will not be a basis. To find a basis, we reduce A to row echelon form:

(Here r1, r2, r3 denote the rows of A.)

Once the matrix is in echelon form, the nonzero rows are a basis for the row space. In this case, the basis is { [1, 3, 2], [0, 1, 0] }. Another possible basis, { [1, 0, 2], [0, 1, 0] }, comes from a further reduction.[9]

This algorithm can be used in general to find a basis for the span of a set of vectors. If the matrix is further simplified to reduced row echelon form, then the resulting basis is uniquely determined by the row space.

It is sometimes convenient to find a basis for the row space from among the rows of the original matrix instead (for example, this result is useful in giving an elementary proof that the determinantal rank of a matrix is equal to its rank). Since row operations can affect linear dependence relations of the row vectors, such a basis is instead found indirectly using the fact that the column space of AT is equal to the row space of A. Using the example matrix A above, find AT and reduce it to row echelon form:

The pivots indicate that the first two columns of AT form a basis of the column space of AT. Therefore, the first two rows of A (before any row reductions) also form a basis of the row space of A.
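This indirect method can be sketched as follows. Since the article's 3 × 3 example matrix was lost in extraction, the matrix below is an assumed stand-in consistent with the bases listed above; `pivot_columns` is a hypothetical helper:

```python
from fractions import Fraction

def pivot_columns(rows):
    """Indices of pivot columns found during Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, rank = [], 0
    for col in range(len(m[0])):
        r = next((i for i in range(rank, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[rank], m[r] = m[r], m[rank]
        for i in range(rank + 1, len(m)):
            f = m[i][col] / m[rank][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[rank])]
        pivots.append(col)
        rank += 1
    return pivots

A = [[1, 3, 2], [2, 7, 4], [1, 5, 2]]   # assumed stand-in for the example
At = [list(r) for r in zip(*A)]
rows_forming_basis = pivot_columns(At)  # pivot columns of A^T select rows of A
assert rows_forming_basis == [0, 1]     # the first two rows of A are a basis
```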

Dimension

The dimension of the row space is called the rank of the matrix. This is the same as the maximum number of linearly independent rows that can be chosen from the matrix, or equivalently the number of pivots. For example, the 3 × 3 matrix in the example above has rank two.[9]

The rank of a matrix is also equal to the dimension of the column space. The dimension of the null space is called the nullity of the matrix, and is related to the rank by the following equation:
    rank(A) + nullity(A) = n
where n is the number of columns of the matrix A. The equation above is known as the rank–nullity theorem.

Relation to the null space

The null space of matrix A is the set of all vectors x for which Ax = 0. The product of the matrix A and the vector x can be written in terms of the dot product of vectors:
    Ax = [r1 · x, r2 · x, ..., rm · x]T
where r1, ..., rm are the row vectors of A. Thus Ax = 0 if and only if x is orthogonal (perpendicular) to each of the row vectors of A.

It follows that the null space of A is the orthogonal complement to the row space. For example, if the row space is a plane through the origin in three dimensions, then the null space will be the perpendicular line through the origin. This provides a proof of the rank–nullity theorem (see dimension above).
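This can be checked on the 2 × 3 example from the row-space section above (a sketch; the null-space vector below was found by hand from Ax = 0):

```python
A = [[1, 0, 2], [0, 1, 0]]   # row space: the plane z = 2x

# Ax = 0 forces x1 + 2*x3 = 0 and x2 = 0, so x = (-2, 0, 1) spans the null space
x = [-2, 0, 1]
assert all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

# x is orthogonal to every vector c1*r1 + c2*r2 in the row space
p = [3 * a + 5 * b for a, b in zip(A[0], A[1])]  # an arbitrary row-space vector
assert sum(pi * xi for pi, xi in zip(p, x)) == 0
```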

The row space and null space are two of the four fundamental subspaces associated with a matrix A (the other two being the column space and left null space).

Relation to coimage

If V and W are vector spaces, then the kernel of a linear transformation T: VW is the set of vectors vV for which T(v) = 0. The kernel of a linear transformation is analogous to the null space of a matrix.

If V is an inner product space, then the orthogonal complement to the kernel can be thought of as a generalization of the row space. This is sometimes called the coimage of T. The transformation T is one-to-one on its coimage, and the coimage maps isomorphically onto the image of T.

When V is not an inner product space, the coimage of T can be defined as the quotient space V / ker(T).

References & Notes

  1. ^ Linear algebra, as discussed in this article, is a very well established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005.
  2. ^ Strang, Gilbert (2016). Introduction to linear algebra (Fifth ed.). Wellesley, MA: Wellesley-Cambridge Press. pp. 128, 168. ISBN 978-0-9802327-7-6. OCLC 956503593.
  3. ^ Anton (1987, p. 179)
  4. ^ Anton (1987, p. 183)
  5. ^ Beauregard & Fraleigh (1973, p. 254)
  6. ^ This computation uses the Gauss–Jordan row-reduction algorithm. Each of the shown steps involves multiple elementary row operations.
  7. ^ Columns without pivots represent free variables in the associated homogeneous system of linear equations.
  8. ^ Important only if K is not commutative. Actually, this form is merely a product Ac of the matrix A to the column vector c from Kn where the order of factors is preserved, unlike the formula above.
  9. ^ a b The example is valid over the real numbers, the rational numbers, and other number fields. It is not necessarily correct over fields and rings with non-zero characteristic.


This page was last edited on 24 October 2023, at 16:25
This page is based on a Wikipedia article. Text is available under the CC BY-SA 3.0 Unported License; non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company with no affiliation with the Wikimedia Foundation.