One-dimensional subspaces in the two-dimensional vector space over the finite field F_{5}. The origin (0, 0), marked with green circles, belongs to all six 1-subspaces, while each of the 24 remaining points belongs to exactly one; a property which holds for 1-subspaces over any field and in all dimensions. All of F_{5}^{2} (i.e. a 5 × 5 square) is pictured four times for better visualization
In mathematics, and more specifically in linear algebra, a linear subspace, also known as a vector subspace,^{[1]}^{[2]} is a vector space that is a subset of some larger vector space. A linear subspace is usually called simply a subspace when the context serves to distinguish it from other types of subspaces.
Definition
If V is a vector space over a field K and if W is a subset of V, then W is a subspace of V if, under the operations of V, W is a vector space over K. Equivalently, a nonempty subset W is a subspace of V if, whenever w_{1}, w_{2} are elements of W and a, b are elements of K, it follows that aw_{1} + bw_{2} is in W.^{[3]}^{[4]}^{[5]}^{[6]}^{[7]}
Examples
Example I
Let the field K be the set R of real numbers, and let the vector space V be the real coordinate space R^{3}. Take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V.
Proof:
 Given u and v in W, then they can be expressed as u = (u_{1}, u_{2}, 0) and v = (v_{1}, v_{2}, 0). Then u + v = (u_{1}+v_{1}, u_{2}+v_{2}, 0+0) = (u_{1}+v_{1}, u_{2}+v_{2}, 0). Thus, u + v is an element of W, too.
 Given u in W and a scalar c in R, if u = (u_{1}, u_{2}, 0) again, then cu = (cu_{1}, cu_{2}, c0) = (cu_{1}, cu_{2},0). Thus, cu is an element of W too.
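The two closure properties and membership of the zero vector can be spot-checked numerically. The following Python sketch (with hypothetical sample vectors) is only a sanity check of the proof above, not a substitute for it:

```python
def in_W(v):
    # Membership test for W = {(x, y, 0)} ⊂ R^3: last component must be 0.
    return len(v) == 3 and v[2] == 0

def add(u, v):
    # Componentwise vector addition.
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    # Scalar multiplication.
    return tuple(c * a for a in u)

u, v = (1.0, 2.0, 0.0), (-3.0, 0.5, 0.0)
assert in_W(u) and in_W(v)
assert in_W(add(u, v))         # closed under addition
assert in_W(scale(-7.0, u))    # closed under scalar multiplication
assert in_W((0.0, 0.0, 0.0))   # contains the zero vector
```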
Example II
Let the field be R again, but now let the vector space V be the Cartesian plane R^{2}. Take W to be the set of points (x, y) of R^{2} such that x = y. Then W is a subspace of R^{2}.
Proof:
 Let p = (p_{1}, p_{2}) and q = (q_{1}, q_{2}) be elements of W, that is, points in the plane such that p_{1} = p_{2} and q_{1} = q_{2}. Then p + q = (p_{1}+q_{1}, p_{2}+q_{2}); since p_{1} = p_{2} and q_{1} = q_{2}, then p_{1} + q_{1} = p_{2} + q_{2}, so p + q is an element of W.
 Let p = (p_{1}, p_{2}) be an element of W, that is, a point in the plane such that p_{1} = p_{2}, and let c be a scalar in R. Then cp = (cp_{1}, cp_{2}); since p_{1} = p_{2}, then cp_{1} = cp_{2}, so cp is an element of W.
In general, any subset of the real coordinate space R^{n} that is defined by a system of homogeneous linear equations will yield a subspace. (The equation in example I was z = 0, and the equation in example II was x = y.) Geometrically, these subspaces are points, lines, planes, and so on, that pass through the point 0.
Example III
Again take the field to be R, but now let the vector space V be the set R^{R} of all functions from R to R. Let C(R) be the subset consisting of continuous functions. Then C(R) is a subspace of R^{R}.
Proof:
 We know from calculus that 0 ∈ C(R) ⊂ R^{R}.
 We know from calculus that the sum of continuous functions is continuous.
 Again, we know from calculus that the product of a continuous function and a number is continuous.
Example IV
Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. The same sort of argument as before shows that this is a subspace too.
Examples that extend these themes are common in functional analysis.
Properties of subspaces
From the definition of vector spaces, it follows that subspaces are nonempty and are closed under sums and under scalar multiples. Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. It is in fact equivalent to consider only linear combinations of two elements at a time.
In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed.^{[8]} The same is true for subspaces of finite codimension, that is, subspaces determined by a finite number of continuous linear functionals.
Descriptions
Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially, over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin.
A natural description of a 1-subspace is the scalar multiplication of one non-zero vector v to all possible scalar values. Two 1-subspaces specified by non-zero vectors v and w are equal if and only if one vector can be obtained from the other with scalar multiplication: w = cv for some non-zero scalar c.
This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple.
A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two non-zero linear functionals F and G are equal if and only if one functional can be obtained from the other with scalar multiplication (in the dual space): G = cF for some non-zero scalar c.
It is generalized for higher codimensions with a system of equations. The following two subsections will present this latter description in detail, and the remaining four subsections further describe the idea of linear span.
Systems of linear equations
The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space K^{n}: the set of all (x_{1}, x_{2}, ..., x_{n}) ∈ K^{n} satisfying a_{11}x_{1} + a_{12}x_{2} + ⋯ + a_{1n}x_{n} = 0, ..., a_{m1}x_{1} + a_{m2}x_{2} + ⋯ + a_{mn}x_{n} = 0.
For example (over real or rational numbers), the set of all vectors (x, y, z) satisfying the equations
is a one-dimensional subspace. More generally, given n independent homogeneous linear equations in k variables, the solution set is a subspace of K^{k} whose dimension is the dimension of the null space of A, the coefficient matrix assembled from the n equations.
Null space of a matrix
In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation: Ax = 0.
The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix
Every subspace of K^{n} can be described as the null space of some matrix (see algorithms, below).
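A basis for a null space can be computed by row reduction (the algorithm is given at the end of this article). Here is a small self-contained Python sketch over exact rationals, run on a hypothetical one-equation example rather than the matrix from the example above:

```python
from fractions import Fraction

def rref(rows):
    # Reduced row echelon form over exact rationals; returns (matrix, pivot columns).
    m = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def null_space_basis(rows):
    # One basis vector per free variable: set that free variable to 1,
    # the others to 0, and read the pivot-variable values off the RREF rows.
    m, pivots = rref(rows)
    n = len(rows[0])
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for row_idx, p in enumerate(pivots):
            v[p] = -m[row_idx][f]
        basis.append(v)
    return basis

# Hypothetical example: the single equation x + 2y + 3z = 0 (a plane through 0).
basis = null_space_basis([[1, 2, 3]])
assert len(basis) == 2  # the null space is two-dimensional
for v in basis:
    assert v[0] + 2 * v[1] + 3 * v[2] == 0
```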
Linear parametric equations
The subset of K^{n} described by a system of homogeneous linear parametric equations is a subspace: the set of all (x_{1}, ..., x_{n}) for which x_{i} = a_{i1}t_{1} + a_{i2}t_{2} + ⋯ + a_{im}t_{m} for some parameters t_{1}, ..., t_{m} ∈ K.
For example, the set of all vectors (x, y, z) parameterized by the equations x = 2t_{1} + 3t_{2}, y = 5t_{1} − 4t_{2}, z = −t_{1} + 2t_{2}
is a twodimensional subspace of K^{3}, if K is a number field (such as real or rational numbers).^{[9]}
Span of vectors
In linear algebra, the system of parametric equations can be written as a single vector equation: (x, y, z) = t_{1}(2, 5, −1) + t_{2}(3, −4, 2).
The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace.
In general, a linear combination of vectors v_{1}, v_{2}, ... , v_{k} is any vector of the form t_{1}v_{1} + t_{2}v_{2} + ⋯ + t_{k}v_{k}.
The set of all possible linear combinations is called the span: Span{v_{1}, ..., v_{k}} = {t_{1}v_{1} + ⋯ + t_{k}v_{k} : t_{1}, ..., t_{k} ∈ K}.
If the vectors v_{1}, ... , v_{k} have n components, then their span is a subspace of K^{n}. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v_{1}, ... , v_{k}.
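Membership in a span reduces to a rank comparison: v lies in the span exactly when appending v to the spanning set does not increase the rank. A Python sketch over exact rationals, using the two spanning vectors (2, 5, −1) and (3, −4, 2) discussed above:

```python
from fractions import Fraction

def rank(rows):
    # Rank via forward elimination over exact rationals.
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(vectors, v):
    # v lies in span(vectors) iff adding v as a row does not raise the rank.
    return rank(vectors + [v]) == rank(vectors)

spanning = [[2, 5, -1], [3, -4, 2]]
assert in_span(spanning, [5, 1, 1])      # the sum of the two spanning vectors
assert not in_span(spanning, [1, 0, 0])  # outside the plane they span
```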
 Example
 The xz-plane in R^{3} can be parameterized by the equations x = t_{1}, y = 0, z = t_{2}.
 As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). Every vector in the xz-plane can be written as a linear combination of these two: (x, 0, z) = x(1, 0, 0) + z(0, 0, 1).
 Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1).
Column space and row space
A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation: x = At, where the columns of A are the spanning vectors and the parameter vector t ranges over K^{m}.
In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of K^{n} spanned by the column vectors of A.
The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).
Independence, basis, and dimension
In general, a subspace of K^{n} determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K^{3} spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of t_{1}, t_{2}, t_{3}.
In general, vectors v_{1}, ... , v_{k} are called linearly independent if t_{1}v_{1} + t_{2}v_{2} + ⋯ + t_{k}v_{k} ≠ u_{1}v_{1} + u_{2}v_{2} + ⋯ + u_{k}v_{k}
for (t_{1}, t_{2}, ... , t_{k}) ≠ (u_{1}, u_{2}, ... , u_{k}).^{[10]} If v_{1}, ..., v_{k} are linearly independent, then the coordinates t_{1}, ..., t_{k} for a vector in the span are uniquely determined.
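For the three vectors (1, 0, 0), (0, 0, 1), (2, 0, 3) mentioned above, the failure of linear independence can be exhibited directly: two different coefficient tuples produce the same vector. A minimal Python check:

```python
def comb(t, vectors):
    # Linear combination sum(t_i * v_i) of equal-length vectors.
    n = len(vectors[0])
    return [sum(ti * v[j] for ti, v in zip(t, vectors)) for j in range(n)]

v1, v2, v3 = [1, 0, 0], [0, 0, 1], [2, 0, 3]

# (2, 0, 3) is reachable two ways: as 2*v1 + 3*v2, and as 1*v3.
# Distinct coefficient tuples with equal combinations mean dependence.
assert comb([2, 3, 0], [v1, v2, v3]) == [2, 0, 3]
assert comb([0, 0, 1], [v1, v2, v3]) == [2, 0, 3]
```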
A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see algorithms, below).
 Example
 Let S be the subspace of R^{4} defined by the equations x_{1} = 2x_{2} and x_{3} = 5x_{4}.
 Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors: (x_{1}, x_{2}, x_{3}, x_{4}) = x_{2}(2, 1, 0, 0) + x_{4}(0, 0, 5, 1).
 The subspace S is two-dimensional. Geometrically, it is the plane in R^{4} passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1).
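For this particular basis the coordinates can be read off componentwise, since the second entry of (2, 1, 0, 0) and the fourth entry of (0, 0, 5, 1) are both 1. A small Python sketch with hypothetical coefficients 3 and −2:

```python
b1, b2 = [2, 1, 0, 0], [0, 0, 5, 1]

def decompose(v):
    # For this basis, the coordinates are simply the 2nd and 4th components,
    # because b1 has a 1 in position 2 and b2 has a 1 in position 4.
    t1, t2 = v[1], v[3]
    assert [t1 * a + t2 * b for a, b in zip(b1, b2)] == v  # verify v = t1*b1 + t2*b2
    return t1, t2

# Build a vector of S as 3*b1 + (-2)*b2, then recover the coefficients.
v = [3 * a + (-2) * b for a, b in zip(b1, b2)]
assert v == [6, 3, -10, -2]
assert decompose(v) == (3, -2)
```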
Operations and relations on subspaces
Inclusion
The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension).
A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊂ W, then dim W = k if and only if U = W.
Intersection
Given subspaces U and W of a vector space V, then their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V.^{[11]}
Proof:
 Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, then v + w belongs to U. Similarly, since W is a subspace, then v + w belongs to W. Thus, v + w belongs to U ∩ W.
 Let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W. Since U and W are subspaces, cv belongs to both U and W; thus, cv belongs to U ∩ W.
 Since U and W are vector spaces, then 0 belongs to both sets. Thus, 0 belongs to U ∩ W.
For every vector space V, the set {0} and V itself are subspaces of V.^{[12]}
Sum
If U and W are subspaces, their sum is the subspace U + W = {u + w : u ∈ U, w ∈ W}.^{[13]}
For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality max(dim U, dim W) ≤ dim(U + W) ≤ dim U + dim W.
Here the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimensions of the intersection and the sum are related by dim(U + W) + dim(U ∩ W) = dim U + dim W.^{[14]}
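The dimension relation can be verified on a small concrete example. This sketch assumes, purely for illustration, the xy- and yz-planes in R^3, whose sum is all of R^3 and whose intersection is the y-axis:

```python
from fractions import Fraction

def rank(rows):
    # Rank via forward elimination over exact rationals.
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

U = [[1, 0, 0], [0, 1, 0]]   # basis of the xy-plane
W = [[0, 1, 0], [0, 0, 1]]   # basis of the yz-plane
dim_U, dim_W = rank(U), rank(W)
dim_sum = rank(U + W)        # U + W is spanned by both bases together
dim_int = 1                  # U ∩ W is the y-axis, spanned by (0, 1, 0)

assert max(dim_U, dim_W) <= dim_sum <= dim_U + dim_W
assert dim_sum + dim_int == dim_U + dim_W   # 3 + 1 == 2 + 2
```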
Lattice of subspaces
The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the {0} subspace, the least element, is an identity element of the sum operation, and the whole space V, the greatest element, is an identity element of the intersection operation.
Orthogonal complements
If V is an inner product space and N is a subset of V, then the orthogonal complement N^{⊥} of N is again a subspace.^{[15]} If V is finite-dimensional and N is a subspace, then the dimensions of N and N^{⊥} satisfy the complementation relationship dim(N) + dim(N^{⊥}) = dim(V).^{[16]} Moreover, no non-zero vector is orthogonal to itself, so N ∩ N^{⊥} = {0}, and V is the direct sum of N and N^{⊥}.^{[17]} Applying orthogonal complements twice returns the original subspace: (N^{⊥})^{⊥} = N for every subspace N.^{[18]}
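These dimension and orthogonality facts are easy to sanity-check on a hypothetical example in R^3 with the standard dot product:

```python
def dot(u, v):
    # Standard inner product on R^n.
    return sum(a * b for a, b in zip(u, v))

# Hypothetical example: N is the xy-plane, so N-perp is the z-axis.
N_basis = [[1, 0, 0], [0, 1, 0]]
N_perp_basis = [[0, 0, 1]]

# Every basis vector of N-perp is orthogonal to every basis vector of N ...
assert all(dot(u, v) == 0 for u in N_basis for v in N_perp_basis)
# ... and dim(N) + dim(N_perp) = dim(V) = 3.
assert len(N_basis) + len(N_perp_basis) == 3
```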
This operation, understood as negation (¬), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice).^{[citation needed]}
In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N such that N ∩ N^{⊥} ≠ {0}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra).^{[citation needed]}
Algorithms
Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:
 The reduced matrix has the same null space as the original.
 Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
 Row reduction does not affect the linear dependence of the column vectors.
Basis for a row space
 Input An m × n matrix A.
 Output A basis for the row space of A.
 Use elementary row operations to put A into row echelon form.
 The nonzero rows of the echelon form are a basis for the row space of A.
See the article on row space for an example.
If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of K^{n} are equal.
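The uniqueness of the reduced row echelon form yields exactly this equality test: two matrices have the same row space precisely when the nonzero rows of their RREFs coincide. A Python sketch over exact rationals, with hypothetical matrices:

```python
from fractions import Fraction

def rref(rows):
    # Reduced row echelon form over exact rationals; returns (matrix, pivot columns).
    m = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def row_space_signature(rows):
    # The nonzero rows of the RREF are a canonical basis for the row space.
    m, _ = rref(rows)
    return [tuple(r) for r in m if any(x != 0 for x in r)]

A = [[1, 2, 3], [2, 4, 6]]   # rows are multiples of (1, 2, 3)
B = [[3, 6, 9]]              # also a multiple of (1, 2, 3)
assert row_space_signature(A) == row_space_signature(B)       # same row space
assert row_space_signature(A) != row_space_signature([[1, 0, 0]])
```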
Subspace membership
 Input: A basis {b_{1}, b_{2}, ..., b_{k}} for a subspace S of K^{n}, and a vector v with n components.
 Output: Determines whether v is an element of S.
 Create a (k + 1) × n matrix A whose rows are the vectors b_{1}, ... , b_{k} and v.
 Use elementary row operations to put A into row echelon form.
 If the echelon form has a row of zeroes, then the vectors {b_{1}, ..., b_{k}, v} are linearly dependent; since b_{1}, ..., b_{k} are linearly independent, this means that v ∈ S. Otherwise, v ∉ S.
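The zero-row test is equivalent to checking that appending v does not increase the rank. A minimal sketch, assuming sympy and hypothetical basis vectors in K^{3}:

```python
from sympy import Matrix

def in_subspace(basis, v):
    """v lies in span(basis) exactly when stacking v below the basis
    rows leaves the rank unchanged (i.e. the echelon form of the
    enlarged matrix acquires a zero row)."""
    B = Matrix([list(b) for b in basis])
    return Matrix.vstack(B, Matrix([list(v)])).rank() == B.rank()

basis = [(1, 0, 1), (0, 1, 1)]
assert in_subspace(basis, (2, 3, 5))      # 2*b1 + 3*b2
assert not in_subspace(basis, (1, 0, 0))
```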
Basis for a column space
 Input: An m × n matrix A.
 Output: A basis for the column space of A.
 Use elementary row operations to put A into row echelon form.
 Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space.
See the article on column space for an example.
This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
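The pivot-column recipe translates directly: sympy's rref returns the pivot indices, and the basis is read off from the columns of the original matrix, not the echelon form. Hypothetical data; sympy assumed:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 1],
            [1, 2, 1, 2]])   # third row = first + second, so rank 2

_, pivots = A.rref()                  # pivot column indices
basis = [A.col(j) for j in pivots]    # columns of the ORIGINAL matrix

# The pivot columns are as numerous as the rank, and they are
# columns of A itself, as the text above explains.
assert len(basis) == A.rank()
```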
Coordinates for a vector
 Input: A basis {b_{1}, b_{2}, ..., b_{k}} for a subspace S of K^{n}, and a vector v ∈ S.
 Output: Numbers t_{1}, t_{2}, ..., t_{k} such that v = t_{1}b_{1} + ··· + t_{k}b_{k}.
 Create an augmented matrix A whose columns are b_{1},...,b_{k} , with the last column being v.
 Use elementary row operations to put A into reduced row echelon form.
 Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t_{1}, t_{2}, ..., t_{k}. (These should be precisely the first k entries in the final column of the reduced echelon form.)
If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S.
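The augmented-matrix procedure, including the failure case just described, can be sketched as follows (sympy assumed; example data hypothetical):

```python
from sympy import Matrix

def coordinates(basis, v):
    """Solve t1*b1 + ... + tk*bk = v by row-reducing the augmented
    matrix [b1 ... bk | v]. Returns None when the last column of the
    reduced echelon form contains a pivot, i.e. v is not in the span.
    Assumes the basis vectors are linearly independent."""
    k = len(basis)
    cols = [Matrix(list(b)) for b in basis] + [Matrix(list(v))]
    R, pivots = Matrix.hstack(*cols).rref()
    if k in pivots:                    # pivot in the augmented column
        return None
    return [R[i, k] for i in range(k)]  # first k entries of last column

b1, b2 = (1, 0, 1), (0, 1, 1)
assert coordinates([b1, b2], (2, -1, 1)) == [2, -1]   # v = 2*b1 - b2
assert coordinates([b1, b2], (0, 0, 1)) is None
```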
Basis for a null space
 Input: An m × n matrix A.
 Output: A basis for the null space of A.
 Use elementary row operations to put A in reduced row echelon form.
 Using the reduced row echelon form, determine which of the variables x_{1}, x_{2}, ..., x_{n} are free. Write equations for the dependent variables in terms of the free variables.
 For each free variable x_{i}, choose a vector in the null space for which x_{i} = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A.
See the article on null space for an example.
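sympy's nullspace method in effect carries out this recipe, producing one basis vector per free variable with that free variable set to 1. A sketch with a hypothetical matrix already in reduced row echelon form:

```python
from sympy import Matrix

A = Matrix([[1, 0, -3, 2],
            [0, 1, 5, -1]])   # pivots in columns 1 and 2; x3, x4 free

basis = A.nullspace()

# Rank-nullity: one basis vector for each free variable.
assert len(basis) == A.cols - A.rank()

# Every basis vector is genuinely annihilated by A.
for b in basis:
    assert A * b == Matrix([0, 0])
```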
Basis for the sum and intersection of two subspaces
Given two subspaces U and W of V, a basis of the sum U + W and of the intersection U ∩ W can be calculated using the Zassenhaus algorithm.
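The Zassenhaus algorithm row-reduces the block matrix [[U, U], [W, 0]]: rows whose left half is nonzero yield a basis of U + W, while rows whose left half has been zeroed out carry, in their right half, a basis of U ∩ W. A hedged sketch assuming sympy, with two hypothetical planes in K^{3}:

```python
from sympy import Matrix, zeros

def zassenhaus(U, W):
    """Bases for U + W and U ∩ W, given bases of U and W as the rows
    of the input matrices (both subspaces of K^n)."""
    n = U.cols
    M = Matrix.vstack(Matrix.hstack(U, U),
                      Matrix.hstack(W, zeros(W.rows, n)))
    R, _ = M.rref()
    sum_basis, int_basis = [], []
    for i in range(R.rows):
        left, right = R[i, :n], R[i, n:]
        if any(left):
            sum_basis.append(left)    # left halves span U + W
        elif any(right):
            int_basis.append(right)   # right halves span U ∩ W
    return sum_basis, int_basis

U = Matrix([[1, 0, 0], [0, 1, 0]])    # xy-plane
W = Matrix([[0, 1, 0], [0, 0, 1]])    # yz-plane
s, i = zassenhaus(U, W)
assert len(s) == 3 and len(i) == 1    # sum is K^3, intersection a line
```

The invariant behind the algorithm: each row (l, r) of the block matrix satisfies l ∈ U + W and l − r ∈ W, and row operations preserve this, so once l = 0 the right half r lies in U ∩ W.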
Equations for a subspace
 Input: A basis {b_{1}, b_{2}, ..., b_{k}} for a subspace S of K^{n}.
 Output: An (n − k) × n matrix whose null space is S.
 Create a matrix A whose rows are b_{1}, b_{2}, ..., b_{k}.
 Use elementary row operations to put A into reduced row echelon form.
 Let c_{1}, c_{2}, ..., c_{n} be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots.
 This results in a homogeneous system of n − k linear equations involving the variables c_{1}, ..., c_{n}. The (n − k) × n matrix corresponding to this system is the desired matrix with null space S.
 Example
 If the reduced row echelon form of A is
 then the column vectors c_{1}, ..., c_{6} satisfy the equations
 It follows that the row vectors of A satisfy the equations
 In particular, the row vectors of A are a basis for the null space of the corresponding matrix.
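An equivalent way to obtain the output matrix, convenient in code: the rows of the desired (n − k) × n matrix form a basis of the null space of the matrix B whose rows are b_{1}, ..., b_{k}, since each such row pairs to zero with every b_{i} and the dimensions match. A sketch assuming sympy, with a hypothetical plane in K^{3}:

```python
from sympy import Matrix

def equations_for(B):
    """Given a matrix B whose rows are an independent basis of a
    subspace S of K^n, return an (n - k) x n matrix E whose null
    space is exactly S."""
    return Matrix.hstack(*B.nullspace()).T   # null space basis as rows

B = Matrix([[1, 0, 1],
            [0, 1, 1]])
E = equations_for(B)

assert E.rows == B.cols - B.rank()    # n - k equations
assert E * B.T == Matrix([[0, 0]])    # each basis row satisfies them
```

This agrees with the pivot/free-column recipe above up to a change of basis of the row space of E.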
See also
Notes
 ^ Halmos, P. R. (1942). FiniteDimensional Vector Spaces. Princeton, NJ: Princeton University Press. p. 14. ISBN 9781614272816.
 ^ The term linear subspace is sometimes used for referring to flats and affine subspaces. In the case of vector spaces over the reals, linear subspaces, flats, and affine subspaces are also called linear manifolds to emphasize that they are also manifolds.
 ^ Anton (2005, p. 155)
 ^ Beauregard & Fraleigh (1973, p. 176)
 ^ Herstein (1964, p. 132)
 ^ Kreyszig (1972, p. 200)
 ^ Nering (1970, p. 20)
 ^ For Hilbert spaces, see Paul DuChateau. "Basic Facts About Hilbert Space" (PDF). Retrieved September 17, 2012.
 ^ Generally, K can be any field of such characteristic that the given integer matrix has the appropriate rank in it. All fields include the integers, but some integers may equal zero in some fields.
 ^ This definition is often stated differently: vectors v_{1}, ..., v_{k} are linearly independent if t_{1}v_{1} + ··· + t_{k}v_{k} ≠ 0 for every (t_{1}, t_{2}, ..., t_{k}) ≠ (0, 0, ..., 0). The two definitions are equivalent.
 ^ Nering (1970, p. 21)
 ^ Nering (1970, p. 20)
 ^ Nering (1970, p. 21)
 ^ Nering (1970, p. 22)
 ^ Axler (2015), 6.46.
 ^ Axler (2015), 6.50.
 ^ Axler (2015), 6.47.
 ^ Axler (2015), 6.51.
Textbooks
 Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
 Axler, Sheldon Jay (2015), Linear Algebra Done Right (3rd ed.), SpringerVerlag, ISBN 9783319110790
 Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 039514017X
 Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 9781114541016
 Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0471507288
 Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 9780321287137
 Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall
 Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 9780898714548, archived from the original on March 1, 2001
 Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646
 Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0534998453
External links
 "Vector subspace". PlanetMath..
 Gilbert Strang, MIT Linear Algebra Lecture on the Four Fundamental Subspaces at Google Video, from MIT OpenCourseWare