Nullity theorem

From Wikipedia, the free encyclopedia

The nullity theorem is a mathematical theorem about the inverse of a partitioned matrix, which states that the nullity of a block in a matrix equals the nullity of the complementary block in its inverse matrix. Here, the nullity is the dimension of the kernel. The theorem was proven in an abstract setting by Gustafson (1984), and for matrices by Fiedler & Markham (1986).

Partition a matrix and its inverse in four submatrices:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} E & F \\ G & H \end{bmatrix}.$$

The partition on the right-hand side should be the transpose of the partition on the left-hand side, in the sense that if A is an m-by-n block then E should be an n-by-m block.

The statement of the nullity theorem is now that the nullities of the blocks on the right equal the nullities of the complementary blocks on the left (Strang & Nguyen 2004):

$$\operatorname{nullity} A = \operatorname{nullity} H, \quad \operatorname{nullity} B = \operatorname{nullity} F, \quad \operatorname{nullity} C = \operatorname{nullity} G, \quad \operatorname{nullity} D = \operatorname{nullity} E.$$

More generally, if a submatrix is formed from the rows with indices {i1, i2, …, im} and the columns with indices {j1, j2, …, jn}, then the complementary submatrix is formed from the rows with indices {1, 2, …, N} \ {j1, j2, …, jn} and the columns with indices {1, 2, …, N} \ {i1, i2, …, im}, where N is the size of the whole matrix. The nullity theorem states that the nullity of any submatrix equals the nullity of the complementary submatrix of the inverse.
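The theorem can be checked numerically. The sketch below is an illustration, not part of the original article: it uses the 4-by-4 second-difference matrix, whose inverse is known in closed form, and partitions both into 2-by-2 blocks, so the transpose condition on the two partitions holds automatically. Rank is computed with exact rational arithmetic to avoid floating-point ambiguity.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over exact rationals."""
    m = [row[:] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def nullity(rows):
    return len(rows[0]) - rank(rows)

def block(M, rows, cols):
    return [[M[i][j] for j in cols] for i in rows]

# 4-by-4 second-difference matrix K; its inverse is known in closed form:
# (K^-1)_ij = min(i, j) * (5 - max(i, j)) / 5, with 1-based indices.
K = [[Fraction(v) for v in row] for row in
     [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]]]
Kinv = [[Fraction(min(i, j) * (5 - max(i, j)), 5) for j in range(1, 5)]
        for i in range(1, 5)]

# Sanity check: K * Kinv is the identity.
prod = [[sum(K[i][k] * Kinv[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)]
assert prod == [[Fraction(int(i == j)) for j in range(4)] for i in range(4)]

# Partition K into A, B / C, D and Kinv into E, F / G, H.
A, B = block(K, [0, 1], [0, 1]), block(K, [0, 1], [2, 3])
C, D = block(K, [2, 3], [0, 1]), block(K, [2, 3], [2, 3])
E, F = block(Kinv, [0, 1], [0, 1]), block(Kinv, [0, 1], [2, 3])
G, H = block(Kinv, [2, 3], [0, 1]), block(Kinv, [2, 3], [2, 3])

# Complementary blocks have equal nullities.
print(nullity(A), nullity(H))  # 0 0
print(nullity(B), nullity(F))  # 1 1
print(nullity(C), nullity(G))  # 1 1
print(nullity(D), nullity(E))  # 0 0
```

Here the off-diagonal block B is nonzero only in one corner (K is tridiagonal), so it has nullity 1, and the theorem forces the corresponding block F of the dense inverse to have rank 1 as well.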

YouTube Encyclopedic

  • Dimension of the null space or nullity | Vectors and spaces | Linear Algebra | Khan Academy
  • The rank nullity relation and examples
  • How to find the null space and the nullity of a matrix: Example

Transcription

Let's say I have this matrix B, here, and I want to know what the null space of B is. And we've done this multiple times, but just as a review, the null space of B is just all of the vectors x that are members of-- 1, 2, 3, 4, 5-- R5, where B, my matrix B, times any of these vectors x, is equal to 0. That's the definition of the null space. I'm just trying to find the solution set to this equation right here. And we've seen before that the null space of the reduced row echelon form of B is equal to the null space of B. So what's the reduced row echelon form of B? And this is actually almost trivially easy. Let me just take a couple of steps right here-- to get a 0 here, let's just replace row 2 with row 2 minus row 1. So what do we get? Row 1 doesn't change, it's just 1, 1, 2, 3, 2. And then row 2 minus row 1: 1 minus 1 is 0. 1 minus 1 is 0. 3 minus 2 is 1. 1 minus 3 is minus 2. 4 minus 2 is 2. We're almost there. Let's see, so this is a free variable right here. This is a pivot variable right here. We have a 1. So let me get rid of that guy right there. And I can get rid of that guy right there by replacing row 1 with row 1 minus 2 times row 2. So now row 2 is going to be the same: 0, 0, 1, minus 2, 2. And let me replace row 1 with row 1 minus 2 times row 2. So 1 minus 2 times 0 is 1. 1 minus 2 times 0 is 1. 2 minus 2 times 1 is 0. 3 minus 2 times minus 2-- that's 3 plus 4, which is 7, right? 2 times this is minus 4 and we're subtracting it. And then 2 minus 2 times 2-- that's 2 minus 4-- is minus 2. So the reduced row echelon form of B is equal to that right there. And then if I wanted to figure out its null space, I have x1, x2, x3, x4, and x5 equaling-- I'm going to have two 0's right here. Now I can just write this as a system of equations. So let me do that. I get x1. I'm going to write my pivot variables in a green color.
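The row reduction above can be checked mechanically. Here is a short Python sketch (not from the video) that computes the reduced row echelon form with exact rational arithmetic:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form, using exact rational arithmetic."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column: it stays a free column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]]  # scale the pivot row to get a 1
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]  # eliminate the rest of the pivot column
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

# The matrix B from the video
B = [[1, 1, 2, 3, 2],
     [1, 1, 3, 1, 4]]
print([[int(v) for v in row] for row in rref(B)])
# -> [[1, 1, 0, 7, -2], [0, 0, 1, -2, 2]], matching the video
```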
x1 plus 1 times x2, so plus x2, plus 0 times x3, plus 7 times x4, minus 2 times x5 is equal to that 0 right there. And then I get my-- this is x3, right? 0 times x1 plus 0 times x2 plus 1 times x3. So I get x3 minus 2 times x4 plus 2 times x5 is equal to that 0 right there. And then if we solve for our pivot variables, right? These are our free variables. We can set them equal to anything. If we solve for our pivot variables, what do we get? We get x1 is equal to-- I should do that in green. The color coding helps. I get x1 is equal to minus x2 minus 7x4 plus 2x5; I just subtracted these from both sides of the equation. And I get x3 is equal to-- we've done this multiple times-- 2x4 minus 2x5. And so if I wanted to write the solution set in vector form, I could write my solution set, or my null space really, as all the possible x's: x1, x2, x3, x4, x5. This is my vector x, that's in R5. It is equal to a linear combination of these. So let me write it out. The free variables are x2 times some vector right there. Plus x-- is x3 next? No, x3 is not a free variable. Plus x4, that's my next free variable, times some vector. Plus x5 times some vector. I've run out of space. And what are those vectors? Let's see. I don't want to make this too dirty, so let me see if I can maybe move-- nope, that's not what I wanted to do. Let me just rewrite this. I haven't mastered this pen tool yet, so let me rewrite this here. So x3 is equal to 2x4 minus 2x5. Let me delete this right over here so I get some extra space. Cross that out. I think that's good enough. So I can go back to what I was doing before: x5 times some vector right here. And now what are those vectors? We just have to look at these formulas. x1 is equal to minus 1 times x2. So minus 1 times x2, minus 7 times x4, plus 2 times x5. Fair enough. And what is x3 equal to? x3 is equal to 2x4, right? It had nothing to do with x2 right here, so it's equal to 2x4 minus 2x5.
And then 0 times x2, right? Because it had no x2 term right here. And then what is x2 equal to? Well x2 is just equal to 1 times x2. And so all of these terms are 0 right there. And I want you to pay attention to that. I'll write it right here. x2 is a free variable, so it's just equal to itself, right? 1 and you write a 0 and a 0. x4 is a free variable. And this is the important point of this exercise. So it's just equal to 1 times itself. You don't have to throw in any of the other free variables. And x5 is a free variable. So it just equals 1 times itself and none of the other free variables. So right here we now say that all of the solutions of our equation Bx equals 0, or the reduced row echelon form of B times x is equal to 0, will take this form. Or they are linear combinations of these vectors. Let's call this v1, v2, and v3. These are just random real numbers. I can pick any combination here to create this solution set, or to create our null space. So the null space of A, which is of course equal to the null space of the reduced row echelon form of A, is equal to all the possible linear combinations of these 3 vectors, is equal to the span of my vector v1, v2, and v3. Just like that. Now, the whole reason I went through this exercise-- because we've done this multiple times already-- is to think about whether these guys form a linear independent set. So my question is are these guys linearly independent? And the reason why I care is because if they are linearly independent then they form a basis for the null space, right? That we know that they span the null space, but if they're linearly independent, then that's the 2 constraints for a basis. You have to span the subspace, and you have to be linearly independent. So let's just inspect these guys right here. This v1, he has a 1 right here. He has a 1 in the second term because he corresponds to the free variable x2, which is the second entry, so we just throw a 1 here. 
And we have a 0 everywhere else in all of the other vectors in our spanning set. And that's because for the other free variables we always wanted to multiply them times a 0, right? And this is going to be true of any null space problem we do. For any free variable, if this free variable represents a second entry, we're going to have a 1 in the second entry here. And then a 0 for the second entry for all of the other vectors associated with the other free variables. So can this guy ever be represented as a linear combination of this guy and that guy? Well, there's nothing that I can multiply this 0 by and add to something that I multiply this 0 by to get a 1 here. It's just going to get 0's. So this guy can't be represented as a linear combination of these guys. Likewise, this vector right here has a 1 in the fourth position. Why the fourth position? Because the fourth position corresponds to its free variable, x4. So this guy's a 1 here. These other guys will definitely always have a 0 here. So you can't take any linear combination of them to get this guy. So this guy can't be represented as a linear combination of those guys. And last, this x5 guy, right here, has a 1 here. And these guys have 0's here. So no linear combination of these 0's can equal this 1. So all of these guys are linearly independent. You can't construct any of these vectors with some combination of the others. So they are linearly independent. So the set v1, v2, and v3 is actually a basis for the null space, for the null space of-- oh, you know what, I have to be very careful-- for the null space of B. Just for variety, I defined my initial matrix as matrix B, so let me be very careful here. So the null space of B was equal to the null space of the reduced row echelon form of B. It's good to switch things up every once in a while; you start thinking that every matrix is named A if you don't. And that's equal to the span of these vectors.
So these vectors-- and we just said that they're linearly independent; we just showed that because there's no way to get that one from these guys, that one from these guys, or that one from these guys-- these guys form a basis for the null space of B. Now this raises an interesting question. In the last video, I defined what dimensionality is. And maybe you missed it because that video was kind of proofy. But the dimensionality, the dimension, of a subspace-- I'll redefine it here-- is the number of elements in a basis for the subspace. And in the last video I took great pains to show that all bases for any given subspace will have the same number of elements. So this is well defined. So my question to you now is: what is the dimension of my null space of B? Well, the dimension is just the number of vectors in a basis for the null space of B. Well, this is a basis right there. And how many vectors do I have in it? I have 1, 2, 3 vectors. So the dimension of the null space of B is 3. Or another way to think about it-- another name for the dimension of the null space of B-- is the nullity, the nullity of B. And that is also equal to 3. And let's think about it, you know, I went through all this exercise. But what is the nullity of any matrix going to be equal to? It's the dimension of the null space. Well, the dimension of the null space-- you're always going to have as many vectors here as you have free variables. So in general, the nullity of any matrix-- let's say matrix A-- is equal to the number of, I guess you could call it, free variable columns, or the number of free variables in, well, I guess we call it the reduced row echelon form, or I guess we could say the number of non-pivot columns. The number of non-pivot columns in the reduced row echelon form of A.
Because that's essentially the number of free variables-- all of those free variables have an associated, linearly independent vector with each of them, right? So the number of free variables is the number of vectors you're going to have in your basis for your null space. And the number of free variables is essentially the number of non-pivot columns in your reduced row echelon form, right? This was a non-pivot column, that's a non-pivot column, that's a non-pivot column. And they're associated with the free variables x2, x4, and x5. So the nullity of a matrix is essentially the number of non-pivot columns in the reduced row echelon form of that matrix. Anyway, hopefully you found that vaguely useful.
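The takeaway-- nullity equals the number of non-pivot columns-- can be sketched in a few lines of Python (an illustration, not from the video), using the reduced row echelon form and the basis vectors found above:

```python
# RREF of B, as computed in the video
R = [[1, 1, 0, 7, -2],
     [0, 0, 1, -2, 2]]

# Pivot columns: the first nonzero column of each nonzero row
pivots = [next(c for c, v in enumerate(row) if v != 0) for row in R if any(row)]
free = [c for c in range(len(R[0])) if c not in pivots]
print(pivots, free, len(free))  # [0, 2] [1, 3, 4] 3  -> nullity is 3

# Basis vectors for the null space read off in the video (v1, v2, v3),
# one for each free variable x2, x4, x5
v1 = [-1, 1, 0, 0, 0]
v2 = [-7, 0, 2, 1, 0]
v3 = [2, 0, -2, 0, 1]

# Each satisfies R v = 0, and hence B v = 0
for v in (v1, v2, v3):
    assert all(sum(r * x for r, x in zip(row, v)) == 0 for row in R)
```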

References

  • Gustafson, William H. (1984), "A note on matrix inversion", Linear Algebra and Its Applications, 57: 71–73, doi:10.1016/0024-3795(84)90177-0, ISSN 0024-3795.
  • Fiedler, Miroslav; Markham, Thomas L. (1986), "Completing a matrix when certain entries of its inverse are specified", Linear Algebra and Its Applications, 74 (1–3): 225–237, doi:10.1016/0024-3795(86)90125-4, ISSN 0024-3795.
  • Strang, Gilbert; Nguyen, Tri (2004), "The interplay of ranks of submatrices" (PDF), SIAM Review, 46 (4): 637–646, doi:10.1137/S0036144503434381, hdl:1721.1/3885, ISSN 1095-7200.
This page was last edited on 6 May 2021, at 20:51.
This page is based on a Wikipedia article. Text is available under the CC BY-SA 3.0 Unported License; non-text media are available under their specified licenses.