Linear relation

From Wikipedia, the free encyclopedia

In linear algebra, a linear relation, or simply relation, between elements of a vector space or a module is a linear equation that has these elements as a solution.

More precisely, if $e_1, \dots, e_n$ are elements of a (left) module M over a ring R (the case of a vector space over a field is a special case), a relation between $e_1, \dots, e_n$ is a sequence $(f_1, \dots, f_n)$ of elements of R such that

$f_1 e_1 + \cdots + f_n e_n = 0.$
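For example, in the real vector space $\mathbb{R}^2$, the vectors $v_1 = (1, 2)$, $v_2 = (3, 4)$ and $v_3 = (5, 6)$ satisfy $v_1 - 2v_2 + v_3 = 0$, so $(1, -2, 1)$ is a relation between them. Likewise, in $R = K[x, y]$ viewed as a module over itself, $(y, -x)$ is a relation between the elements x and y, since $y \cdot x + (-x) \cdot y = 0$.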

The relations between $e_1, \dots, e_n$ form a module. One is generally interested in the case where $e_1, \dots, e_n$ is a generating set of a finitely generated module M, in which case the module of the relations is often called a syzygy module of M. The syzygy module depends on the choice of a generating set, but it is unique up to the direct sum with a free module. That is, if $S_1$ and $S_2$ are syzygy modules corresponding to two generating sets of the same module, then they are stably isomorphic, which means that there exist two free modules $L_1$ and $L_2$ such that $S_1 \oplus L_1$ and $S_2 \oplus L_2$ are isomorphic.
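In the special case of vectors over a field, the module of relations is the null space of the matrix whose columns are the given vectors, so relations can be computed directly. A minimal sketch using SymPy, with the vectors of the example above:

    from sympy import Matrix

    # Columns are the vectors v1 = (1, 2), v2 = (3, 4), v3 = (5, 6).
    A = Matrix([[1, 3, 5],
                [2, 4, 6]])

    # Each null-space vector (f1, f2, f3) is a relation f1*v1 + f2*v2 + f3*v3 = 0.
    relations = A.nullspace()
    print(relations)  # [Matrix([[1], [-2], [1]])], i.e. v1 - 2*v2 + v3 = 0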

Higher order syzygy modules are defined recursively: a first syzygy module of a module M is simply its syzygy module. For k > 1, a kth syzygy module of M is a syzygy module of a (k – 1)-th syzygy module. Hilbert's syzygy theorem states that, if $R = K[x_1, \dots, x_n]$ is a polynomial ring in n indeterminates over a field, then every nth syzygy module is free. The case n = 0 is the fact that every finite-dimensional vector space has a basis, and the case n = 1 is the fact that K[x] is a principal ideal domain and that every submodule of a finitely generated free K[x]-module is also free.
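For instance, take $R = K[x]$ and $M = K[x]/(x)$, generated by the class of 1. A relation on this single generator is an element $f \in K[x]$ with $f \cdot 1 = 0$ in M, that is, a multiple of x. The first syzygy module is therefore the ideal $(x)$, which is free of rank 1, as predicted by the case n = 1 of the theorem.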

The construction of higher order syzygy modules is generalized as the definition of free resolutions, which allows restating Hilbert's syzygy theorem as: a polynomial ring in n indeterminates over a field has global homological dimension n.

If a and b are two elements of the commutative ring R, then (b, –a) is a relation that is said to be trivial. The module of trivial relations of an ideal is the submodule of the first syzygy module of the ideal that is generated by the trivial relations between the elements of a generating set of the ideal. The concept of trivial relations can be generalized to higher order syzygy modules, and this leads to the concept of the Koszul complex of an ideal, which provides information on the non-trivial relations between the generators of an ideal.

YouTube Encyclopedic

  • 1/5
    Views:
    150 080
    1 158 824
    65 069
    1 214 000
    510 996
  • Null space 3: Relation to linear independence | Vectors and spaces | Linear Algebra | Khan Academy
  • Graphs of linear equations | Linear equations and functions | 8th grade | Khan Academy
  • Relation of null space to linear independence of columns
  • Linear Independence and Linear Dependence, Ex 1
  • What is a Vector Space? (Abstract Algebra)


Basic definitions

Let R be a ring, and M be a left R-module. A linear relation, or simply a relation, between k elements $x_1, \dots, x_k$ of M is a sequence $(a_1, \dots, a_k)$ of elements of R such that

$a_1 x_1 + \cdots + a_k x_k = 0.$

If $x_1, \dots, x_k$ is a generating set of M, the relation is often called a syzygy of M. It makes sense to call it a syzygy of M without regard to $x_1, \dots, x_k$, because, although the syzygy module depends on the chosen generating set, most of its properties are independent of that choice; see § Stable properties, below.

If the ring R is Noetherian, or at least coherent, and if M is finitely generated, then the syzygy module is also finitely generated. A syzygy module of this syzygy module is a second syzygy module of M. Continuing this way, one can define a kth syzygy module for every positive integer k.

Hilbert's syzygy theorem asserts that, if M is a finitely generated module over a polynomial ring $K[x_1, \dots, x_n]$ over a field, then any nth syzygy module is a free module.

Stable properties

Generally speaking, in the language of K-theory, a property is stable if it becomes true by making a direct sum with a sufficiently large free module. A fundamental property of syzygy modules is that they are "stably independent" of the choice of generating sets for the modules involved. The following result is the basis of these stable properties.

Proposition — Let $x_1, \dots, x_m$ be a generating set of an R-module M, and $y_1, \dots, y_n$ be other elements of M. The module of the relations between $x_1, \dots, x_m, y_1, \dots, y_n$ is the direct sum of the module of the relations between $x_1, \dots, x_m$ and a free module of rank n.

Proof. As $x_1, \dots, x_m$ is a generating set, each $y_i$ can be written $y_i = \sum_j \alpha_{i,j} x_j$. This provides a relation $r_i$ between $x_1, \dots, x_m, y_1, \dots, y_n$, namely the sequence with $\alpha_{i,1}, \dots, \alpha_{i,m}$ in the first m positions, –1 in position m + i, and 0 elsewhere. Now, if $(a_1, \dots, a_m, b_1, \dots, b_n)$ is any relation, then $(a_1, \dots, a_m, b_1, \dots, b_n) + \sum_i b_i r_i$ is a relation between the $x_1, \dots, x_m$ only. In other words, every relation between $x_1, \dots, x_m, y_1, \dots, y_n$ is the sum of a relation between $x_1, \dots, x_m$ and a linear combination of the $r_i$'s. It is straightforward to prove that this decomposition is unique, and this proves the result.
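For instance, let $R = K[x, y]$, let M be the ideal $(x, y)$ with generating set x, y, and take the extra element $x + y$. The corresponding relation is $r_1 = (1, 1, -1)$, and every relation $(a, b, c)$ between x, y and $x + y$ decomposes uniquely as a multiple of $r_1$ plus a relation of the form $(a', b', 0)$, where $(a', b')$ is a relation between x and y (hence a multiple of $(y, -x)$). The module of relations between x, y and $x + y$ is thus the direct sum of the module of relations between x and y and a free module of rank 1, as the proposition asserts.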

This proves that the first syzygy module is "stably unique". More precisely, given two generating sets $S_1$ and $S_2$ of a module M, if $R_1$ and $R_2$ are the corresponding modules of relations, then there exist two free modules $L_1$ and $L_2$ such that $R_1 \oplus L_1$ and $R_2 \oplus L_2$ are isomorphic. To prove this, it suffices to apply the preceding proposition twice, obtaining two decompositions of the module of the relations between the union of the two generating sets.

To obtain a similar result for higher syzygy modules, it remains to prove that, if M is any module and L is a free module, then M and M ⊕ L have isomorphic syzygy modules. It suffices to consider a generating set of M ⊕ L that consists of a generating set of M and a basis of L. For every relation between the elements of this generating set, the coefficients of the basis elements of L are all zero, and the syzygies of M ⊕ L are exactly the syzygies of M extended with zero coefficients. This completes the proof of the following theorem.

Theorem — For every positive integer k, the kth syzygy module of a given module depends on choices of generating sets, but is unique up to the direct sum with a free module. More precisely, if $S_1$ and $S_2$ are kth syzygy modules that are obtained by different choices of generating sets, then there are free modules $L_1$ and $L_2$ such that $S_1 \oplus L_1$ and $S_2 \oplus L_2$ are isomorphic.

Relationship with free resolutions

Given a generating set $g_1, \dots, g_n$ of an R-module M, one can consider the free module L with basis $G_1, \dots, G_n$, where $G_1, \dots, G_n$ are new indeterminates. This defines an exact sequence

$L \longrightarrow M \longrightarrow 0,$

where the left arrow is the linear map that maps each $G_i$ to the corresponding $g_i$. The kernel of this left arrow is a first syzygy module of M.

One can repeat this construction with this kernel in place of M. Repeating this construction again and again, one gets a long exact sequence

$\cdots \longrightarrow L_k \longrightarrow L_{k-1} \longrightarrow \cdots \longrightarrow L_0 \longrightarrow M \longrightarrow 0,$

where all the $L_i$ are free modules. By definition, such a long exact sequence is a free resolution of M.
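For example, let $R = K[x, y]$ and $M = R/(x, y)$, generated by the class of 1. A first syzygy module of M is the ideal $(x, y)$, and a syzygy module of $(x, y)$ is generated by the Koszul relation $(y, -x)$, hence free of rank 1. This yields the finite free resolution

$0 \longrightarrow R \longrightarrow R^2 \longrightarrow R \longrightarrow M \longrightarrow 0,$

where the maps are $t \mapsto (ty, -tx)$, $(a, b) \mapsto ax + by$, and the canonical projection onto $M = R/(x, y)$. Its length is 2, in agreement with Hilbert's syzygy theorem for n = 2 indeterminates.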

For every k ≥ 1, the kernel $S_k$ of the arrow starting from $L_{k-1}$ is a kth syzygy module of M. It follows that the study of free resolutions is the same as the study of syzygy modules.

A free resolution is finite of length n if $S_n$ is free. In this case, one can take $L_n = S_n$, and $L_k = 0$ (the zero module) for every k > n.

This allows restating Hilbert's syzygy theorem: If $R = K[x_1, \dots, x_n]$ is a polynomial ring in n indeterminates over a field K, then every free resolution is finite of length at most n.

The global dimension of a commutative Noetherian ring is either infinite, or the minimal n such that every free resolution is finite of length at most n. A commutative Noetherian ring is regular if its global dimension is finite. In this case, the global dimension equals its Krull dimension. So, Hilbert's syzygy theorem may be restated in a very short sentence that hides much mathematics: A polynomial ring over a field is a regular ring.
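For example, a field K is regular of global dimension 0, and the principal ideal domain K[x] is regular of global dimension 1; in both cases the global dimension equals the Krull dimension (0 and 1, respectively).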

Trivial relations

In a commutative ring R, one always has $ab - ba = 0$. This implies trivially that (b, –a) is a linear relation between a and b. Therefore, given a generating set $g_1, \dots, g_k$ of an ideal I, one calls trivial relation or trivial syzygy every element of the submodule of the syzygy module that is generated by these trivial relations between two generating elements. More precisely, the module of trivial syzygies is generated by the relations

$r_{i,j} = (x_1, \dots, x_k), \qquad 1 \le i < j \le k,$

such that $x_i = g_j$, $x_j = -g_i$, and $x_l = 0$ otherwise.
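For example, for the ideal $(x, y, z)$ of $K[x, y, z]$ with generators $g_1 = x$, $g_2 = y$, $g_3 = z$, the trivial syzygies are generated by $r_{1,2} = (y, -x, 0)$, $r_{1,3} = (z, 0, -x)$ and $r_{2,3} = (0, z, -y)$. Since x, y, z form a regular sequence, these trivial relations generate the whole first syzygy module, so every relation between these generators is a combination of trivial ones.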

History

The word syzygy came into mathematics with the work of Arthur Cayley.[1] In that paper, Cayley used it in the theory of resultants and discriminants.[2] As the word syzygy was used in astronomy to denote a linear relation between planets, Cayley used it to denote linear relations between minors of a matrix, such as, in the case of the 2×3 matrix with rows $(a, b, c)$ and $(d, e, f)$, the relation

$a(bf - ce) - b(af - cd) + c(ae - bd) = 0$

between its three 2×2 minors.

Then, the word syzygy was popularized (among mathematicians) by David Hilbert in his 1890 article, which contains three fundamental theorems on polynomials: Hilbert's syzygy theorem, Hilbert's basis theorem and Hilbert's Nullstellensatz.

In his article, Cayley makes use, in a special case, of what was later[3] called the Koszul complex, after a similar construction in differential geometry by the mathematician Jean-Louis Koszul.

Notes

  1. [Cayley 1847] A. Cayley, "On the theory of involution in geometry", Cambridge Math. J. 11 (1847), 52–61. See also Collected Papers, Vol. 1 (1889), 80–94, Cambridge Univ. Press, Cambridge.
  2. [Gel’fand et al. 1994] I. M. Gel’fand, M. M. Kapranov, and A. V. Zelevinsky, Discriminants, resultants, and multidimensional determinants, Mathematics: Theory & Applications, Birkhäuser, Boston, 1994.
  3. Serre, Jean-Pierre, Algèbre locale. Multiplicités. (French) Cours au Collège de France, 1957–1958, rédigé par Pierre Gabriel. Seconde édition, 1965. Lecture Notes in Mathematics, 11, Springer-Verlag, Berlin-New York, 1965, vii+188 pp.; this is the published form of mimeographed notes from Serre's lectures at the Collège de France in 1958.
