
Principal homogeneous space

From Wikipedia, the free encyclopedia

In mathematics, a principal homogeneous space,[1] or torsor, for a group G is a homogeneous space X for G in which the stabilizer subgroup of every point is trivial. Equivalently, a principal homogeneous space for a group G is a non-empty set X on which G acts freely and transitively (meaning that, for any x, y in X, there exists a unique g in G such that x·g = y, where · denotes the (right) action of G on X). An analogous definition holds in other categories, where, for example,

  • G is a topological group, X is a topological space and the action is continuous,
  • G is a Lie group, X is a smooth manifold and the action is smooth,
  • G is an algebraic group, X is an algebraic variety and the action is regular.
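For a concrete feel for the definition, here is a minimal computational sketch (an illustration of ours, not part of the article): the five vertices of a regular pentagon, acted on by the rotation group Z/5. The vertex labels are arbitrary, so no vertex is distinguished, yet every pair of vertices is connected by exactly one rotation.

    # Minimal sketch (Python): vertices of a regular pentagon as a Z/5-torsor.
    # The vertex names are arbitrary placeholders; the point is that the
    # action below is free and transitive.
    from itertools import product

    n = 5
    X = ["a", "b", "c", "d", "e"]   # vertices, listed in cyclic order
    G = range(n)                    # the group Z/n, written additively

    def act(x, g):
        """Right action X x G -> X: rotate vertex x by g steps."""
        return X[(X.index(x) + g) % n]

    # Action axioms: x·0 = x and x·(g+h) = (x·g)·h.
    assert all(act(x, 0) == x for x in X)
    assert all(act(x, (g + h) % n) == act(act(x, g), h)
               for x, g, h in product(X, G, G))

    # Free and transitive: for each pair (x, y), exactly one g with x·g = y.
    for x, y in product(X, X):
        assert sum(act(x, g) == y for g in G) == 1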

YouTube Encyclopedic

  • Matrices: Reduced row echelon form 1 | Vectors and spaces | Linear Algebra | Khan Academy
  • Introduction to Eigenvalues and Eigenvectors - Part 1
  • Homogeneity of physics equations

Transcription

I have here three equations of four unknowns. You can already guess, or you already know, that if you have more unknowns than equations, you are probably not constraining it enough, so you are actually going to have an infinite number of solutions. Those infinitely many solutions could still be constrained, though. Let's say we're in four dimensions, in this case, because we have four variables. Maybe we're constrained to a plane in four dimensions, or, if we were in three dimensions, maybe we're constrained to a line. A line is an infinite number of solutions, but it's a more constrained set.

Let's solve this set of linear equations. We've done this by elimination in the past, but what I want to do is introduce the idea of matrices. Matrices are really just arrays of numbers that are shorthand for this system of equations. Let me create a matrix here. I could just create a coefficient matrix, where the coefficient matrix would just be the coefficients on the left-hand side of these linear equations. The coefficient there is 1. The coefficient there is 1. The coefficient there is 2. You have 2, 2, 4. Then 1, 2, and there is no coefficient on the x3 term here, because there is no x3 term there, so we'll say the coefficient on the x3 term is 0. And then we have 1, minus 1, and 6. That right there would be the coefficient matrix for this system of equations.

What I want to do is augment it with what these equations need to be equal to. Let me augment it: I'm going to just draw a little line here and write the 7, the 12, and the 4. I think you can see that this is just another way of writing this, and, just by the position, we know that these are the coefficients on the x1 terms and these are the coefficients on the x2 terms. It really just saves us from having to write x1 and x2 every time. We can essentially do the same operations on this that we otherwise would have done on the equations. We can replace any equation with that equation times some scalar multiple, plus another equation. We can divide an equation, or multiply it, by a scalar. We can subtract equations from each other. We can swap them.

Let's do that in an attempt to solve this system. The first thing I want to do, just like I've done in the past, is to get this into the form where, if I can, my leading coefficient in any of my rows is a 1, and every other entry in that column is a 0. In the past, I made sure that every entry below it was a 0; that's what I was doing in some of the previous videos, when we tried to figure out if things were linearly independent or not. Now, if there is a leading 1 in any of my rows, I'm going to make sure that everything else in that column is a 0. The form I'm aiming for is called reduced row echelon form. Let me write that: reduced row echelon form. If we call this augmented matrix, matrix A, then I want to get it into the reduced row echelon form of matrix A. And for matrices, the convention is, just like vectors, you make them nice and bold, but use capital letters instead of lowercase letters. We'll talk more about how matrices relate to vectors in the future. For now, let's just solve this system of equations. The first thing I want to do is, in an ideal world, get all of these guys right here to be 0.
Let me replace this guy with that guy: with the first entry minus the second entry. Let me do that. The first row isn't going to change: it's going to be 1, 2, 1, 1, and then I get a 7 right there. That's my first row. Now the second row, I'm going to replace with the first row minus the second row. So what do I get? 1 minus 1 is 0. 2 minus 2 is 0. 1 minus 2 is minus 1. Then 1 minus minus 1 is 2; that's 1 plus 1. And 7 minus 12 is minus 5.

Now I want to get rid of this 2 right here; I want to turn it into a 0. Let's replace this row with this row minus 2 times the first row. 2 minus 2 times 1 is 0; that was the whole point. 4 minus 2 times 2 is 0. 0 minus 2 times 1 is minus 2. 6 minus 2 times 1 is 6 minus 2, which is 4. And 4 minus 2 times 7 is 4 minus 14, which is minus 10.

Now what can I do next? You can kind of see that this row -- we'll talk more about what this row means -- has all of a sudden been zeroed out; there's nothing here. If I had a non-zero term here, then I'd want to zero this guy out, although it's already zeroed out. So I'm just going to move over to this row. The first thing I want to do is make this leading coefficient here a 1, so I'm going to multiply this entire row by minus 1. If I multiply this entire row by minus 1, I don't even have to rewrite the matrix: this becomes plus 1, minus 2, plus 5. I think you can accept that.

Now what can we do? Well, let's turn this right here into a 0. Let me rewrite my augmented matrix in the new form that I have. I'm going to keep the middle row the same this time: my middle row is 0, 0, 1, minus 2, and then it's augmented, and I get a 5 there. What I want to do is eliminate this minus 2 here, so why don't I add this row to 2 times that row? Then I would have minus 2 plus 2, and that'll work out. What do I get? Well, these are just leading 0's. Then I have minus 2 plus 2 times 1; that's just 0. 4 plus 2 times minus 2: that's 4 plus minus 4, which is 0 as well. Then you have minus 10 plus 2 times 5. Well, that's just minus 10 plus 10, which is 0. That one just got zeroed out.

Normally, when I just did regular elimination, I was happy with the situation where I had these leading 1's and everything below them was 0; I wasn't too concerned about what was above our 1's. Now I want to make those into 0's as well. I want to make this guy a 0 too. What I can do is replace this first row with that first row minus this second row. What is 1 minus 0? That's just 1. 2 minus 0 is 2. 1 minus 1 is 0. 1 minus minus 2 is 3. And 7 minus 5 is 2.

There you have it: we have our matrix in reduced row echelon form. This is the reduced row echelon form of our matrix A -- I'll write it in bold -- right there. You know it's in reduced row echelon form because each of my leading 1's in each row -- so what are my leading 1's in each row? I have this 1 and I have that 1 -- is the only non-zero entry in its column. These are called the pivot entries. Let me label that for you: that's called a pivot entry. They're the only non-zero entries in their respective columns. And if I have any zeroed-out rows -- and I do have a zeroed-out row; it's right there -- the style, or just the convention, for reduced row echelon form is that they go in the last rows.
So the leading entries in each row are all 1. That's one case: you can't have this be a 5. If this were a 5, you'd want to divide that equation by 5. So your leading entry in each row is a 1. Also, the leading entry in each successive row is to the right of the leading entry of the row before it: this guy right here is to the right of that guy. This is just the style, the convention, of reduced row echelon form. If you have any zeroed-out rows, they're in the last rows. And finally, and I think I've said this multiple times, each leading entry is the only non-zero entry in its column.

What does this do for me? Now I can go back from this world to my linear equations. We remember that these were the coefficients on x1, these were the coefficients on x2, these were the coefficients on x3 and on x4, and these were my constants out here. So I can rewrite this system of equations using my reduced row echelon form as: x1 plus 2x2 -- there's no x3 there -- plus 3x4 is equal to 2. This equation: no x1, no x2, but I have an x3. x3 minus 2x4 is equal to 5. I have no other equation here; this one got completely zeroed out. I was able to reduce this system of equations to this system of equations.

The variables that you associate with your pivot entries, we call pivot variables: x1 and x3 are pivot variables. The variables that aren't associated with a pivot entry, we call free variables: x2 and x4 are free variables. Now let's solve -- essentially you can only solve for your pivot variables. The free variables we can set to any value. I said that at the beginning: we have fewer equations than unknowns, so this is not going to be a well-constrained solution. You're not going to have just one point in R4 that solves this equation; you're going to have multiple points.

Let's solve for our pivot variables, because that's all we can solve for. This equation tells us, right here, that x3 is equal to 5 plus 2x4. Then we get that x1 is equal to 2 minus 2x2 minus 3x4; I just subtracted these from both sides of the equation. This right here is essentially as far as we can go toward the solution of this system of equations. I can pick, really, any values for my free variables x2 and x4, and then solve for x1 and x3.

What I want to do right now is write this in a slightly different form, so we can visualize it a little bit better. Of course, it's always hard to visualize things in four dimensions. Let's write it this way: if I were to write it in vector form, our solution is the vector x1, x2, x3, x4. What is it equal to? Well, I'm just essentially rewriting this solution set in vector form. So x1 is equal to 2 -- let me write a little column there -- plus x2 times something plus x4 times something. x1 is equal to 2 minus 2 times x2, or plus x2 times minus 2, so I put a minus 2 there. And plus x4 times minus 3, so I can put a minus 3 there. This right here, the first entries of these vectors, literally represent that equation right there: x1 is equal to 2 plus x2 times minus 2 plus x4 times minus 3. What does x3 equal? x3 is equal to 5 -- put that 5 right there -- plus 0 times x2 plus 2 times x4; x2 doesn't apply to it, so we can just put a 0. Now what does x2 equal?
You could say x2 is equal to 0 plus 1 times x2 plus 0 times x4. x2 is just equal to x2; it's a free variable. Similarly, what is x4 equal to? x4 is equal to 0 plus 0 times x2 plus 1 times x4.

What does this do for us? Well, all of a sudden here, we've expressed our solution set as essentially the linear combination of three vectors. This is a vector; you can view it as a position vector or as a coordinate in R4. You could say, look, our solution set -- this is in R4; each of these has four components, but you can imagine it in R3 -- is equal to some vector, some vector there. Think of it as a position vector: it would be the coordinate 2, 0, 5, 0, which, obviously, is four dimensions right there. Plus multiples of these two vectors. Let's call this vector, right here, vector a, and let's call this vector, right here, vector b. So our solution set is all of this point, which is right there -- or I guess we could call it that position vector, which would look like that, starting at the origin -- plus multiples of these two guys. If this is vector a -- let's do vector a in a different color -- vector a looks like that, and then vector b looks like that. This is vector b, and this is vector a.

I don't know if this is going to be easier or harder for you to visualize, because obviously we are dealing in four dimensions right here, and I'm just drawing on a two-dimensional surface. What you can imagine is that the solution set is equal to this fixed point, this position vector, plus linear combinations of a and b. We're dealing, of course, in R4; let me write that down. But linear combinations of a and b are going to create a plane: you can multiply a times 2 and b times 3, or a times minus 1 and b times minus 100, and you can keep adding and subtracting these linear combinations of a and b. They're going to construct a plane that contains the position vector, or contains the point 2, 0, 5, 0. So the solution of these three equations with four unknowns is a plane in R4. I know that's really hard to visualize, and maybe I'll do another one in three dimensions. Hopefully this at least gives you a decent understanding of what an augmented matrix is, what reduced row echelon form is, and what the valid operations are that I can perform on a matrix without messing up the system.
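The row reduction narrated above can be checked mechanically. Here is a minimal sketch using sympy (the choice of library is ours, not the video's); Matrix.rref returns the reduced row echelon form together with the indices of the pivot columns.

    # Verifying the transcript's row reduction with sympy.
    from sympy import Matrix

    # Augmented matrix [A | b] for the three equations in four unknowns.
    M = Matrix([
        [1, 2, 1,  1,  7],
        [1, 2, 2, -1, 12],
        [2, 4, 0,  6,  4],
    ])

    R, pivots = M.rref()
    print(R)        # [1, 2, 0, 3, 2; 0, 0, 1, -2, 5; 0, 0, 0, 0, 0]
    print(pivots)   # (0, 2): the columns of x1 and x3, the pivot variables

The result matches the transcript: x1 + 2x2 + 3x4 = 2 and x3 - 2x4 = 5, with x2 and x4 free.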

Definition

If G is nonabelian, then one must distinguish between left and right torsors, according to whether the action is on the left or the right. In this article, we will use right actions.

To state the definition more explicitly, X is a G-torsor or G-principal homogeneous space if X is nonempty and is equipped with a map (in the appropriate category) X × G → X such that

x·1 = x
x·(gh) = (x·g)·h

for all x ∈ X and all g, h ∈ G, and such that the map X × G → X × X given by

(x, g) ↦ (x, x·g)

is an isomorphism (of sets, or topological spaces, or ..., as appropriate, i.e. in the category in question).

Note that this means that X and G are isomorphic (in the category in question; not as groups: see the following). However—and this is the essential point—there is no preferred 'identity' point in X. That is, X looks exactly like G except that which point is the identity has been forgotten. (This concept is often used in mathematics as a way of passing to a more intrinsic point of view, under the heading 'throw away the origin'.)

Since X is not a group, we cannot multiply elements; we can, however, take their "quotient". That is, there is a map X × X → G that sends (x, y) to the unique element g = x \ y ∈ G such that y = x·g.
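As a toy illustration of this quotient (ours, not the article's), take the real line as a torsor for its own group of translations: subtracting two points produces a translation, even though adding two points has no intrinsic meaning.

    # Minimal sketch: the quotient map X × X -> G for the translation
    # torsor X = R, G = (R, +). Illustrative only.
    def quotient(x, y):
        r"""x \ y: the unique translation g with x + g == y."""
        return y - x

    x, y = 3.5, 10.25
    g = quotient(x, y)
    assert x + g == y   # recovers the defining property y = x·g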

The composition of the latter operation with the right group action, however, yields a ternary operation X × (X × X) → X, which serves as an affine generalization of group multiplication and which is sufficient to both characterize a principal homogeneous space algebraically and intrinsically characterize the group it is associated with. If we denote by x·(y\z) the result of this ternary operation, then the following identities

x·(y\y) = x
(x·(y\z))·(z\w) = x·(y\w)

will suffice to define a principal homogeneous space, while the additional property

x·(y\z) = z·(y\x)

identifies those spaces that are associated with abelian groups. The group may be defined as formal quotients x\y subject to the equivalence relation

(x·g)\(y·g) = x\y,

with the group product, identity and inverse defined, respectively, by

(w\x)·(y\z) = w\(x·(y\z)),
e = x\x,
(x\y)⁻¹ = y\x,

and the group action by

x·(y\z).
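To see the identities in action (again a toy illustration of ours), on the translation torsor X = R with G = (R, +) the ternary operation is x·(y\z) = x + (z - y); the two defining identities and the abelian property can be spot-checked numerically.

    # Minimal sketch: the ternary operation on the translation torsor R,
    # with randomized spot checks of the identities above.
    import random

    def ternary(x, y, z):
        r"""x·(y\z) = x + (z - y) on the affine line."""
        return x + (z - y)

    for _ in range(1000):
        x, y, z, w = (random.uniform(-10, 10) for _ in range(4))
        assert abs(ternary(x, y, y) - x) < 1e-9
        assert abs(ternary(ternary(x, y, z), z, w) - ternary(x, y, w)) < 1e-9
        assert abs(ternary(x, y, z) - ternary(z, y, x)) < 1e-9  # (R, +) is abelian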

Examples

Every group G can itself be thought of as a left or right G-torsor under the natural action of left or right multiplication.

Another example is the affine space concept: the idea of the affine space A underlying a vector space V can be stated succinctly by saying that A is a principal homogeneous space for V acting on it as the additive group of translations.
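In code, the distinction reads like this (a minimal sketch under our own conventions): points of the plane form a torsor for the vector group R², with point minus point giving a vector and point plus vector giving a point, while adding two points is deliberately left undefined.

    # Minimal sketch: the plane as an affine torsor for the vector group R^2.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Vector:       # group element: a translation of the plane
        dx: float
        dy: float

    @dataclass(frozen=True)
    class Point:        # torsor element: a location, with no preferred origin
        x: float
        y: float

        def __add__(self, v):      # the action: Point × Vector -> Point
            return Point(self.x + v.dx, self.y + v.dy)

        def __sub__(self, other):  # the quotient: Point × Point -> Vector
            return Vector(self.x - other.x, self.y - other.y)

    p, q = Point(1.0, 2.0), Point(4.0, 6.0)
    v = q - p            # the unique translation taking p to q
    assert p + v == q    # (adding two Points, by contrast, is not supported)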

The flags of any regular polytope form a torsor for its symmetry group.

Given a vector space V, we can take G to be the general linear group GL(V) and X to be the set of all (ordered) bases of V. Then G acts on X the way it acts on vectors of V, and it acts transitively, since any basis can be transformed via G into any other. What is more, a linear transformation fixing each vector of a basis fixes every v in V, and hence is the neutral element of the general linear group GL(V): so X is indeed a principal homogeneous space. One way to follow basis-dependence in a linear algebra argument is to track variables x in X. Similarly, the space of orthonormal bases (the Stiefel manifold of n-frames) is a principal homogeneous space for the orthogonal group.
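Numerically (a sketch under assumptions of ours: bases stored as invertible matrices whose columns are the basis vectors, with the right action B·g = Bg), the unique element of GL(n) carrying one basis to another is the change-of-basis matrix B1⁻¹B2.

    # Minimal sketch: ordered bases of R^3 as a GL(3)-torsor.
    import numpy as np

    rng = np.random.default_rng(0)
    B1 = rng.standard_normal((3, 3))   # a random basis (almost surely invertible)
    B2 = rng.standard_normal((3, 3))   # another one

    # Transitivity: the unique g in GL(3) with B1 @ g == B2.
    g = np.linalg.solve(B1, B2)        # g = B1^{-1} B2
    assert np.allclose(B1 @ g, B2)

    # Freeness: the only element fixing a basis is the identity.
    assert np.allclose(np.linalg.solve(B1, B1), np.eye(3))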

In category theory, if two objects X and Y are isomorphic, then the isomorphisms between them, Iso(X,Y), form a torsor for the automorphism group of X, Aut(X), and likewise for Aut(Y). A choice of isomorphism between the objects gives rise to an isomorphism between these groups and identifies the torsor with each of them, giving the torsor a group structure (as it now has a base point).
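A finite sketch of this (our illustration): the six bijections between two three-element sets form a torsor for the symmetric group of the first set, acting by precomposition; any two bijections differ by exactly one automorphism.

    # Minimal sketch: Iso(X, Y) as a torsor for Aut(X) = Sym(X).
    from itertools import permutations

    X = (0, 1, 2)
    Y = ("a", "b", "c")

    isos = list(permutations(Y))   # bijections X -> Y, as tuples of images
    auts = list(permutations(X))   # Sym(X), likewise

    def precompose(f, s):
        """The right action f·s = f ∘ s, i.e. i -> f[s[i]]."""
        return tuple(f[s[i]] for i in X)

    # Free and transitive: any two isomorphisms differ by a unique s in Aut(X).
    for f in isos:
        for h in isos:
            assert sum(precompose(f, s) == h for s in auts) == 1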

Applications

The principal homogeneous space concept is a special case of that of principal bundle: it means a principal bundle with base a single point. In other words, the local theory of principal bundles is that of a family of principal homogeneous spaces depending on some parameters in the base. The 'origin' can be supplied by a section of the bundle—such sections are usually assumed to exist locally on the base, the bundle being locally trivial, so that the local structure is that of a cartesian product—but sections will often not exist globally. For example, a differentiable manifold M has a principal bundle of frames associated to its tangent bundle. A global section will exist (by definition) only when M is parallelizable, which implies strong topological restrictions.

In number theory there is a (superficially different) reason to consider principal homogeneous spaces: for elliptic curves E defined over a field K (and, more generally, abelian varieties). Once this was understood, various other examples were collected under the heading for other algebraic groups: quadratic forms for orthogonal groups and Severi–Brauer varieties for projective linear groups being two.

The reason for the interest in Diophantine equations, in the elliptic curve case, is that K may not be algebraically closed. There can exist curves C that have no point defined over K, and which become isomorphic over a larger field to E, which by definition has a point over K to serve as identity element for its addition law. That is, for this case we should distinguish curves C that have genus 1 from elliptic curves E that have a K-point (or, in other words, provide a Diophantine equation that has a solution in K). The curves C turn out to be torsors over E, and form a set carrying a rich structure in the case that K is a number field (the theory of the Selmer group). In fact a typical plane cubic curve C over Q has no particular reason to have a rational point; the standard Weierstrass model always does, namely the point at infinity, but you need a point over K to put C into that form over K.

This theory has been developed with great attention to local analysis, leading to the definition of the Tate–Shafarevich group. In general the approach of taking the torsor theory, easy over an algebraically closed field, and trying to get back 'down' to a smaller field is an aspect of descent. It leads at once to questions of Galois cohomology, since the torsors represent classes in group cohomology H1.

Other usage

The concept of a principal homogeneous space can also be globalized as follows. Let X be a "space" (a scheme/manifold/topological space, etc.), and let G be a group over X, i.e., a group object in the category of spaces over X. In this case, a (right, say) G-torsor E on X is a space E (of the same type) over X with a (right) G action such that the morphism

E ×_X G → E ×_X E

given by

(x, g) ↦ (x, x·g)

is an isomorphism in the appropriate category, and such that E is locally trivial on X, in that E → X acquires a section locally on X. Isomorphism classes of torsors in this sense correspond to classes in the cohomology group H1(X, G).

When we are in the smooth manifold category, a G-torsor (for G a Lie group) is precisely a principal G-bundle as defined above.

Example: if G is a compact Lie group (say), then EG is a G-torsor over the classifying space BG.

Notes

  1. Serge Lang and John Tate (1958). "Principal Homogeneous Space Over Abelian Varieties". American Journal of Mathematics. 80 (3): 659–684. doi:10.2307/2372778.
