Schwarzian derivative

From Wikipedia, the free encyclopedia

In mathematics, the Schwarzian derivative is an operator similar to the derivative which is invariant under Möbius transformations. Thus, it occurs in the theory of the complex projective line, and in particular, in the theory of modular forms and hypergeometric functions. It plays an important role in the theory of univalent functions, conformal mapping and Teichmüller spaces. It is named after the German mathematician Hermann Schwarz.


Definition

The Schwarzian derivative of a holomorphic function f of one complex variable z is defined by

$$ (Sf)(z) = \left(\frac{f''(z)}{f'(z)}\right)' - \frac{1}{2}\left(\frac{f''(z)}{f'(z)}\right)^2 = \frac{f'''(z)}{f'(z)} - \frac{3}{2}\left(\frac{f''(z)}{f'(z)}\right)^2. $$

The same formula also defines the Schwarzian derivative of a C^3 function of one real variable. The alternative notation

$$ \{f, z\} = (Sf)(z) $$

is frequently used.
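As a quick numerical sanity check (an illustrative sketch, not part of the article; the helper names and step sizes are ours), the two equivalent forms of the definition can be compared by finite differences:

```python
# Approximate S(f) by central finite differences and confirm that the two
# equivalent forms of the defining formula agree, and that S(exp) = -1/2.
import cmath

def schwarzian(f, z, h=1e-2):
    """Finite-difference approximation of f'''/f' - (3/2)(f''/f')^2."""
    d1 = (f(z + h) - f(z - h)) / (2 * h)
    d2 = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
    d3 = (f(z + 2*h) - 2*f(z + h) + 2*f(z - h) - f(z - 2*h)) / (2 * h**3)
    return d3 / d1 - 1.5 * (d2 / d1)**2

def schwarzian_alt(f, z, h=1e-2):
    """Same quantity via the other form: (f''/f')' - (1/2)(f''/f')^2."""
    def ratio(w):  # f''/f' at w
        d1 = (f(w + h) - f(w - h)) / (2 * h)
        d2 = (f(w + h) - 2 * f(w) + f(w - h)) / h**2
        return d2 / d1
    dratio = (ratio(z + h) - ratio(z - h)) / (2 * h)
    return dratio - 0.5 * ratio(z)**2

# For f = exp: f'''/f' = 1 and (f''/f')^2 = 1, so S(exp) = 1 - 3/2 = -1/2.
assert abs(schwarzian(cmath.exp, 0.3 + 0.2j) + 0.5) < 1e-3
# The two forms agree on an unrelated test function, here f = sin.
assert abs(schwarzian(cmath.sin, 0.4) - schwarzian_alt(cmath.sin, 0.4)) < 1e-3
```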

Properties

The Schwarzian derivative of any Möbius transformation

$$ g(z) = \frac{az + b}{cz + d}, \qquad ad - bc \neq 0, $$

is zero. Conversely, the Möbius transformations are the only functions with this property. Thus, the Schwarzian derivative precisely measures the degree to which a function fails to be a Möbius transformation.[1]
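The vanishing of the Schwarzian on Möbius transformations can be checked numerically (an illustrative sketch; the helper `schwarzian` and the chosen coefficients are ours):

```python
# Numerical check: the Schwarzian derivative of a Möbius transformation
# vanishes identically. Central finite differences approximate S(f).

def schwarzian(f, z, h=1e-2):
    d1 = (f(z + h) - f(z - h)) / (2 * h)
    d2 = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
    d3 = (f(z + 2*h) - 2*f(z + h) + 2*f(z - h) - f(z - 2*h)) / (2 * h**3)
    return d3 / d1 - 1.5 * (d2 / d1)**2

a, b, c, d = 2, 1, 1, 3                 # ad - bc = 5, nonzero
mobius = lambda z: (a*z + b) / (c*z + d)

for z in [0.4, 1.0 + 0.5j, 2.0]:        # points away from the pole z = -3
    assert abs(schwarzian(mobius, z)) < 1e-3
```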

If g is a Möbius transformation, then the composition g ∘ f has the same Schwarzian derivative as f; and on the other hand, the Schwarzian derivative of f ∘ g is given by the chain rule

$$ S(f \circ g)(z) = S(f)(g(z)) \cdot g'(z)^2. $$

More generally, for any sufficiently differentiable functions f and g

$$ S(f \circ g)(z) = S(f)(g(z)) \cdot g'(z)^2 + S(g)(z). $$

When f and g are smooth real-valued functions, this implies that all iterates of a function with negative (or positive) Schwarzian derivative remain of negative (resp. positive) Schwarzian, a fact of use in the study of one-dimensional dynamics.[2]
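The composition law S(f ∘ g) = (S(f) ∘ g)·(g′)² + S(g) can be verified numerically; a sketch (helper names and test functions are ours), with f = exp and g(z) = z²:

```python
# Numerical check of the composition law S(f∘g) = (S(f)∘g)·(g')² + S(g),
# here with f = exp and g(z) = z².
import cmath

def schwarzian(f, z, h=5e-3):
    d1 = (f(z + h) - f(z - h)) / (2 * h)
    d2 = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
    d3 = (f(z + 2*h) - 2*f(z + h) + 2*f(z - h) - f(z - 2*h)) / (2 * h**3)
    return d3 / d1 - 1.5 * (d2 / d1)**2

f = cmath.exp
g = lambda z: z * z
h_comp = lambda z: f(g(z))      # f ∘ g, i.e. exp(z²)

z = 0.5
lhs = schwarzian(h_comp, z)
gprime = 2 * z                   # exact derivative of g
rhs = schwarzian(f, g(z)) * gprime**2 + schwarzian(g, z)
assert abs(lhs - rhs) < 1e-2
# Closed forms: S(exp) = -1/2 and S(z²) = -3/(2z²), so both sides equal
# -2z² - 3/(2z²), which is -6.5 at z = 0.5.
assert abs(lhs + 6.5) < 1e-2
```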

Introducing the function of two complex variables[3]

$$ F(z, w) = \log \frac{f(z) - f(w)}{z - w}, $$

its second mixed partial derivative is given by

$$ \frac{\partial^2 F(z, w)}{\partial z \, \partial w} = \frac{f'(z) f'(w)}{(f(z) - f(w))^2} - \frac{1}{(z - w)^2}, $$

and the Schwarzian derivative is given by the formula:

$$ (Sf)(z) = 6 \lim_{w \to z} \frac{\partial^2 F(z, w)}{\partial z \, \partial w}. $$

The Schwarzian derivative has a simple inversion formula, exchanging the dependent and the independent variables. One has

$$ S(w; v) = -\left(\frac{dw}{dv}\right)^2 S(v; w), $$

or more explicitly, $S(f^{-1}) = -\left(S(f) \circ f^{-1}\right) \cdot \left((f^{-1})'\right)^2$. This follows from the chain rule above.
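The inversion formula can be checked numerically (an illustrative sketch; helper names are ours), using the pair f = exp, f⁻¹ = log:

```python
# Numerical check of the inversion formula
#   S(f⁻¹) = -(S(f)∘f⁻¹)·((f⁻¹)')²,  with f = exp and f⁻¹ = log.
import cmath

def schwarzian(f, z, h=1e-2):
    d1 = (f(z + h) - f(z - h)) / (2 * h)
    d2 = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
    d3 = (f(z + 2*h) - 2*f(z + h) + 2*f(z - h) - f(z - 2*h)) / (2 * h**3)
    return d3 / d1 - 1.5 * (d2 / d1)**2

z = 2.0
lhs = schwarzian(cmath.log, z)
inv_prime = 1 / z                                 # exact derivative of log at z
rhs = -schwarzian(cmath.exp, cmath.log(z)) * inv_prime**2
assert abs(lhs - rhs) < 1e-3
# Closed forms: S(exp) = -1/2, so S(log)(z) = 1/(2z²) = 0.125 at z = 2.
assert abs(lhs - 0.125) < 1e-3
```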

Geometric interpretation

William Thurston interprets the Schwarzian derivative as a measure of how much a conformal map deviates from a Möbius transformation.[1] Let f be a conformal mapping in a neighborhood of z₀. Then there exists a unique Möbius transformation M that has the same derivatives of order 0, 1 and 2 as f at z₀.

Now to explicitly solve for M, it suffices to solve the case of z₀ = 0. Let f(z) = z + c₂z² + c₃z³ + ⋯, and solve for the M that makes the first three coefficients of M⁻¹ ∘ f equal to 0, 1, 0. Plugging it into the fourth coefficient, we get (M⁻¹ ∘ f)(z) = z + (c₃ − c₂²)z³ + ⋯, so that S(f)(0) = 6(c₃ − c₂²).

After a translation, rotation, and scaling of the complex plane, f(z) = z + (1/6)S(f)(0) z³ + O(z⁴) in a neighborhood of zero. Up to third order this function maps the circle of radius r to the parametric curve defined by

$$ f(re^{i\theta}) = re^{i\theta} + \frac{S}{6}\, r^3 e^{3i\theta}, $$

where S = S(f)(0). This curve is, up to fourth order, an ellipse with semiaxes r + (|S|/6)r³ and r − (|S|/6)r³, with eccentricity

$$ e^2 = \frac{2}{3}\, |S(f)(0)|\, r^2 + O(r^4) $$

as r → 0.

Since Möbius transformations always map circles to circles or lines, the eccentricity measures the deviation of f from a Möbius transformation.
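This geometric picture can be tested numerically (our sketch; we take the model map z ↦ z + (S/6)z³ with S = S(f)(0) real and positive, and compare the image of a small circle with the predicted semiaxes r ± (|S|/6)r³ and eccentricity e² ≈ (2/3)|S|r²):

```python
# The cubic model map sends the circle of radius r to a curve whose largest
# and smallest distances from 0 match the predicted semiaxes, and whose
# eccentricity matches e² ≈ (2/3)|S|r².
import cmath, math

S, r = 1.0, 0.1
f = lambda z: z + (S / 6) * z**3

radii = [abs(f(r * cmath.exp(2j * math.pi * t / 1000))) for t in range(1000)]
a, b = max(radii), min(radii)            # approximate semiaxes

assert abs(a - (r + S * r**3 / 6)) < 1e-8
assert abs(b - (r - S * r**3 / 6)) < 1e-8
e2 = 1 - (b / a)**2
assert abs(e2 - (2 / 3) * S * r**2) < 1e-4
```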

Differential equation

Consider the linear second-order ordinary differential equation

$$ u''(x) + Q(x)\, u(x) = 0, $$

where u is a real-valued function of a real parameter x. Let V denote the two-dimensional space of solutions. For x in the domain, let ev_x : V → ℝ be the evaluation functional ev_x(u) = u(x). The map x ↦ ker(ev_x) gives, for each point x of the domain of the equation, a one-dimensional linear subspace of V. That is, the kernel defines a mapping from the real line to the real projective line P(V). The Schwarzian of this mapping is well-defined, and in fact is equal to 2Q (Ovsienko & Tabachnikov 2005).
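The relation between the Schwarzian and the potential Q can be illustrated numerically (our sketch): for u″ + Qu = 0 with constant Q = 1, the solutions sin and cos have ratio tan, and the Schwarzian of the ratio should equal 2Q = 2 everywhere.

```python
# For u'' + Qu = 0 with Q = 1, the ratio of the solutions sin and cos is tan,
# and S(tan) = 2Q = 2 at every point of the domain.
import math

def schwarzian(f, x, h=2e-3):
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    d3 = (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)
    return d3 / d1 - 1.5 * (d2 / d1)**2

for x in [0.2, 0.5, 1.0]:
    assert abs(schwarzian(math.tan, x) - 2.0) < 1e-2
```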

Owing to this interpretation of the Schwarzian, if two diffeomorphisms of a common open interval into the real projective line RP¹ have the same Schwarzian, then they are (locally) related by an element of the general linear group GL(2, ℝ) acting on the two-dimensional vector space of solutions to the same differential equation, i.e., by a fractional linear transformation of RP¹.

Alternatively, consider the second-order linear ordinary differential equation in the complex plane[4]

$$ u''(z) + \tfrac{1}{2}\, Q(z)\, u(z) = 0. $$

Let u₁(z) and u₂(z) be two linearly independent holomorphic solutions. Then the ratio g = u₁/u₂ satisfies

$$ S(g) = Q $$

over the domain on which u₁ and u₂ are defined, and u₂(z) ≠ 0. The converse is also true: if such a g exists, and it is holomorphic on a simply connected domain, then two solutions u₁ and u₂ can be found, and furthermore, these are unique up to a common scale factor.

When a linear second-order ordinary differential equation can be brought into the above form, the resulting Q is sometimes called the Q-value of the equation.

Note that the Gaussian hypergeometric differential equation can be brought into the above form, and thus pairs of solutions to the hypergeometric equation are related in this way.

Conditions for univalence

If f is a holomorphic function on the unit disc, D, then W. Kraus (1932) and Nehari (1949) proved that a necessary condition for f to be univalent is[5]

$$ |S(f)(z)| \le \frac{6}{(1 - |z|^2)^2}. $$

Conversely if f(z) is a holomorphic function on D satisfying

$$ |S(f)(z)| \le \frac{2}{(1 - |z|^2)^2}, $$

then Nehari proved that f is univalent.[6]

In particular a sufficient condition for univalence is[7]

$$ |S(f)(z)| \le 2. $$

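The sharpness of the constant 6 in the necessary condition can be illustrated numerically (our sketch): the Koebe function k(z) = z/(1 − z)², which is univalent on D, has S(k)(z) = −6/(1 − z²)², so it attains the bound (1 − |z|²)²|S(f)(z)| ≤ 6 at real z.

```python
# The Koebe function k(z) = z/(1-z)² has S(k)(z) = -6/(1-z²)², attaining the
# Kraus–Nehari necessary bound (1-|z|²)²·|S(f)(z)| ≤ 6 at real z.

def schwarzian(f, z, h=2e-3):
    d1 = (f(z + h) - f(z - h)) / (2 * h)
    d2 = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
    d3 = (f(z + 2*h) - 2*f(z + h) + 2*f(z - h) - f(z - 2*h)) / (2 * h**3)
    return d3 / d1 - 1.5 * (d2 / d1)**2

koebe = lambda z: z / (1 - z)**2

z = 0.2
s = schwarzian(koebe, z)
assert abs(s - (-6 / (1 - z**2)**2)) < 1e-2          # closed form, ≈ -6.5104
assert abs((1 - abs(z)**2)**2 * abs(s) - 6) < 1e-2   # the bound is attained
```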
Conformal mapping of circular arc polygons

The Schwarzian derivative and associated second-order ordinary differential equation can be used to determine the Riemann mapping between the upper half-plane or unit circle and any bounded polygon in the complex plane, the edges of which are circular arcs or straight lines. For polygons with straight edges, this reduces to the Schwarz–Christoffel mapping, which can be derived directly without using the Schwarzian derivative. The accessory parameters that arise as constants of integration are related to the eigenvalues of the second-order differential equation. Already in 1890 Felix Klein had studied the case of quadrilaterals in terms of the Lamé differential equation.[8][9][10]

Let Δ be a circular arc polygon with angles πα₁, ..., παₙ in clockwise order. Let f : H → Δ be a holomorphic map extending continuously to a map between the boundaries. Let the vertices correspond to points a₁, ..., aₙ on the real axis. Then p(x) = S(f)(x) is real-valued for x real and not one of the points. By the Schwarz reflection principle p(x) extends to a rational function on the complex plane with a double pole at each aᵢ:

$$ p(z) = \sum_{i=1}^{n} \left( \frac{1 - \alpha_i^2}{2 (z - a_i)^2} + \frac{\beta_i}{z - a_i} \right). $$

The real numbers βᵢ are called accessory parameters. They are subject to three linear constraints:

$$ \sum_i \beta_i = 0, \qquad \sum_i \left( 2 a_i \beta_i + 1 - \alpha_i^2 \right) = 0, \qquad \sum_i \left( a_i^2 \beta_i + a_i (1 - \alpha_i^2) \right) = 0, $$

which correspond to the vanishing of the coefficients of z⁻¹, z⁻² and z⁻³ in the expansion of p(z) around z = ∞. The mapping f(z) can then be written as

$$ f(z) = \frac{u_1(z)}{u_2(z)}, $$

where u₁(z) and u₂(z) are linearly independent holomorphic solutions of the linear second-order ordinary differential equation

$$ u''(z) + \tfrac{1}{2}\, p(z)\, u(z) = 0. $$

There are n − 3 linearly independent accessory parameters, which can be difficult to determine in practice.

For a triangle, when n = 3, there are no accessory parameters. The ordinary differential equation is equivalent to the hypergeometric differential equation and f(z) is the Schwarz triangle function, which can be written in terms of hypergeometric functions.

For a quadrilateral the accessory parameters depend on one independent variable λ. Writing U(z) = q(z)u(z) for a suitable choice of q(z), the ordinary differential equation takes the form of an eigenvalue problem with spectral parameter λ.

Thus the functions U are eigenfunctions of a Sturm–Liouville equation on the interval between consecutive vertices aᵢ and aᵢ₊₁. By the Sturm separation theorem, the non-vanishing of u₂ forces λ to be the lowest eigenvalue.

Complex structure on Teichmüller space

Universal Teichmüller space is defined to be the space of real analytic quasiconformal mappings of the unit disc D, or equivalently the upper half-plane H, onto itself, with two mappings considered to be equivalent if on the boundary one is obtained from the other by composition with a Möbius transformation. Identifying D with the lower hemisphere of the Riemann sphere, any quasiconformal self-map f of the lower hemisphere corresponds naturally to a conformal mapping f̃ of the upper hemisphere onto itself. In fact f̃ is determined as the restriction to the upper hemisphere of the solution F of the Beltrami differential equation

$$ \frac{\partial F}{\partial \bar z} = \mu(z)\, \frac{\partial F}{\partial z}, $$

where μ is the bounded measurable function defined by

$$ \mu(z) = \frac{\partial f / \partial \bar z}{\partial f / \partial z} $$

on the lower hemisphere, extended to 0 on the upper hemisphere.

Identifying the upper hemisphere with D, Lipman Bers used the Schwarzian derivative to define a mapping

$$ f \mapsto S(\tilde f), $$

which embeds universal Teichmüller space into an open subset U of the space of bounded holomorphic functions g on D with the uniform norm. Frederick Gehring showed in 1977 that U is the interior of the closed subset of Schwarzian derivatives of univalent functions.[11][12][13]

For a compact Riemann surface S of genus greater than 1, its universal covering space is the unit disc D on which its fundamental group Γ acts by Möbius transformations. The Teichmüller space of S can be identified with the subspace of the universal Teichmüller space invariant under Γ. The holomorphic functions g have the property that the quadratic differential

$$ g(z)\, dz^2 $$

is invariant under Γ, and so determines a quadratic differential on S. In this way, the Teichmüller space of S is realized as an open subspace of the finite-dimensional complex vector space of quadratic differentials on S.

Diffeomorphism group of the circle

Crossed homomorphisms

The transformation property

$$ S(f \circ g) = \left( S(f) \circ g \right) \cdot (g')^2 + S(g) $$

allows the Schwarzian derivative to be interpreted as a continuous 1-cocycle or crossed homomorphism of the diffeomorphism group of the circle with coefficients in the module of densities of degree 2 on the circle.[14] Let Fλ(S1) be the space of tensor densities of degree λ on S1. The group of orientation-preserving diffeomorphisms of S1, Diff(S1), acts on Fλ(S1) via pushforwards. If f is an element of Diff(S1) then consider the mapping

$$ f \mapsto S(f^{-1}). $$

In the language of group cohomology the chain-like rule above says that this mapping is a 1-cocycle on Diff(S1) with coefficients in F2(S1). In fact

$$ H^1(\mathrm{Diff}(S^1); F_2(S^1)) = \mathbb{R}, $$

and the 1-cocycle generating the cohomology is f ↦ S(f⁻¹). The computation of 1-cohomology is a particular case of the more general result

$$ H^1(\mathrm{Diff}(S^1); F_\lambda(S^1)) = \mathbb{R} \quad \text{for } \lambda = 0, 1, 2, \qquad \text{and } (0) \text{ otherwise.} $$

Note that if G is a group and M a G-module, then the identity defining a crossed homomorphism c of G into M can be expressed in terms of standard homomorphisms of groups: it is encoded in a homomorphism 𝜙 of G into the semidirect product M ⋊ G such that the composition of 𝜙 with the projection onto G is the identity map; the correspondence is given by the map C(g) = (c(g), g). The crossed homomorphisms form a vector space containing as a subspace the coboundary crossed homomorphisms b(g) = g⋅m − m for m in M. A simple averaging argument shows that, if K is a compact group and V a topological vector space on which K acts continuously, then the higher cohomology groups vanish: H^m(K, V) = (0) for m > 0. In particular for 1-cocycles χ with

$$ \chi(xy) = \chi(x) + x \cdot \chi(y), $$

averaging over y, using left invariance of the Haar measure on K, gives

$$ \chi(x) = m - x \cdot m, $$

with

$$ m = \int_K \chi(y) \, dy. $$

Thus by averaging it may be assumed that c satisfies the normalisation condition c(x) = 0 for x in Rot(S1). Note that if any element x in G satisfies c(x) = 0 then C(x) = (0, x). But then, since C is a homomorphism, C(xgx−1) = C(x)C(g)C(x)−1, so that c satisfies the equivariance condition c(xgx−1) = x ⋅ c(g). Thus it may be assumed that the cocycle satisfies these normalisation conditions for Rot(S1). The Schwarzian derivative in fact vanishes whenever x is a Möbius transformation corresponding to SU(1,1). The other two 1-cocycles discussed below vanish only on Rot(S1) (λ = 0, 1).
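The averaging argument can be made concrete for a finite group (an illustrative sketch, ours: K = Z/5 acting on C by multiplication by fifth roots of unity, with Haar measure the uniform measure; every 1-cocycle averages to a coboundary χ(x) = m − x·m):

```python
# Averaging argument for a finite group: K = Z/5 acts on C by multiplication
# by fifth roots of unity. Every 1-cocycle chi (chi(xy) = chi(x) + x·chi(y))
# is exhibited as a coboundary by averaging over the group.
import cmath

n = 5
w = cmath.exp(2j * cmath.pi / n)          # generator of K acting on C
c = 0.7 - 0.3j                            # arbitrary value chi(generator)

# The general 1-cocycle on the cyclic group:
# chi(g^k) = (1 + w + ... + w^(k-1)) · c
chi = {k: sum(w**i for i in range(k)) * c for k in range(n)}

# The cocycle identity holds exactly:
for j in range(n):
    for k in range(n):
        assert abs(chi[(j + k) % n] - (chi[j] + w**j * chi[k])) < 1e-9

# Averaging over K exhibits chi as a coboundary chi(x) = m - x·m:
m = sum(chi.values()) / n
for k in range(n):
    assert abs(chi[k] - (m - w**k * m)) < 1e-9
```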

There is an infinitesimal version of this result giving a 1-cocycle for Vect(S1), the Lie algebra of smooth vector fields, and hence for the Witt algebra, the subalgebra of trigonometric polynomial vector fields. Indeed, when G is a Lie group and the action of G on M is smooth, there is a Lie algebraic version of crossed homomorphism obtained by taking the corresponding homomorphisms of the Lie algebras (the derivatives of the homomorphisms at the identity). This also makes sense for Diff(S1) and leads to the 1-cocycle

which satisfies the identity

$$ \phi([X, Y]) = X \cdot \phi(Y) - Y \cdot \phi(X). $$

In the Lie algebra case, the coboundary maps have the form b(X) = X ⋅ m for m in M. In both cases the 1-cohomology is defined as the space of crossed homomorphisms modulo coboundaries. The natural correspondence between group homomorphisms and Lie algebra homomorphisms leads to the "van Est inclusion map"

$$ H^1(\mathrm{Diff}(S^1); F_\lambda(S^1)) \hookrightarrow H^1(\mathrm{Vect}(S^1); F_\lambda(S^1)). $$

In this way the calculation can be reduced to that of Lie algebra cohomology. By continuity this reduces to the computation of crossed homomorphisms 𝜙 of the Witt algebra into Fλ(S1). The normalisation conditions on the group crossed homomorphism imply the following additional conditions for 𝜙:

$$ \phi(d/d\theta) = 0, \qquad \phi(x \cdot X) = x \cdot \phi(X) $$

for x in Rot(S1).

Following the conventions of Kac & Raina (1987), a basis of the Witt algebra is given by

$$ d_n = i e^{in\theta} \frac{d}{d\theta}, $$

so that [dm, dn] = (m − n) dm+n. A basis for the complexification of Fλ(S1) is given by

$$ v_n = e^{in\theta} (d\theta)^\lambda, $$

so that

$$ g_\zeta \cdot v_n = \zeta^n v_n $$

for gζ in Rot(S1) = T. This forces 𝜙(dn) = an vn for suitable coefficients an. The crossed homomorphism condition 𝜙([X,Y]) = X𝜙(Y) − Y𝜙(X) gives a recurrence relation for the an:

$$ (m - n)\, a_{m+n} = (m + \lambda n)\, a_m - (n + \lambda m)\, a_n. $$

The condition 𝜙(d/dθ) = 0 implies that a0 = 0. From this condition and the recurrence relation, it follows that up to scalar multiples, this has a unique non-zero solution when λ equals 0, 1 or 2 and only the zero solution otherwise. The solution for λ = 1 corresponds to the group 1-cocycle 𝜙1(f) = (f''/f') dθ. The solution for λ = 0 corresponds to the group 1-cocycle 𝜙0(f) = log f'. The corresponding Lie algebra 1-cocycles for λ = 0, 1, 2 are given up to a scalar multiple by

$$ \phi_\lambda\!\left( F \frac{d}{d\theta} \right) = F^{(\lambda + 1)}(\theta)\, (d\theta)^\lambda. $$

Central extensions

The crossed homomorphisms in turn give rise to the central extension of Diff(S1) and of its Lie algebra Vect(S1), the so-called Virasoro algebra.

Coadjoint action

The group Diff(S1) and its central extension also appear naturally in the context of Teichmüller theory and string theory.[15] In fact the homeomorphisms of S1 induced by quasiconformal self-maps of D are precisely the quasisymmetric homeomorphisms of S1; these are exactly the homeomorphisms which do not send four points with cross ratio 1/2 to points with cross ratio near 1 or 0. Taking boundary values, universal Teichmüller space can be identified with the quotient of the group of quasisymmetric homeomorphisms QS(S1) by the subgroup of Möbius transformations Moeb(S1). (It can also be realized naturally as the space of quasicircles in C.) Since

$$ \mathrm{Moeb}(S^1) \subset \mathrm{Diff}(S^1) \subset \mathrm{QS}(S^1), $$

the homogeneous space Diff(S1)/Moeb(S1) is naturally a subspace of universal Teichmüller space. It is also naturally a complex manifold, and this and other natural geometric structures are compatible with those on Teichmüller space. The dual of the Lie algebra of Diff(S1) can be identified with the space of Hill's operators on S1

$$ \frac{d^2}{d\theta^2} + q(\theta), $$

and the coadjoint action of Diff(S1) involves the Schwarzian derivative. The inverse of the diffeomorphism f sends the Hill's operator to

$$ \frac{d^2}{d\theta^2} + \left( q \circ f \right) (f')^2 + \tfrac{1}{2} S(f). $$

Pseudogroups and connections

The Schwarzian derivative and the other 1-cocycles defined on Diff(S1) can be extended to biholomorphic maps between open sets in the complex plane. In this case the local description leads to the theory of analytic pseudogroups, formalizing the theory of infinite-dimensional groups and Lie algebras first studied by Élie Cartan in the 1910s. This is related to affine and projective structures on Riemann surfaces as well as the theory of Schwarzian or projective connections, discussed by Gunning, Schiffer and Hawley.

A holomorphic pseudogroup Γ on C consists of a collection of biholomorphisms f between open sets U and V in C which contains the identity map for each open U, which is closed under restriction to opens, which is closed under composition (when possible), which is closed under taking inverses and such that if a biholomorphism is locally in Γ, then it too is in Γ. The pseudogroup is said to be transitive if, given z and w in C, there is a biholomorphism f in Γ such that f(z) = w. A particular case of transitive pseudogroups are those which are flat, i.e. those containing all complex translations Tb(z) = z + b. Let G be the group, under composition, of formal power series transformations F(z) = a1z + a2z2 + ⋯ with a1 ≠ 0. A holomorphic pseudogroup Γ defines a subgroup A of G, namely the subgroup defined by the Taylor series expansion about 0 (or "jet") of elements f of Γ with f(0) = 0. Conversely if Γ is flat it is uniquely determined by A: a biholomorphism f on U is contained in Γ if and only if the power series of T−f(a) ∘ f ∘ Ta lies in A for every a in U: in other words the formal power series for f at a is given by an element of A with z replaced by z − a; or more briefly all the jets of f lie in A.[16]

The group G has a natural homomorphism onto the group Gk of k-jets obtained by truncating the power series after the term z^k. This group acts faithfully on the space of polynomials of degree at most k (truncating terms of order higher than k). Truncation similarly defines homomorphisms of Gk onto Gk − 1; the kernel consists of maps f with f(z) = z + bz^k, so is Abelian. Thus the group Gk is solvable, a fact also clear from the fact that it is in triangular form for the basis of monomials.
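The structure of the jet groups Gk can be illustrated concretely (our sketch, with our own coefficient-list representation): composing truncated power series mod z^(k+1) shows that the kernel of Gk → Gk−1 consists of the maps z + bz^k, which commute.

```python
# Elements of G_k are truncated power series a1·z + ... + ak·z^k with a1 ≠ 0,
# composed mod z^(k+1). The kernel of G_k → G_(k-1) consists of the maps
# z + b·z^k, and these commute (their top coefficients simply add).

def mul(p, q, k):
    """Product of two series with zero constant term; coefficients of z..z^k."""
    r = [0.0] * k
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            d = i + j + 2                 # degree of z^(i+1) · z^(j+1)
            if d <= k:
                r[d - 1] += pi * qj
    return r

def compose(f, g, k):
    """Coefficients of f(g(z)) mod z^(k+1)."""
    r = [0.0] * k
    power = g[:]                          # g(z)^1
    for fj in f:                          # fj is the coefficient of w^(j+1)
        r = [ri + fj * pi for ri, pi in zip(r, power)]
        power = mul(power, g, k)          # next power of g, truncated
    return r

k = 4
fb = [1.0, 0.0, 0.0, 2.0]                 # z + 2z⁴, a kernel element of G₄
fc = [1.0, 0.0, 0.0, 5.0]                 # z + 5z⁴
assert compose(fb, fc, k) == compose(fc, fb, k) == [1.0, 0.0, 0.0, 7.0]
```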

A flat pseudogroup Γ is said to be "defined by differential equations" if there is a finite integer k such that the homomorphism of A into Gk is faithful and the image is a closed subgroup. The smallest such k is said to be the order of Γ. There is a complete classification of all subgroups A that arise in this way which satisfy the additional assumptions that the image of A in Gk is a complex subgroup and that its image in G1 equals C*: this implies that the pseudogroup also contains the scaling transformations Sa(z) = az for a ≠ 0, i.e. that A contains an element with linear term az for every a ≠ 0.

The only possibilities in this case are that k = 1 and A = {az : a ≠ 0}; or that k = 2 and A = {az/(1 − bz) : a ≠ 0}. The former is the pseudogroup defined by the affine subgroup of the complex Möbius group (the az + b transformations fixing ∞); the latter is the pseudogroup defined by the whole complex Möbius group.

This classification can easily be reduced to a Lie algebraic problem since the formal Lie algebra of G consists of formal vector fields F(z) d/dz with F a formal power series. It contains the polynomial vector fields with basis dn = z^(n+1) d/dz (n ≥ 0), which is a subalgebra of the Witt algebra. The Lie brackets are given by [dm, dn] = (n − m) dm+n. Again these act on the space of polynomials of degree at most k by differentiation—it can be identified with C[[z]]/(z^(k+1))—and the images of d0, ..., dk − 1 give a basis of the Lie algebra of Gk. Note that Ad(Sa) dn = a^n dn. Let 𝔞 denote the Lie algebra of A: it is isomorphic to a subalgebra of the Lie algebra of Gk. It contains d0 and is invariant under Ad(Sa). Since 𝔞 is a Lie subalgebra of the Witt algebra, the only possibility is that it has basis d0 or basis d0, dn for some n ≥ 1. There are corresponding group elements of the form f(z) = z + bz^(n+1) + ⋯. Composing this with translations yields T−f(ε) ∘ f ∘ Tε(z) = cz + dz2 + ⋯ with c, d ≠ 0. Unless n = 2, this contradicts the form of the subgroup A; so n = 2.[17]
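The bracket relation for these polynomial vector fields can be checked directly (our sketch, representing a polynomial as a dict from degree to coefficient):

```python
# Check [d_m, d_n] = (n − m)·d_(m+n) for the polynomial vector fields
# d_n = z^(n+1) d/dz, which act on monomials by z^j ↦ j·z^(j+n).

def apply_d(n, p):
    """Apply d_n to a polynomial given as {degree: coefficient}."""
    return {j + n: j * c for j, c in p.items() if j != 0}

def sub(a, b):
    """Difference of two polynomials in dict form."""
    return {k: a.get(k, 0) - b.get(k, 0) for k in set(a) | set(b)}

p = {1: 2.0, 3: -1.0, 5: 4.0}             # 2z − z³ + 4z⁵
m, n = 1, 2
lhs = sub(apply_d(m, apply_d(n, p)), apply_d(n, apply_d(m, p)))
rhs = {j: (n - m) * c for j, c in apply_d(m + n, p).items()}
assert {j: c for j, c in lhs.items() if c} == {j: c for j, c in rhs.items() if c}
```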

The Schwarzian derivative is related to the pseudogroup for the complex Möbius group. In fact if f is a biholomorphism defined on U, then 𝜙2(f) = S(f) is a quadratic differential on U. If g is a biholomorphism defined on V and g(V) ⊆ U, then S(f ∘ g) and S(g) are quadratic differentials on V; moreover S(f) is a quadratic differential on U, so that g∗S(f) is also a quadratic differential on V. The identity

S(f ∘ g) = S(g) + g∗S(f)

is thus the analogue of a 1-cocycle for the pseudogroup of biholomorphisms with coefficients in holomorphic quadratic differentials. Similarly 𝜙0 and 𝜙1 are 1-cocycles for the same pseudogroup with values in holomorphic functions and holomorphic differentials respectively. In general a 1-cocycle 𝜙 can be defined for holomorphic differentials of any order, so that

𝜙(f ∘ g) = 𝜙(g) + g∗𝜙(f).

Applying the above identity to inclusion maps j, it follows that 𝜙(j) = 0; and hence that if f1 is the restriction of f2, so that f2 ∘ j = f1, then 𝜙(f1) = 𝜙(f2). On the other hand, taking the local holomorphic flows defined by holomorphic vector fields—the exponentials of the vector fields—the pseudogroup of local biholomorphisms is generated by holomorphic vector fields. If the 1-cocycle 𝜙 satisfies suitable continuity or analyticity conditions, it induces a 1-cocycle of holomorphic vector fields, also compatible with restriction. Accordingly, it defines a 1-cocycle on the holomorphic vector fields on C.[18]
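The 1-cocycle identity for the Schwarzian, S(f ∘ g) = S(g) + g∗S(f), where the pullback of a quadratic differential is g∗S(f) = S(f)(g(z)) g′(z)², can be checked symbolically; a sympy sketch with illustrative maps (f = z² is a local biholomorphism away from 0, and g is Möbius so S(g) = 0):

```python
import sympy as sp

z = sp.symbols('z')

def S(f):
    """Schwarzian derivative: f'''/f' - (3/2)(f''/f')**2."""
    fp = sp.diff(f, z)
    return sp.diff(f, z, 3)/fp - sp.Rational(3, 2)*(sp.diff(f, z, 2)/fp)**2

f = z**2         # illustrative local biholomorphism (away from 0)
g = z/(1 - z)    # a Mobius transformation, so S(g) = 0

lhs = S(f.subs(z, g))                            # S(f o g)
rhs = S(g) + sp.diff(g, z)**2 * S(f).subs(z, g)  # S(g) + g*S(f)
assert sp.simplify(lhs - rhs) == 0
```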

Restricting to the Lie algebra of polynomial vector fields with basis dn = z^(n+1) d/dz (n ≥ −1), these 1-cocycles can be determined using the same methods of Lie algebra cohomology as in the previous section on crossed homomorphisms. There the calculation was for the whole Witt algebra acting on densities of order k, whereas here it is just for a subalgebra acting on holomorphic (or polynomial) differentials of order k. Again, assuming that 𝜙 vanishes on rotations of C, there are non-zero 1-cocycles, unique up to scalar multiples, only for differentials of degree 0, 1 and 2, given by the same derivative formula

𝜙k(p(z) d/dz) = p^(k+1)(z) (dz)^k,

where p(z) is a polynomial.

The 1-cocycles define the three pseudogroups by 𝜙k(f) = 0: this gives the scaling group (k = 0); the affine group (k = 1); and the whole complex Möbius group (k = 2). So these 1-cocycles give the special ordinary differential equations defining the pseudogroups. More significantly they can be used to define corresponding affine or projective structures and connections on Riemann surfaces. If Γ is a pseudogroup of smooth mappings on Rn, a topological space M is said to have a Γ-structure if it has a collection of charts fi that are homeomorphisms from open sets Vi in M to open sets Ui in Rn such that, for every non-empty intersection Vi ∩ Vj, the transition map fj ∘ fi^(−1) from fi(Vi ∩ Vj) to fj(Vi ∩ Vj) lies in Γ. This defines the structure of a smooth n-manifold if Γ consists of local diffeomorphisms, and a Riemann surface if n = 2—so that R2 ≅ C—and Γ consists of biholomorphisms. If Γ is the affine pseudogroup, M is said to have an affine structure; and if Γ is the Möbius pseudogroup, M is said to have a projective structure. Thus a genus one surface given as C/Λ for some lattice Λ ⊂ C has an affine structure; and a genus p > 1 surface given as the quotient of the upper half plane or unit disk by a Fuchsian group has a projective structure.[19]
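The defining equations for k = 1 and k = 2 can be checked symbolically. A sympy sketch, taking 𝜙1 to be the standard pre-Schwarzian f″/f′ (which vanishes exactly on affine maps) and 𝜙2 the Schwarzian (which vanishes exactly on Möbius maps); the sample maps are illustrative:

```python
import sympy as sp

z = sp.symbols('z')
a, b, c, d = sp.symbols('a b c d', nonzero=True)

def phi1(f):
    """Pre-Schwarzian f''/f': the differential equation of the affine group."""
    return sp.simplify(sp.diff(f, z, 2)/sp.diff(f, z))

def phi2(f):
    """Schwarzian S(f): the differential equation of the Mobius group."""
    fp = sp.diff(f, z)
    return sp.simplify(sp.diff(f, z, 3)/fp
                       - sp.Rational(3, 2)*(sp.diff(f, z, 2)/fp)**2)

assert phi1(a*z + b) == 0                    # affine maps solve phi_1(f) = 0
assert phi2((a*z + b)/(c*z + d)) == 0        # Mobius maps solve S(f) = 0
assert phi2(z**2) != 0                       # a generic map solves neither
```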

Gunning in 1966 describes how this process can be reversed: for genus p > 1, the existence of a projective connection, defined using the Schwarzian derivative 𝜙2 and proved using standard results on cohomology, can be used to identify the universal covering surface with the upper half plane or unit disk (a similar result holds for genus 1, using affine connections and 𝜙1).[19]

Generalizations

Osgood & Stowe (1992) describe a generalization applicable to mappings of conformal manifolds, in which the Schwarzian derivative becomes a symmetric tensor on the manifold. Let M be a smooth manifold of dimension n with a smooth metric tensor g. A smooth diffeomorphism f is conformal if f∗g = e^(2𝜑) g for some smooth function 𝜑. The Schwarzian is defined by

B(𝜑) = Hess(𝜑) − d𝜑 ⊗ d𝜑 − (1/n)(Δ𝜑 − |∇𝜑|²) g,

where ∇ is the Levi-Civita connection of g, Hess denotes the Hessian with respect to the connection, and Δ is the Laplace–Beltrami operator (defined as the trace of the Hessian with respect to g).

The Schwarzian satisfies the cocycle law

A Möbius transformation is a conformal diffeomorphism whose conformal factor has vanishing Schwarzian. The collection of Möbius transformations of M is a closed Lie subgroup of the conformal group of M. The solutions to B(𝜑) = 0 on Euclidean space, with the Euclidean metric, are precisely the constant functions 𝜑, the conformal factors giving the spherical metric, and the conformal factors of the hyperbolic Poincaré metrics on the ball or half-space.
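That the spherical conformal factor solves B(𝜑) = 0 can be checked in coordinates. A sympy sketch for n = 2, assuming the Osgood–Stowe formula B(𝜑) = Hess(𝜑) − d𝜑 ⊗ d𝜑 − (1/n)(Δ𝜑 − |∇𝜑|²)g on Euclidean space and the standard stereographic factor e^𝜑 = 2/(1 + |x|²):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
coords = (x, y)
n = len(coords)

# conformal factor of the round sphere metric under stereographic projection
phi = sp.log(2/(1 + x**2 + y**2))

grad = [sp.diff(phi, c) for c in coords]
lap = sum(sp.diff(phi, c, 2) for c in coords)    # Euclidean Laplacian
grad2 = sum(gi**2 for gi in grad)                # |grad phi|^2

# B(phi)_ij = Hess(phi)_ij - d(phi)_i d(phi)_j - (1/n)(lap - grad2) delta_ij
for i in range(n):
    for j in range(n):
        Bij = (sp.diff(phi, coords[i], coords[j]) - grad[i]*grad[j]
               - (sp.Rational(1, n)*(lap - grad2) if i == j else 0))
        assert sp.simplify(Bij) == 0   # every component vanishes
```

The same cancellation goes through in any dimension; the off-diagonal Hessian terms are exactly cancelled by d𝜑 ⊗ d𝜑, and the diagonal remainder is pure trace.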

Another generalization applies to positive curves in a Lagrangian Grassmannian (Ovsienko & Tabachnikov 2005). Suppose that V is a symplectic vector space of dimension 2n over R, with symplectic form ω. Fix a pair of complementary Lagrangian subspaces ℓ1, ℓ2. The set of Lagrangian subspaces complementary to ℓ2 is parameterized by the space of mappings S : ℓ1 → ℓ2 that are symmetric with respect to ω (ω(Sx, y) = ω(Sy, x) for all x, y in ℓ1). Any Lagrangian subspace complementary to ℓ2 is given by {x + Sx : x ∈ ℓ1} for some such tensor S. A curve is thus specified locally by a one-parameter family S(t) of symmetric tensors. A curve is positive if Ṡ(t) is positive definite. The Lagrangian Schwarzian is then defined as

This has the property that the Lagrangian Schwarzians of two positive curves coincide if and only if there is a symplectic transformation carrying one curve to the other.

The Lagrangian Schwarzian is related to the second order differential equation

ẍ(t) + Q(t)x(t) = 0,

where Q(t) is a symmetric tensor depending on a real variable t and x(t) is a curve in Rn. Let W be the 2n-dimensional space of solutions of the differential equation. Since Q is symmetric, the form on W given by ω(x, y) = x · ẏ − ẋ · y is independent of t, and so gives W a symplectic structure. Let Et : W → Rn be the evaluation functional Et(x) = x(t). Then for any t in the domain of Q, the kernel of Et is a Lagrangian subspace of W, so t ↦ ker Et defines a curve in the Lagrangian Grassmannian of W. The Lagrangian Schwarzian of this curve is determined by Q(t).
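In the scalar case n = 1 this reduces to the classical link between the Schwarzian and second order equations: if f = u/v is the ratio of two independent solutions of ẍ + q(t)x = 0, then S(f) = 2q. A sympy sketch checking the converse direction, with an illustrative choice of f (sign and factor conventions vary in the literature):

```python
import sympy as sp

t = sp.symbols('t')

def schwarzian(f):
    """Classical Schwarzian f'''/f' - (3/2)(f''/f')**2."""
    fp = sp.diff(f, t)
    return sp.simplify(sp.diff(f, t, 3)/fp
                       - sp.Rational(3, 2)*(sp.diff(f, t, 2)/fp)**2)

f = sp.exp(t)          # illustrative; any f with nonvanishing f' works
q = schwarzian(f)/2    # the potential recovered from f

# u = f/sqrt(f') and v = 1/sqrt(f') solve x'' + q x = 0, with u/v = f
v = 1/sp.sqrt(sp.diff(f, t))
u = f*v
assert sp.simplify(sp.diff(u, t, 2) + q*u) == 0
assert sp.simplify(sp.diff(v, t, 2) + q*v) == 0
assert sp.simplify(u/v - f) == 0
```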


This page was last edited on 26 January 2024, at 18:44