
In section A, each part of a question (that is, i, ii, iii, ...) carries exactly 5 points.

In section
B, each question carries exactly 25 points (since only two are to be counted). The distribution of
points amongst parts of section B questions is shown below.
A1 This question is about basic material that has been used and worked out in class (i, iii) or
stated and commonly used in class (ii) and given to prove as an exercise with solutions (ii, iv)
[exercise iv: still to be done at the time of writing].
(i) Define: a) group homomorphism, b) group isomorphism, and c) group automorphism.
Answer: a) A map φ from a group G to a group H satisfying φ(gg′) = φ(g)φ(g′) for all g, g′ ∈ G.
b) A group homomorphism that is bijective.
c) A group isomorphism mapping G onto itself.
(ii) Show that the set of automorphisms of any group G is itself a group under the
operation of composition. This is the automorphism group of G, Aut(G).
Answer: Consider the set of bijective maps Aut(G) = {φ : G → G | φ(gg′) = φ(g)φ(g′) ∀ g, g′ ∈ G}. We have:
- closure: the composition of maps φ₁, φ₂ ∈ Aut(G) gives a map φ₁ ∘ φ₂ that is in Aut(G): it is still bijective from G to G (there is a unique pre-image g′ ∈ G for any g ∈ G under φ₁ by bijectivity, and then a unique pre-image g″ ∈ G for that g′ under φ₂), and φ₁(φ₂(gg′)) = φ₁(φ₂(g)φ₂(g′)) = φ₁(φ₂(g))φ₁(φ₂(g′)).
- associativity: ((φ₁ ∘ φ₂) ∘ φ₃)(g) = (φ₁ ∘ φ₂)(φ₃(g)) = φ₁(φ₂(φ₃(g))) and (φ₁ ∘ (φ₂ ∘ φ₃))(g) = φ₁((φ₂ ∘ φ₃)(g)) = φ₁(φ₂(φ₃(g))).
- identity element: The identity map id : g ↦ g is in Aut(G), and has the property φ ∘ id = id ∘ φ = φ for any φ ∈ Aut(G).
- inverse: Since φ ∈ Aut(G) is an isomorphism, hence bijective, there is a unique bijective inverse φ⁻¹ such that φ ∘ φ⁻¹ = φ⁻¹ ∘ φ = id (thanks to bijectivity of φ, there is a unique pre-image g′ for any g ∈ G under φ, and we set φ⁻¹(g) = g′), and this inverse φ⁻¹ is still an isomorphism from G to G (since φ⁻¹(gg′) = φ⁻¹(φ(φ⁻¹(g))φ(φ⁻¹(g′))) = φ⁻¹(φ(φ⁻¹(g)φ⁻¹(g′))) = φ⁻¹(g)φ⁻¹(g′)), so it is in Aut(G).
(iii) State the definition of the property "simple" of a Lie algebra. Show that if g is simple, then [gg] = g.
Answer: A simple Lie algebra is a Lie algebra that is non-abelian and that does not have proper ideals (i.e. ideals other than {0} and itself). Consider [gg]. This is certainly an ideal, because for all g ∈ g and h ∈ [gg] ⊆ g, we have [gh] ∈ [gg]. Since g is simple, [gg] is not a proper ideal, so that [gg] = {0} or [gg] = g. But also g is non-abelian, so that the first option cannot be.
(iv) Let L be a Lie algebra and let x ∈ L be a fixed element of L. Prove that the subspace of L spanned by the eigenvectors of ad x is a Lie subalgebra.
Answer: The eigenvectors of ad x are the elements in V = {y ∈ L | [xy] ∝ y}. We want to look at the linear span of V (i.e. all linear combinations of elements of V). This is a subalgebra, because 1) it is a subspace, since we take the linear span, and 2) given any y, y′ ∈ V, we have [x[yy′]] = [y[xy′]] + [[xy]y′] by the Jacobi identity, which is proportional to [yy′], hence [yy′] ∈ V; since the bracket is bilinear, the bracket also preserves the linear span.
A2 This question is about the first part of the course, concerned with the basic definition and
properties of matrix Lie groups. All of this has been worked out in class, and similar
questions in tutorials.
(i) State the definition of a matrix Lie group.
Answer: A closed subgroup of the group of invertible complex matrices GL(n; C).
(ii) Show that the set of matrices U(n) = {A ∈ Mat(n; C) | A†A = 1} is a matrix Lie group for any positive integer n (don't forget to show, in particular, that it is a group).
Answer: It is a group: 1) it is closed under multiplication: for A, B ∈ U(n), we have (AB)†(AB) = B†A†AB = B†B = 1, 2) associativity is obvious from matrix multiplication, 3) there is an identity 1, 4) if A ∈ U(n), then A⁻¹ ∈ U(n) because A⁻¹ = A†, so that (A⁻¹)†A⁻¹ = AA⁻¹ = A⁻¹A = A†A = 1. It is a matrix Lie group because it is closed: if a sequence (A_j ∈ U(n) : j ∈ N) of matrices in U(n) converges to a matrix lim_{j→∞} A_j = A, then A is still in U(n), because the functions of the matrix elements given by f_{kl}(A) = (A†A)_{kl}, with j-independent values f_{kl}(A_j) = 1_{kl} = δ_{kl}, are continuous functions (in fact, polynomial functions), hence we have δ_{kl} = lim_{j→∞} f_{kl}(A_j) = f_{kl}(lim_{j→∞} A_j) = f_{kl}(A), so A†A = 1.
(iii) Is U(n) compact? Is it connected? Prove one of those two claims. Hint: recall that any unitary matrix has an orthonormal basis of eigenvectors.
Answer: U(n) is compact. In order to have compactness, we need two things: that sequences of matrices in U(n) that converge do so in U(n), and that the set of all matrix elements of all matrices in U(n) be bounded. The first statement was already proved. For the second, denote a matrix A ∈ U(n) in terms of its column vectors A = (v_1, . . . , v_n). Then, A†A = 1 means that the vectors satisfy v_k† v_l = δ_{kl}. Hence, they are orthonormal vectors of length 1. Since they are of length 1, denoting the elements of the vectors by v_k = (v_{1k}, . . . , v_{nk})^T, we have that Σ_{l=1}^{n} |v_{lk}|² = 1, hence that |v_{lk}| ≤ 1 for all l, k = 1, . . . , n. This is the bound.
U(n) is connected. Consider any matrix B ∈ U(n). We want to show that it is connected (in this lecture we understand this as path-connected) to 1. First, for n = 1, this is clear because in this case, B = (e^{ia}) for some a ∈ R, so we can take B(t) = (e^{iat}), forming a continuous path from B(0) = 1 to B(1) = B. In the general case, since there is a basis of orthonormal eigenvectors of B, we can write

    B = U† diag(e^{iθ_1}, e^{iθ_2}, . . . , e^{iθ_n}) U

for real θ_1, θ_2, . . . , θ_n, with U†U = 1. Now we apply the arguments of the n = 1 case to each individual e^{iθ_j}, bringing them on S¹ in a continuous way to 1, so that we have a path B(t) from B(0) = B to B(1) = U†U = 1. No matter which continuous path we choose for each e^{iθ_j}, we always have B(t)†B(t) = 1.
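Both claims can be illustrated numerically for n = 2 (purely illustrative, assuming numpy and scipy; the particular path θ_j(t) = (1 - t)θ_j is one convenient choice among many):

import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
n = 2
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B, _ = np.linalg.qr(Z)                      # a random element of U(2)

# compactness: every matrix element of a unitary matrix has modulus at most 1
assert np.all(np.abs(B) <= 1 + 1e-12)

# connectedness: write B = Q diag(e^{i theta_j}) Q^dagger and shrink the phases to 0
T, Q = schur(B, output='complex')           # B = Q T Q^dagger with Q unitary, T diagonal here
thetas = np.angle(np.diag(T))

def path(t):
    # B(t): equals B at t = 0 and the identity at t = 1, staying unitary throughout
    return Q @ np.diag(np.exp(1j * (1 - t) * thetas)) @ Q.conj().T

for t in np.linspace(0, 1, 11):
    Bt = path(t)
    assert np.allclose(Bt.conj().T @ Bt, np.eye(n))
assert np.allclose(path(0.0), B) and np.allclose(path(1.0), np.eye(n))
print("entries are bounded by 1 and there is a unitary path from B to the identity")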
A3 This question is about the second part of the course, concerned with Lie algebras associated
to matrix Lie groups, and with Lie algebras more generally. Parts i and ii have been worked
out in class (and similar problems in exercises with solutions). A problem similar to that
of part iii is [will be at the time of writing] worked out in class and in an exercise with
solution.
(i) State the definition of the Lie algebra g associated to a matrix Lie group G.
Answer: The set of all matrices A such that e^{tA} ∈ G for all t ∈ R, i.e. g = {A ∈ Mat(n; C) : e^{tA} ∈ G ∀ t ∈ R}, with the Lie bracket given by the commutator, [AB] = [A, B] = AB - BA.
(ii) The group SO(3) is the group of all 3×3 orthogonal matrices of determinant one. Using the definition you stated in (i), show that the Lie algebra of SO(3) is

    so(3) = {A ∈ Mat(3; C) : A^T = -A},

i.e. it is the set of all 3×3 anti-symmetric matrices (you can use without proof the properties of the exponential).
Answer: This is the Lie algebra of SO(3) because 1) if A ∈ so(3), then (e^{tA})^T e^{tA} = e^{tA^T} e^{tA} = e^{-tA} e^{tA} = 1, so that e^{tA} ∈ SO(3) for all t ∈ R, and 2) if e^{tA} ∈ SO(3) for all t ∈ R, then 1 = (e^{tA})^T e^{tA} = e^{tA^T} e^{tA}, so that e^{tA^T} = e^{-tA}; hence, taking the derivative w.r.t. t at t = 0, we obtain A^T = -A, hence A ∈ so(3).
(iii) Consider so(3; C) (this is the complexification of so(3): the algebra that we get from so(3), but as a vector space over the complex numbers). Let h ⊂ so(3; C) be the subalgebra

    h = { (  0   a   0 )
          ( -a   0   0 )  :  a ∈ C } .
          (  0   0   0 )

Given that h is a Cartan subalgebra of so(3; C), find the roots and the corresponding root space decomposition.
Answer: We fix a basis of elements in so(3; C) by choosing, e.g.,

    f = (  0   0   1 )        g = (  0   0   0 )        h = (  0  -1   0 )
        (  0   0   0 )            (  0   0  -1 )            (  1   0   0 )
        ( -1   0   0 ),           (  0   1   0 ),           (  0   0   0 ).

Note that h = Ch. We see that

    [h, f] = -g,   [h, g] = f,   [h, h] = 0   ⟹   ad h = (  0   1   0 )
                                                          ( -1   0   0 )
                                                          (  0   0   0 )

in the natural column-vector notation for f, g, h. Diagonalising, we find the three orthonormal eigenvectors, which we can put into a matrix:

    U = (1/√2) (  1   1   0 )         U† (ad h) U = (  i   0   0 )
               (  i  -i   0 ),                       (  0  -i   0 ).
               (  0   0  √2 )                        (  0   0   0 )

In order to find the root space decomposition, we consider the eigenvectors and eigenvalues of ad h. The roots are the eigenvalues: i and -i (naturally seen as linear functionals on h), and the root-space decomposition is the decomposition into the corresponding eigenspaces, L_i = C(f + ig), L_{-i} = C(f - ig), so that so(3; C) = h ⊕ L_i ⊕ L_{-i}.
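A short numerical verification of this root space decomposition (illustrative only, assuming numpy; the basis matrices are the ones chosen above):

import numpy as np

f = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex)
g = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=complex)
h = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)

def bracket(X, Y):
    return X @ Y - Y @ X

def coords(X):
    # coordinates of an antisymmetric X in the basis (f, g, h)
    return np.array([X[0, 2], -X[1, 2], -X[0, 1]])

# matrix of ad h in the basis (f, g, h): its columns are the coordinates of [h, f], [h, g], [h, h]
ad_h = np.column_stack([coords(bracket(h, X)) for X in (f, g, h)])
roots = np.linalg.eigvals(ad_h)
print("roots (eigenvalues of ad h):", np.round(roots, 10))      # expect i, -i, 0

# the root vectors f + i g and f - i g are eigenvectors of ad h with eigenvalues +i and -i
assert np.allclose(bracket(h, f + 1j * g), 1j * (f + 1j * g))
assert np.allclose(bracket(h, f - 1j * g), -1j * (f - 1j * g))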
B4 Part i contains more involved elements of group theory, but similar problems have been worked out in class. Part ii is a simplified version of a somewhat involved problem given as an exercise with solution, and worked out in a tutorial [not done yet at the time of writing]. Part iii is new. Consider the matrix Lie group SL(2; R) of real 2×2 matrices of determinant one, and its Lie algebra

    sl(2; R) = {X ∈ Mat(2; R) | Tr(X) = 0}.
(i) (9 points) Show that SL(2; R) is a normal subgroup of GL(2; R) (in particular, it is a subgroup). Knowing that the determinant is a surjective Lie group homomorphism det : GL(2; R) → G for some Lie group G, describe G and show that SL(2; R) = ker det. Describe the quotient group H = GL(2; R)/SL(2; R), and its relation to G. Show that an appropriate semi-direct product H ⋉ SL(2; R) is isomorphic to GL(2; R).
Answer: SL(2; R) is a subgroup of GL(2; R), because products of determinant-1 matrices are determinant-1 matrices, it contains the identity which has determinant 1, and every matrix A with det(A) = 1 has an inverse A⁻¹ with det(A⁻¹) = 1⁻¹ = 1, hence inverses also are in SL(2; R). With g ∈ GL(2; R) and s ∈ SL(2; R), we have that det(gsg⁻¹) = det(s), hence gsg⁻¹ ∈ SL(2; R). Hence SL(2; R) is a normal subgroup of GL(2; R). The determinant maps GL(2; R) onto R* (the nonzero reals), so G = R*. Clearly, ker det = SL(2; R) because the identity in R* is 1. The quotient group H is the group of cosets c_g = {gs : s ∈ SL(2; R)} for g ∈ GL(2; R), with product c_g c_{g′} obtained by forming the set of all products of elements in c_g with elements of c_{g′}. Since the image of det is R*, the quotient group is isomorphic to R* by the general homomorphism theorem.
Hence, we need to show that an appropriate R* ⋉ SL(2; R) is isomorphic to GL(2; R). It is natural to see R* as

    { ( a   0 )
      ( 0   1 )  :  a ∈ R* } .

Then there is an isomorphism φ : R* ⋉ SL(2; R) → GL(2; R) : (r, s) ↦ sr, with, for the semi-direct product, the product rule

    (r, s)(r′, s′) = (rr′, s σ_r(s′)),

where σ_r(s′) = rs′r⁻¹ is an automorphism of SL(2; R). The map φ is an isomorphism because 1) it preserves the products: φ((r, s)(r′, s′)) = φ((rr′, srs′r⁻¹)) = srs′r⁻¹rr′ = srs′r′ = φ((r, s))φ((r′, s′)), 2) it is surjective: any g ∈ GL(2; R) can be written as g = sr by extracting the matrix

    r = ( det g   0 )
        (   0     1 ),

3) it is injective: if r ≠ r′ then det(r) ≠ det(r′), hence sr ≠ s′r′ for any s, s′ ∈ SL(2; R), and if s ≠ s′ then sr ≠ s′r because r is invertible.
(ii) (8 points) Show that the image of the exponential mapping exp : sl(2; R) → SL(2; R) is a subset of T = {A ∈ SL(2; R) : Tr(A) ≥ -2}. Hint: analyse the eigenvalues (the roots of the characteristic polynomial), assume that they are different so that the matrix in sl(2; R) can be diagonalised, and get to the non-diagonalisable cases by discussing the limit towards equal eigenvalues.
Answer: Consider a matrix X ∈ sl(2; R),

    X = ( a    b )
        ( c   -a ).

The eigenvalues are given by λ± = ±√(a² + bc). Assuming that X can be diagonalised,

    X = U⁻¹ ( λ₊   0  ) U    ⟹    e^X = U⁻¹ ( e^{λ₊}     0     ) U.
            ( 0    λ₋ )                      (   0      e^{λ₋} )

Then, Tr(e^X) = e^{λ₊} + e^{λ₋}. If a² + bc ≥ 0, then this is clearly ≥ 2. If a² + bc < 0, then this is 2 cos√(-a² - bc), which is ≥ -2. This shows that the exponential map maps sl(2; R) to a subset of SL(2; R) containing only matrices A with Tr(A) ≥ -2. If X cannot be diagonalised, then the eigenvalues are the same, so that a² + bc = 0. But then, we can just modify X → X′ by b → b′ = b + δb and, if necessary, c → c′ = c + δc, so that X′ can be diagonalised. The argument above then shows that Tr(e^{X′}) is near to 2, and as δb, δc → 0, the trace tends to 2. Since the trace is a continuous function of the matrix elements, when X cannot be diagonalised, then Tr(e^X) = 2 ≥ -2.
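A quick randomised check of the trace bound (illustrative only, assuming numpy and scipy):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
smallest_trace = np.inf
for _ in range(10000):
    a, b, c = rng.normal(scale=1.5, size=3)
    X = np.array([[a, b], [c, -a]])                 # a generic element of sl(2; R)
    E = expm(X)
    assert np.isclose(np.linalg.det(E), 1.0)        # exp lands in SL(2; R)
    smallest_trace = min(smallest_trace, np.trace(E))
assert smallest_trace >= -2 - 1e-9
print("smallest trace of exp(X) found:", smallest_trace)   # stays >= -2, as proved above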
(iii) (8 points) Describe SL(2; R) as a subset of R³ (recall that the matrices have 4 parameters with one constraint), and from this argue that it is connected.
Answer: Let us write A ∈ SL(2; R) as

    A = ( a   b )
        ( c   d )

with ad - bc = 1. We may, for instance, solve for d, giving d = (1 + bc)/a. Hence, we may look at the set formed by all triplets (a, b, c) ∈ R³ which are such that d is finite. Clearly, if a = 0 there may be a problem, except when bc = -1. Hence, the set SL(2; R) is R³ from which we remove the plane a = 0, but where we put back the curve {a = 0, b = -1/c}. This is connected, because the only possible obstruction would be to go from a < 0 to a > 0, but there are paths that go through the curve bc = -1 at a = 0.
B5 Part i is something that was emphasized in class. Part ii is a somewhat technical calculation that was presented in class. Part iii is new.
(i) (8 points) The Baker-Campbell-Hausdorff (BCH) formula tells us that if X and Y are two elements of the Lie algebra associated to the matrix Lie group G, then log(e^X e^Y) can be expressed purely in terms of multiple commutators of X and Y. Using this, explain in words (with simple formulas) how a representation of a Lie algebra may give rise to a representation of a Lie group.
Answer: A representation is a homomorphism from the Lie group or the Lie algebra to the space of endomorphisms of some vector space. In order for a representation π of a Lie algebra to give rise to a representation Π of the corresponding Lie group, we must define Π(e^X) = e^{π(X)} for X in the Lie algebra. But then, we must verify that Π(e^X e^Y) = Π(e^X)Π(e^Y). Since a Lie algebra homomorphism is such that π([XY]) = [π(X)π(Y)], the BCH formula implies

    Π(e^X e^Y) = Π(e^{commutators of X and Y})
               = e^{π(commutators of X and Y)}
               = e^{commutators of π(X) and π(Y)}
               = e^{π(X)} e^{π(Y)}
               = Π(e^X) Π(e^Y).
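As an illustration of this mechanism, here is a small numerical sketch (not part of the exam, assuming numpy and scipy) using the adjoint representation of so(3), for which π(X) = ad X and Π(e^X) = e^{ad X}:

import numpy as np
from scipy.linalg import expm, logm

def antisym(v):
    # the so(3) matrix with axis vector v = (v1, v2, v3)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

basis = [antisym(e) for e in np.eye(3)]

def ad(X):
    # matrix of ad X on so(3) in the basis above; [X, B] has coordinates (C[2,1], C[0,2], C[1,0])
    cols = []
    for B in basis:
        C = X @ B - B @ X
        cols.append([C[2, 1], C[0, 2], C[1, 0]])
    return np.array(cols).T

rng = np.random.default_rng(4)
X = antisym(0.3 * rng.normal(size=3))
Y = antisym(0.3 * rng.normal(size=3))

Z = logm(expm(X) @ expm(Y))               # BCH: Z is a series of commutators of X and Y
lhs = expm(ad(Z))                         # Pi(e^X e^Y) = e^{pi(Z)} with pi = ad
rhs = expm(ad(X)) @ expm(ad(Y))           # Pi(e^X) Pi(e^Y)
assert np.allclose(lhs, rhs, atol=1e-6)
print("the Lie algebra representation ad exponentiates consistently to the group level")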
(ii) (8 points) Show that the BCH formula

    log(e^X e^Y) = X + ∫₀¹ dt [ log(e^{ad X} e^{t ad Y}) / (1 - (e^{ad X} e^{t ad Y})⁻¹) ] (Y)

reduces to

    log(e^X e^Y) = X + Y + (1/2)[X, Y]

in the case where [X, Y] commutes with both X and Y.
Answer: Since [X, Y] commutes with both X and Y, this means that in the BCH formula, we only need to keep the terms containing only one commutator. Hence, we expand the exponentials in ad X and ad Y, and keep only the terms that lead to first order in these matrices. It is convenient to first expand the function:

    g(1 + x) = log(1 + x) / (1 - (1 + x)⁻¹) = (x - x²/2 + ...) / (1 - (1 - x + x²) + ...) = (1 - x/2 + ...) / (1 - x + ...) = 1 + x/2 + ...

With x = e^{ad X} e^{t ad Y} - 1, we only need to keep the first order of the exponentials, x = ad X + t ad Y + ... . This gives

    log(e^X e^Y) = X + ∫₀¹ dt [ 1 + (1/2)(ad X + t ad Y) ] (Y) = X + Y + (1/2)[X, Y].
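A concrete check of this special case with strictly upper-triangular matrices, for which [X, Y] is central (illustrative only, assuming numpy and scipy):

import numpy as np
from scipy.linalg import expm, logm

X = np.array([[0.0, 1.3, 0.2],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
Y = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -0.7],
              [0.0, 0.0, 0.0]])
comm = X @ Y - Y @ X

# [X, Y] is central here: it commutes with both X and Y
assert np.allclose(comm @ X, X @ comm) and np.allclose(comm @ Y, Y @ comm)

lhs = logm(expm(X) @ expm(Y))
rhs = X + Y + 0.5 * comm
assert np.allclose(lhs, rhs, atol=1e-8)
print("log(e^X e^Y) = X + Y + [X, Y]/2 verified for this nilpotent example")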
(iii) (9 points) Consider an algebra H over C with basis elements {a_n : n ∈ Z \ {0}} and c, and with bilinear bracket defined by the relations

    [cc] = [c a_n] = [a_n c] = 0,    [a_m a_n] = (1/m) δ_{m+n,0} c    ∀ m, n ∈ Z \ {0}.

Consider the linear combinations

    φ₊(z) = Σ_{n>0} a_n z^{-n},    φ₋(z) = Σ_{n<0} a_n z^{-n},

where z ∈ C. For |z₂| < |z₁|, show that

    [φ₊(z₁) φ₋(z₂)] = -log(1 - z₂/z₁) c

and hence commutes with both φ₊(z₁) and φ₋(z₂). Using this result and the simplified formula displayed in (ii), show that

    e^{iφ₊(z₁)} e^{iφ₋(z₂)} = e^{i(φ₊(z₁)+φ₋(z₂))} (1 - z₂/z₁)^{c/2}.
Answer: We have

    [φ₊(z₁) φ₋(z₂)] = Σ_{m>0, n<0} [a_m a_n] z₁^{-m} z₂^{-n} = Σ_{m>0, n<0} (1/m) δ_{m+n,0} c z₁^{-m} z₂^{-n} = Σ_{m>0} (1/m) c (z₂/z₁)^m = -log(1 - z₂/z₁) c.

Since c commutes with everything, it does so with φ₊(z₁) and φ₋(z₂). Using the formula, we have

    e^{iφ₊(z₁)} e^{iφ₋(z₂)} = e^{i(φ₊(z₁)+φ₋(z₂))} e^{(i²/2)[φ₊(z₁) φ₋(z₂)]}.

Hence, we find

    e^{iφ₊(z₁)} e^{iφ₋(z₂)} = e^{i(φ₊(z₁)+φ₋(z₂))} e^{(1/2) log(1 - z₂/z₁) c} = e^{i(φ₊(z₁)+φ₋(z₂))} (1 - z₂/z₁)^{c/2},

which is the desired result.
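A quick numerical check of the series identity used in the commutator computation (illustrative only, assuming numpy; the series Σ_{m>0} (z₂/z₁)^m / m is truncated at a finite order):

import numpy as np

z1, z2 = 2.0 + 1.0j, 0.4 - 0.3j            # arbitrary points with |z2| < |z1|
q = z2 / z1
M = 200                                     # truncation order of the series
partial = sum(q**m / m for m in range(1, M + 1))
exact = -np.log(1 - q)                      # -log(1 - z2/z1), principal branch
assert abs(partial - exact) < 1e-12
print("sum_{m>0} (z2/z1)^m / m =", partial, "and -log(1 - z2/z1) =", exact)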
B6 This question is completely new, although all concepts involved (Killing form, derivations, etc.) were taught in class, and it involves only small calculations. Consider a Lie algebra g over C with Lie bracket denoted by [ ]. A central extension ĝ of g by C is a new Lie algebra ĝ = g ⊕ CC (as a vector space), where C is a central element, and with Lie bracket otherwise defined by

    ⟦XY⟧ = [XY] + ⟨X, Y⟩ C    ∀ X, Y ∈ g ⊂ ĝ,

for some bilinear form ⟨·, ·⟩ : g × g → C.
(i) (8 points) Show that in order for ĝ to be a Lie algebra, ⟨X, Y⟩ must be anti-symmetric, and it must satisfy

    ⟨[XY], Z⟩ + ⟨[YZ], X⟩ + ⟨[ZX], Y⟩ = 0.

Answer: We have two conditions to check (since bilinearity is already there). Anti-symmetry ⟦XY⟧ = -⟦YX⟧ immediately implies ⟨X, Y⟩ = -⟨Y, X⟩, since [ ] is anti-symmetric. The Jacobi identity ⟦X⟦YZ⟧⟧ + cyclic permutations = 0 implies the equation displayed, since [ ] satisfies the identity.
(ii) (8 points) Consider φ : g → ĝ such that φ(X) = X + a(X)C for all X ∈ g, where a : g → C is linear. Show that φ is a homomorphism if and only if

    a([XY]) = ⟨X, Y⟩    ∀ X, Y ∈ g.

Answer: All we need is ⟦φ(X)φ(Y)⟧ = φ([XY]). Explicit calculations give

    ⟦φ(X)φ(Y)⟧ = ⟦(X + a(X)C)(Y + a(Y)C)⟧ = ⟦XY⟧ = [XY] + ⟨X, Y⟩ C

and

    φ([XY]) = [XY] + a([XY]) C.

Since [XY] ∈ g ⊂ ĝ = g ⊕ CC, equality holds if and only if a([XY]) = ⟨X, Y⟩ for all X, Y ∈ g.
(iii) (9 points) Consider the Killing form κ : g × g → C. A non-trivial fact is that if D ∈ gl(g; C) is a derivation, then κ(X, D(Y)) = -κ(Y, D(X)) for all X, Y ∈ g. Using this fact and other properties of the Killing form seen in the lectures, show that ⟨X, Y⟩ = κ(X, D(Y)) defines an anti-symmetric bilinear form which satisfies the equation displayed in (i). Show that if g is semi-simple and if we define ĝ using ⟨X, Y⟩ = κ(X, D(Y)), then there exists a homomorphism φ as in (ii) (that is, find the function a(X)). Hint: for g semi-simple, any derivation is of the form D = ad Z for some Z ∈ g.
Answer: ⟨X, Y⟩ = κ(X, D(Y)) is clearly anti-symmetric by the fact stated. To show that we have the equation displayed in (i), we do:

    κ([XY], D(Z)) = -κ(Z, D([XY]))
                  = -κ(Z, [D(X)Y]) - κ(Z, [XD(Y)])
                  = κ([ZY], D(X)) - κ([ZX], D(Y)),

i.e. ⟨[XY], Z⟩ + ⟨[YZ], X⟩ + ⟨[ZX], Y⟩ = 0, which is the desired result. Another way is:

    κ([XY], D(Z)) = κ(X, [Y D(Z)])
                  = κ(X, D([YZ])) - κ(X, [D(Y)Z])
                  = -κ([YZ], D(X)) + κ([XZ], D(Y)).

Since, as hinted, any derivation is of the form ad Z for some Z, we have

    ⟨X, Y⟩ = κ(X, [ZY]) = -κ(X, [YZ]) = -κ([XY], Z).

Hence, we can choose a(X) = -κ(X, Z), which satisfies the equation displayed in (ii). Hence, we have a homomorphism, by (ii).
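A numerical sanity check of these identities on sl(2; C) (illustrative only, assuming numpy; the Killing form is computed as κ(X, Y) = Tr(ad X ad Y) in the chosen basis):

import numpy as np

E = np.array([[0, 1], [0, 0]], dtype=complex)
H = np.array([[1, 0], [0, -1]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)
basis = [E, H, F]
flat = np.column_stack([B.reshape(-1) for B in basis])       # 4x3 coordinate matrix

def coords(X):
    # coordinates of X in the basis (E, H, F)
    return np.linalg.lstsq(flat, X.reshape(-1), rcond=None)[0]

def br(X, Y):
    return X @ Y - Y @ X

def ad(X):
    return np.column_stack([coords(br(X, B)) for B in basis])

def kappa(X, Y):
    # Killing form: kappa(X, Y) = Tr(ad X ad Y)
    return np.trace(ad(X) @ ad(Y))

rng = np.random.default_rng(5)
def rand_elt():
    c = rng.normal(size=3) + 1j * rng.normal(size=3)
    return c[0] * E + c[1] * H + c[2] * F

X, Y, W, Z = rand_elt(), rand_elt(), rand_elt(), rand_elt()

def D(A):
    # the derivation D = ad Z
    return br(Z, A)

def form(A, B):
    # <A, B> = kappa(A, D(B))
    return kappa(A, D(B))

def a(A):
    # the candidate a(X) = -kappa(X, Z)
    return -kappa(A, Z)

assert np.isclose(form(X, Y), -form(Y, X))                                        # antisymmetry
assert np.isclose(form(br(X, Y), W) + form(br(Y, W), X) + form(br(W, X), Y), 0)   # identity of (i)
assert np.isclose(a(br(X, Y)), form(X, Y))                                        # condition of (ii)
print("antisymmetry, the cyclic identity and a(X) = -kappa(X, Z) all hold numerically")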
