
Linear Algebra: Graduate Level Problems and Solutions

Igor Yanovsky
Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, and almost certainly contains, typos as well as incorrect or inaccurate solutions. I cannot be held responsible for any inaccuracies contained in this handbook.
Contents

1 Basic Theory
  1.1 Linear Maps
  1.2 Linear Maps as Matrices
  1.3 Dimension and Isomorphism
  1.4 Matrix Representations Redux
  1.5 Subspaces
  1.6 Linear Maps and Subspaces
  1.7 Dimension Formula
  1.8 Matrix Calculations
  1.9 Diagonalizability

2 Inner Product Spaces
  2.1 Inner Products
  2.2 Orthonormal Bases
    2.2.1 Gram-Schmidt procedure
    2.2.2 QR Factorization
  2.3 Orthogonal Complements and Projections

3 Linear Maps on Inner Product Spaces
  3.1 Adjoint Maps
  3.2 Self-Adjoint Maps
  3.3 Polarization and Isometries
  3.4 Unitary and Orthogonal Operators
  3.5 Spectral Theorem
  3.6 Normal Operators
  3.7 Unitary Equivalence
  3.8 Triangulability

4 Determinants
  4.1 Characteristic Polynomial

5 Linear Operators
  5.1 Dual Spaces
  5.2 Dual Maps

6 Problems
1 Basic Theory

1.1 Linear Maps

Lemma. If A ∈ Mat_{m×n}(F) and B ∈ Mat_{n×m}(F), then

tr(AB) = tr(BA).

Proof. Note that the (i, i) entry in AB is Σ_{j=1}^{n} a_{ij} b_{ji}, while the (j, j) entry in BA is Σ_{i=1}^{m} b_{ji} a_{ij}. Thus

tr(AB) = Σ_{i=1}^{m} Σ_{j=1}^{n} a_{ij} b_{ji},
tr(BA) = Σ_{j=1}^{n} Σ_{i=1}^{m} b_{ji} a_{ij},

and the two double sums agree term by term.
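As a quick numeric sanity check of the lemma, here is a minimal sketch in Python (assuming NumPy is available); the matrix sizes are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # A in Mat_{3x5}(R)
B = rng.standard_normal((5, 3))   # B in Mat_{5x3}(R)

# AB is 3x3 and BA is 5x5, yet their traces agree.
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```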
1.2 Linear Maps as Matrices

Example. Let P_n = {a_0 + a_1 t + ··· + a_n t^n : a_0, a_1, ..., a_n ∈ F} be the space of polynomials of degree ≤ n and D : V → V the differential map

D(a_0 + a_1 t + ··· + a_n t^n) = a_1 + ··· + n a_n t^{n−1}.

If we use the basis 1, t, ..., t^n for V then we see that D(t^k) = k t^{k−1} and thus the (n+1)×(n+1) matrix representation is computed via

[D(1) D(t) D(t²) ··· D(t^n)] = [0 1 2t ··· n t^{n−1}] = [1 t t² ··· t^n] \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & n \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}
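A minimal sketch of this matrix in Python (assuming NumPy); the helper name diff_matrix is an illustrative choice, not from the handbook:

```python
import numpy as np

def diff_matrix(n: int) -> np.ndarray:
    """(n+1)x(n+1) matrix of D in the basis 1, t, ..., t^n: D(t^k) = k t^(k-1)."""
    M = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        M[k - 1, k] = k          # column k holds the coordinates of D(t^k)
    return M

# Differentiate p(t) = 1 + 2t + 3t^2 by acting on its coefficient vector.
p = np.array([1.0, 2.0, 3.0])    # coefficients in the basis 1, t, t^2
print(diff_matrix(2) @ p)        # -> [2. 6. 0.], i.e. p'(t) = 2 + 6t
```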
1.3 Dimension and Isomorphism

A linear map L : V → W is an isomorphism if we can find K : W → V such that LK = I_W and KL = I_V:

    V --L--> W
    |        |
   I_V      I_W
    ↓        ↓
    V <--K-- W
Theorem. V and W are isomorphic ⇔ there is a bijective linear map L : V → W.

Proof. (⇒) If V and W are isomorphic we can find linear maps L : V → W and K : W → V so that LK = I_W and KL = I_V. Then for any y, y = I_W(y) = L(K(y)), so we can let x = K(y), which means L is onto. If L(x_1) = L(x_2) then x_1 = I_V(x_1) = KL(x_1) = KL(x_2) = I_V(x_2) = x_2, which means L is 1-1.
(⇐) Assume L : V → W is linear and a bijection. Then we have an inverse map L^{−1} which satisfies L ∘ L^{−1} = I_W and L^{−1} ∘ L = I_V. In order for this inverse map to be allowable as K we need to check that it is linear. Select α_1, α_2 ∈ F and y_1, y_2 ∈ W. Let x_i = L^{−1}(y_i) so that L(x_i) = y_i. Then we have

L^{−1}(α_1 y_1 + α_2 y_2) = L^{−1}(α_1 L(x_1) + α_2 L(x_2)) = L^{−1}(L(α_1 x_1 + α_2 x_2))
= I_V(α_1 x_1 + α_2 x_2) = α_1 x_1 + α_2 x_2 = α_1 L^{−1}(y_1) + α_2 L^{−1}(y_2).
Theorem. If F^m and F^n are isomorphic over F, then n = m.

Proof. Suppose we have L : F^m → F^n and K : F^n → F^m such that LK = I_{F^n} and KL = I_{F^m}. Then L ∈ Mat_{n×m}(F) and K ∈ Mat_{m×n}(F). Thus

n = tr(I_{F^n}) = tr(LK) = tr(KL) = tr(I_{F^m}) = m.
Define the dimension of a vector space V over F as dim_F V = n if V is isomorphic to F^n.

Remark. dim_C C = 1, dim_R C = 2, dim_Q R = ∞.

The set of all linear maps L : V → W over F forms a vector space under pointwise operations, denoted hom_F(V, W).

Corollary. If V and W are finite dimensional vector spaces over F, then hom_F(V, W) is also finite dimensional and

dim_F hom_F(V, W) = (dim_F W) · (dim_F V).

Proof. By choosing bases for V and W there is a natural mapping

hom_F(V, W) → Mat_{(dim_F W)×(dim_F V)}(F) ≅ F^{(dim_F W)·(dim_F V)}.

This map is both 1-1 and onto, as the matrix representation uniquely determines the linear map and every matrix yields a linear map.
1.4 Matrix Representations Redux

L : V → W, bases x_1, ..., x_m for V and y_1, ..., y_n for W. The matrix for L interpreted as a linear map is [L] : F^m → F^n. The basis isomorphisms defined by the choices of basis for V and W are¹

[x_1 ··· x_m] : F^m → V,
[y_1 ··· y_n] : F^n → W.

    V -----L-----> W
    ↑              ↑
 [x_1···x_m]   [y_1···y_n]
    |              |
   F^m ---[L]---> F^n

L ∘ [x_1 ··· x_m] = [y_1 ··· y_n] ∘ [L].
1.5 Subspaces

A nonempty subset M ⊂ V is a subspace if for all α, β ∈ F and x, y ∈ M we have αx + βy ∈ M. In particular, 0 ∈ M.

If M, N ⊂ V are subspaces, then we can form two new subspaces, the sum and the intersection:

M + N = {x + y : x ∈ M, y ∈ N},   M ∩ N = {x : x ∈ M, x ∈ N}.

M and N have trivial intersection if M ∩ N = {0}.
M and N are transversal if M + N = V.
Two subspaces are complementary if they are transversal and have trivial intersection.
M, N form a direct sum decomposition of V if M ∩ N = {0} and M + N = V. Write V = M ⊕ N.

Example. V = R². M = {(x, 0) : x ∈ R}, the x-axis, and N = {(0, y) : y ∈ R}, the y-axis.

Example. V = R². M = {(x, 0) : x ∈ R}, the x-axis, and N = {(y, y) : y ∈ R}, a diagonal. Note (x, y) = (x − y, 0) + (y, y), which gives V = M ⊕ N.

If we have a direct sum decomposition V = M ⊕ N, then we can construct the projection of V onto M along N. The map E : V → V is defined using that each z = x + y with x ∈ M, y ∈ N, and mapping z to x: E(z) = E(x + y) = E(x) + E(y) = E(x) = x. Thus im(E) = M and ker(E) = N.

Definition. If V is a vector space, a projection of V is a linear operator E on V such that E² = E.

¹ [x_1 ··· x_m] : F^m → V means [x_1 ··· x_m](α_1, ..., α_m)^T = α_1 x_1 + ··· + α_m x_m.
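A minimal numeric sketch of the second example's projection (assuming NumPy); the matrix E below is derived from E(x, y) = (x − y, 0) and is an illustration, not part of the original text:

```python
import numpy as np

# E projects onto M = x-axis along N = {(y, y)}: E(x, y) = (x - y, 0).
E = np.array([[1.0, -1.0],
              [0.0,  0.0]])

assert np.allclose(E @ E, E)         # E^2 = E, so E is a projection
print(E @ np.array([5.0, 2.0]))      # -> [3. 0.]: (5,2) = (3,0) + (2,2)
```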
1.6 Linear Maps and Subspaces

L : V → W is a linear map over F. The kernel or nullspace of L is

ker(L) = N(L) = {x ∈ V : L(x) = 0}.

The image or range of L is

im(L) = R(L) = L(V) = {L(x) ∈ W : x ∈ V}.

Lemma. ker(L) is a subspace of V and im(L) is a subspace of W.

Proof. Assume that α_1, α_2 ∈ F and that x_1, x_2 ∈ ker(L). Then L(α_1 x_1 + α_2 x_2) = α_1 L(x_1) + α_2 L(x_2) = 0, so α_1 x_1 + α_2 x_2 ∈ ker(L).
Assume α_1, α_2 ∈ F and x_1, x_2 ∈ V. Then α_1 L(x_1) + α_2 L(x_2) = L(α_1 x_1 + α_2 x_2) ∈ im(L).
Lemma. L is 1-1 ⇔ ker(L) = {0}.

Proof. (⇒) We know that L(0) = L(0·0) = 0·L(0) = 0, so if L is 1-1, then L(x) = 0 = L(0) implies that x = 0. Hence ker(L) = {0}.
(⇐) Assume that ker(L) = {0}. If L(x_1) = L(x_2), then linearity of L tells us that L(x_1 − x_2) = 0. Then ker(L) = {0} implies x_1 − x_2 = 0, which shows that x_1 = x_2 as desired.

Lemma. Let L : V → W with dim V = dim W. Then: L is 1-1 ⇔ L is onto ⇔ dim im(L) = dim V.

Proof. From the dimension formula, we have

dim V = dim ker(L) + dim im(L).

L is 1-1 ⇔ ker(L) = {0} ⇔ dim ker(L) = 0 ⇔ dim im(L) = dim V ⇔ dim im(L) = dim W ⇔ im(L) = W, that is, L is onto.
1.7 Dimension Formula

Theorem. Let V be finite dimensional and L : V → W a linear map, all over F. Then im(L) is finite dimensional and

dim_F V = dim_F ker(L) + dim_F im(L).

Proof. We know that dim ker(L) ≤ dim V and that ker(L) has a complement M of dimension k = dim V − dim ker(L). Since M ∩ ker(L) = {0}, the linear map L must be 1-1 when restricted to M. Thus L|_M : M → im(L) is an isomorphism, i.e. dim im(L) = dim M = k.
1.8 Matrix Calculations

Change of Basis Matrix. Given the two bases of R², β_1 = {x_1 = (1, 1), x_2 = (1, 0)} and β_2 = {y_1 = (4, 3), y_2 = (3, 2)}, we find the change-of-basis matrix P from β_1 to β_2.

Write y_1 as a linear combination of x_1 and x_2: y_1 = a x_1 + b x_2. (4, 3) = a(1, 1) + b(1, 0) ⇒ a = 3, b = 1 ⇒ y_1 = 3x_1 + x_2.
Write y_2 as a linear combination of x_1 and x_2: y_2 = a x_1 + b x_2. (3, 2) = a(1, 1) + b(1, 0) ⇒ a = 2, b = 1 ⇒ y_2 = 2x_1 + x_2.

Write the coordinates of y_1 and y_2 as columns of P:

P = \begin{pmatrix} 3 & 2 \\ 1 & 1 \end{pmatrix}.
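The same computation can be done numerically: each column of P solves a linear system in the x-basis. A minimal sketch (assuming NumPy):

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 0.0]])   # columns x1 = (1,1), x2 = (1,0)
Y = np.array([[4.0, 3.0],
              [3.0, 2.0]])   # columns y1 = (4,3), y2 = (3,2)

# Each column p_j of P solves X @ p_j = y_j, so P = X^{-1} Y.
P = np.linalg.solve(X, Y)
print(P)                     # -> [[3. 2.], [1. 1.]]
```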
1.9 Diagonalizability

Definition. Let T be a linear operator on the finite-dimensional space V. T is diagonalizable if there is a basis for V consisting of eigenvectors of T.

Theorem. Let v_1, ..., v_n be nonzero eigenvectors of distinct eigenvalues λ_1, ..., λ_n. Then {v_1, ..., v_n} is linearly independent.

Alternative Statement. If L has n distinct eigenvalues λ_1, ..., λ_n, then L is diagonalizable. (Proof is in the exercises.)

Definition. Let L be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of L. Define E_λ = {x ∈ V : L(x) = λx} = ker(L − λI_V). The set E_λ is called the eigenspace of L corresponding to the eigenvalue λ.

The algebraic multiplicity m of λ is defined to be the multiplicity of λ as a root of the characteristic polynomial of L, while the geometric multiplicity of λ is defined to be the dimension of its eigenspace, dim E_λ = dim(ker(L − λI_V)). Also,

dim(ker(L − λI_V)) ≤ m.

Eigenspaces. A vector v with (A − λI)v = 0 is an eigenvector for λ.

Generalized Eigenspaces. Let λ be an eigenvalue of A with algebraic multiplicity m. A vector v with (A − λI)^m v = 0 is a generalized eigenvector for λ.
2 Inner Product Spaces

2.1 Inner Products

The three important properties for complex inner products are:
1) (x|x) = ||x||² > 0 unless x = 0.
2) (x|y) = \overline{(y|x)}.
3) For each y ∈ V the map x ↦ (x|y) is linear.

The inner product on C^n is defined by

(x|y) = x^t ȳ.

Consequences: (α_1 x_1 + α_2 x_2 | y) = α_1 (x_1|y) + α_2 (x_2|y),
(x | β_1 y_1 + β_2 y_2) = β̄_1 (x|y_1) + β̄_2 (x|y_2),
(λx|λx) = λλ̄ (x|x) = |λ|² (x|x).
2.2 Orthonormal Bases

Lemma. Let e_1, ..., e_n be orthonormal. Then e_1, ..., e_n are linearly independent and any element x ∈ span{e_1, ..., e_n} has the expansion

x = (x|e_1)e_1 + ··· + (x|e_n)e_n.

Proof. Note that if x = α_1 e_1 + ··· + α_n e_n, then

(x|e_i) = (α_1 e_1 + ··· + α_n e_n | e_i) = α_1(e_1|e_i) + ··· + α_n(e_n|e_i) = α_1 δ_{1i} + ··· + α_n δ_{ni} = α_i.
2.2.1 Gram-Schmidt procedure

Given a linearly independent set x_1, ..., x_m in an inner product space V it is possible to construct an orthonormal collection e_1, ..., e_m such that span{x_1, ..., x_m} = span{e_1, ..., e_m}:

e_1 = x_1 / ||x_1||,
z_2 = x_2 − proj_{x_1}(x_2) = x_2 − proj_{e_1}(x_2) = x_2 − (x_2|e_1)e_1,   e_2 = z_2 / ||z_2||,
...
z_{k+1} = x_{k+1} − (x_{k+1}|e_1)e_1 − ··· − (x_{k+1}|e_k)e_k,   e_{k+1} = z_{k+1} / ||z_{k+1}||.
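A minimal sketch of the procedure in Python (assuming NumPy, real inner products, and linearly independent inputs); the function name is an illustrative choice:

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the rows of X (assumed linearly independent)."""
    E = []
    for x in X:
        z = x - sum(np.dot(x, e) * e for e in E)   # subtract projections onto earlier e's
        E.append(z / np.linalg.norm(z))
    return np.array(E)

E = gram_schmidt(np.array([[1.0, 1.0, 0.0],
                           [1.0, 0.0, 1.0],
                           [0.0, 1.0, 1.0]]))
print(np.round(E @ E.T, 10))   # identity matrix: the rows are orthonormal
```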
2.2.2 QR Factorization

A = [x_1 ··· x_m] = [e_1 ··· e_m] \begin{pmatrix} (x_1|e_1) & (x_2|e_1) & \cdots & (x_m|e_1) \\ 0 & (x_2|e_2) & \cdots & (x_m|e_2) \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & (x_m|e_m) \end{pmatrix} = QR.

Example. Consider the vectors x_1 = (1, 1, 0), x_2 = (1, 0, 1), x_3 = (0, 1, 1) in R³. Perform Gram-Schmidt:

e_1 = x_1/||x_1|| = (1, 1, 0)/√2 = (1/√2, 1/√2, 0).

z_2 = (1, 0, 1) − (1/√2)(1/√2, 1/√2, 0) = (1/2, −1/2, 1),
e_2 = z_2/||z_2|| = (1/2, −1/2, 1)/√(3/2) = (1/√6, −1/√6, 2/√6).

z_3 = x_3 − (x_3|e_1)e_1 − (x_3|e_2)e_2 = (0, 1, 1) − (1/√2)(1/√2, 1/√2, 0) − (1/√6)(1/√6, −1/√6, 2/√6) = (−2/3, 2/3, 2/3),
e_3 = z_3/||z_3|| = (−1/√3, 1/√3, 1/√3).
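The same factorization can be cross-checked with a library routine; a minimal sketch (assuming NumPy, whose QR may flip signs of individual columns):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]]).T    # columns are x1, x2, x3

Q, R = np.linalg.qr(A)
print(Q)           # columns agree with e1, e2, e3 up to sign
print(R)           # upper triangular, entries (x_j | e_i) up to sign
assert np.allclose(Q @ R, A)
```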
2.3 Orthogonal Complements and Projections

The orthogonal projection of a vector x onto a nonzero vector y is defined by

proj_y(x) = ( x | y/||y|| ) y/||y|| = ((x|y)/(y|y)) y.

(The length of this projection is ||proj_y(x)|| = |(x|y)| / ||y||.)

The definition of proj_y(x) immediately implies that it is linear, from the linearity of the inner product.

The map x ↦ proj_y(x) is a projection.

Proof. We need to show proj_y(proj_y(x)) = proj_y(x):

proj_y(proj_y(x)) = proj_y( ((x|y)/(y|y)) y ) = ((x|y)/(y|y)) proj_y(y) = ((x|y)/(y|y)) ((y|y)/(y|y)) y = ((x|y)/(y|y)) y = proj_y(x).
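A minimal sketch of proj_y and its idempotence in Python (assuming NumPy and a real inner product):

```python
import numpy as np

def proj(y, x):
    """Orthogonal projection of x onto the line spanned by y (real case)."""
    return (np.dot(x, y) / np.dot(y, y)) * y

x = np.array([3.0, 4.0])
y = np.array([1.0, 0.0])
p = proj(y, x)
print(p, np.dot(p, x - p))           # residual x - p is orthogonal to p
assert np.allclose(proj(y, p), p)    # idempotent: proj_y(proj_y(x)) = proj_y(x)
```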
Cauchy-Schwarz Inequality. Let V be a complex inner product space. Then

|(x|y)| ≤ ||x|| ||y||   for all x, y ∈ V.

Proof. First show proj_y(x) ⊥ x − proj_y(x):

(proj_y(x) | x − proj_y(x)) = ( (x|y)/||y||² y | x − (x|y)/||y||² y )
= ( (x|y)/||y||² y | x ) − ( (x|y)/||y||² y | (x|y)/||y||² y )
= (x|y)/||y||² (y|x) − (x|y)\overline{(x|y)}/||y||⁴ (y|y)
= (x|y)/||y||² (y|x) − (x|y)/||y||² \overline{(x|y)} = 0.

Then, since x = proj_y(x) + (x − proj_y(x)) is an orthogonal decomposition,

||x|| ≥ ||proj_y(x)|| = |(x|y)/(y|y)| ||y|| = |(x|y)| / ||y||,

and multiplying through by ||y|| gives the inequality.
Triangle Inequality. Let V be a complex inner product space. Then

||x + y|| ≤ ||x|| + ||y||.

Proof. ||x + y||² = (x + y|x + y) = ||x||² + 2Re(x|y) + ||y||² ≤ ||x||² + 2|(x|y)| + ||y||² ≤ ||x||² + 2||x|| ||y|| + ||y||² = (||x|| + ||y||)².
Let M ⊂ V be a finite dimensional subspace of an inner product space, and e_1, ..., e_m an orthonormal basis for M. Using that basis, define E : V → V by

E(x) = (x|e_1)e_1 + ··· + (x|e_m)e_m.

Note that E(x) ∈ M and that if x ∈ M, then E(x) = x. Thus E²(x) = E(x), implying that E is a projection whose image is M. If x ∈ ker(E), then

0 = E(x) = (x|e_1)e_1 + ··· + (x|e_m)e_m ⇒ (x|e_1) = ··· = (x|e_m) = 0.

This is equivalent to the condition (x|z) = 0 for all z ∈ M. The set of all such vectors is the orthogonal complement to M in V, denoted

M^⊥ = {x ∈ V : (x|z) = 0 for all z ∈ M}.

Theorem. Let V be an inner product space. Assume V = M ⊕ M^⊥. Then im(proj_M) = M and ker(proj_M) = M^⊥. If M ⊂ V is finite dimensional then V = M ⊕ M^⊥ and

proj_M(x) = (x|e_1)e_1 + ··· + (x|e_m)e_m

for any orthonormal basis e_1, ..., e_m for M.

Proof. For E defined as above, ker(E) = M^⊥; x = E(x) + (I − E)(x) and (I − E)(x) ∈ ker(E) = M^⊥. Choose z ∈ M. Then

||x − proj_M(x)||² ≤ ||x − proj_M(x)||² + ||proj_M(x) − z||² = ||x − z||²,

where equality holds exactly when ||proj_M(x) − z||² = 0, i.e., proj_M(x) is the unique closest point to x among the points in M.
Theorem. Let E : V → V be a projection onto M ⊂ V with the property that V = ker(E) ⊕ ker(E)^⊥. Then the following conditions are equivalent:
1) E = proj_M;
2) im(E)^⊥ = ker(E);
3) ||E(x)|| ≤ ||x|| for all x ∈ V.

Proof. We have already seen that (1) ⇔ (2). Also (1),(2) ⇒ (3), as x = E(x) + (I − E)(x) is an orthogonal decomposition, so ||x||² = ||E(x)||² + ||(I − E)(x)||² ≥ ||E(x)||².

Thus, we only need to show that (3) implies that E is orthogonal. Choose x ∈ ker(E)^⊥ and observe that E(x) = x − (I − E)(x) is an orthogonal decomposition. Thus

||x||² ≥ ||E(x)||² = ||x − (I − E)(x)||² = ||x||² + ||(I − E)(x)||² ≥ ||x||².

This means that (I − E)(x) = 0 and hence x = E(x) ∈ im(E), so ker(E)^⊥ ⊂ im(E). Conversely, if z ∈ im(E) = M, then we can write z = x + y ∈ ker(E) ⊕ ker(E)^⊥. This implies that z = E(z) = E(y) = y, where the last equality follows from ker(E)^⊥ ⊂ im(E). This means that x = 0 and hence z = y ∈ ker(E)^⊥.
3 Linear Maps on Inner Product Spaces

3.1 Adjoint Maps

The adjoint of A is the matrix A* such that (A*)_{ij} = ā_{ji}, i.e. A* = Ā^t. If A : F^m → F^n, then A* : F^n → F^m, and

(Ax|y) = (Ax)^t ȳ = x^t A^t ȳ = x^t \overline{(Ā^t y)} = (x|A*y).

Recall also

dim(M) + dim(M^⊥) = dim(V) = dim(V*).
Theorem. Suppose S = {v_1, v_2, ..., v_k} is an orthonormal set in an n-dimensional inner product space V. Then
a) S can be extended to an orthonormal basis {v_1, v_2, ..., v_k, v_{k+1}, ..., v_n} for V.
b) If M = span(S), then S_1 = {v_{k+1}, ..., v_n} is an orthonormal basis for M^⊥.
c) If M is any subspace of V, then dim(V) = dim(M) + dim(M^⊥).

Proof. a) Extend S to a basis S' = {v_1, v_2, ..., v_k, w_{k+1}, ..., w_n} for V. Apply the Gram-Schmidt process to S'. The first k vectors resulting from this process are the vectors in S, and S' spans V. Normalizing the last n − k vectors of this set produces an orthonormal set that spans V.
b) Because S_1 is a subset of a basis, it is linearly independent. Since S_1 is clearly a subset of M^⊥, we need only show that it spans M^⊥. For any x ∈ V, we have

x = Σ_{i=1}^{n} (x|v_i)v_i.

If x ∈ M^⊥, then (x|v_i) = 0 for 1 ≤ i ≤ k. Therefore,

x = Σ_{i=k+1}^{n} (x|v_i)v_i ∈ span(S_1).

c) Let M be a subspace of V. It is a finite-dimensional inner product space because V is, and so it has an orthonormal basis {v_1, v_2, ..., v_k}. By (a) and (b), we have

dim(V) = n = k + (n − k) = dim(M) + dim(M^⊥).
Theorem. Let M be a subspace of V. Then V = M ⊕ M^⊥.

Proof. By the Gram-Schmidt process, we can obtain an orthonormal basis {v_1, v_2, ..., v_k} of M, and by the theorem above, we can extend it to an orthonormal basis {v_1, v_2, ..., v_n} of V. Hence v_{k+1}, ..., v_n ∈ M^⊥. If x ∈ V, then

x = a_1 v_1 + ··· + a_n v_n, where a_1 v_1 + ··· + a_k v_k ∈ M and a_{k+1} v_{k+1} + ··· + a_n v_n ∈ M^⊥.

Accordingly, V = M + M^⊥. On the other hand, if x ∈ M ∩ M^⊥, then (x|x) = 0. This yields x = 0. Hence M ∩ M^⊥ = {0}.
Theorem. a) M ⊂ M^⊥⊥.
b) If M is a subspace of a finite-dimensional space V, then M = M^⊥⊥.

Proof. a) Let x ∈ M. Then (x|z) = 0 for all z ∈ M^⊥; hence x ∈ M^⊥⊥.
b) V = M ⊕ M^⊥ and also V = M^⊥ ⊕ M^⊥⊥. Hence dim M = dim V − dim M^⊥ and dim M^⊥⊥ = dim V − dim M^⊥. This yields dim M = dim M^⊥⊥. Since M ⊂ M^⊥⊥ by (a), we have M = M^⊥⊥.
Fredholm Alternative. Let L : V → W be a linear map between finite dimensional inner product spaces. Then

ker(L) = im(L*)^⊥,
ker(L*) = im(L)^⊥,
ker(L)^⊥ = im(L*),
ker(L*)^⊥ = im(L).

Proof. Using L** = L and M^⊥⊥ = M, the four statements are equivalent, so it suffices to prove the first. Recall

ker L = {x ∈ V : Lx = 0},   (L : V → W)
im(L) = {Lx : x ∈ V},   (L* : W → V)
im(L*) = {L*y : y ∈ W},
im(L*)^⊥ = {x ∈ V : (x|L*y) = 0 for all y ∈ W} = {x ∈ V : (Lx|y) = 0 for all y ∈ W}.

If x ∈ ker L, then (Lx|y) = 0 for all y ∈ W, so x ∈ im(L*)^⊥. Conversely, if (Lx|y) = 0 for all y ∈ W, then Lx = 0, i.e. x ∈ ker L.
Rank Theorem. Let L : V → W be a linear map between finite dimensional inner product spaces. Then

rank(L) = rank(L*).

Proof. dim V = dim(ker(L)) + dim(im(L)) = dim(im(L*)^⊥) + dim(im(L)) = dim V − dim(im(L*)) + dim(im(L)), hence dim(im(L*)) = dim(im(L)).

Corollary. For an n × m matrix A, the column rank equals the row rank.

Proof. Conjugation does not change the rank. rank(A) is the column rank; rank(A*) is the row rank of the conjugate of A.
Corollary. Let L : V → V be a linear operator on a finite dimensional inner product space. Then λ is an eigenvalue for L ⇔ λ̄ is an eigenvalue for L*. Moreover, these eigenvalue pairs have the same geometric multiplicity:

dim(ker(L − λI_V)) = dim(ker(L* − λ̄I_V)).

Proof. Note that (L − λI_V)* = L* − λ̄I_V. Thus we only need to show dim(ker(L)) = dim(ker(L*)):

dim(ker(L)) = dim V − dim(im(L)) = dim V − dim(im(L*)) = dim(ker(L*)).
3.2 Self-Adjoint Maps

A linear operator L : V → V is self-adjoint (Hermitian) if L* = L, and skew-adjoint if L* = −L.

Theorem. If L is a self-adjoint operator on a finite-dimensional inner product space V, then every eigenvalue of L is real.

Proof. Method I: Suppose L is a self-adjoint operator in V. Let λ be an eigenvalue of L, and let x be a nonzero vector in V such that Lx = λx. Then

λ(x|x) = (λx|x) = (Lx|x) = (x|L*x) = (x|Lx) = (x|λx) = λ̄(x|x).

Thus λ = λ̄, which means that λ is real.

Proof. Method II: Suppose that L(x) = λx for x ≠ 0. Because a self-adjoint operator is normal, if x is an eigenvector of L then x is also an eigenvector of L*. Thus,

λx = L(x) = L*(x) = λ̄x, so λ = λ̄.

Proposition. If L is self- or skew-adjoint, then for each invariant subspace M ⊂ V the orthogonal complement is also invariant, i.e., if L(M) ⊂ M, then also L(M^⊥) ⊂ M^⊥.

Proof. Assume that L(M) ⊂ M. If x ∈ M and z ∈ M^⊥, then since L(x) ∈ M we have

0 = (z|L(x)) = (L*(z)|x) = ±(L(z)|x).

Since this holds for all x ∈ M, it follows that L(z) ∈ M^⊥.
3.3 Polarization and Isometries

Real inner product on V:

(x + y|x + y) = (x|x) + 2(x|y) + (y|y)
⇒ (x|y) = ½((x + y|x + y) − (x|x) − (y|y)) = ½(||x + y||² − ||x||² − ||y||²).

Complex inner products (which are only conjugate symmetric) on V:

(x + y|x + y) = (x|x) + 2Re(x|y) + (y|y)
⇒ Re(x|y) = ½(||x + y||² − ||x||² − ||y||²).

Since Re(x|iy) = Re(−i(x|y)) = Im(x|y), we have in particular

Im(x|y) = ½(||x + iy||² − ||x||² − ||iy||²).

We can use these ideas to check when linear operators L : V → V are 0. First note that L = 0 ⇔ (L(x)|y) = 0 for all x, y ∈ V. To check the (⇐) part, let y = L(x) to see that ||L(x)||² = 0 for all x ∈ V.

Theorem. Let L : V → V be self-adjoint. Then L = 0 ⇔ (L(x)|x) = 0 for all x ∈ V.

Proof. (⇒) If L = 0, then (L(x)|x) = 0 for all x ∈ V.
(⇐) Assume that (L(x)|x) = 0 for all x ∈ V. Then

0 = (L(x + y)|x + y) = (L(x)|x) + (L(x)|y) + (L(y)|x) + (L(y)|y)
= (L(x)|y) + (y|L*(x)) = (L(x)|y) + (y|L(x)) = 2Re(L(x)|y).

Now insert y = L(x) to see that 0 = Re(L(x)|L(x)) = ||L(x)||².
Theorem. Let L : V → V be a linear map on a complex inner-product space. Then L = 0 ⇔ (L(x)|x) = 0 for all x ∈ V.

Proof. (⇒) If L = 0, then (L(x)|x) = 0 for all x ∈ V.
(⇐) Assume that (L(x)|x) = 0 for all x ∈ V. Then

0 = (L(x + y)|x + y) = (L(x)|x) + (L(x)|y) + (L(y)|x) + (L(y)|y) = (L(x)|y) + (L(y)|x),
0 = (L(x + iy)|x + iy) = (L(x)|x) + (L(x)|iy) + (L(iy)|x) + (L(iy)|iy) = −i(L(x)|y) + i(L(y)|x).

Hence

\begin{pmatrix} 1 & 1 \\ −i & i \end{pmatrix} \begin{pmatrix} (L(x)|y) \\ (L(y)|x) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Since the columns of the matrix on the left are linearly independent, the only solution is the trivial one. In particular (L(x)|y) = 0.
3.4 Unitary and Orthogonal Operators

A linear transformation A is orthogonal if AA^T = I, and unitary if AA* = I, i.e. A* = A^{−1}.

Theorem. Let L : V → W be a linear map between inner product spaces. TFAE:
1) L*L = I_V (L is unitary);
2) (L(x)|L(y)) = (x|y) for all x, y ∈ V (L preserves inner products);
3) ||L(x)|| = ||x|| for all x ∈ V (L preserves lengths).

Proof. (1) ⇒ (2): L*L = I_V ⇒ (L(x)|L(y)) = (x|L*L(y)) = (x|Iy) = (x|y) for all x, y ∈ V. Also note: L takes orthonormal sets of vectors to orthonormal sets of vectors.
(2) ⇒ (3): (L(x)|L(y)) = (x|y) for all x, y ∈ V ⇒ ||L(x)|| = √(L(x)|L(x)) = √(x|x) = ||x||.
(3) ⇒ (1): ||L(x)|| = ||x|| for all x ∈ V ⇒ (L*L(x)|x) = (L(x)|L(x)) = (x|x) = (Ix|x) ⇒ ((L*L − I)(x)|x) = 0 for all x ∈ V. Since L*L − I is self-adjoint (check), L*L = I.

Two inner product spaces V and W over F are isometric if we can find an isometry L : V → W, i.e. an isomorphism such that (L(x)|L(y)) = (x|y).

Theorem. Suppose L is unitary. Then L is an isometry on V.

Proof. An isometry on V is a mapping which preserves distances. Since L is unitary, ||L(x) − L(y)|| = ||L(x − y)|| = ||x − y||. Thus L is an isometry.
3.5 Spectral Theorem

Theorem. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then we can find a real eigenvalue λ for L.

Spectral Theorem. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then there exists an orthonormal basis e_1, ..., e_n of eigenvectors, i.e. L(e_1) = λ_1 e_1, ..., L(e_n) = λ_n e_n. Moreover, all eigenvalues λ_1, ..., λ_n are real.

Proof. Prove this by induction on dim V. Since L = L*, we can find v ∈ V, λ ∈ R such that L(v) = λv (Lagrange multipliers). Let v^⊥ = {x ∈ V : (x|v) = 0}, the orthogonal complement to v; dim v^⊥ = dim V − 1. We show L leaves v^⊥ invariant, i.e. L(v^⊥) ⊂ v^⊥. Let x ∈ v^⊥; then

(L(x)|v) = (x|L*(v)) = (x|L(v)) = (x|λv) = λ̄(x|v) = λ(x|v) = 0,   since λ ∈ R.

Thus L|_{v^⊥} : v^⊥ → v^⊥ is again self-adjoint, because (L(x)|y) = (x|L(y)) for all x, y ∈ V and in particular for all x, y ∈ v^⊥. Let e_1 = v/||v||, and by induction let e_2, ..., e_n be an orthonormal basis for v^⊥ with L(e_i) = λ_i e_i, i = 2, ..., n. Check: (e_1|e_i) = 0 for i ≥ 2 since e_i ∈ v^⊥ = e_1^⊥, i = 2, ..., n.
Corollary. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then there exists an orthonormal basis e_1, ..., e_n of eigenvectors and a real n × n diagonal matrix D such that

L = [e_1 ··· e_n] D [e_1 ··· e_n]* = [e_1 ··· e_n] \begin{pmatrix} λ_1 & & 0 \\ & \ddots & \\ 0 & & λ_n \end{pmatrix} [e_1 ··· e_n]*.
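A minimal numeric sketch of this decomposition (assuming NumPy; the matrix is an arbitrary illustrative symmetric example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])       # real symmetric, hence self-adjoint

# eigh returns real eigenvalues and orthonormal eigenvector columns.
lam, E = np.linalg.eigh(A)
assert np.allclose(E @ np.diag(lam) @ E.T, A)   # L = [e1...en] D [e1...en]*
assert np.allclose(E.T @ E, np.eye(3))          # the basis is orthonormal
```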
3.6 Normal Operators

An operator L : V → V on an inner product space is normal if LL* = L*L. Self-adjoint, skew-adjoint and isometric operators are normal. These are precisely the operators that admit an orthonormal basis that diagonalizes them.

Proposition. LL* = L*L ⇔ ||L(x)|| = ||L*(x)|| for all x ∈ V.

Proof. ||L(x)|| = ||L*(x)|| ⇔ ||L(x)||² = ||L*(x)||² ⇔ (L(x)|L(x)) = (L*(x)|L*(x)) ⇔ (x|L*L(x)) = (x|LL*(x)) ⇔ (x|(L*L − LL*)(x)) = 0 ⇔ L*L − LL* = 0, since L*L − LL* is self-adjoint.
Theorem. If V is a complex inner product space and L : V → V is normal, then

ker(L − λI_V) = ker(L* − λ̄I_V) for all λ ∈ C.

Proof. Observe L − λI_V is normal and use the previous proposition to conclude that ||(L − λI_V)(x)|| = ||(L* − λ̄I_V)(x)||.
Spectral Theorem for Normal Operators. Let L : V → V be a normal operator on a complex inner product space. Then there exists an orthonormal basis e_1, ..., e_n such that L(e_1) = λ_1 e_1, ..., L(e_n) = λ_n e_n.

Proof. Prove this by induction on dim V. Since L is complex linear, we can use the Fundamental Theorem of Algebra to find λ ∈ C and x ∈ V ∖ {0} so that L(x) = λx, and hence L*(x) = λ̄x, since ker(L − λI_V) = ker(L* − λ̄I_V). Let x^⊥ = {z ∈ V : (z|x) = 0}, the orthogonal complement to x. To apply induction, we need to show that x^⊥ is invariant under L, i.e. L(x^⊥) ⊂ x^⊥. Let z ∈ x^⊥ and show Lz ∈ x^⊥:

(L(z)|x) = (z|L*(x)) = (z|λ̄x) = λ(z|x) = 0.

Similarly, x^⊥ is invariant under L*, i.e. L* : x^⊥ → x^⊥, since

(L*(z)|x) = (z|L(x)) = (z|λx) = λ̄(z|x) = 0.

Check that L|_{x^⊥} is normal: L*|_{x^⊥} = (L|_{x^⊥})* since (L(z)|y) = (z|L*y) for all z, y ∈ x^⊥.
3.7 Unitary Equivalence

Two n×n matrices A and B are unitarily equivalent if A = UBU*, where U is an n×n matrix such that U*U = UU* = I_{F^n}.

Corollary (n×n matrices).
1. A normal matrix is unitarily equivalent to a diagonal matrix.
2. A self-adjoint matrix is unitarily (in the real case, orthogonally) equivalent to a real diagonal matrix.
3. A skew-adjoint matrix is unitarily equivalent to a purely imaginary diagonal matrix.
4. A unitary matrix is unitarily equivalent to a diagonal matrix whose diagonal elements are unit scalars.
3.8 Triangulability

Schur's Theorem. Let L : V → V be a linear operator on a finite dimensional complex inner product space. Then we can find an orthonormal basis e_1, ..., e_n such that the matrix representation [L] is upper triangular in this basis, i.e.,

L = [e_1 ··· e_n] [L] [e_1 ··· e_n]* = [e_1 ··· e_n] \begin{pmatrix} α_{11} & α_{12} & \cdots & α_{1n} \\ 0 & α_{22} & \cdots & α_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & α_{nn} \end{pmatrix} [e_1 ··· e_n]*.
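A minimal numeric illustration of the Schur form (a sketch assuming SciPy is available; the matrix is an arbitrary random example):

```python
import numpy as np
from scipy.linalg import schur

A = np.random.default_rng(1).standard_normal((4, 4))

# Complex Schur form: A = Q T Q* with Q unitary and T upper triangular.
T, Q = schur(A, output='complex')
assert np.allclose(Q @ T @ Q.conj().T, A)
assert np.allclose(np.tril(T, -1), 0)   # strictly lower part vanishes
```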
Generalized Schur's Theorem. Let L : V → V be a linear operator on an n dimensional vector space over F. Assume that χ_L(t) = (t − λ_1)···(t − λ_n) for λ_1, ..., λ_n ∈ F. Then V admits a basis x_1, ..., x_n such that the matrix representation with respect to x_1, ..., x_n is upper triangular.

Proof. The proof is by induction on the dimension n of V. The result is immediate if n = 1. So suppose that the result is true for linear operators on (n−1)-dimensional inner product spaces whose characteristic polynomials split. We can assume that L* has a unit eigenvector z. Suppose that L*(z) = λz and that W = span{z}. We show that W^⊥ is L-invariant. If y ∈ W^⊥ and x = cz ∈ W, then

(L(y)|x) = (L(y)|cz) = (y|L*(cz)) = (y|cL*(z)) = (y|cλz) = \overline{cλ}(y|z) = \overline{cλ} · 0 = 0.

So L(y) ∈ W^⊥. It is easy to show that the characteristic polynomial of L|_{W^⊥} divides the characteristic polynomial of L and hence splits. Since dim(W^⊥) = n − 1, we may apply the induction hypothesis to L|_{W^⊥} and obtain an orthonormal basis γ of W^⊥ such that [L|_{W^⊥}]_γ is upper triangular. Clearly, β = γ ∪ {z} is an orthonormal basis for V such that [L]_β is upper triangular.
4 Determinants

4.1 Characteristic Polynomial

The characteristic polynomial of A is defined as χ_A(t) = t^n + α_{n−1}t^{n−1} + ··· + α_1 t + α_0. The characteristic polynomial of L : V → V can be defined by

χ_L(t) = det(L − tI_V).

Facts: det L^{−1} = 1/det L; det A = det A^T.
If A is orthogonal, det A = ±1, since 1 = det(I) = det(AA^T) = det(A) det(A^T) = (det A)².
If U is unitary, |det U| = 1, and all |λ_i| = 1.
5 Linear Operators

5.1 Dual Spaces

For a vector space V over F, we define the dual space V* = hom(V, F) as the set of linear functionals on V, i.e. V* = {f : V → F | f is linear}.

Let x_1, ..., x_n be a basis for V. For each i, there is a unique linear functional f_i on V s.t.

f_i(x_j) = δ_{ij}.

In this way we obtain from x_1, ..., x_n a set of n distinct linear functionals f_1, ..., f_n on V. These functionals are also linearly independent. For, suppose

f = Σ_{i=1}^{n} c_i f_i.

Then f(x_j) = Σ_{i=1}^{n} c_i f_i(x_j) = Σ_{i=1}^{n} c_i δ_{ij} = c_j.

In particular, if f is the zero functional, f(x_j) = 0 for each j and hence the scalars c_j are all 0. Now f_1, ..., f_n are n linearly independent functionals, and since we know that V* has dimension n, it must be that f_1, ..., f_n is a basis for V*. This basis is called the dual basis.
We have shown that there exists a unique dual basis f_1, ..., f_n for V*. If f is a linear functional on V then f is some linear combination of the f_i, and the scalars c_j must be given by c_j = f(x_j):

f = Σ_{i=1}^{n} f(x_i) f_i.

Similarly, if

x = Σ_{i=1}^{n} α_i x_i

is a vector in V, then

f_j(x) = Σ_{i=1}^{n} α_i f_j(x_i) = Σ_{i=1}^{n} α_i δ_{ij} = α_j,

so that the unique expression for x as a linear combination of the x_i is

x = Σ_{i=1}^{n} f_i(x) x_i,   f_i(x) = α_i = i-th coordinate of x.

Let M ⊂ V be a subspace and define the annihilator² of M in V as

M⁰ = {f ∈ V* : f(x) = 0 for all x ∈ M} = {f ∈ V* : f(M) = 0} = {f ∈ V* : f|_M = 0}.

² The annihilator is the counterpart of the orthogonal complement.
Example. Let β = {x_1, x_2} = {(2, 1), (3, 1)} be a basis for R². (Points are written x_i = (α_1, α_2).) We find the dual basis of β, given by β* = {f_1, f_2}. To determine formulas for f_1 and f_2, we seek functionals f_1(α_1, α_2) = a_1 α_1 + a_2 α_2 and f_2(α_1, α_2) = b_1 α_1 + b_2 α_2 such that f_1(x_1) = 1, f_1(x_2) = 0, f_2(x_1) = 0, f_2(x_2) = 1. Thus

1 = f_1(x_1) = f_1(2, 1) = 2a_1 + a_2
0 = f_1(x_2) = f_1(3, 1) = 3a_1 + a_2

0 = f_2(x_1) = f_2(2, 1) = 2b_1 + b_2
1 = f_2(x_2) = f_2(3, 1) = 3b_1 + b_2

The solutions yield a_1 = −1, a_2 = 3 and b_1 = 1, b_2 = −2. Hence f_1(α_1, α_2) = −α_1 + 3α_2 and f_2(α_1, α_2) = α_1 − 2α_2; that is, f_1 = (−1, 3), f_2 = (1, −2), form the dual basis.
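The dual basis can also be read off as the rows of the inverse of the matrix whose columns are the basis vectors; a minimal sketch (assuming NumPy):

```python
import numpy as np

X = np.array([[2.0, 1.0],
              [3.0, 1.0]])       # rows: x1 = (2,1), x2 = (3,1)

# With f_i as row vectors, f_i(x_j) = delta_ij means F @ X.T = I.
F = np.linalg.inv(X.T)           # rows of F are the dual functionals
print(F)                         # -> [[-1. 3.], [ 1. -2.]], matching the example
```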
Example. Let β = {x_1, x_2, x_3} = {(1, 0, 1), (0, 2, 0), (−1, 0, 2)} be a basis for R³. (Points are written x_i = (α_1, α_2, α_3).) We find the dual basis of β, given by β* = {f_1, f_2, f_3}. To determine formulas for f_1, f_2, f_3 we seek functionals f_1(α_1, α_2, α_3) = a_1 α_1 + a_2 α_2 + a_3 α_3, f_2(α_1, α_2, α_3) = b_1 α_1 + b_2 α_2 + b_3 α_3, and f_3(α_1, α_2, α_3) = c_1 α_1 + c_2 α_2 + c_3 α_3 such that f_i(x_j) = δ_{ij}:

1 = f_1(x_1) = f_1(1, 0, 1) = a_1 + a_3
0 = f_1(x_2) = f_1(0, 2, 0) = 2a_2
0 = f_1(x_3) = f_1(−1, 0, 2) = −a_1 + 2a_3

0 = f_2(x_1) = f_2(1, 0, 1) = b_1 + b_3
1 = f_2(x_2) = f_2(0, 2, 0) = 2b_2
0 = f_2(x_3) = f_2(−1, 0, 2) = −b_1 + 2b_3

0 = f_3(x_1) = f_3(1, 0, 1) = c_1 + c_3
0 = f_3(x_2) = f_3(0, 2, 0) = 2c_2
1 = f_3(x_3) = f_3(−1, 0, 2) = −c_1 + 2c_3

Thus a_1 = 2/3, a_2 = 0, a_3 = 1/3; b_1 = 0, b_2 = 1/2, b_3 = 0; c_1 = −1/3, c_2 = 0, c_3 = 1/3.

Hence f_1(α_1, α_2, α_3) = (2/3)α_1 + (1/3)α_3, f_2(α_1, α_2, α_3) = (1/2)α_2, f_3(α_1, α_2, α_3) = −(1/3)α_1 + (1/3)α_3; that is, f_1 = (2/3, 0, 1/3), f_2 = (0, 1/2, 0), f_3 = (−1/3, 0, 1/3), form the dual basis.
Example. Let W be the subspace of R⁴ spanned by x_1 = (1, 2, −3, 4) and x_2 = (0, 1, 4, −1). We find a basis for W⁰, the annihilator of W. It suffices to find a basis of the set of linear functionals

f(α_1, α_2, α_3, α_4) = a_1 α_1 + a_2 α_2 + a_3 α_3 + a_4 α_4

for which f(x_1) = 0 and f(x_2) = 0:

f(1, 2, −3, 4) = a_1 + 2a_2 − 3a_3 + 4a_4 = 0,
f(0, 1, 4, −1) = a_2 + 4a_3 − a_4 = 0.

The system of equations in a_1, a_2, a_3, a_4 is in echelon form with free variables a_3 and a_4.

Set a_3 = 1, a_4 = 0 to obtain a_1 = 11, a_2 = −4, a_3 = 1, a_4 = 0:
f_1(α_1, α_2, α_3, α_4) = 11α_1 − 4α_2 + α_3.

Set a_3 = 0, a_4 = 1 to obtain a_1 = −6, a_2 = 1, a_3 = 0, a_4 = 1:
f_2(α_1, α_2, α_3, α_4) = −6α_1 + α_2 + α_4.

The set of linear functionals {f_1, f_2} is a basis of W⁰.
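Numerically, the annihilator is the null space of the matrix whose rows span W; a minimal sketch (assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import null_space

X = np.array([[1.0, 2.0, -3.0, 4.0],
              [0.0, 1.0,  4.0, -1.0]])   # rows span W

# Coefficient vectors a of annihilating functionals satisfy X @ a = 0.
N = null_space(X)        # orthonormal basis of solutions, one per column
print(N.shape)           # -> (4, 2): a two-dimensional annihilator, as above
assert np.allclose(X @ N, 0)
```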
Example. Given the annihilator described by the three linear functionals in R⁴:

f_1(α_1, α_2, α_3, α_4) = α_1 + 2α_2 + 2α_3 + α_4,
f_2(α_1, α_2, α_3, α_4) = 2α_2 + α_4,
f_3(α_1, α_2, α_3, α_4) = −2α_1 − 4α_3 + 3α_4,

we find the subspace it annihilates. After row reduction, we find that the functionals below annihilate the same subspace:

g_1(α_1, α_2, α_3, α_4) = α_1 + 2α_3,
g_2(α_1, α_2, α_3, α_4) = α_2,
g_3(α_1, α_2, α_3, α_4) = α_4.

The subspace annihilated consists of the vectors with α_1 = −2α_3, α_2 = α_4 = 0. Thus the subspace that is annihilated is given by span{(−2, 0, 1, 0)}.
Proposition. If M ⊂ V is a subspace of a finite dimensional space and x_1, ..., x_n is a basis for V such that M = span{x_1, ..., x_m}, then M⁰ = span{f_{m+1}, ..., f_n}, where f_1, ..., f_n is the dual basis. In particular we have

dim(M) + dim(M⁰) = dim(V) = dim(V*).

Proof. Let x_1, ..., x_m be a basis for M; M = span{x_1, ..., x_m}. Extend to x_1, ..., x_n, a basis for V. Construct a dual basis f_1, ..., f_n for V*, f_i(x_j) = δ_{ij}.
We show that f_{m+1}, ..., f_n is a basis for M⁰. First, show M⁰ = span{f_{m+1}, ..., f_n}. Let f ∈ M⁰. Then

f = Σ_{i=1}^{n} c_i f_i = Σ_{i=1}^{n} f(x_i) f_i = Σ_{i=1}^{m} f(x_i) f_i + Σ_{i=m+1}^{n} f(x_i) f_i = Σ_{i=m+1}^{n} f(x_i) f_i ∈ span{f_{m+1}, ..., f_n}.

Second, f_{m+1}, ..., f_n are linearly independent, since f_{m+1}, ..., f_n is a subset of a basis for V*. Thus, dim(M⁰) = n − m = dim(V) − dim(M).
Theorem. Let W_1 and W_2 be subspaces of a finite-dimensional vector space. Then W_1 = W_2 ⇔ W_1⁰ = W_2⁰.

Proof. If W_1 = W_2, then of course W_1⁰ = W_2⁰.
If W_1 ≠ W_2, then one of the two subspaces contains a vector which is not in the other. Suppose there is a vector x ∈ W_2 but x ∉ W_1. There is a linear functional f such that f(z) = 0 for all z ∈ W_1, but f(x) ≠ 0. Then f ∈ W_1⁰ but f ∉ W_2⁰, and W_1⁰ ≠ W_2⁰.
Theorem. Let W be a subspace of a finite-dimensional vector space V. Then W = W⁰⁰ (identifying V with V**).

Proof. dim W + dim W⁰ = dim V,
dim W⁰ + dim W⁰⁰ = dim V*,
and since dim V = dim V* we have dim W = dim W⁰⁰. Since W ⊂ W⁰⁰, we see that W = W⁰⁰.
Proposition. Assume that the finite dimensional space V = M ⊕ N. Then also V* = M⁰ ⊕ N⁰, and the restriction maps V* → M* and V* → N* give isomorphisms

M⁰ ≅ N*,   N⁰ ≅ M*.

Proof. Select a basis x_1, ..., x_n for V such that M = span{x_1, ..., x_m} and N = span{x_{m+1}, ..., x_n}. Then let f_1, ..., f_n be the dual basis and simply observe that M⁰ = span{f_{m+1}, ..., f_n} and N⁰ = span{f_1, ..., f_m}. This proves that V* = M⁰ ⊕ N⁰.
Next we note that

dim(M⁰) = dim(V) − dim(M) = dim(N) = dim(N*).

So at least M⁰ and N* have the same dimension. Also if we restrict f_{m+1}, ..., f_n to N, then we still have that f_i(x_j) = δ_{ij} for i, j = m+1, ..., n. As N = span{x_{m+1}, ..., x_n}, this means that f_{m+1}|_N, ..., f_n|_N form a basis for N*, so M⁰ ≅ N*. The proof that N⁰ ≅ M* is similar.
5.2 Dual Maps

The dual space construction leads to a dual map L' : W* → V* for a linear map L : V → W. This dual map is a substitute for the adjoint to L and is related to the transpose of the matrix representation of L. The definition is L'(g) = g ∘ L. Thus if g ∈ W* we get a linear functional g ∘ L : V → F, since L : V → W. The dual to L is often denoted L' = L^t. The dual map satisfies (L(x)|g) = (x|L'(g)) for all x ∈ V and g ∈ W*, where (x|f) denotes the pairing f(x).
Generalized Fredholm Alternative. Let L : V → W be a linear map between finite dimensional vector spaces. Then

ker(L) = im(L')⁰,
ker(L') = im(L)⁰,
ker(L)⁰ = im(L'),
ker(L')⁰ = im(L).

Proof. Using L'' = L and M⁰⁰ = M, the four statements are equivalent, so it suffices to prove the first. Recall

ker L = {x ∈ V : Lx = 0},
im(L) = {Lx : x ∈ V},
im(L') = {L'(g) : g ∈ W*},
im(L')⁰ = {x ∈ V : (x|L'(g)) = 0 for all g ∈ W*} = {x ∈ V : g(L(x)) = 0 for all g ∈ W*}.

If x ∈ ker L, then x ∈ im(L')⁰. Conversely, if g(L(x)) = 0 for all g ∈ W*, then Lx = 0, so x ∈ ker L.

Rank Theorem. Let L : V → W be a linear map between finite dimensional vector spaces. Then

rank(L) = rank(L').

Proof. dim V = dim(ker(L)) + dim(im(L)) = dim(im(L')⁰) + dim(im(L)) = dim V − dim(im(L')) + dim(im(L)), hence dim(im(L')) = dim(im(L)).
6 Problems

Cross Product. For a = (a_1, a_2, a_3), b = (b_1, b_2, b_3):

a × b = \begin{vmatrix} i & j & k \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} = (a_2b_3 − a_3b_2)i + (a_3b_1 − a_1b_3)j + (a_1b_2 − a_2b_1)k
= ( \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}, \begin{vmatrix} a_3 & a_1 \\ b_3 & b_1 \end{vmatrix}, \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} ).
Problem (F03, #9).
Consider a 3×3 real symmetric matrix with determinant 6. Assume (1, 2, 3) and (0, 3, −2) are eigenvectors with eigenvalues 1 and 2.
a) Give an eigenvector of the form (1, x, y) for some real x, y which is linearly independent of the two vectors above.
b) What is the eigenvalue of this eigenvector?

Proof. a) Since A is real and symmetric, A is self-adjoint. By the spectral theorem, eigenvectors corresponding to distinct eigenvalues are orthogonal; indeed v_1 · v_2 = (1, 2, 3) · (0, 3, −2) = 0, so the two given vectors are orthogonal. We cross v_1 and v_2 to obtain a linearly independent vector v_3:

v_3 = v_1 × v_2 = \begin{vmatrix} i & j & k \\ 1 & 2 & 3 \\ 0 & 3 & −2 \end{vmatrix} = ( \begin{vmatrix} 2 & 3 \\ 3 & −2 \end{vmatrix}, \begin{vmatrix} 3 & 1 \\ −2 & 0 \end{vmatrix}, \begin{vmatrix} 1 & 2 \\ 0 & 3 \end{vmatrix} ) = (−13, 2, 3).

Thus the required vector is e_3 = v_3/(−13) = (1, −2/13, −3/13).

b) Since A is self-adjoint, by the spectral theorem there exists an orthonormal basis of eigenvectors and a real diagonal matrix D such that

A = ODO* = [e_1 e_2 e_3] \begin{pmatrix} λ_1 & 0 & 0 \\ 0 & λ_2 & 0 \\ 0 & 0 & λ_3 \end{pmatrix} [e_1 e_2 e_3]*.

Since O is orthogonal, OO* = I, i.e. O* = O^{−1}, and A = ODO^{−1}. Note

det A = det(ODO^{−1}) = det(O) det(D) det(O^{−1}) = det(O) det(D) (1/det(O)) = det(D) = λ_1λ_2λ_3.

Thus 6 = det A = det D = λ_1λ_2λ_3, and since λ_1 = 1, λ_2 = 2, we get λ_3 = 3.
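A minimal numeric check of this problem (a sketch assuming NumPy): reassemble A = ODO^T from the eigenvectors found above and verify its properties.

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 3.0, -2.0])
v3 = np.cross(v1, v2)                     # -> (-13, 2, 3)
O = np.column_stack([v / np.linalg.norm(v) for v in (v1, v2, v3)])
A = O @ np.diag([1.0, 2.0, 3.0]) @ O.T

assert np.isclose(np.linalg.det(A), 6.0)  # det A = 1 * 2 * 3 = 6
assert np.allclose(A, A.T)                # A is symmetric
assert np.allclose(A @ v3, 3.0 * v3)      # v3 has eigenvalue 3
```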
Problem (S02, #9).
Find the matrix representation in the standard basis for either rotation by an angle θ in the plane perpendicular to the subspace spanned by the vectors (1, 1, 1, 1) and (1, 1, 1, 0) in R⁴.

Proof. x_1 = (1, 1, 1, 1), x_2 = (1, 1, 1, 0). span{(1, 1, 1, 1), (1, 1, 1, 0)} = span{e_1, e_2}; the rotation happens in the orthogonal complement span{e_3, e_4}:

T = [e_1 e_2 e_3 e_4] \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos θ & −\sin θ \\ 0 & 0 & \sin θ & \cos θ \end{pmatrix} [e_1 e_2 e_3 e_4]^T.

Gram-Schmidt on x_1, x_2:

e_1 = ½(1, 1, 1, 1)^T,
z_2 = x_2 − (x_2|e_1)e_1 = (1, 1, 1, 0)^T − (3/2)·½(1, 1, 1, 1)^T = ¼(1, 1, 1, −3)^T,
e_2 = z_2/||z_2|| = (1/(2√3))(1, 1, 1, −3)^T.

The orthogonal complement span{e_3, e_4} has basis (1, −1, 0, 0)^T, (1, 1, −2, 0)^T; normalizing,

e_3 = (1/√2)(1, −1, 0, 0)^T,   e_4 = (1/√6)(1, 1, −2, 0)^T.
Problem (F01, #8).
T : R³ → R³ is rotation by 60° counterclockwise in the plane perpendicular to (1, 1, 1).
S : R³ → R³ is reflection about the plane perpendicular to (1, 0, 1).
Determine the matrix representation of S∘T in the standard basis (1, 0, 0), (0, 1, 0), (0, 0, 1).

Proof. Rotation:

T = [e_1 e_2 e_3] \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos θ & −\sin θ \\ 0 & \sin θ & \cos θ \end{pmatrix} [e_1 e_2 e_3]^T = [e_1 e_2 e_3] \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & −√3/2 \\ 0 & √3/2 & 1/2 \end{pmatrix} [e_1 e_2 e_3]^T,

where e_1, e_2, e_3 is an orthonormal basis with e_1 = e_2 × e_3, e_2 = e_3 × e_1, e_3 = e_1 × e_2:

e_1 = (1/√3)(1, 1, 1)^T,   e_2 = (1/√2)(1, −1, 0)^T,   e_3 = (1/√6)(1, 1, −2)^T.

Check: e_3 = e_1 × e_2 = (1/√3)(1/√2) \begin{vmatrix} i & j & k \\ 1 & 1 & 1 \\ 1 & −1 & 0 \end{vmatrix} = ( (1/√6)\begin{vmatrix} 1 & 1 \\ −1 & 0 \end{vmatrix}, ·, · ) = (1/√6, ·, ·), so e_3 = +(1/√6)(1, 1, −2)^T.

T = \begin{pmatrix} 1/√3 & 1/√2 & 1/√6 \\ 1/√3 & −1/√2 & 1/√6 \\ 1/√3 & 0 & −2/√6 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & −√3/2 \\ 0 & √3/2 & 1/2 \end{pmatrix} \begin{pmatrix} 1/√3 & 1/√3 & 1/√3 \\ 1/√2 & −1/√2 & 0 \\ 1/√6 & 1/√6 & −2/√6 \end{pmatrix} = P T' P^{−1}.

Reflection:

S = [f_1 f_2 f_3] \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & −1 \end{pmatrix} [f_1 f_2 f_3]^T,

f_1 = (1/√2)(1, 0, −1)^T,   f_2 = (0, 1, 0)^T,   f_3 = (1/√2)(1, 0, 1)^T.

Check: f_3 = f_1 × f_2 = (1/√2) \begin{vmatrix} i & j & k \\ 1 & 0 & −1 \\ 0 & 1 & 0 \end{vmatrix} = ( (1/√2)\begin{vmatrix} 0 & −1 \\ 1 & 0 \end{vmatrix}, ·, · ) = (1/√2, ·, ·), so f_3 = (1/√2)(1, 0, 1)^T.

S = \begin{pmatrix} 1/√2 & 0 & 1/√2 \\ 0 & 1 & 0 \\ −1/√2 & 0 & 1/√2 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & −1 \end{pmatrix} \begin{pmatrix} 1/√2 & 0 & −1/√2 \\ 0 & 1 & 0 \\ 1/√2 & 0 & 1/√2 \end{pmatrix} = O S' O^{−1}.

S ∘ T = (O S' O^{−1})(P T' P^{−1}).
Problem (F02, #8).
Let T be the rotation by an angle of 60° counterclockwise about the origin in the plane perpendicular to (1, 1, 2) in R³.
i) Find the matrix representation of T in the standard basis. Find all eigenvalues and eigenspaces of T.
ii) What are the eigenvalues and eigenspaces of T if R³ is replaced by C³?

Proof. i)

T = [e_1 e_2 e_3] \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & −√3/2 \\ 0 & √3/2 & 1/2 \end{pmatrix} [e_1 e_2 e_3]^T,

where e_1, e_2, e_3 is an orthonormal basis with e_1 = e_2 × e_3, e_2 = e_3 × e_1, e_3 = e_1 × e_2:

e_1 = (1/√6)(1, 1, 2)^T,   e_2 = (1/√2)(1, −1, 0)^T,   e_3 = (1/√3)(1, 1, −1)^T.

Check: e_3 = e_1 × e_2 = (1/√6)(1/√2) \begin{vmatrix} i & j & k \\ 1 & 1 & 2 \\ 1 & −1 & 0 \end{vmatrix} = ( (1/√12)\begin{vmatrix} 1 & 2 \\ −1 & 0 \end{vmatrix}, ·, · ) = (1/√3, ·, ·), so e_3 = +(1/√3)(1, 1, −1)^T.

We know T(e_1) = e_1, so λ = 1 is an eigenvalue with eigenspace span{e_1}. If z ∈ R³, write z = αe_1 + w with w ∈ span{e_2, e_3}. Then T(z) = αe_1 + T(w), where T(w) is w rotated by 60° in span{e_2, e_3}. So if T(z) = λz, we must have λ(αe_1 + w) = αe_1 + T(w) with T(w) ∈ span{e_2, e_3}; hence λαe_1 = αe_1 and T(w) = λw, which is impossible unless w = 0. There are no more real eigenvalues or eigenvectors.

Any 3-D rotation has 1 for an eigenvalue: any vector lying along the axis of the rotation is unchanged by the rotation, and is therefore an eigenvector corresponding to eigenvalue 1. The line formed by these eigenvectors is the axis of rotation. For the special case of the null rotation, every vector in 3-D space is an eigenvector corresponding to eigenvalue 1.

Any 3-D reflection has two eigenvalues: −1 and 1. Any vector orthogonal to the plane of the mirror is reversed in direction by the reflection, without its size being changed; that is, the reflected vector is −1 times the original, and so the vector is an eigenvector corresponding to eigenvalue −1. The set formed by all these eigenvectors is the line orthogonal to the plane of the mirror. On the other hand, any vector in the plane of the mirror is unchanged by the reflection: it is an eigenvector corresponding to eigenvalue 1. The set formed by all these eigenvectors is the plane of the mirror. Any vector that is neither in the plane of the mirror nor orthogonal to it is not an eigenvector of the reflection.

ii) T : C³ → C³. For

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & −√3/2 \\ 0 & √3/2 & 1/2 \end{pmatrix},

χ(t) = (t − 1)(t² − t + 1), whose roots are t = 1, e^{iπ/3}, e^{−iπ/3}. These are then the three distinct eigenvalues, each with a one-complex-dimensional eigenspace. Finding the e^{±iπ/3}-eigenvectors of the block \begin{pmatrix} 1/2 & −√3/2 \\ √3/2 & 1/2 \end{pmatrix} in C² and transporting them back gives the eigenspaces for T : C³ → C³: span{e_1} for λ = 1, span{e_2 − ie_3} for λ = e^{iπ/3}, and span{e_2 + ie_3} for λ = e^{−iπ/3}.
Problem (S03, #8; W02, #9; F01, #10).
Let V be an n-dimensional complex vector space and T : V → V a linear operator. Suppose χ_T has n distinct roots. Show T is diagonalizable.
Let V be an n-dimensional complex vector space and T : V → V a linear operator. Let v_1, ..., v_n be non-zero eigenvectors of distinct eigenvalues in V. Prove that {v_1, ..., v_n} is linearly independent.

Proof. Since F = C, any root of χ_T is also an eigenvalue, so we have λ_1, ..., λ_n distinct eigenvalues. Induction on n = dim V.
n = 1: a single nonzero vector is trivially linearly independent.
n > 1: let v_1, ..., v_n be non-zero eigenvectors in V with λ_1, ..., λ_n distinct eigenvalues. If

α_1 v_1 + ··· + α_n v_n = 0,   (6.1)

we want to show all α_i = 0. Applying T,

T(α_1 v_1 + ··· + α_n v_n) = T(0) = 0,
α_1 Tv_1 + ··· + α_n Tv_n = 0,
α_1 λ_1 v_1 + ··· + α_n λ_n v_n = 0.   (6.2)

Multiplying (6.1) by λ_n and subtracting off (6.2), we get

α_1(λ_n − λ_1)v_1 + ··· + α_{n−1}(λ_n − λ_{n−1})v_{n−1} = 0.

Since v_1, ..., v_{n−1} are linearly independent (induction hypothesis), and λ_i ≠ λ_j for i ≠ j, we get α_1 = ··· = α_{n−1} = 0. Then by (6.1), α_n v_n = 0 ⇒ α_n = 0, since v_n is non-zero. Thus α_1 = ··· = α_n = 0, and v_1, ..., v_n are linearly independent.

Having shown v_1, ..., v_n are linearly independent, they generate an n-dimensional subspace which is then all of V. Hence v_1, ..., v_n gives a basis of eigenvectors, so T is diagonalizable.
Problem (F01, #9).
Let A be a real symmetric matrix. Prove that there exists an invertible matrix P such that P^{−1}AP is diagonal.

Proof. Let V = R^n with the standard inner product. Since A is a real symmetric matrix, A^t = A, so A is self-adjoint. Let T be the linear operator on V which is represented by A in the standard ordered basis S:

   V_S --[A]_S--> V_S
    ↑              ↑
    P              P
    |              |
   V_B --[A]_B--> V_B

Since T is self-adjoint on V, there exists an orthonormal basis B = {v_1, ..., v_n} of eigenvectors of T. Then Tv_i = λ_i v_i, i = 1, ..., n, where the λ_i are eigenvalues of T. Let D = [A]_B. Let P be the matrix with v_1, ..., v_n as column vectors. Then

[A]_B = P^{−1}[A]_S P = P^{−1}[A]_S [v_1 ··· v_n] = P^{−1}[Av_1 ··· Av_n] = P^{−1}[λ_1v_1 ··· λ_nv_n].

With this choice, P is orthogonal with real entries, det P = ±1 (P invertible), and P^{−1} = P^t:

[A]_B = P^t[λ_1v_1 ··· λ_nv_n] = \begin{pmatrix} v_1^t \\ \vdots \\ v_n^t \end{pmatrix} [λ_1v_1 ··· λ_nv_n] = \begin{pmatrix} λ_1 & & 0 \\ & \ddots & \\ 0 & & λ_n \end{pmatrix},

since the v_i are orthonormal. Then D = P^t[A]_S P, i.e. D = P^{−1}AP.
Problem (S03, #9).
Let A ∈ M_3(R) satisfy det(A) = 1 and A^tA = AA^t = I_{R³}. Prove that the characteristic polynomial of A has 1 as a root (i.e. 1 is an eigenvalue of A).

Proof. χ_A(t) = det(A − tI) = −(t − λ_1)(t − λ_2)(t − λ_3), with λ_1, λ_2, λ_3 ∈ C, using the fundamental theorem of algebra. A is real, and a real polynomial of odd degree has at least one real root, so we may take λ_1 ∈ R.
Case 1: λ_2, λ_3 ∈ R.
Case 2: λ_2 = λ, λ_3 = λ̄ (a conjugate pair).
det(A) = 1 = λ_1λ_2λ_3, since the determinant is the product of the eigenvalues.
A^tA = AA^t = I_{R³} ⇒ A orthogonal ⇒ A, viewed as an element of M_3(C), is unitary, since A* = Ā^T = A^T. Let λ_1, λ_2, λ_3 be eigenvalues for A as a unitary transformation U; if Ux_i = λ_ix_i, then

(x_i|x_i) = (U*Ux_i|x_i) = (Ux_i|Ux_i) = (λ_ix_i|λ_ix_i) = |λ_i|²(x_i|x_i)   (U is unitary),

so |λ_i|² = 1.
Case 1: λ_2, λ_3 ∈ R ⇒ λ_2, λ_3 = ±1, and λ_1λ_2λ_3 = 1 forces one or three of the eigenvalues to equal +1.
Case 2: λ_2λ_3 = λλ̄ = |λ|² = 1, so λ_1 = 1.
Problem (S03, #10).
Let T : R^n → R^n be symmetric³ with tr(T²) = 0. Show that T = 0.

Proof. By the spectral theorem, T = ODO*, where O is orthogonal and D is diagonal with real entries. Then

T² = ODO*ODO* = OD²O*,   where D = \begin{pmatrix} λ_1 & & 0 \\ & \ddots & \\ 0 & & λ_n \end{pmatrix}.

0 = tr(T²) = tr(OD²O*) = tr(O*OD²) = tr(D²) = λ_1² + ··· + λ_n².

Hence every λ_i = 0, since λ_i² ≥ 0; so D = 0 and T = ODO* = 0.

³ Symmetric over R ⟺ self-adjoint ⟺ Hermitian.
Problem (W02, #10).
Let V be a finite dimensional complex inner product space and f : V → C a linear functional. Show f(x) = (x|y) for some y.

Proof. Select an orthonormal basis e_1, ..., e_n, and let y = \overline{f(e_1)}e_1 + ··· + \overline{f(e_n)}e_n. Then

(x|y) = (x | \overline{f(e_1)}e_1 + ··· + \overline{f(e_n)}e_n) = f(e_1)(x|e_1) + ··· + f(e_n)(x|e_n)
= f((x|e_1)e_1 + ··· + (x|e_n)e_n) = f(x),   since f is linear.

We can also show that y is unique. Suppose y' is another vector in V for which f(x) = (x|y') for every x ∈ V. Then (x|y) = (x|y') for all x, so (y − y'|y − y') = 0, and y = y'.
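A minimal numeric sketch of this Riesz-representation construction (assuming NumPy; the functional below, f(x) = 2x_0 − ix_1 on C², is an arbitrary illustrative choice, and (x|y) = x^t ȳ as in Section 2.1):

```python
import numpy as np

c = np.array([2.0, -1.0j])               # f(x) = c . x
f = lambda x: c @ x

e = np.eye(2)                            # standard orthonormal basis of C^2
y = sum(np.conj(f(e[k])) * e[k] for k in range(2))   # y = sum conj(f(e_k)) e_k

x = np.array([1.0 + 2.0j, 3.0])
assert np.isclose(f(x), x @ np.conj(y))  # f(x) = (x|y)
```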
Problem (S02, #11).
Let V be a finite dimensional real inner product space and T, S : V → V two commuting (i.e. ST = TS) self-adjoint linear operators. Show that there exists an orthonormal basis that simultaneously diagonalizes S and T.

Proof. Since T is self-adjoint, there is an ordered orthonormal basis {v_1, ..., v_n} of eigenvectors corresponding to eigenvalues λ_1, ..., λ_n for T, with v_i ∈ E_{λ_i}(T), and

V = E_{λ_1}(T) ⊕ ··· ⊕ E_{λ_n}(T).

v_i ∈ E_{λ_i}(T) ⇒ Tv_i = λ_iv_i. Then

T(Sv_i) = S(Tv_i) = S(λ_iv_i) = λ_i(Sv_i) ⇒ Sv_i ∈ E_{λ_i}(T).

Thus E_{λ_i}(T) is invariant under S, i.e. S : E_{λ_i}(T) → E_{λ_i}(T). Since S|_{E_{λ_i}(T)} is self-adjoint, there is an ordered orthonormal basis β_i of eigenvectors of S for E_{λ_i}(T). Then β = ∪_{i=1}^{n} β_i is an orthonormal basis diagonalizing both S and T.
Problem (S02, #10).
Let V be a complex inner product space and W a finite dimensional subspace. Let v ∈ V. Prove that there exists a unique vector v_W ∈ W such that

||v − v_W|| ≤ ||v − w||   for all w ∈ W.

Deduce that equality holds if and only if w = v_W.

Proof. v_W is the orthogonal projection of v onto W. Choose an orthonormal basis e_1, ..., e_n for W. Then define

proj_W(x) = (x|e_1)e_1 + ··· + (x|e_n)e_n.

Claim: v_W = proj_W(v).

Show: x − proj_W(x) ⊥ proj_W(x):

( x − (x|e_1)e_1 − ··· − (x|e_n)e_n | (x|e_1)e_1 + ··· + (x|e_n)e_n )
= ( x | (x|e_1)e_1 + ··· + (x|e_n)e_n ) − ( (x|e_1)e_1 + ··· + (x|e_n)e_n | (x|e_1)e_1 + ··· + (x|e_n)e_n )
= (x|e_1)\overline{(x|e_1)} + ··· + (x|e_n)\overline{(x|e_n)} − Σ_{i=1}^{n} Σ_{j=1}^{n} (x|e_i)\overline{(x|e_j)}(e_i|e_j)
= Σ_{i=1}^{n} |(x|e_i)|² − Σ_{i=1}^{n} |(x|e_i)|² = 0,   since (e_i|e_j) = δ_{ij}.

In fact, x − proj_W(x) ⊥ proj_W(x) − w for every w ∈ W (second computation below). Hence, by Pythagoras,

||x − w||² = ||x − proj_W(x) + proj_W(x) − w||² = ||x − proj_W(x)||² + ||proj_W(x) − w||²,

so ||x − proj_W(x)|| ≤ ||x − w||, with equality exactly when ||proj_W(x) − w|| = 0, i.e. proj_W(x) = w.

Show: x − proj_W(x) ⊥ w for all w ∈ W, using w = (w|e_1)e_1 + ··· + (w|e_n)e_n:

( x − (x|e_1)e_1 − ··· − (x|e_n)e_n | (w|e_1)e_1 + ··· + (w|e_n)e_n )
= ( x | (w|e_1)e_1 + ··· + (w|e_n)e_n ) − ( (x|e_1)e_1 + ··· + (x|e_n)e_n | (w|e_1)e_1 + ··· + (w|e_n)e_n )
= Σ_{i=1}^{n} \overline{(w|e_i)}(x|e_i) − Σ_{i=1}^{n} Σ_{j=1}^{n} (x|e_i)\overline{(w|e_j)}(e_i|e_j)
= Σ_{i=1}^{n} \overline{(w|e_i)}(x|e_i) − Σ_{i=1}^{n} (x|e_i)\overline{(w|e_i)} = 0,   since (e_i|e_j) = δ_{ij}.
Problem (F03, #10).
a) Let t ∈ R such that t is not an integer multiple of π. For the matrix

A = \begin{pmatrix} \cos(t) & −\sin(t) \\ \sin(t) & \cos(t) \end{pmatrix}

prove there does not exist a real valued matrix B such that BAB^{−1} is a diagonal matrix.
b) Do the same for the matrix A = \begin{pmatrix} λ & 1 \\ 0 & λ \end{pmatrix}, where λ ∈ R ∖ {0}.

Proof. a) det(A − λI) = \begin{vmatrix} \cos(t) − λ & −\sin(t) \\ \sin(t) & \cos(t) − λ \end{vmatrix} = λ² − 2λ\cos t + 1, so

λ_{1,2} = \cos t ± √(\cos²t − 1) = a ± ib,   b ≠ 0,

i.e. λ_{1,2} ∉ R. Hence the eigenvalues and eigenvectors are not real, and there is no B ∈ M(R) such that BAB^{−1} is diagonal.

b) λ_{1,2} = λ. We find the eigenvectors:

\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Thus all eigenvectors are multiples of v = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, i.e. linearly dependent, so there does not exist a basis for R² consisting of eigenvectors of A. Therefore there is no B ∈ M(R) such that BAB^{−1} is diagonal.
Problem (F02, #10) (Spectral Theorem for Normal Operators).
Let A ∈ Mat_{n×n}(C) satisfy A*A = AA*, i.e. A is normal. Show that there is an orthonormal basis of eigenvectors of A.
Rephrase: for L : V → V, V a complex finite dimensional inner product space.

Proof. Prove this by induction on dim V. Since L is complex linear, we can use the Fundamental Theorem of Algebra to find λ ∈ C and x ∈ V ∖ {0} so that L(x) = λx, and hence L*(x) = λ̄x, since ker(L − λI_V) = ker(L* − λ̄I_V). Let x^⊥ = {z ∈ V : (z|x) = 0}, the orthogonal complement to x. To apply induction, we need to show that x^⊥ is invariant under L, i.e. L(x^⊥) ⊂ x^⊥. Let z ∈ x^⊥ and show Lz ∈ x^⊥:

(L(z)|x) = (z|L*(x)) = (z|λ̄x) = λ(z|x) = 0.

Similarly, x^⊥ is invariant under L*, i.e. L* : x^⊥ → x^⊥, since

(L*(z)|x) = (z|L(x)) = (z|λx) = λ̄(z|x) = 0.

Check that L|_{x^⊥} is normal: L*|_{x^⊥} = (L|_{x^⊥})* since (L(z)|y) = (z|L*y) for all z, y ∈ x^⊥.
Problem (W02, #11).
Let V be a finite dimensional complex inner product space and L : V → V a linear transformation. Show that we can find an orthonormal basis so that [L] is upper triangular.

Proof. [L] is upper triangular with respect to e_1, ..., e_n exactly when

[L(e_1) ··· L(e_n)] = [e_1 ··· e_n] \begin{pmatrix} α_{11} & α_{12} & \cdots & α_{1n} \\ 0 & α_{22} & \cdots & α_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & α_{nn} \end{pmatrix}.

In particular L(e_1) = α_{11}e_1, so e_1 is an eigenvector with eigenvalue α_{11}.

dim V = 2: L : V → V complex, so we can pick e_1 ∈ V with Le_1 = α_{11}e_1; pick e_2 ⊥ e_1. Then

[L(e_1) L(e_2)] = [e_1 e_2] \begin{pmatrix} α_{11} & α_{12} \\ 0 & α_{22} \end{pmatrix}   (upper triangular),

i.e. L(e_1) = α_{11}e_1 and L(e_2) = α_{12}e_1 + α_{22}e_2.

Observe: L(e_k) ∈ span{e_1, ..., e_k} ⇔ L(e_1), ..., L(e_k) ∈ span{e_1, ..., e_k}. So we seek subspaces {0} = M_0 ⊂ M_1 ⊂ ··· ⊂ M_{k−1} ⊂ M_k = V with the property dim M_k = k and L(M_k) ⊂ M_k = span{e_1, ..., e_k}.

It is enough to show that any linear transformation on an n-dimensional space has an (n−1)-dimensional invariant subspace M ⊂ V. Using this result repeatedly, we can generate such an increasing sequence {0} = M_0 ⊂ M_1 ⊂ ··· ⊂ M_{n−1} ⊂ M_n = V of subspaces that are all invariant under L.

Pick e_1, ..., e_k orthonormal such that span{e_1, ..., e_k} = M_k (e.g. M_1 = span{e_1}, |e_1| = 1; e_2 ∈ M_2 is an orthogonal complement of e_1 in M_2; and so on). Then

L(e_1) ∈ M_1:  L(e_1) = α_{11}e_1,
L(e_2) ∈ M_2:  L(e_2) = α_{12}e_1 + α_{22}e_2,
...
L(e_k) ∈ M_k:  L(e_k) = α_{1k}e_1 + α_{2k}e_2 + ··· + α_{kk}e_k,

which gives an upper triangular form for [L].

To construct M ⊂ V we select x ∈ V ∖ {0} such that L*(x) = λx. (L* : V → V is complex and linear, so by the Fundamental Theorem of Algebra we can find x, λ so that L*(x) = λx.) Then M = x^⊥ has dim M = n − 1; we have to show M is invariant under L. Take z ⊥ x:

(L(z)|x) = (z|L*(x)) = (z|λx) = λ̄(z|x) = 0,

so L(z) ∈ x^⊥. So L(M) ⊂ M and M has dimension n − 1.
Problem (F02, #7; F01, #7).
Let T : V → W be a linear transformation of finite dimensional real vector spaces. Define the transpose of T and then prove both of the following:
1) im(T)⁰ = ker(T^t);
2) rank(T) = rank(T^t) (i.e. dim im(T) = dim im(T^t)).

Proof. Transpose = dual. Let T : V → W be linear. Let T^t = T' : W* → V*, where X* = hom_R(X, R); T^t : W* → V* is linear, defined by

T'(g) = g ∘ T,   V →T→ W,   V* ←T^t← W*.

1) This is a proof of the Generalized Fredholm Alternative:

ker T' = {g ∈ W* : T'(g) = g ∘ T = 0},
im(T) = {T(x) : x ∈ V},
im(T)⁰ = {g ∈ W* : g(T(x)) = 0 for all x ∈ V}.

Now g(T(x)) = 0 for all x ∈ V ⇔ g ∘ T = 0 ⇔ g ∈ ker T'.

2) This is a proof of the Generalized Rank Theorem. rank(T) = dim(im(T)), rank(T^t) = dim(im(T^t)), T^t : W* → V*. Dimension formula:

dim W* = dim(ker(T^t)) + dim(im(T^t)) = dim(im(T)⁰) + dim(im(T^t)) = dim W* − dim(im(T)) + dim(im(T^t)),

hence dim(im(T)) = dim(im(T^t)).
Problem (W02, #8).
Let T : V → W and S : W → X be linear transformations of finite dimensional real vector spaces. Prove that

rank(T) + rank(S) − dim(W) ≤ rank(S ∘ T) ≤ min{rank(T), rank(S)}.

Proof. Note: V →T→ W →S→ X, and rank(S ∘ T) = rank(T) − dim(im T ∩ ker S) (apply the dimension formula to S restricted to T(V)). Then

rank(T) + rank(S) − dim(W) = rank(T) + rank(S) − (dim(ker S) + rank(S))
= rank(T) − dim(ker S) = rank(S ∘ T) + dim(im T ∩ ker S) − dim(ker S) ≤ rank(S ∘ T),

since dim(im T ∩ ker S) − dim(ker S) ≤ 0.

Note: for a subspace M ⊂ V, dim(L(M)) ≤ dim M, a consequence of the dimension formula. Hence

rank(S ∘ T) = dim((S ∘ T)(V)) = dim(S(T(V))) ≤ dim(T(V)) = rank(T).

Alternatively, to prove rank(S ∘ T) ≤ rank(S), note that since T(V) ⊂ W, we also have S(T(V)) ⊂ S(W) and so dim S(T(V)) ≤ dim S(W). Then

rank(S ∘ T) = dim((S ∘ T)(V)) = dim(S(T(V))) ≤ dim S(W) = rank(S).
Problem (S03, #7).
Let V be a finite dimensional real vector space. Let W ⊂ V be a subspace and W⁰ = {f : V → F linear | f = 0 on W}. Let W_1, W_2 ⊂ V be subspaces. Prove that

W_1⁰ ∩ W_2⁰ = (W_1 + W_2)⁰.

Proof. W⁰ = {f ∈ V* | f|_W = 0}. Write similar definitions for W_1⁰, W_2⁰, (W_1 + W_2)⁰, and W_1⁰ ∩ W_2⁰ and make two observations:
1) W_1, W_2 ⊂ W_1 + W_2, so (W_1 + W_2)⁰ ⊂ W_1⁰ and (W_1 + W_2)⁰ ⊂ W_2⁰; hence (W_1 + W_2)⁰ ⊂ W_1⁰ ∩ W_2⁰.
2) Suppose f ∈ W_1⁰ ∩ W_2⁰. Then f|_{W_1} = 0 and f|_{W_2} = 0, so f|_{W_1 + W_2} = 0, i.e. f ∈ (W_1 + W_2)⁰. Thus W_1⁰ ∩ W_2⁰ ⊂ (W_1 + W_2)⁰.
Problem (S02, #8).
Let V be a finite dimensional real vector space. Let M ⊂ V be a subspace and M⁰ = {f : V → F linear | f = 0 on M}. Prove that

dim(V) = dim(M) + dim(M⁰).

Proof. Let x_1, ..., x_m be a basis for M; M = span{x_1, ..., x_m}. Extend to x_1, ..., x_n, a basis for V. Construct a dual basis f_1, ..., f_n for V*, f_i(x_j) = δ_{ij}.
We show that f_{m+1}, ..., f_n is a basis for M⁰. First, show M⁰ = span{f_{m+1}, ..., f_n}. Let f ∈ M⁰. Then

f = Σ_{i=1}^{n} c_i f_i = Σ_{i=1}^{n} f(x_i) f_i = Σ_{i=1}^{m} f(x_i) f_i + Σ_{i=m+1}^{n} f(x_i) f_i = Σ_{i=m+1}^{n} f(x_i) f_i ∈ span{f_{m+1}, ..., f_n}.

Second, f_{m+1}, ..., f_n are linearly independent, since f_{m+1}, ..., f_n is a subset of a basis for V*. Thus, dim(M⁰) = n − m = dim(V) − dim(M).
Problem (F03, #8).
Prove the following three statements. You may choose an order of these statements and then use the earlier statements to prove the later statements.
a) If L : V → W is a linear transformation between two finite dimensional real vector spaces V, W, then

dim im L = dim V − dim ker(L).

b) If L : V → V is a linear transformation on a finite dimensional real inner product space and L* is its adjoint, then im(L*) is the orthogonal complement of ker(L) in V.
c) Let A be an n×n real matrix; then the maximal number of linearly independent rows (row rank) equals the maximal number of linearly independent columns (column rank).

Proof. We prove (a) and (b) separately; (a),(b) ⇒ (c).
a) We know that dim ker(L) ≤ dim V and that ker(L) has a complement M of dimension k = dim V − dim ker(L). Since M ∩ ker(L) = {0}, the linear map L must be 1-1 when restricted to M. Thus L|_M : M → im(L) is an isomorphism, i.e. dim im(L) = dim M = k.
b) We want to show ker(L)^⊥ = im(L*). Since M^⊥⊥ = M, we can instead prove ker(L) = im(L*)^⊥:

ker L = {x ∈ V : Lx = 0},   (L : V → W)
im(L) = {Lx : x ∈ V},   (L* : W → V)
im(L*) = {L*y : y ∈ W},
im(L*)^⊥ = {x ∈ V : (x|L*y) = 0 for all y ∈ W} = {x ∈ V : (Lx|y) = 0 for all y ∈ W}.

If x ∈ ker L, then x ∈ im(L*)^⊥. Conversely, if (Lx|y) = 0 for all y ∈ W, then Lx = 0, so x ∈ ker L.
c) Using the dimension formula (a) and the Fredholm alternative (b), we have the rank theorem:

dim V = dim(ker(A)) + dim(im(A)) = dim(im(A*)^⊥) + dim(im(A)) = dim V − dim(im(A*)) + dim(im(A)).

Thus, rank(A) = rank(A*). Conjugation does not change the rank, so rank(A) = rank(A^T). rank(A) is the column rank; rank(A^T) is the row rank of A. Thus, row rank(A) = column rank(A).
We have not proved that conjugation does not change the rank; but here A is real, so A* = A^T, and (b) directly gives rank(A^T) = dim im(A*) = dim ker(A)^⊥ = dim V − dim ker(A) = dim im(A) = rank(A).