LINEAR TRANSFORMATIONS

6.1 WHAT IS A LINEAR TRANSFORMATION?
You know what a function is: it's a RULE which turns NUMBERS INTO OTHER NUMBERS: $f(x) = x^2$ means please turn 3 into 9, 12 into 144, and so on.
Similarly, a TRANSFORMATION is a rule which turns VECTORS into other VECTORS. For example: please rotate all 3-dimensional vectors through an angle of $90^{\circ}$ about some fixed axis. A transformation $T$ is said to be LINEAR if it satisfies $T(c\vec{u}) = cT(\vec{u})$ and $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$.
EXAMPLE: Let $I$ be the rule $I\vec{u} = \vec{u}$ for all $\vec{u}$. You can check that $I$ is linear! It is called the IDENTITY linear transformation.
EXAMPLE: Let $D$ be the rule $D\vec{u} = 2\vec{u}$ for all $\vec{u}$. Then
$$D(c\vec{u}) = 2(c\vec{u}) = c(2\vec{u}) = cD\vec{u}$$
$$D(\vec{u} + \vec{v}) = 2(\vec{u} + \vec{v}) = 2\vec{u} + 2\vec{v} = D\vec{u} + D\vec{v}$$
LINEAR!
Note: Usually we write $D(\vec{u})$ as just $D\vec{u}$.
6.2 THE BASIC BOX, AND THE MATRIX OF A LINEAR TRANSFORMATION

The usual vectors $\vec{i}$ and $\vec{j}$ define a square: let's call this the BASIC BOX in two dimensions. Similarly, $\vec{i}$, $\vec{j}$, and $\vec{k}$ define the BASIC BOX in 3 dimensions.
Now let $T$ be any linear transformation. You know that any 2-dimensional vector can be written as $a\vec{i} + b\vec{j}$, so by linearity $T(a\vec{i} + b\vec{j}) = aT\vec{i} + bT\vec{j}$.
This formula tells us something very important: IF I KNOW WHAT $T$ DOES TO $\vec{i}$ AND $\vec{j}$, THEN I KNOW EVERYTHING ABOUT $T$, because now I can tell you what $T$ does to ANY vector.
EXAMPLE: Suppose I know that $T(\vec{i}) = \vec{i} + \frac{1}{4}\vec{j}$ and $T(\vec{j}) = \frac{1}{4}\vec{i} + \vec{j}$. What is $T(2\vec{i} + 3\vec{j})$?
Answer:
$$T(2\vec{i} + 3\vec{j}) = 2T\vec{i} + 3T\vec{j} = 2\left(\vec{i} + \tfrac{1}{4}\vec{j}\right) + 3\left(\tfrac{1}{4}\vec{i} + \vec{j}\right) = 2\vec{i} + \tfrac{1}{2}\vec{j} + \tfrac{3}{4}\vec{i} + 3\vec{j} = \tfrac{11}{4}\vec{i} + \tfrac{7}{2}\vec{j}.$$
Since $T\vec{i}$ and $T\vec{j}$ are known, we can draw what happens to the basic box: $T(\vec{i}) = \vec{i} + \frac{1}{4}\vec{j}$ and $T(\vec{j}) = \frac{1}{4}\vec{i} + \vec{j}$. The basic box has been squashed a bit! Pictures of WHAT $T$ DOES TO THE BASIC BOX tell us everything about $T$!
EXAMPLE: If $D$ is the transformation $D\vec{u} = 2\vec{u}$, then the Basic Box just gets stretched. So every linear transformation can be pictured by seeing what it does to the Basic Box.
There is another way!
Let $T\vec{i} = \begin{pmatrix} a \\ c \end{pmatrix}$ and $T\vec{j} = \begin{pmatrix} b \\ d \end{pmatrix}$. Then we DEFINE THE MATRIX OF $T$ RELATIVE TO $\vec{i}$, $\vec{j}$ as
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
that is, the first COLUMN tells us what happened to $\vec{i}$, and the second column tells us what happened to $\vec{j}$.
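You can check this recipe numerically. Here is a minimal NumPy sketch of the earlier example (the names Ti, Tj, M are ours, chosen for this illustration):

```python
import numpy as np

# Columns of the matrix of T are T(i) and T(j).
Ti = np.array([1.0, 0.25])    # T(i) = i + (1/4) j
Tj = np.array([0.25, 1.0])    # T(j) = (1/4) i + j
M = np.column_stack([Ti, Tj])

print(M @ np.array([2, 3]))   # T(2i + 3j) = [2.75 3.5] = (11/4) i + (7/2) j
```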
EXAMPLE: Let $I$ be the identity transformation. Then $I\vec{i} = \vec{i} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $I\vec{j} = \vec{j} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, so the matrix of the identity transformation relative to $\vec{i}$, $\vec{j}$ is
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
EXAMPLE: If $D\vec{u} = 2\vec{u}$, then $D\vec{i} = \begin{pmatrix} 2 \\ 0 \end{pmatrix}$ and $D\vec{j} = \begin{pmatrix} 0 \\ 2 \end{pmatrix}$, so the matrix of $D$ relative to $\vec{i}$, $\vec{j}$ is
$$\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}.$$
EXAMPLE: If $T\vec{i} = \vec{i} + \frac{1}{4}\vec{j}$ and $T\vec{j} = \frac{1}{4}\vec{i} + \vec{j}$, then the matrix is
$$\begin{pmatrix} 1 & \frac{1}{4} \\ \frac{1}{4} & 1 \end{pmatrix}.$$
EXAMPLE: If $T\vec{i} = \vec{j}$ and $T\vec{j} = \vec{i}$, the matrix is
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
The basic box is REFLECTED.
EXAMPLE: Suppose in 3 dimensions $T\vec{i} = \vec{i} + 4\vec{j} + 7\vec{k}$, $T\vec{j} = 2\vec{i} + 5\vec{j} + 8\vec{k}$, $T\vec{k} = 3\vec{i} + 6\vec{j} + 9\vec{k}$; then the matrix is
$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix},$$
relative to $\vec{i}$, $\vec{j}$, $\vec{k}$.
EXAMPLE: Suppose $T\vec{i} = \vec{i} + \vec{j} + 2\vec{k}$ and $T\vec{j} = \vec{i} - 3\vec{k}$. Then $T$ turns 2-dimensional vectors into 3-dimensional vectors, and its matrix relative to $\vec{i}$, $\vec{j}$ is the 3 by 2 matrix
$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 2 & -3 \end{pmatrix}.$$
So the matrix of a linear transformation need not be square. Similarly a 3-dimensional linear transformation has a 3 by 3 matrix. In Engineering applications, most linear transformations are 2-dimensional or 3-dimensional, so we are mainly interested in these two cases.
EXAMPLE: Suppose $T$ is a linear transformation that eats 3-dimensional vectors and produces 2-dimensional vectors according to the rule $T\vec{i} = 2\vec{i}$, $T\vec{j} = \vec{i} + \vec{j}$, $T\vec{k} = \vec{i}$ (say). Then the matrix of $T$ is the 2 by 3 matrix
$$\begin{pmatrix} 2 & 1 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
EXAMPLE: A SHEAR parallel to the $x$-axis through an angle $\theta$ satisfies $S\vec{i} = \vec{i}$, $S\vec{j} = \vec{i}\tan\theta + \vec{j}$, so the matrix of $S$ relative to $\vec{i}$, $\vec{j}$ is
$$\begin{pmatrix} 1 & \tan\theta \\ 0 & 1 \end{pmatrix}.$$
EXAMPLE: Suppose $T\vec{i} = \vec{i} + \vec{j}$ and $T\vec{j} = \vec{i} + \vec{j}$. The matrix is
$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$
and the basic box is SQUASHED FLAT!
EXAMPLE: Rotations in the plane. Suppose you ROTATE the whole plane through an angle $\theta$ (anticlockwise). Then simple trigonometry shows you that
$$R\vec{i} = \cos\theta\,\vec{i} + \sin\theta\,\vec{j}$$
$$R\vec{j} = -\sin\theta\,\vec{i} + \cos\theta\,\vec{j}$$
So the rotation matrix is
$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
Application: Suppose an object is moving on a circle at constant angular speed $\omega$. What is its acceleration?
Answer: Let its position vector at $t = 0$ be $\vec{r}_0$. Because the object is moving on a circle, its position at a later time $t$ is given by rotating $\vec{r}_0$ by an angle $\theta(t)$. So
$$\vec{r}(t) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \vec{r}_0.$$
Differentiate:
$$\frac{d\vec{r}}{dt} = \dot{\theta}\begin{pmatrix} -\sin\theta & -\cos\theta \\ \cos\theta & -\sin\theta \end{pmatrix} \vec{r}_0$$
by the chain rule. Here $\dot{\theta}$ is actually $\omega$, so
$$\frac{d\vec{r}}{dt} = \omega\begin{pmatrix} -\sin\theta & -\cos\theta \\ \cos\theta & -\sin\theta \end{pmatrix} \vec{r}_0.$$
Differentiate again:
$$\frac{d^2\vec{r}}{dt^2} = \omega^2\begin{pmatrix} -\cos\theta & \sin\theta \\ -\sin\theta & -\cos\theta \end{pmatrix} \vec{r}_0 = -\omega^2\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \vec{r}_0.$$
Substitute the equation for $\vec{r}(t)$:
$$\frac{d^2\vec{r}}{dt^2} = -\omega^2\,\vec{r},$$
which is a formula you know from physics.
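If you have SymPy available, you can let the computer do the differentiation. This is a minimal sketch of the same calculation, taking $\theta(t) = \omega t$:

```python
import sympy as sp

t, w = sp.symbols("t omega", real=True)
x0, y0 = sp.symbols("x0 y0", real=True)
theta = w * t
R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])
r = R * sp.Matrix([x0, y0])           # r(t) = R(theta) r0
accel = r.diff(t, 2)
print(sp.simplify(accel + w**2 * r))  # Matrix([[0], [0]]): d^2r/dt^2 = -omega^2 r
```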
6.3 COMPOSITE TRANSFORMATIONS AND MATRIX MULTIPLICATION

You know what it means to take the COMPOSITE of two functions: if $f(u) = \sin(u)$ and $u(x) = x^2$, then $f \circ u$ means: please do $u$ FIRST, THEN $f$, so $f \circ u(x) = \sin(x^2)$. NOTE THE ORDER!! $u \circ f(x) = \sin^2(x)$, NOT the same!
Similarly if $A$ and $B$ are linear transformations, then $AB$ means do $B$ FIRST, then $A$.
NOTE: BE CAREFUL! According to our definition, $A$ and $B$ both eat vectors and both produce vectors. But then you have to take care that $A$ can eat what $B$ produces!
EXAMPLE: Suppose $A$ eats and produces 2-dimensional vectors, and $B$ eats and produces 3-dimensional vectors. Then $AB$ would not make sense!
EXAMPLE: Suppose $B$ eats 2-d vectors and produces 3-d vectors (so its matrix relative to $\vec{i}$, $\vec{j}$, $\vec{k}$ looks like this:
$$\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{pmatrix},$$
a 3 by 2 matrix) and suppose $A$ eats 3-d vectors and produces 2-d vectors. Then $AB$ DOES make sense, because $A$ can eat what $B$ produces. (In this case, $BA$ also makes sense.)
IMPORTANT FACT: Suppose $a_{ij}$ is the matrix of a linear transformation $A$, and suppose $b_{ij}$ is the matrix of the linear transformation $B$, both relative to the same basis $\vec{i}$, $\vec{j}$ (or $\vec{i}$, $\vec{j}$, $\vec{k}$). Then the matrix of $AB$ is just the matrix product of $a_{ij}$ and $b_{ij}$.
EXAMPLE: What happens to the vector $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ if we shear through $45^{\circ}$ and then rotate through $90^{\circ}$? And what if we rotate first and then shear? A shear has matrix
$$\begin{pmatrix} 1 & \tan\theta \\ 0 & 1 \end{pmatrix},$$
so in this case it is $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. A rotation through $\theta$ has matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$, so here it is $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. Hence
SHEAR, THEN ROTATE:
$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 1 \end{pmatrix}$$
ROTATE, THEN SHEAR:
$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix}.$$
So shear, then rotate:
$$\begin{pmatrix} 0 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} -2 \\ 3 \end{pmatrix}.$$
Rotate, then shear:
$$\begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}.$$
Very different!
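A quick NumPy check of this order-dependence (a sketch; the variable names are ours):

```python
import numpy as np

S = np.array([[1, 1],    # shear through 45 degrees: tan(45) = 1
              [0, 1]])
R = np.array([[0, -1],   # rotation through 90 degrees
              [1,  0]])
v = np.array([1, 2])

print(R @ S @ v)         # shear, then rotate: [-2  3]
print(S @ R @ v)         # rotate, then shear: [-1  1]
```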
EXAMPLE: Suppose $B$ is a linear transformation with matrix
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & -1 \end{pmatrix}$$
and $A$ is a linear transformation with matrix
$$\begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$$
What is the matrix of $AB$? Of $BA$?
Answer:
$$\begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} = AB$$
(2 by 3 times 3 by 2 gives 2 by 2)
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix} = BA$$
(3 by 2 times 2 by 3 gives 3 by 3)
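The same computation in NumPy (a sketch):

```python
import numpy as np

B = np.array([[1, 0],
              [0, 1],
              [1, -1]])    # 3 by 2: eats 2-d vectors, produces 3-d vectors
A = np.array([[0, 1, 1],
              [1, 1, 0]])  # 2 by 3: eats 3-d vectors, produces 2-d vectors

print(A @ B)  # 2 by 2: [[1 0], [1 1]]
print(B @ A)  # 3 by 3: [[0 1 1], [1 1 0], [-1 0 1]]
```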
EXAMPLE: Suppose you take a piece of rubber in 2 dimensions and shear it parallel to the $x$ axis through an angle $\alpha$, and then shear it again through an angle $\beta$. What happens?
$$\begin{pmatrix} 1 & \tan\beta \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & \tan\alpha \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & \tan\alpha + \tan\beta \\ 0 & 1 \end{pmatrix},$$
which is also a shear, but NOT through $\alpha + \beta$! The shear angles don't add up, since $\tan\alpha + \tan\beta \neq \tan(\alpha + \beta)$.
6.4 DETERMINANTS

You know that the area of the parallelogram defined by two vectors $\vec{u}$ and $\vec{v}$ is $|\vec{u} \times \vec{v}|$, the magnitude of the vector product. If you don't know it, you can easily check it, since the area of any parallelogram is given by
$$\text{AREA} = \text{HEIGHT} \times \text{BASE} = \left(|\vec{v}|\sin\theta\right)|\vec{u}| = |\vec{u}|\,|\vec{v}|\sin\theta = |\vec{u} \times \vec{v}|.$$
Similarly, the VOLUME of a three-dimensional parallelogram [called a PARALLELEPIPED!] is given by
$$\text{VOLUME} = (\text{AREA OF BASE}) \times \text{HEIGHT}.$$
If you take any 3 vectors in 3 dimensions, say $\vec{u}$, $\vec{v}$, $\vec{w}$, then they define a 3-dimensional parallelogram. The area of the base is $|\vec{u} \times \vec{v}|$, and the height is $|\vec{w}|\,\left|\sin\left(\tfrac{\pi}{2} - \phi\right)\right|$, where $\phi$ is the angle between $\vec{u} \times \vec{v}$ and $\vec{w}$. So the VOLUME defined by $\vec{u}$, $\vec{v}$, $\vec{w}$ is just
$$|\vec{u} \times \vec{v}|\,|\vec{w}|\,\left|\sin\left(\tfrac{\pi}{2} - \phi\right)\right| = |\vec{u} \times \vec{v}|\,|\vec{w}|\,|\cos\phi| = |(\vec{u} \times \vec{v}) \cdot \vec{w}|.$$
[Check: the volume of the Basic Box defined by $\vec{i}$, $\vec{j}$, $\vec{k}$ is $|(\vec{i} \times \vec{j}) \cdot \vec{k}| = |\vec{k} \cdot \vec{k}| = 1$, correct!]
Now let $T$ be any linear transformation in two dimensions. [This means that it acts on vectors in the $xy$ plane and turns them into other vectors in the $xy$ plane.]
We let $T$ act on the Basic Box, as usual. Now $T\vec{i}$ and $T\vec{j}$ both lie in the $xy$ plane, so $(T\vec{i}) \times (T\vec{j})$ must be parallel to $\vec{k}$. We DEFINE the DETERMINANT of $T$ by
$$(T\vec{i}) \times (T\vec{j}) = \det(T)\,\vec{k}.$$
EXAMPLE: If $I$ = identity, then
$$I\vec{i} \times I\vec{j} = \vec{i} \times \vec{j} = \vec{k} = 1\,\vec{k},$$
so $\det(I) = 1$.
EXAMPLE: If $D\vec{u} = 2\vec{u}$, then
$$D\vec{i} \times D\vec{j} = 2\vec{i} \times 2\vec{j} = 4\,\vec{i} \times \vec{j} = 4\,\vec{k} \implies \det(D) = 4.$$
EXAMPLE: If $T\vec{i} = \vec{i} + \frac{1}{4}\vec{j}$ and $T\vec{j} = \frac{1}{4}\vec{i} + \vec{j}$, then
$$T\vec{i} \times T\vec{j} = \left(\vec{i} + \tfrac{1}{4}\vec{j}\right) \times \left(\tfrac{1}{4}\vec{i} + \vec{j}\right) = \vec{i} \times \vec{j} + \tfrac{1}{16}\,\vec{j} \times \vec{i} = \tfrac{15}{16}\,\vec{i} \times \vec{j} = \tfrac{15}{16}\,\vec{k} \implies \det T = \tfrac{15}{16}.$$
EXAMPLE: If $T\vec{i} = \vec{j}$ and $T\vec{j} = \vec{i}$, then
$$T\vec{i} \times T\vec{j} = \vec{j} \times \vec{i} = -\vec{k} \implies \det T = -1.$$
EXAMPLE: Shear: $S\vec{i} = \vec{i}$, $S\vec{j} = \vec{i}\tan\theta + \vec{j}$, so
$$S\vec{i} \times S\vec{j} = \vec{i} \times \left(\vec{i}\tan\theta + \vec{j}\right) = \vec{i} \times \vec{j} = \vec{k} \implies \det S = 1.$$
EXAMPLE: If $T\vec{i} = \vec{i} + \vec{j} = T\vec{j}$, then
$$T\vec{i} \times T\vec{j} = \vec{0} \implies \det T = 0.$$
EXAMPLE: Rotation:
$$R\vec{i} \times R\vec{j} = (\cos\theta\,\vec{i} + \sin\theta\,\vec{j}) \times (-\sin\theta\,\vec{i} + \cos\theta\,\vec{j}) = (\cos^2\theta + \sin^2\theta)\,\vec{k} = \vec{k} \implies \det(R) = 1.$$
The area of the Basic Box is initially $|\vec{i} \times \vec{j}| = 1$. After we let $T$ act on it, the area becomes $|T\vec{i} \times T\vec{j}| = |\det T|\,|\vec{k}| = |\det T|$. So
$$\frac{\text{Final Area of Basic Box}}{\text{Initial Area of Basic Box}} = \frac{|\det T|}{1} = |\det T|,$$
so $|\det T|$ TELLS YOU THE AMOUNT BY WHICH AREAS ARE CHANGED BY $T$. So $|\det T| = 1$ means that the area is UNCHANGED (shears, rotations, reflections), while $\det T = 0$ means that the Basic Box is squashed FLAT, zero area.
Take a general 2 by 2 matrix $M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. We know that this means $M\vec{i} = a\vec{i} + c\vec{j}$, $M\vec{j} = b\vec{i} + d\vec{j}$. Hence
$$M\vec{i} \times M\vec{j} = \left(a\vec{i} + c\vec{j}\right) \times \left(b\vec{i} + d\vec{j}\right) = (ad - bc)\,\vec{k},$$
so
$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.$$
Check: $\det\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 4$, $\det\begin{pmatrix} 1 & \tan\theta \\ 0 & 1 \end{pmatrix} = 1$, $\det\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = 1$, $\det\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = 0$.
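These checks can be repeated numerically. A minimal sketch comparing $ad - bc$ with NumPy's built-in determinant (det2 is our own helper):

```python
import numpy as np

def det2(M):
    """Determinant of a 2 by 2 matrix by the ad - bc formula."""
    (a, b), (c, d) = M
    return a * d - b * c

theta = np.pi / 4
for M in (np.array([[2.0, 0.0], [0.0, 2.0]]),
          np.array([[1.0, np.tan(theta)], [0.0, 1.0]]),
          np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]]),
          np.array([[1.0, 1.0], [1.0, 1.0]])):
    print(det2(M), np.linalg.det(M))   # 4, 1, 1, 0 (each printed twice)
```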
IN THREE dimensions there is a similar gadget. The Basic Box is defined by $\vec{i}$, $\vec{j}$, $\vec{k}$; $T$ turns it into the box defined by $T\vec{i}$, $T\vec{j}$, $T\vec{k}$. We define
$$\det T = \left(T\vec{i} \times T\vec{j}\right) \cdot \left(T\vec{k}\right)$$
where the dot is the scalar product, as usual. Since $|(T\vec{i} \times T\vec{j}) \cdot T\vec{k}|$ is the volume of the box defined by $T\vec{i}$, $T\vec{j}$, $T\vec{k}$, we see that
$$|\det T| = \frac{\text{Final Volume of Basic Box}}{\text{Initial Volume of Basic Box}},$$
that is, $|\det T|$ tells you how much $T$ changes volumes. If $T$ squashes the Basic Box flat, then $\det T = 0$.
Just as $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$, there is a formula for the determinant of a 3 by 3 matrix. The usual notation is this. We DEFINE
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.$$
Similarly,
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$$
is the determinant of
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},$$
and there is a formula for it, as follows:
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}.$$
In other words, we can compute a three-dimensional determinant if we know how to work out 2-dimensional determinants.
COMMENTS:
[a] We worked along the top row. Actually, a THEOREM says that you can use ANY ROW OR ANY COLUMN!
[b] How did I know that $a_{12}$ had to multiply the particular 2-dimensional determinant $\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}$? Easy: I just struck out EVERYTHING IN THE SAME ROW AND COLUMN as $a_{12}$ and just kept the survivors!
This is the pattern; for example, if you expand along the second row you will get
$$-a_{21}\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} + a_{22}\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} - a_{23}\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix}.$$
Note the alternating signs: the sign attached to the cofactor of $a_{ij}$ is $(-1)^{i+j}$.
EXAMPLE:
$$\begin{vmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 2 & 0 & 0 \end{vmatrix} = 1\begin{vmatrix} 1 & 1 \\ 0 & 0 \end{vmatrix} - 1\begin{vmatrix} 1 & 1 \\ 2 & 0 \end{vmatrix} + 0\begin{vmatrix} 1 & 1 \\ 2 & 0 \end{vmatrix} = 0 + 2 + 0 = 2$$
(expanding along the top row) or, if you use the second row,
$$\begin{vmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 2 & 0 & 0 \end{vmatrix} = -1\begin{vmatrix} 1 & 0 \\ 0 & 0 \end{vmatrix} + 1\begin{vmatrix} 1 & 0 \\ 2 & 0 \end{vmatrix} - 1\begin{vmatrix} 1 & 1 \\ 2 & 0 \end{vmatrix} = 0 + 0 + 2 = 2$$
or
$$\begin{vmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 2 & 0 & 0 \end{vmatrix} = 1\begin{vmatrix} 1 & 1 \\ 0 & 0 \end{vmatrix} - 1\begin{vmatrix} 1 & 0 \\ 0 & 0 \end{vmatrix} + 2\begin{vmatrix} 1 & 0 \\ 1 & 1 \end{vmatrix} = 0 + 0 + 2 = 2$$
(expanding down the first column).
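Here is a short cofactor-expansion routine in the same spirit (a sketch; det3 and minor are our own names), checked against NumPy on the example above:

```python
import numpy as np

def det3(M):
    """3 by 3 determinant by cofactor expansion along the top row."""
    def det2(m):
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    def minor(i, j):
        # Strike out row i and column j, keep the survivors.
        return [[M[r][c] for c in range(3) if c != j]
                for r in range(3) if r != i]
    return (M[0][0] * det2(minor(0, 0))
          - M[0][1] * det2(minor(0, 1))
          + M[0][2] * det2(minor(0, 2)))

M = [[1, 1, 0], [1, 1, 1], [2, 0, 0]]
print(det3(M), np.linalg.det(np.array(M)))  # 2 and 2.0
```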
Important Properties of Determinants
[a] Let $S$ and $T$ be two linear transformations such that $\det S$ and $\det T$ are defined. Then
$$\det ST = \det TS = (\det S)(\det T).$$
Therefore, $\det[STU] = \det[UST] = \det[TUS]$ and so on: det doesn't care about the order. Remember however that this DOES NOT mean that $STU = UST$, etc.
[b] If $M$ is a square matrix, then $\det M^T = \det M$.
[c] If $c$ is a number and $M$ is an $n$ by $n$ matrix, then $\det(cM) = c^n \det M$.
EXAMPLE: Remember from Section 2[g] of Chapter 5 that an ORTHOGONAL matrix satisfies $MM^T = I$. So $\det(MM^T) = \det I = 1$. But $\det(MM^T) = \det(M)\det(M^T) = \det(M)\det(M) = (\det M)^2$, thus
$$\det M = \pm 1$$
for any orthogonal matrix.
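These properties are easy to test numerically; a sketch with random matrices (the QR factorization is used only to manufacture an orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
S, T = rng.random((2, 2)), rng.random((2, 2))

# [a] det(ST) = det(S) det(T);  [c] det(cM) = c^n det(M), here n = 2
print(np.isclose(np.linalg.det(S @ T), np.linalg.det(S) * np.linalg.det(T)))
print(np.isclose(np.linalg.det(3 * S), 3**2 * np.linalg.det(S)))

Q, _ = np.linalg.qr(rng.random((3, 3)))   # Q is orthogonal: Q Q^T = I
print(np.isclose(abs(np.linalg.det(Q)), 1.0))
```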
6.5 INVERSES

If I give you a 3-dimensional vector $\vec{u}$ and a 3-dimensional linear transformation $T$, then $T$ sends $\vec{u}$ to $T\vec{u}$. But what about this picture: two different vectors $\vec{u}$ and $\vec{v}$ with $T\vec{u} = T\vec{v}$? Can $T$ send TWO DIFFERENT VECTORS TO ONE? Yes!
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$
So it can happen! Notice that this transformation destroys $\vec{j}$ (and also $\vec{k}$). In fact if $\vec{u} \neq \vec{v}$ and $T\vec{u} = T\vec{v}$, then $T(\vec{u} - \vec{v}) = \vec{0}$, that is, $T\vec{w} = \vec{0}$ where $\vec{w}$ IS NOT THE ZERO VECTOR. So if this happens, $T$ destroys everything in the $\vec{w}$ direction. That is, $T$ SQUASHES 3-dimensional space down to two or even fewer dimensions. This means that $T$ LOSES INFORMATION: it throws away all of the information stored in the $\vec{w}$ direction. Clearly $T$ squashes the basic box down to zero volume, so
$$\det T = 0$$
and we say $T$ is SINGULAR.
SUMMARY: A SINGULAR LINEAR TRANSFORMATION
[a] Maps two different vectors to one vector
[b] Destroys all of the vectors in at least one direction
[c] Loses all information associated with those directions
[d] Satisfies $\det T = 0$.
Conversely, a NON-SINGULAR transformation never maps 2 vectors to one: if $\vec{u} \neq \vec{v}$, then $T\vec{u} \neq T\vec{v}$. Therefore if I give you $T\vec{u}$, THERE IS EXACTLY ONE $\vec{u}$. The transformation that takes you from $T\vec{u}$ back to $\vec{u}$ is called the INVERSE OF $T$. The idea is that since a NON-SINGULAR linear transformation does NOT destroy information, we can reconstruct $\vec{u}$ if we are given $T\vec{u}$.
EXAMPLE: Take the reflection $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, and suppose it takes two vectors $\begin{pmatrix} \alpha \\ \beta \end{pmatrix}$ and $\begin{pmatrix} a \\ b \end{pmatrix}$ and sends them to the same vector, so
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix}.$$
Then
$$\begin{pmatrix} \beta \\ \alpha \end{pmatrix} = \begin{pmatrix} b \\ a \end{pmatrix} \implies \alpha = a,\ \beta = b \implies \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix},$$
so $\begin{pmatrix} \alpha \\ \beta \end{pmatrix}$ and $\begin{pmatrix} a \\ b \end{pmatrix}$ are the same: this transformation never maps different vectors to the same vector. No vector is destroyed, no information is lost, nothing gets squashed! And $\det\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = -1 \neq 0$: NON-SINGULAR.
How to FIND THE INVERSE.
By definition, $T^{-1}$ sends $T\vec{u}$ to $\vec{u}$, i.e.
$$T^{-1}(T(\vec{u})) = \vec{u} = T(T^{-1}(\vec{u})).$$
But $\vec{u} = I\vec{u}$ (identity) so $T^{-1}$ satisfies
$$T^{-1}T = TT^{-1} = I.$$
So to find the inverse of $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ we just have to find a matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ such that
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
$\implies b = 1$, $a = 0$, $d = 0$, $c = 1$, so the answer is $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. In fact it's easy to show that
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
For example, when we needed to find the matrix $S$ in Section 4 of Chapter 5, we needed to find a way of solving
$$S\begin{pmatrix} 0.7 & 0.4 \\ 0.5 & 0.7 \end{pmatrix} = I.$$
This just means that we need the inverse of $\begin{pmatrix} 0.7 & 0.4 \\ 0.5 & 0.7 \end{pmatrix}$, and the above formula does the job for us.
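A minimal sketch of that formula in NumPy (inv2 is our own helper), applied to this matrix:

```python
import numpy as np

def inv2(M):
    """Inverse of a 2 by 2 matrix via the (1/(ad-bc)) [d -b; -c a] formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse")
    return np.array([[d, -b], [-c, a]]) / det

M = np.array([[0.7, 0.4], [0.5, 0.7]])
S = inv2(M)
print(S @ M)             # the identity, up to rounding
print(np.linalg.inv(M))  # NumPy agrees
```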
For bigger square matrices there are many tricks for finding inverses. A general [BUT NOT VERY PRACTICAL] method is as follows:
[a] Work out the matrix of COFACTORS. [A cofactor is what you get when you work out the smaller determinant obtained by striking out a row and a column; for example, the cofactor of 6 in
$$\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix}$$
is $\begin{vmatrix} 1 & 2 \\ 7 & 8 \end{vmatrix}$.]
[b] Attach the checkerboard signs $(-1)^{i+j}$, [c] TRANSPOSE the result, and [d] divide by $\det M$.
Inverses are useful for solving systems of linear equations. Any such system can be written as
$$M\vec{r} = \vec{a},$$
where $M$ is a matrix, $\vec{r}$ = the vector of variables, and $\vec{a}$ is a given vector. Suppose $M$ is square.
[a] If $\det M \neq 0$, there is exactly one solution, $\vec{r} = M^{-1}\vec{a}$.
[b] If $\det M = 0$, there is probably no solution. But if there is one, then there will be many.
PRACTICAL ENGINEERING PERSPECTIVE:
In the REAL world, NOTHING IS EVER EXACTLY EQUAL TO ZERO! So if $\det M = 0$, either [a] you have made a mistake, OR [b] you are pretending that your data are more accurate than they really are!
$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$
REALLY means
$$\begin{pmatrix} 1.01 & 2.08 & 3.03 \\ 3.99 & 4.97 & 6.02 \\ 7.01 & 7.96 & 8.98 \end{pmatrix}$$
and of course the determinant of THIS is non-zero! Actually, $\det = 0.597835$!
6.6 EIGENVECTORS AND EIGENVALUES

Remember we said that a linear transformation USUALLY changes the direction of a vector. But there may be some special vectors which DON'T have their direction changed!
EXAMPLE: $\begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix}$ clearly DOES change the direction of $\vec{i}$ and $\vec{j}$, since $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is not parallel to $\vec{i}$ and $\begin{pmatrix} 2 \\ -2 \end{pmatrix}$ is not parallel to $\vec{j}$. BUT
$$\begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 2 \end{pmatrix} = 2\begin{pmatrix} 2 \\ 1 \end{pmatrix},$$
which IS parallel to $\begin{pmatrix} 2 \\ 1 \end{pmatrix}$.
In general if a transformation $T$ does not change the direction of a vector $\vec{u}$, that is,
$$T\vec{u} = \lambda\vec{u}$$
for some SCALAR $\lambda$, then $\vec{u}$ is called an EIGENVECTOR of $T$. The scalar $\lambda$ is called the EIGENVALUE of $\vec{u}$.
6.7 FINDING EIGENVALUES AND EIGENVECTORS

There is a systematic way of doing this. Take the equation
$$T\vec{u} = \lambda\vec{u}$$
and write $\lambda\vec{u} = \lambda I\vec{u}$, $I$ = identity. Then
$$(T - \lambda I)\vec{u} = \vec{0}.$$
Let's suppose $\vec{u} \neq \vec{0}$ [of course, $\vec{0}$ always satisfies this equation, but that is boring]. So the equation says that $T - \lambda I$ SQUASHES everything in the $\vec{u}$ direction. Hence
$$\det(T - \lambda I) = 0.$$
This is an equation which can be SOLVED to find $\lambda$.
EXAMPLE: Find the eigenvalues of $\begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix}$:
$$\det\left(\begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix} - \lambda\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right) = 0$$
$$\det\begin{pmatrix} 1-\lambda & 2 \\ 2 & -2-\lambda \end{pmatrix} = 0$$
$$(1-\lambda)(-2-\lambda) - 4 = 0$$
$$\lambda = 2 \text{ OR } -3.$$
So there are TWO answers for a 2 by 2 matrix. Similarly, in general there are three answers for 3 by 3 matrices, etc.
What are the eigenvectors for $\lambda = 2$, $\lambda = -3$?
IMPORTANT POINT: Let $\vec{u}$ be an eigenvector of $T$ with eigenvalue $\lambda$. Then $T(2\vec{u}) = 2T\vec{u} = 2\lambda\vec{u} = \lambda(2\vec{u})$. Similarly $3\vec{u}$, $13.59\vec{u}$, etc., are all eigenvectors with the same eigenvalue, so we never expect a UNIQUE answer for an eigenvector: any non-zero multiple will do.
Now take $\lambda = 2$ and write $\vec{u} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}$. Then $(T - \lambda I)\vec{u} = \vec{0}$ gives
$$\begin{pmatrix} 1-2 & 2 \\ 2 & -2-2 \end{pmatrix}\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \vec{0}$$
$$\begin{pmatrix} -1 & 2 \\ 2 & -4 \end{pmatrix}\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \vec{0}$$
$$-\alpha + 2\beta = 0$$
$$2\alpha - 4\beta = 0$$
But these equations are actually the SAME, so we really only have ONE equation for 2 unknowns. We aren't surprised, because we did not expect a unique answer anyway! We can just CHOOSE $\alpha = 1$ (or 13.59 or whatever) and then solve for $\beta$. Clearly $\beta = \frac{1}{2}$, so an eigenvector corresponding to $\lambda = 2$ is $\begin{pmatrix} 1 \\ \frac{1}{2} \end{pmatrix}$. But if you said $\begin{pmatrix} 2 \\ 1 \end{pmatrix}$ or $\begin{pmatrix} 100 \\ 50 \end{pmatrix}$ that is also correct!
What about $\lambda = -3$?
$$\begin{pmatrix} 4 & 2 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \vec{0}$$
$$4\alpha + 2\beta = 0$$
$$2\alpha + \beta = 0$$
Again we can set $\alpha = 1$, then $\beta = -2$, so an eigenvector corresponding to $\lambda = -3$ is $\begin{pmatrix} 1 \\ -2 \end{pmatrix}$ or $\begin{pmatrix} 2 \\ -4 \end{pmatrix}$ or $\begin{pmatrix} 10 \\ -20 \end{pmatrix}$ etc.
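NumPy will happily check this (a sketch; note that np.linalg.eig returns unit-length eigenvectors, i.e., particular multiples of ours):

```python
import numpy as np

T = np.array([[1, 2],
              [2, -2]])
vals, vecs = np.linalg.eig(T)
print(vals)                            # 2 and -3 (in some order)
for lam, u in zip(vals, vecs.T):       # columns of vecs are eigenvectors
    print(np.allclose(T @ u, lam * u)) # True, True
```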
EXAMPLE: Find the eigenvalues, and corresponding eigenvectors, of $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$.
Answer: We have
$$\det\begin{pmatrix} -\lambda & -1 \\ 1 & -\lambda \end{pmatrix} = 0 \implies \lambda^2 + 1 = 0 \implies \lambda = \pm i, \quad i = \sqrt{-1}.$$
Eigenvector for $i$: we set
$$\begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix}\begin{pmatrix} 1 \\ \beta \end{pmatrix} = \vec{0}$$
$\implies -i - \beta = 0 \implies \beta = -i$, so an eigenvector for $i$ is $\begin{pmatrix} 1 \\ -i \end{pmatrix}$. For $\lambda = -i$ we have
$$\begin{pmatrix} i & -1 \\ 1 & i \end{pmatrix}\begin{pmatrix} 1 \\ \beta \end{pmatrix} = \vec{0}$$
$\implies i - \beta = 0 \implies \beta = i$, so an eigenvector for $-i$ is $\begin{pmatrix} 1 \\ i \end{pmatrix}$. Note that a REAL matrix can have COMPLEX eigenvalues and eigenvectors! This is happening simply because $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is a ROTATION through $90^{\circ}$, and a rotation changes the direction of every non-zero real vector, so no real vector can be an eigenvector.
6.8 CHANGE OF BASIS

Remember that we find the matrix of a linear transformation $T$ relative to $\vec{i}$, $\vec{j}$ by letting $T$ act on $\vec{i}$ and $\vec{j}$ and then putting the results in
the columns. So to say that $T$ has matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with respect to $\vec{i}$, $\vec{j}$ means that
$$T\vec{i} = a\vec{i} + c\vec{j}$$
$$T\vec{j} = b\vec{i} + d\vec{j}.$$
What's so special about the two vectors $\vec{i}$ and $\vec{j}$? Nothing, except that EVERY vector in two dimensions can be written as $\alpha\vec{i} + \beta\vec{j}$ for some $\alpha$, $\beta$. Now actually we only really use $\vec{i}$ and $\vec{j}$ for CONVENIENCE. In fact, we can do this with ANY pair of vectors $\vec{u}$, $\vec{v}$ in two dimensions, PROVIDED that they are not parallel. That is, any vector $\vec{w}$ can be expressed as
$$\vec{w} = \alpha\vec{u} + \beta\vec{v}$$
for some scalars $\alpha$, $\beta$. You can see this from the diagram: by stretching $\vec{u}$ to $\alpha\vec{u}$ and $\vec{v}$ to $\beta\vec{v}$, we can make their sum equal to $\vec{w}$.
We call $\vec{u}$, $\vec{v}$ a BASIS for 2-dimensional vectors. Let
$$\vec{u} = P_{11}\vec{i} + P_{21}\vec{j} = \begin{pmatrix} P_{11} \\ P_{21} \end{pmatrix}, \qquad \vec{v} = P_{12}\vec{i} + P_{22}\vec{j} = \begin{pmatrix} P_{12} \\ P_{22} \end{pmatrix}.$$
Then the transformation that takes $(\vec{i}, \vec{j})$ to $(\vec{u}, \vec{v})$ has matrix
$$\begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix} = P.$$
In order for $\vec{u}$, $\vec{v}$ to be a basis, $P$ must not squash the Basic Box flat, since otherwise $\vec{u}$ and $\vec{v}$ will be parallel. So we must have
$$\det P \neq 0.$$
The same idea works in 3 dimensions: ANY set of 3 vectors forms a basis PROVIDED that the matrix of components satisfies $\det P \neq 0$.
EXAMPLE: The pair of vectors $\vec{u} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $\vec{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ forms a basis, because $\det\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = 1 \neq 0$.
Now of course the COMPONENTS of a vector will change if you choose a different basis. For example,
$$\begin{pmatrix} 1 \\ 2 \end{pmatrix} = 1\vec{i} + 2\vec{j} \quad\text{BUT}\quad \begin{pmatrix} 1 \\ 2 \end{pmatrix} \neq 1\vec{u} + 2\vec{v}.$$
Instead, $\begin{pmatrix} 1 \\ 2 \end{pmatrix} = -\vec{u} + 2\vec{v}$; that is, its components relative to $\vec{u}$, $\vec{v}$ are $\begin{pmatrix} -1 \\ 2 \end{pmatrix}_{(\vec{u},\vec{v})}$. Where did I get these numbers?
As usual, set $\vec{u} = P\vec{i}$, $\vec{v} = P\vec{j}$ where $P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. We want to find $\alpha$, $\beta$ such that $\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \alpha\vec{u} + \beta\vec{v}$. Substituting $\vec{u} = P\vec{i}$, $\vec{v} = P\vec{j}$:
$$\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \alpha P\vec{i} + \beta P\vec{j} = P[\alpha\vec{i} + \beta\vec{j}] = P\begin{pmatrix} \alpha \\ \beta \end{pmatrix}.$$
We know $P$ is not singular, so we can take $P$ over to the left side by multiplying both sides of this equation by the inverse of $P$. So we get
$$\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = P^{-1}\begin{pmatrix} 1 \\ 2 \end{pmatrix}$$
and this is our answer: this is how we find $\alpha$ and $\beta$! So to get $\alpha$ and $\beta$ we just have to work out
$$P^{-1}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \end{pmatrix},$$
that is, the components of this vector relative to $\vec{u}$, $\vec{v}$ are found as
$$\begin{pmatrix} -1 \\ 2 \end{pmatrix}_{(\vec{u},\vec{v})} = P^{-1}\begin{pmatrix} 1 \\ 2 \end{pmatrix}_{(\vec{i},\vec{j})}.$$
THE COMPONENTS RELATIVE TO $\vec{u}$, $\vec{v}$ ARE OBTAINED BY MULTIPLYING $P^{-1}$ INTO THE COMPONENTS RELATIVE TO $\vec{i}$, $\vec{j}$.
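A two-line NumPy check of this change of components (a sketch):

```python
import numpy as np

P = np.array([[1, 1],
              [0, 1]])        # columns are u and v in i, j components
w_ij = np.array([1, 2])       # components relative to i, j
w_uv = np.linalg.inv(P) @ w_ij
print(w_uv)                   # [-1.  2.]  so w = -u + 2v
print(P @ w_uv)               # back to [1. 2.]
```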
Similarly for linear transformations: if a certain linear transformation $T$ has matrix $\begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}$ relative to $\vec{i}$, $\vec{j}$, it will have a DIFFERENT matrix relative to $\vec{u}$, $\vec{v}$. We have
$$\begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}_{(\vec{i},\vec{j})}\begin{pmatrix} 1 \\ 2 \end{pmatrix}_{(\vec{i},\vec{j})} = \begin{pmatrix} 5 \\ -2 \end{pmatrix}_{(\vec{i},\vec{j})}.$$
That is, the matrix of $T$ relative to $\vec{i}$, $\vec{j}$ sends $\begin{pmatrix} 1 \\ 2 \end{pmatrix}_{(\vec{i},\vec{j})}$ to $\begin{pmatrix} 5 \\ -2 \end{pmatrix}_{(\vec{i},\vec{j})}$. In the same way, the matrix of $T$ relative to $(\vec{u}, \vec{v})$ must send $\begin{pmatrix} -1 \\ 2 \end{pmatrix}_{(\vec{u},\vec{v})}$ to $\begin{pmatrix} 7 \\ -2 \end{pmatrix}_{(\vec{u},\vec{v})}$, because these are the components of these two vectors relative to $\vec{u}$, $\vec{v}$, as you can show by multiplying $P^{-1}$ into $\begin{pmatrix} 1 \\ 2 \end{pmatrix}_{(\vec{i},\vec{j})}$ and $\begin{pmatrix} 5 \\ -2 \end{pmatrix}_{(\vec{i},\vec{j})}$ respectively.
So the unknown matrix we want satisfies
$$\begin{pmatrix} ? & ? \\ ? & ? \end{pmatrix}_{(\vec{u},\vec{v})}\begin{pmatrix} -1 \\ 2 \end{pmatrix}_{(\vec{u},\vec{v})} = \begin{pmatrix} 7 \\ -2 \end{pmatrix}_{(\vec{u},\vec{v})}.$$
But we know
$$\begin{pmatrix} -1 \\ 2 \end{pmatrix}_{(\vec{u},\vec{v})} = P^{-1}\begin{pmatrix} 1 \\ 2 \end{pmatrix}_{(\vec{i},\vec{j})} \quad\text{and}\quad \begin{pmatrix} 7 \\ -2 \end{pmatrix}_{(\vec{u},\vec{v})} = P^{-1}\begin{pmatrix} 5 \\ -2 \end{pmatrix}_{(\vec{i},\vec{j})},$$
so
$$\begin{pmatrix} ? & ? \\ ? & ? \end{pmatrix}_{(\vec{u},\vec{v})} P^{-1}\begin{pmatrix} 1 \\ 2 \end{pmatrix}_{(\vec{i},\vec{j})} = P^{-1}\begin{pmatrix} 5 \\ -2 \end{pmatrix}_{(\vec{i},\vec{j})}.$$
Multiply both sides by $P$ and get
$$P\begin{pmatrix} ? & ? \\ ? & ? \end{pmatrix}_{(\vec{u},\vec{v})} P^{-1}\begin{pmatrix} 1 \\ 2 \end{pmatrix}_{(\vec{i},\vec{j})} = \begin{pmatrix} 5 \\ -2 \end{pmatrix}_{(\vec{i},\vec{j})}.$$
Compare this with
$$\begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}_{(\vec{i},\vec{j})}\begin{pmatrix} 1 \\ 2 \end{pmatrix}_{(\vec{i},\vec{j})} = \begin{pmatrix} 5 \\ -2 \end{pmatrix}_{(\vec{i},\vec{j})}:$$
$$\begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}_{(\vec{i},\vec{j})} = P\begin{pmatrix} ? & ? \\ ? & ? \end{pmatrix}_{(\vec{u},\vec{v})} P^{-1} \implies \begin{pmatrix} ? & ? \\ ? & ? \end{pmatrix}_{(\vec{u},\vec{v})} = P^{-1}\begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}_{(\vec{i},\vec{j})} P.$$
[In the last step, we multiplied both sides on the LEFT by $P^{-1}$, and on the RIGHT by $P$.]
We conclude that THE MATRIX OF $T$ RELATIVE TO $\vec{u}$, $\vec{v}$ IS OBTAINED BY MULTIPLYING $P^{-1}$ ON THE LEFT AND $P$ ON THE RIGHT INTO THE MATRIX OF $T$ RELATIVE TO $\vec{i}$, $\vec{j}$. In this example,
$$\begin{pmatrix} ? & ? \\ ? & ? \end{pmatrix}_{(\vec{u},\vec{v})} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 3 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 4 \\ 0 & -1 \end{pmatrix}.$$
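The whole change-of-basis computation in NumPy (a sketch):

```python
import numpy as np

P = np.array([[1, 1],
              [0, 1]])       # columns: the new basis u, v
T_ij = np.array([[1, 2],
                 [0, -1]])   # matrix of T relative to i, j
T_uv = np.linalg.inv(P) @ T_ij @ P
print(T_uv)                  # [[ 1.  4.] [ 0. -1.]]
```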
So now we know how to work out the matrix of any linear transformation relative to ANY basis.
Now let $T$ be a linear transformation in 2 dimensions, with eigenvectors $\vec{e}_1$, $\vec{e}_2$ and eigenvalues $\lambda_1$, $\lambda_2$. Now $\vec{e}_1$ and $\vec{e}_2$ may or may not give a basis for 2-dimensional space. But suppose they do.
QUESTION: What is the matrix of $T$ relative to $\vec{e}_1$, $\vec{e}_2$?
ANSWER: As always, we see what $T$ does to $\vec{e}_1$ and $\vec{e}_2$, and put the results into the columns! By definition of eigenvectors and eigenvalues,
$$T\vec{e}_1 = \lambda_1\vec{e}_1 = \lambda_1\vec{e}_1 + 0\vec{e}_2$$
$$T\vec{e}_2 = \lambda_2\vec{e}_2 = 0\vec{e}_1 + \lambda_2\vec{e}_2$$
So the matrix is
$$\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}_{(\vec{e}_1,\vec{e}_2)}.$$
We say that a matrix of the form $\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}$ or $\begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix}$ is DIAGONAL. So we see that THE MATRIX OF A TRANSFORMATION RELATIVE TO ITS OWN EIGENVECTORS (assuming that these form a basis) is DIAGONAL.
EXAMPLE: We know that the eigenvectors of $\begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix}$ are $\begin{pmatrix} 1 \\ \frac{1}{2} \end{pmatrix}$ and $\begin{pmatrix} 1 \\ -2 \end{pmatrix}$. So here
$$P = \begin{pmatrix} 1 & 1 \\ \frac{1}{2} & -2 \end{pmatrix}, \qquad P^{-1} = \frac{2}{5}\begin{pmatrix} 2 & 1 \\ \frac{1}{2} & -1 \end{pmatrix},$$
$$P^{-1}\begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix} P = \frac{2}{5}\begin{pmatrix} 2 & 1 \\ \frac{1}{2} & -1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ \frac{1}{2} & -2 \end{pmatrix} = \frac{2}{5}\begin{pmatrix} 2 & 1 \\ \frac{1}{2} & -1 \end{pmatrix}\begin{pmatrix} 2 & -3 \\ 1 & 6 \end{pmatrix} = \frac{2}{5}\begin{pmatrix} 5 & 0 \\ 0 & -\frac{15}{2} \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix},$$
as expected, since the eigenvalues are 2 and $-3$.
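Again easy to verify numerically (a sketch):

```python
import numpy as np

T = np.array([[1, 2],
              [2, -2]])
P = np.array([[1.0, 1.0],
              [0.5, -2.0]])        # columns: eigenvectors for 2 and -3
print(np.linalg.inv(P) @ T @ P)    # diag(2, -3), up to rounding
```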
EXAMPLE: The shear matrix $\begin{pmatrix} 1 & \tan\theta \\ 0 & 1 \end{pmatrix}$ (with $\theta \neq 0$).
Eigenvalues:
$$\det\begin{pmatrix} 1-\lambda & \tan\theta \\ 0 & 1-\lambda \end{pmatrix} = 0 \implies (1-\lambda)^2 = 0 \implies \lambda = 1.$$
There is only one eigenvector direction, namely $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$, so the eigenvectors DO NOT give us a basis in this case: it is NOT possible to diagonalize this matrix!
EXAMPLE: Suppose you want to do a reflection $\sigma$ of the entire 2-dimensional plane around a straight line that passes through the origin and makes an angle of $\theta$ with the $x$-axis. Then the vector $\begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix}$ lies along this line, so it is left unchanged by the reflection; in other words, it is an eigenvector of $\sigma$ with eigenvalue 1. On the other hand, the vector $\begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix}$ is perpendicular to the first vector [check that their scalar product is zero], so it is reflected into its own negative by $\sigma$. That is, it is an eigenvector with eigenvalue $-1$.
So $\sigma$ has a matrix with these eigenvectors and eigenvalues. The $P$ matrix in this case is
$$P = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$$
and clearly $P^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$, and since the eigenvalues are $\pm 1$, we have
$$\sigma = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$$
Doing the matrix multiplication and using the trigonometric identities for $\cos 2\theta$ and $\sin 2\theta$, you will find that
$$\sigma = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}.$$
Notice that the determinant is $-1$, as is typical for a reflection. Check that this gives the right answer for reflections around the 45 degree diagonal and around the $x$-axis.
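Those two checks, done numerically (a sketch; the helper name reflection is ours):

```python
import numpy as np

def reflection(theta):
    """Reflection of the plane about the line at angle theta to the x-axis."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s],
                     [s, -c]])

print(reflection(np.pi / 4).round(6))  # [[0 1],[1 0]]: swaps x and y
print(reflection(0.0).round(6))        # [[1 0],[0 -1]]: reflection in the x-axis
```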
6.9 APPLICATION: MARKOV CHAINS

We saw back in Section 3 of Chapter 5 that to predict the weather 4 days from now, we needed the 4th power of the matrix
$$\begin{pmatrix} 0.6 & 0.3 \\ 0.4 & 0.7 \end{pmatrix}.$$
But suppose I want the weather 30 days from now: I need $M^{30}$! There is an easy way to work this out using eigenvalues.
Suppose I can diagonalize $M$, that is, I can write
$$P^{-1}MP = D = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$
for some matrix $P$. Then $M = PDP^{-1}$, so
$$M^2 = (PDP^{-1})(PDP^{-1}) = PDP^{-1}PDP^{-1} = PD^2P^{-1}$$
$$M^3 = MM^2 = PDP^{-1}PD^2P^{-1} = PD^3P^{-1}$$
etc., so
$$M^{30} = PD^{30}P^{-1}.$$
But $D^{30}$ is very easy to work out: it is just
$$\begin{pmatrix} \lambda_1^{30} & 0 \\ 0 & \lambda_2^{30} \end{pmatrix}.$$
Let's see how this works!
The eigenvectors and eigenvalues of $\begin{pmatrix} 0.6 & 0.3 \\ 0.4 & 0.7 \end{pmatrix}$ are $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ (eigenvalue 0.3) and $\begin{pmatrix} 1 \\ \frac{4}{3} \end{pmatrix}$ (eigenvalue 1), so
$$P = \begin{pmatrix} 1 & 1 \\ -1 & \frac{4}{3} \end{pmatrix}, \qquad D = \begin{pmatrix} 0.3 & 0 \\ 0 & 1 \end{pmatrix}, \qquad P^{-1} = \begin{pmatrix} \frac{4}{7} & -\frac{3}{7} \\ \frac{3}{7} & \frac{3}{7} \end{pmatrix},$$
$$D^{30} = \begin{pmatrix} (0.3)^{30} & 0 \\ 0 & 1 \end{pmatrix} \approx \begin{pmatrix} 2 \times 10^{-16} & 0 \\ 0 & 1 \end{pmatrix},$$
so
$$M^{30} = \begin{pmatrix} 1 & 1 \\ -1 & \frac{4}{3} \end{pmatrix}\begin{pmatrix} 2 \times 10^{-16} & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \frac{4}{7} & -\frac{3}{7} \\ \frac{3}{7} & \frac{3}{7} \end{pmatrix} = \frac{1}{7}\begin{pmatrix} 3 + 8 \times 10^{-16} & 3 - 6 \times 10^{-16} \\ 4 - 8 \times 10^{-16} & 4 + 6 \times 10^{-16} \end{pmatrix} \approx \begin{pmatrix} \frac{3}{7} & \frac{3}{7} \\ \frac{4}{7} & \frac{4}{7} \end{pmatrix}.$$
So if it is rainy today, the probability of rain tomorrow is 60%, but the probability of rain 30 days from now is only $\frac{3}{7} \approx 43\%$. As we go forward in time, the fact that it rained today becomes less and less important! The probability of rain in 31 days is almost the same as the probability of rain in 30 days!
6.10 APPLICATION: THE TRACE OF A DIAGONALIZABLE MATRIX IS THE SUM OF ITS EIGENVALUES

Let $M$ be any square matrix. Then the TRACE of $M$, denoted $\mathrm{Tr}\,M$, is defined as the sum of the diagonal entries:
$$\mathrm{Tr}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 2, \qquad \mathrm{Tr}\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} = 15, \qquad \mathrm{Tr}\begin{pmatrix} 1 & 5 & 16 \\ 7 & 2 & 15 \\ 11 & 9 & 8 \end{pmatrix} = 11, \text{ etc.}$$
In general it is NOT true that $\mathrm{Tr}(MN) = \mathrm{Tr}\,M\,\mathrm{Tr}\,N$, BUT it is true that $\mathrm{Tr}\,MN = \mathrm{Tr}\,NM$.
Proof: $\mathrm{Tr}\,M = \sum_i M_{ii}$, so
$$\mathrm{Tr}\,MN = \sum_i\sum_j M_{ij}N_{ji} = \sum_j\sum_i N_{ji}M_{ij} = \mathrm{Tr}\,NM.$$
Hence $\mathrm{Tr}(P^{-1}AP) = \mathrm{Tr}(APP^{-1}) = \mathrm{Tr}\,A$, so THE TRACE OF A MATRIX IS ALWAYS THE SAME NO MATTER WHICH BASIS YOU USE! This is why the trace is interesting: it doesn't care which basis you use. In particular, if $A$ is diagonalizable,
$$\mathrm{Tr}\,A = \mathrm{Tr}\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} = \lambda_1 + \lambda_2.$$
So the trace is equal to the sum of the eigenvalues. This gives a quick check that you have not made a mistake in working out the eigenvalues: they have to add up to the same number as the trace of the original matrix. Check this for the examples in this chapter.
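For instance, for the matrix from Section 6.6 (a sketch):

```python
import numpy as np

A = np.array([[1, 2],
              [2, -2]])
vals = np.linalg.eig(A)[0]
print(np.trace(A), vals.sum())  # -1 and -1.0 (up to rounding): trace = sum of eigenvalues
```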