
LINEAR ALGEBRA

W W L CHEN
© W W L Chen, 1997, 2008.
This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain,
and may be downloaded and/or photocopied, with or without permission from the author.
However, this document may not be kept on any information storage and retrieval system without permission
from the author, unless such system is not accessible to any individuals other than its owners.
Chapter 8
LINEAR TRANSFORMATIONS
8.1. Euclidean Linear Transformations
By a transformation from $\mathbb{R}^n$ into $\mathbb{R}^m$, we mean a function of the type $T : \mathbb{R}^n \to \mathbb{R}^m$, with domain $\mathbb{R}^n$ and codomain $\mathbb{R}^m$. For every vector $x \in \mathbb{R}^n$, the vector $T(x) \in \mathbb{R}^m$ is called the image of $x$ under the transformation $T$, and the set
$$R(T) = \{T(x) : x \in \mathbb{R}^n\},$$
of all images under $T$, is called the range of the transformation $T$.
Remark. For our convenience later, we have chosen to use $R(T)$ instead of the usual $T(\mathbb{R}^n)$ to denote the range of the transformation $T$.
For every $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$, we can write
$$T(x) = T(x_1, \ldots, x_n) = (y_1, \ldots, y_m).$$
Here, for every $i = 1, \ldots, m$, we have
$$y_i = T_i(x_1, \ldots, x_n), \qquad (1)$$
where $T_i : \mathbb{R}^n \to \mathbb{R}$ is a real valued function.
Definition. A transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is called a linear transformation if there exists a real matrix
$$A = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \ldots & a_{mn} \end{pmatrix}$$
such that for every $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$, we have $T(x_1, \ldots, x_n) = (y_1, \ldots, y_m)$, where
$$\begin{aligned} y_1 &= a_{11}x_1 + \ldots + a_{1n}x_n, \\ &\ \ \vdots \\ y_m &= a_{m1}x_1 + \ldots + a_{mn}x_n, \end{aligned}$$
or, in matrix notation,
$$\begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix} = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \ldots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}. \qquad (2)$$
The matrix $A$ is called the standard matrix for the linear transformation $T$.
Remarks. (1) In other words, a transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is linear if the equation (1) for every $i = 1, \ldots, m$ is linear.
(2) If we write $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$ as column matrices, then (2) can be written in the form $y = Ax$, and so the linear transformation $T$ can be interpreted as multiplication of $x \in \mathbb{R}^n$ by the standard matrix $A$.
Definition. A linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is said to be a linear operator if $n = m$. In this case, we say that $T$ is a linear operator on $\mathbb{R}^n$.
Example 8.1.1. The linear transformation $T : \mathbb{R}^5 \to \mathbb{R}^3$, defined by the equations
$$\begin{aligned} y_1 &= 2x_1 + 3x_2 + 5x_3 + 7x_4 - 9x_5, \\ y_2 &= 3x_2 + 4x_3 + 2x_5, \\ y_3 &= x_1 + 3x_3 - 2x_4, \end{aligned}$$
can be expressed in matrix form as
$$\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 5 & 7 & -9 \\ 0 & 3 & 4 & 0 & 2 \\ 1 & 0 & 3 & -2 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix}.$$
If $(x_1, x_2, x_3, x_4, x_5) = (1, 0, 1, 0, 1)$, then
$$\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 5 & 7 & -9 \\ 0 & 3 & 4 & 0 & 2 \\ 1 & 0 & 3 & -2 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -2 \\ 6 \\ 4 \end{pmatrix},$$
so that $T(1, 0, 1, 0, 1) = (-2, 6, 4)$.
Example 8.1.2. Suppose that $A$ is the zero $m \times n$ matrix. The linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$, where $T(x) = Ax$ for every $x \in \mathbb{R}^n$, is the zero transformation from $\mathbb{R}^n$ into $\mathbb{R}^m$. Clearly $T(x) = 0$ for every $x \in \mathbb{R}^n$.
Example 8.1.3. Suppose that $I$ is the identity $n \times n$ matrix. The linear operator $T : \mathbb{R}^n \to \mathbb{R}^n$, where $T(x) = Ix$ for every $x \in \mathbb{R}^n$, is the identity operator on $\mathbb{R}^n$. Clearly $T(x) = x$ for every $x \in \mathbb{R}^n$.
PROPOSITION 8A. Suppose that $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation, and that $\{e_1, \ldots, e_n\}$ is the standard basis for $\mathbb{R}^n$. Then the standard matrix for $T$ is given by
$$A = (\,T(e_1) \ \ldots \ T(e_n)\,),$$
where $T(e_j)$ is a column matrix for every $j = 1, \ldots, n$.
Proof. This follows immediately from (2). $\square$
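As a quick illustration of Proposition 8A, the following Python sketch (assuming the numpy library, and not part of the original notes) builds the standard matrix of the transformation in Example 8.1.1 column by column from the images of the standard basis vectors, and then applies it to the vector $(1, 0, 1, 0, 1)$.

import numpy as np

def T(x):
    # The transformation of Example 8.1.1, written coordinate-wise.
    x1, x2, x3, x4, x5 = x
    return np.array([2*x1 + 3*x2 + 5*x3 + 7*x4 - 9*x5,
                     3*x2 + 4*x3 + 2*x5,
                     x1 + 3*x3 - 2*x4])

# Standard matrix: the columns are T(e_1), ..., T(e_5) (Proposition 8A).
A = np.column_stack([T(e) for e in np.eye(5)])
print(A)                                # the 3 x 5 matrix of Example 8.1.1
print(A @ np.array([1, 0, 1, 0, 1]))    # (-2, 6, 4), as in Example 8.1.1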
8.2. Linear Operators on $\mathbb{R}^2$
In this section, we consider the special case when $n = m = 2$, and study linear operators on $\mathbb{R}^2$. For every $x \in \mathbb{R}^2$, we shall write $x = (x_1, x_2)$.
Example 8.2.1. Consider reflection across the $x_2$-axis, so that $T(x_1, x_2) = (-x_1, x_2)$. Clearly we have
$$T(e_1) = \begin{pmatrix} -1 \\ 0 \end{pmatrix} \quad\text{and}\quad T(e_2) = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
and so it follows from Proposition 8A that the standard matrix is given by
$$A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.$$
It is not difficult to see that the standard matrices for reflection across the $x_1$-axis and across the line $x_1 = x_2$ are given respectively by
$$A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad\text{and}\quad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Also, the standard matrix for reflection across the origin is given by
$$A = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Reflection across $x_2$-axis | $y_1 = -x_1$, $y_2 = x_2$ | $\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$
Reflection across $x_1$-axis | $y_1 = x_1$, $y_2 = -x_2$ | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
Reflection across $x_1 = x_2$ | $y_1 = x_2$, $y_2 = x_1$ | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
Reflection across origin | $y_1 = -x_1$, $y_2 = -x_2$ | $\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$
Example 8.2.2. For orthogonal projection onto the $x_1$-axis, we have $T(x_1, x_2) = (x_1, 0)$, with standard matrix
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$
Similarly, the standard matrix for orthogonal projection onto the $x_2$-axis is given by
$$A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Orthogonal projection onto $x_1$-axis | $y_1 = x_1$, $y_2 = 0$ | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$
Orthogonal projection onto $x_2$-axis | $y_1 = 0$, $y_2 = x_2$ | $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$
Example 8.2.3. For anticlockwise rotation by an angle $\theta$, we have $T(x_1, x_2) = (y_1, y_2)$, where
$$y_1 + iy_2 = (x_1 + ix_2)(\cos\theta + i\sin\theta),$$
and so
$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$
It follows that the standard matrix is given by
$$A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Anticlockwise rotation by angle $\theta$ | $y_1 = x_1\cos\theta - x_2\sin\theta$, $y_2 = x_1\sin\theta + x_2\cos\theta$ | $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
Example 8.2.4. For contraction or dilation by a non-negative scalar $k$, we have $T(x_1, x_2) = (kx_1, kx_2)$, with standard matrix
$$A = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}.$$
The operator is called a contraction if $0 < k < 1$ and a dilation if $k > 1$, and can be extended to negative values of $k$ by noting that for $k < 0$, we have
$$\begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} -k & 0 \\ 0 & -k \end{pmatrix}.$$
This describes contraction or dilation by the non-negative scalar $-k$ followed by reflection across the origin.
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Contraction or dilation by factor $k$ | $y_1 = kx_1$, $y_2 = kx_2$ | $\begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}$
Example 8.2.5. For expansion or compression in the $x_1$-direction by a positive factor $k$, we have $T(x_1, x_2) = (kx_1, x_2)$, with standard matrix
$$A = \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}.$$
This can be extended to negative values of $k$ by noting that for $k < 0$, we have
$$\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -k & 0 \\ 0 & 1 \end{pmatrix}.$$
This describes expansion or compression in the $x_1$-direction by the positive factor $-k$ followed by reflection across the $x_2$-axis. Similarly, for expansion or compression in the $x_2$-direction by a non-zero factor $k$, we have the standard matrix
$$A = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Expansion or compression in $x_1$-direction | $y_1 = kx_1$, $y_2 = x_2$ | $\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}$
Expansion or compression in $x_2$-direction | $y_1 = x_1$, $y_2 = kx_2$ | $\begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}$

Example 8.2.6. For shears in the $x_1$-direction with factor $k$, we have $T(x_1, x_2) = (x_1 + kx_2, x_2)$, with standard matrix
$$A = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}.$$
For the case $k = 1$, we have the following picture.
[Figure: the effect of the shear in the $x_1$-direction with factor $k = 1$ on the unit square.]
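The picture can be reproduced by applying the shear matrix to the corners of the unit square; the following Python sketch (assuming numpy, and not part of the original notes) does exactly that for $k = 1$.

import numpy as np

# Shear in the x1-direction with factor k = 1 (Example 8.2.6).
k = 1
A = np.array([[1, k],
              [0, 1]])

# Corners of the unit square, listed as columns.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]])
print(A @ square)   # columns (0,0), (1,0), (2,1), (1,1): the sheared square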
Similarly, for shears in the $x_2$-direction with factor $k$, we have standard matrix
$$A = \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Shear in $x_1$-direction | $y_1 = x_1 + kx_2$, $y_2 = x_2$ | $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$
Shear in $x_2$-direction | $y_1 = x_1$, $y_2 = kx_1 + x_2$ | $\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}$
Example 8.2.7. Consider a linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$ which consists of a reflection across the $x_2$-axis, followed by a shear in the $x_1$-direction with factor 3 and then reflection across the $x_1$-axis. To find the standard matrix, consider the effect of $T$ on the standard basis $\{e_1, e_2\}$ of $\mathbb{R}^2$. Note that
$$e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \mapsto \begin{pmatrix} -1 \\ 0 \end{pmatrix} \mapsto \begin{pmatrix} -1 \\ 0 \end{pmatrix} \mapsto \begin{pmatrix} -1 \\ 0 \end{pmatrix} = T(e_1), \qquad e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \mapsto \begin{pmatrix} 0 \\ 1 \end{pmatrix} \mapsto \begin{pmatrix} 3 \\ 1 \end{pmatrix} \mapsto \begin{pmatrix} 3 \\ -1 \end{pmatrix} = T(e_2),$$
so it follows from Proposition 8A that the standard matrix for $T$ is
$$A = \begin{pmatrix} -1 & 3 \\ 0 & -1 \end{pmatrix}.$$
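The same matrix can be obtained by multiplying the standard matrices of the three individual operators, with the operator applied last written on the left. A small Python sketch (assuming numpy, and not part of the original notes):

import numpy as np

reflect_x2_axis = np.array([[-1, 0], [0, 1]])   # reflection across the x2-axis
shear_x1        = np.array([[1, 3], [0, 1]])    # shear in the x1-direction, factor 3
reflect_x1_axis = np.array([[1, 0], [0, -1]])   # reflection across the x1-axis

# T applies the reflection across the x2-axis first, so it sits rightmost.
A = reflect_x1_axis @ shear_x1 @ reflect_x2_axis
print(A)   # the matrix found in Example 8.2.7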
Let us summarize the above and consider a few special cases. We have the following table of invertible linear operators with $k \neq 0$. Clearly, if $A$ is the standard matrix for an invertible linear operator $T$, then the inverse matrix $A^{-1}$ is the standard matrix for the inverse linear operator $T^{-1}$.

Linear operator $T$ | Standard matrix $A$ | Inverse matrix $A^{-1}$ | Linear operator $T^{-1}$
Reflection across line $x_1 = x_2$ | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | Reflection across line $x_1 = x_2$
Expansion or compression in $x_1$-direction | $\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} k^{-1} & 0 \\ 0 & 1 \end{pmatrix}$ | Expansion or compression in $x_1$-direction
Expansion or compression in $x_2$-direction | $\begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 \\ 0 & k^{-1} \end{pmatrix}$ | Expansion or compression in $x_2$-direction
Shear in $x_1$-direction | $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} 1 & -k \\ 0 & 1 \end{pmatrix}$ | Shear in $x_1$-direction
Shear in $x_2$-direction | $\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 \\ -k & 1 \end{pmatrix}$ | Shear in $x_2$-direction
Next, let us consider the question of elementary row operations on $2 \times 2$ matrices. It is not difficult to see that an elementary row operation performed on a $2 \times 2$ matrix $A$ has the effect of multiplying the matrix $A$ by some elementary matrix $E$ to give the product $EA$. We have the following table.

Elementary row operation | Elementary matrix $E$
Interchanging the two rows | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
Multiplying row 1 by non-zero factor $k$ | $\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}$
Multiplying row 2 by non-zero factor $k$ | $\begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}$
Adding $k$ times row 2 to row 1 | $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$
Adding $k$ times row 1 to row 2 | $\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}$

Now, we know that any invertible matrix $A$ can be reduced to the identity matrix by a finite number of elementary row operations. In other words, there exist a finite number of elementary matrices $E_1, \ldots, E_s$ of the types above with various non-zero values of $k$ such that
$$E_s \ldots E_1 A = I,$$
so that
$$A = E_1^{-1} \ldots E_s^{-1}.$$
We have proved the following result.
PROPOSITION 8B. Suppose that the linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$ has standard matrix $A$, where $A$ is invertible. Then $T$ is the product of a succession of finitely many reflections, expansions, compressions and shears.
In fact, we can prove the following result concerning images of straight lines.
PROPOSITION 8C. Suppose that the linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$ has standard matrix $A$, where $A$ is invertible. Then
(a) the image under $T$ of a straight line is a straight line;
(b) the image under $T$ of a straight line through the origin is a straight line through the origin; and
(c) the images under $T$ of parallel straight lines are parallel straight lines.
Proof. Suppose that $T(x_1, x_2) = (y_1, y_2)$. Since $A$ is invertible, we have $x = A^{-1}y$, where
$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \quad\text{and}\quad y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.$$
The equation of a straight line is given by $\alpha x_1 + \beta x_2 = \gamma$ or, in matrix form, by
$$(\,\alpha \ \ \beta\,) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = (\,\gamma\,).$$
Hence
$$(\,\alpha \ \ \beta\,) A^{-1} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = (\,\gamma\,).$$
Let
$$(\,\alpha' \ \ \beta'\,) = (\,\alpha \ \ \beta\,) A^{-1}.$$
Then
$$(\,\alpha' \ \ \beta'\,) \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = (\,\gamma\,).$$
In other words, the image under $T$ of the straight line $\alpha x_1 + \beta x_2 = \gamma$ is $\alpha' y_1 + \beta' y_2 = \gamma$, clearly another straight line. This proves (a). To prove (b), note that straight lines through the origin correspond to $\gamma = 0$. To prove (c), note that parallel straight lines correspond to different values of $\gamma$ for the same values of $\alpha$ and $\beta$. $\square$
8.3. Elementary Properties of Euclidean Linear Transformations
In this section, we establish a number of simple properties of euclidean linear transformations.
PROPOSITION 8D. Suppose that $T_1 : \mathbb{R}^n \to \mathbb{R}^m$ and $T_2 : \mathbb{R}^m \to \mathbb{R}^k$ are linear transformations. Then $T = T_2 \circ T_1 : \mathbb{R}^n \to \mathbb{R}^k$ is also a linear transformation.
Proof. Since $T_1$ and $T_2$ are linear transformations, they have standard matrices $A_1$ and $A_2$ respectively. In other words, we have $T_1(x) = A_1 x$ for every $x \in \mathbb{R}^n$ and $T_2(y) = A_2 y$ for every $y \in \mathbb{R}^m$. It follows that $T(x) = T_2(T_1(x)) = A_2 A_1 x$ for every $x \in \mathbb{R}^n$, so that $T$ has standard matrix $A_2 A_1$. $\square$
Example 8.3.1. Suppose that $T_1 : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by $\pi/2$ and $T_2 : \mathbb{R}^2 \to \mathbb{R}^2$ is orthogonal projection onto the $x_1$-axis. Then the respective standard matrices are
$$A_1 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad\text{and}\quad A_2 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$
It follows that the standard matrices for $T_2 \circ T_1$ and $T_1 \circ T_2$ are respectively
$$A_2 A_1 = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} \quad\text{and}\quad A_1 A_2 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$$
Hence $T_2 \circ T_1$ and $T_1 \circ T_2$ are not equal.
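This non-commutativity is easy to confirm numerically; the following Python sketch (assuming numpy, and not part of the original notes) compares the two products.

import numpy as np

A1 = np.array([[0, -1], [1, 0]])   # anticlockwise rotation by pi/2
A2 = np.array([[1, 0], [0, 0]])    # orthogonal projection onto the x1-axis

print(A2 @ A1)                              # standard matrix for T2 composed with T1
print(A1 @ A2)                              # standard matrix for T1 composed with T2
print(np.array_equal(A2 @ A1, A1 @ A2))     # False: the compositions differ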
Example 8.3.2. Suppose that $T_1 : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by $\alpha$ and $T_2 : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by $\beta$. Then the respective standard matrices are
$$A_1 = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} \quad\text{and}\quad A_2 = \begin{pmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{pmatrix}.$$
It follows that the standard matrix for $T_2 \circ T_1$ is
$$A_2 A_1 = \begin{pmatrix} \cos\alpha\cos\beta - \sin\alpha\sin\beta & -(\cos\alpha\sin\beta + \sin\alpha\cos\beta) \\ \sin\alpha\cos\beta + \cos\alpha\sin\beta & \cos\alpha\cos\beta - \sin\alpha\sin\beta \end{pmatrix} = \begin{pmatrix} \cos(\alpha+\beta) & -\sin(\alpha+\beta) \\ \sin(\alpha+\beta) & \cos(\alpha+\beta) \end{pmatrix}.$$
Hence $T_2 \circ T_1$ is anticlockwise rotation by $\alpha + \beta$.
Example 8.3.3. The reader should check that in $\mathbb{R}^2$, reflection across the $x_1$-axis followed by reflection across the $x_2$-axis gives reflection across the origin.
Linear transformations that map distinct vectors to distinct vectors are of special importance.
Definition. A linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is said to be one-to-one if for every $x', x'' \in \mathbb{R}^n$, we have $x' = x''$ whenever $T(x') = T(x'')$.
Example 8.3.4. If we consider linear operators $T : \mathbb{R}^2 \to \mathbb{R}^2$, then $T$ is one-to-one precisely when the standard matrix $A$ is invertible. To see this, suppose first of all that $A$ is invertible. If $T(x') = T(x'')$, then $Ax' = Ax''$. Multiplying on the left by $A^{-1}$, we obtain $x' = x''$. Suppose next that $A$ is not invertible. Then there exists $x \in \mathbb{R}^2$ such that $x \neq 0$ and $Ax = 0$. On the other hand, we clearly have $A0 = 0$. It follows that $T(x) = T(0)$, so that $T$ is not one-to-one.
PROPOSITION 8E. Suppose that the linear operator $T : \mathbb{R}^n \to \mathbb{R}^n$ has standard matrix $A$. Then the following statements are equivalent:
(a) The matrix $A$ is invertible.
(b) The linear operator $T$ is one-to-one.
(c) The range of $T$ is $\mathbb{R}^n$; in other words, $R(T) = \mathbb{R}^n$.
Proof. ((a)$\Rightarrow$(b)) Suppose that $T(x') = T(x'')$. Then $Ax' = Ax''$. Multiplying on the left by $A^{-1}$ gives $x' = x''$.
((b)$\Rightarrow$(a)) Suppose that $T$ is one-to-one. Then the system $Ax = 0$ has unique solution $x = 0$ in $\mathbb{R}^n$. It follows that $A$ can be reduced by elementary row operations to the identity matrix $I$, and is therefore invertible.
((a)$\Rightarrow$(c)) For any $y \in \mathbb{R}^n$, clearly $x = A^{-1}y$ satisfies $Ax = y$, so that $T(x) = y$.
((c)$\Rightarrow$(a)) Suppose that $\{e_1, \ldots, e_n\}$ is the standard basis for $\mathbb{R}^n$. Let $x_1, \ldots, x_n \in \mathbb{R}^n$ be chosen to satisfy $T(x_j) = e_j$, so that $Ax_j = e_j$, for every $j = 1, \ldots, n$. Write
$$C = (\,x_1 \ \ldots \ x_n\,).$$
Then $AC = I$, so that $A$ is invertible. $\square$
Definition. Suppose that the linear operator $T : \mathbb{R}^n \to \mathbb{R}^n$ has standard matrix $A$, where $A$ is invertible. Then the linear operator $T^{-1} : \mathbb{R}^n \to \mathbb{R}^n$, defined by $T^{-1}(x) = A^{-1}x$ for every $x \in \mathbb{R}^n$, is called the inverse of the linear operator $T$.
Remark. Clearly $T^{-1}(T(x)) = x$ and $T(T^{-1}(x)) = x$ for every $x \in \mathbb{R}^n$.
Example 8.3.5. Consider the linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$, defined by $T(x) = Ax$ for every $x \in \mathbb{R}^2$, where
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}.$$
Clearly $A$ is invertible, and
$$A^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}.$$
Hence the inverse linear operator is $T^{-1} : \mathbb{R}^2 \to \mathbb{R}^2$, defined by $T^{-1}(x) = A^{-1}x$ for every $x \in \mathbb{R}^2$.
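A quick numerical check of this inverse (a Python sketch assuming numpy, not part of the original notes; the test vector is arbitrary):

import numpy as np

A = np.array([[1, 1],
              [1, 2]])
A_inv = np.linalg.inv(A)
print(A_inv)                  # the inverse matrix of Example 8.3.5

x = np.array([3.0, 5.0])      # an arbitrary test vector
print(A_inv @ (A @ x))        # recovers x, illustrating T^{-1}(T(x)) = x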
Example 8.3.6. Suppose that $T : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by angle $\theta$. The reader should check that $T^{-1} : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by angle $2\pi - \theta$.
Next, we study the linearity properties of euclidean linear transformations which we shall use later to discuss linear transformations in arbitrary real vector spaces.
PROPOSITION 8F. A transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is linear if and only if the following two conditions are satisfied:
(a) For every $u, v \in \mathbb{R}^n$, we have $T(u + v) = T(u) + T(v)$.
(b) For every $u \in \mathbb{R}^n$ and $c \in \mathbb{R}$, we have $T(cu) = cT(u)$.
Proof. Suppose first of all that $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation. Let $A$ be the standard matrix for $T$. Then for every $u, v \in \mathbb{R}^n$ and $c \in \mathbb{R}$, we have
$$T(u + v) = A(u + v) = Au + Av = T(u) + T(v)$$
and
$$T(cu) = A(cu) = c(Au) = cT(u).$$
Suppose now that (a) and (b) hold. To show that $T$ is linear, we need to find a matrix $A$ such that $T(x) = Ax$ for every $x \in \mathbb{R}^n$. Suppose that $\{e_1, \ldots, e_n\}$ is the standard basis for $\mathbb{R}^n$. As suggested by Proposition 8A, we write
$$A = (\,T(e_1) \ \ldots \ T(e_n)\,),$$
where $T(e_j)$ is a column matrix for every $j = 1, \ldots, n$. For any vector
$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$
in $\mathbb{R}^n$, we have
$$Ax = (\,T(e_1) \ \ldots \ T(e_n)\,) \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x_1 T(e_1) + \ldots + x_n T(e_n).$$
Using (b) on each summand and then using (a) inductively, we obtain
$$Ax = T(x_1 e_1) + \ldots + T(x_n e_n) = T(x_1 e_1 + \ldots + x_n e_n) = T(x)$$
as required. $\square$
To conclude our study of euclidean linear transformations, we briefly mention the problem of eigenvalues and eigenvectors of euclidean linear operators.
Definition. Suppose that $T : \mathbb{R}^n \to \mathbb{R}^n$ is a linear operator. Then any real number $\lambda \in \mathbb{R}$ is called an eigenvalue of $T$ if there exists a non-zero vector $x \in \mathbb{R}^n$ such that $T(x) = \lambda x$. This non-zero vector $x \in \mathbb{R}^n$ is called an eigenvector of $T$ corresponding to the eigenvalue $\lambda$.
Remark. Note that the equation $T(x) = \lambda x$ is equivalent to the equation $Ax = \lambda x$. It follows that there is no distinction between eigenvalues and eigenvectors of $T$ and those of the standard matrix $A$. We therefore do not need to discuss this problem any further.
8.4. General Linear Transformations
Suppose that $V$ and $W$ are real vector spaces. To define a linear transformation from $V$ into $W$, we are motivated by Proposition 8F which describes the linearity properties of euclidean linear transformations.
By a transformation from $V$ into $W$, we mean a function of the type $T : V \to W$, with domain $V$ and codomain $W$. For every vector $u \in V$, the vector $T(u) \in W$ is called the image of $u$ under the transformation $T$.
Definition. A transformation $T : V \to W$ from a real vector space $V$ into a real vector space $W$ is called a linear transformation if the following two conditions are satisfied:
(LT1) For every $u, v \in V$, we have $T(u + v) = T(u) + T(v)$.
(LT2) For every $u \in V$ and $c \in \mathbb{R}$, we have $T(cu) = cT(u)$.
Definition. A linear transformation $T : V \to V$ from a real vector space $V$ into itself is called a linear operator on $V$.
Example 8.4.1. Suppose that $V$ and $W$ are two real vector spaces. The transformation $T : V \to W$, where $T(u) = 0$ for every $u \in V$, is clearly linear, and is called the zero transformation from $V$ to $W$.
Example 8.4.2. Suppose that $V$ is a real vector space. The transformation $I : V \to V$, where $I(u) = u$ for every $u \in V$, is clearly linear, and is called the identity operator on $V$.
Example 8.4.3. Suppose that $V$ is a real vector space, and that $k \in \mathbb{R}$ is fixed. The transformation $T : V \to V$, where $T(u) = ku$ for every $u \in V$, is clearly linear. This operator is called a dilation if $k > 1$ and a contraction if $0 < k < 1$.
Example 8.4.4. Suppose that $V$ is a finite dimensional vector space, with basis $\{w_1, \ldots, w_n\}$. Define a transformation $T : V \to \mathbb{R}^n$ as follows. For every $u \in V$, there exists a unique vector $(\beta_1, \ldots, \beta_n) \in \mathbb{R}^n$ such that $u = \beta_1 w_1 + \ldots + \beta_n w_n$. We let $T(u) = (\beta_1, \ldots, \beta_n)$. In other words, the transformation $T$ gives the coordinates of any vector $u \in V$ with respect to the given basis $\{w_1, \ldots, w_n\}$. Suppose now that $v = \gamma_1 w_1 + \ldots + \gamma_n w_n$ is another vector in $V$. Then $u + v = (\beta_1 + \gamma_1)w_1 + \ldots + (\beta_n + \gamma_n)w_n$, so that
$$T(u + v) = (\beta_1 + \gamma_1, \ldots, \beta_n + \gamma_n) = (\beta_1, \ldots, \beta_n) + (\gamma_1, \ldots, \gamma_n) = T(u) + T(v).$$
Also, if $c \in \mathbb{R}$, then $cu = c\beta_1 w_1 + \ldots + c\beta_n w_n$, so that
$$T(cu) = (c\beta_1, \ldots, c\beta_n) = c(\beta_1, \ldots, \beta_n) = cT(u).$$
Hence $T$ is a linear transformation. We shall return to this in greater detail in the next section.
Example 8.4.5. Suppose that $P_n$ denotes the vector space of all polynomials with real coefficients and degree at most $n$. Define a transformation $T : P_n \to P_n$ as follows. For every polynomial
$$p = p_0 + p_1 x + \ldots + p_n x^n$$
in $P_n$, we let
$$T(p) = p_n + p_{n-1} x + \ldots + p_0 x^n.$$
Suppose now that $q = q_0 + q_1 x + \ldots + q_n x^n$ is another polynomial in $P_n$. Then
$$p + q = (p_0 + q_0) + (p_1 + q_1)x + \ldots + (p_n + q_n)x^n,$$
so that
$$T(p + q) = (p_n + q_n) + (p_{n-1} + q_{n-1})x + \ldots + (p_0 + q_0)x^n = (p_n + p_{n-1}x + \ldots + p_0 x^n) + (q_n + q_{n-1}x + \ldots + q_0 x^n) = T(p) + T(q).$$
Also, for any $c \in \mathbb{R}$, we have $cp = cp_0 + cp_1 x + \ldots + cp_n x^n$, so that
$$T(cp) = cp_n + cp_{n-1}x + \ldots + cp_0 x^n = c(p_n + p_{n-1}x + \ldots + p_0 x^n) = cT(p).$$
Hence $T$ is a linear transformation.
Example 8.4.6. Let $V$ denote the vector space of all real valued functions differentiable everywhere in $\mathbb{R}$, and let $W$ denote the vector space of all real valued functions defined on $\mathbb{R}$. Consider the transformation $T : V \to W$, where $T(f) = f'$ for every $f \in V$. It is easy to check from properties of derivatives that $T$ is a linear transformation.
Example 8.4.7. Let $V$ denote the vector space of all real valued functions that are Riemann integrable over the interval $[0, 1]$. Consider the transformation $T : V \to \mathbb{R}$, where
$$T(f) = \int_0^1 f(x)\,\mathrm{d}x$$
for every $f \in V$. It is easy to check from properties of the Riemann integral that $T$ is a linear transformation.
Consider a linear transformation $T : V \to W$ from a finite dimensional real vector space $V$ into a real vector space $W$. Suppose that $\{v_1, \ldots, v_n\}$ is a basis of $V$. Then every $u \in V$ can be written uniquely in the form $u = \beta_1 v_1 + \ldots + \beta_n v_n$, where $\beta_1, \ldots, \beta_n \in \mathbb{R}$. It follows that
$$T(u) = T(\beta_1 v_1 + \ldots + \beta_n v_n) = T(\beta_1 v_1) + \ldots + T(\beta_n v_n) = \beta_1 T(v_1) + \ldots + \beta_n T(v_n).$$
We have therefore proved the following generalization of Proposition 8A.
PROPOSITION 8G. Suppose that $T : V \to W$ is a linear transformation from a finite dimensional real vector space $V$ into a real vector space $W$. Suppose further that $\{v_1, \ldots, v_n\}$ is a basis of $V$. Then $T$ is completely determined by $T(v_1), \ldots, T(v_n)$.
Example 8.4.8. Consider a linear transformation $T : P_2 \to \mathbb{R}$, where $T(1) = 1$, $T(x) = 2$ and $T(x^2) = 3$. Since $\{1, x, x^2\}$ is a basis of $P_2$, this linear transformation is completely determined. In particular, we have, for example,
$$T(5 - 3x + 2x^2) = 5T(1) - 3T(x) + 2T(x^2) = 5.$$
Example 8.4.9. Consider a linear transformation $T : \mathbb{R}^4 \to \mathbb{R}$, where $T(1, 0, 0, 0) = 1$, $T(1, 1, 0, 0) = 2$, $T(1, 1, 1, 0) = 3$ and $T(1, 1, 1, 1) = 4$. Since $\{(1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)\}$ is a basis of $\mathbb{R}^4$, this linear transformation is completely determined. In particular, we have, for example,
$$T(6, 4, 3, 1) = T(2(1, 0, 0, 0) + (1, 1, 0, 0) + 2(1, 1, 1, 0) + (1, 1, 1, 1)) = 2T(1, 0, 0, 0) + T(1, 1, 0, 0) + 2T(1, 1, 1, 0) + T(1, 1, 1, 1) = 14.$$
We also have the following generalization of Proposition 8D.
PROPOSITION 8H. Suppose that $V, W, U$ are real vector spaces. Suppose further that $T_1 : V \to W$ and $T_2 : W \to U$ are linear transformations. Then $T = T_2 \circ T_1 : V \to U$ is also a linear transformation.
Proof. Suppose that $u, v \in V$. Then
$$T(u + v) = T_2(T_1(u + v)) = T_2(T_1(u) + T_1(v)) = T_2(T_1(u)) + T_2(T_1(v)) = T(u) + T(v).$$
Also, if $c \in \mathbb{R}$, then
$$T(cu) = T_2(T_1(cu)) = T_2(cT_1(u)) = cT_2(T_1(u)) = cT(u).$$
Hence $T$ is a linear transformation. $\square$
8.5. Change of Basis
Suppose that $V$ is a real vector space, with basis $\mathcal{B} = \{u_1, \ldots, u_n\}$. Then every vector $u \in V$ can be written uniquely as a linear combination
$$u = \beta_1 u_1 + \ldots + \beta_n u_n, \quad\text{where } \beta_1, \ldots, \beta_n \in \mathbb{R}. \qquad (3)$$
It follows that the vector $u$ can be identified with the vector $(\beta_1, \ldots, \beta_n) \in \mathbb{R}^n$.
Definition. Suppose that $u \in V$ and (3) holds. Then the matrix
$$[u]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix}$$
is called the coordinate matrix of $u$ relative to the basis $\mathcal{B} = \{u_1, \ldots, u_n\}$.
Example 8.5.1. The vectors
$$u_1 = (1, 2, 1, 0), \quad u_2 = (3, 3, 3, 0), \quad u_3 = (2, -10, 0, 0), \quad u_4 = (-2, 1, -6, 2)$$
are linearly independent in $\mathbb{R}^4$, and so $\mathcal{B} = \{u_1, u_2, u_3, u_4\}$ is a basis of $\mathbb{R}^4$. It follows that for any $u = (x, y, z, w) \in \mathbb{R}^4$, we can write
$$u = \beta_1 u_1 + \beta_2 u_2 + \beta_3 u_3 + \beta_4 u_4.$$
In matrix notation, this becomes
$$\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 & -2 \\ 2 & 3 & -10 & 1 \\ 1 & 3 & 0 & -6 \\ 0 & 0 & 0 & 2 \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix},$$
so that
$$[u]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 & -2 \\ 2 & 3 & -10 & 1 \\ 1 & 3 & 0 & -6 \\ 0 & 0 & 0 & 2 \end{pmatrix}^{-1} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}.$$
Remark. Consider a function $\phi : V \to \mathbb{R}^n$, where $\phi(u) = [u]_{\mathcal{B}}$ for every $u \in V$. It is not difficult to see that this function gives rise to a one-to-one correspondence between the elements of $V$ and the elements of $\mathbb{R}^n$. Furthermore, note that
$$[u + v]_{\mathcal{B}} = [u]_{\mathcal{B}} + [v]_{\mathcal{B}} \quad\text{and}\quad [cu]_{\mathcal{B}} = c[u]_{\mathcal{B}},$$
so that $\phi(u + v) = \phi(u) + \phi(v)$ and $\phi(cu) = c\phi(u)$ for every $u, v \in V$ and $c \in \mathbb{R}$. Thus $\phi$ is a linear transformation, and preserves much of the structure of $V$. We also say that $V$ is isomorphic to $\mathbb{R}^n$. In practice, once we have made this identification between vectors and their coordinate matrices, then we can basically forget about the basis $\mathcal{B}$ and imagine that we are working in $\mathbb{R}^n$ with the standard basis.
Clearly, if we change from one basis $\mathcal{B} = \{u_1, \ldots, u_n\}$ to another basis $\mathcal{C} = \{v_1, \ldots, v_n\}$ of $V$, then we also need to find a way of calculating $[u]_{\mathcal{C}}$ in terms of $[u]_{\mathcal{B}}$ for every vector $u \in V$. To do this, note that each of the vectors $v_1, \ldots, v_n$ can be written uniquely as a linear combination of the vectors $u_1, \ldots, u_n$. Suppose that for $i = 1, \ldots, n$, we have
$$v_i = a_{1i}u_1 + \ldots + a_{ni}u_n, \quad\text{where } a_{1i}, \ldots, a_{ni} \in \mathbb{R},$$
so that
$$[v_i]_{\mathcal{B}} = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}.$$
For every $u \in V$, we can write
$$u = \beta_1 u_1 + \ldots + \beta_n u_n = \gamma_1 v_1 + \ldots + \gamma_n v_n, \quad\text{where } \beta_1, \ldots, \beta_n, \gamma_1, \ldots, \gamma_n \in \mathbb{R},$$
so that
$$[u]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} \quad\text{and}\quad [u]_{\mathcal{C}} = \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{pmatrix}.$$
Clearly
$$u = \gamma_1 v_1 + \ldots + \gamma_n v_n = \gamma_1(a_{11}u_1 + \ldots + a_{n1}u_n) + \ldots + \gamma_n(a_{1n}u_1 + \ldots + a_{nn}u_n) = (\gamma_1 a_{11} + \ldots + \gamma_n a_{1n})u_1 + \ldots + (\gamma_1 a_{n1} + \ldots + \gamma_n a_{nn})u_n = \beta_1 u_1 + \ldots + \beta_n u_n.$$
Hence
$$\begin{aligned} \beta_1 &= \gamma_1 a_{11} + \ldots + \gamma_n a_{1n}, \\ &\ \ \vdots \\ \beta_n &= \gamma_1 a_{n1} + \ldots + \gamma_n a_{nn}. \end{aligned}$$
Written in matrix notation, we have
$$\begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nn} \end{pmatrix} \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{pmatrix}.$$
We have proved the following result.
PROPOSITION 8J. Suppose that $\mathcal{B} = \{u_1, \ldots, u_n\}$ and $\mathcal{C} = \{v_1, \ldots, v_n\}$ are two bases of a real vector space $V$. Then for every $u \in V$, we have
$$[u]_{\mathcal{B}} = P[u]_{\mathcal{C}},$$
where the columns of the matrix
$$P = (\,[v_1]_{\mathcal{B}} \ \ldots \ [v_n]_{\mathcal{B}}\,)$$
are precisely the coordinate matrices of the elements of $\mathcal{C}$ relative to the basis $\mathcal{B}$.
Remark. Strictly speaking, Proposition 8J gives $[u]_{\mathcal{B}}$ in terms of $[u]_{\mathcal{C}}$. However, note that the matrix $P$ is invertible (why?), so that $[u]_{\mathcal{C}} = P^{-1}[u]_{\mathcal{B}}$.
Definition. The matrix $P$ in Proposition 8J is sometimes called the transition matrix from the basis $\mathcal{C}$ to the basis $\mathcal{B}$.
Example 8.5.2. We know that with
$$u_1 = (1, 2, 1, 0), \quad u_2 = (3, 3, 3, 0), \quad u_3 = (2, -10, 0, 0), \quad u_4 = (-2, 1, -6, 2),$$
and with
$$v_1 = (1, 2, 1, 0), \quad v_2 = (1, -1, 1, 0), \quad v_3 = (1, 0, -1, 0), \quad v_4 = (0, 0, 0, 2),$$
both $\mathcal{B} = \{u_1, u_2, u_3, u_4\}$ and $\mathcal{C} = \{v_1, v_2, v_3, v_4\}$ are bases of $\mathbb{R}^4$. It is easy to check that
$$\begin{aligned} v_1 &= u_1, \\ v_2 &= -2u_1 + u_2, \\ v_3 &= 11u_1 - 4u_2 + u_3, \\ v_4 &= -27u_1 + 11u_2 - 2u_3 + u_4, \end{aligned}$$
so that
$$P = (\,[v_1]_{\mathcal{B}} \ [v_2]_{\mathcal{B}} \ [v_3]_{\mathcal{B}} \ [v_4]_{\mathcal{B}}\,) = \begin{pmatrix} 1 & -2 & 11 & -27 \\ 0 & 1 & -4 & 11 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Hence $[u]_{\mathcal{B}} = P[u]_{\mathcal{C}}$ for every $u \in \mathbb{R}^4$. It is also easy to check that
$$\begin{aligned} u_1 &= v_1, \\ u_2 &= 2v_1 + v_2, \\ u_3 &= -3v_1 + 4v_2 + v_3, \\ u_4 &= -v_1 - 3v_2 + 2v_3 + v_4, \end{aligned}$$
so that
$$Q = (\,[u_1]_{\mathcal{C}} \ [u_2]_{\mathcal{C}} \ [u_3]_{\mathcal{C}} \ [u_4]_{\mathcal{C}}\,) = \begin{pmatrix} 1 & 2 & -3 & -1 \\ 0 & 1 & 4 & -3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Hence $[u]_{\mathcal{C}} = Q[u]_{\mathcal{B}}$ for every $u \in \mathbb{R}^4$. Note that $PQ = I$. Now let $u = (6, -1, 2, 2)$. We can check that $u = v_1 + 3v_2 + 2v_3 + v_4$, so that
$$[u]_{\mathcal{C}} = \begin{pmatrix} 1 \\ 3 \\ 2 \\ 1 \end{pmatrix}.$$
Then
$$[u]_{\mathcal{B}} = P[u]_{\mathcal{C}} = \begin{pmatrix} 1 & -2 & 11 & -27 \\ 0 & 1 & -4 & 11 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \\ 2 \\ 1 \end{pmatrix} = \begin{pmatrix} -10 \\ 6 \\ 0 \\ 1 \end{pmatrix}.$$
Check that $u = -10u_1 + 6u_2 + u_4$.
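The sign bookkeeping in Example 8.5.2 is easy to get wrong by hand, so the following Python sketch (assuming numpy, and not part of the original notes) recomputes the transition matrices directly from the two bases and confirms that $PQ = I$ and that $[u]_{\mathcal{B}} = P[u]_{\mathcal{C}}$ for $u = (6, -1, 2, 2)$.

import numpy as np

# Columns of U and V are the basis vectors of B and C respectively.
U = np.array([[1, 3,   2, -2],
              [2, 3, -10,  1],
              [1, 3,   0, -6],
              [0, 0,   0,  2]], dtype=float)
V = np.array([[1,  1,  1, 0],
              [2, -1,  0, 0],
              [1,  1, -1, 0],
              [0,  0,  0, 2]], dtype=float)

P = np.linalg.solve(U, V)     # column j solves U p = v_j, so P = ([v_1]_B ... [v_4]_B)
Q = np.linalg.solve(V, U)     # transition matrix from B to C
print(np.round(P))            # the upper triangular matrix of Example 8.5.2
print(np.round(P @ Q))        # the identity matrix

u_C = np.array([1, 3, 2, 1], dtype=float)
print(np.round(P @ u_C))      # (-10, 6, 0, 1), the coordinate matrix [u]_B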
Example 8.5.3. Consider the vector space $P_2$. It is not too difficult to check that
$$u_1 = 1 + x, \quad u_2 = 1 + x^2, \quad u_3 = x + x^2$$
form a basis of $P_2$. Let $u = 1 + 4x - x^2$. Then $u = \beta_1 u_1 + \beta_2 u_2 + \beta_3 u_3$, where
$$1 + 4x - x^2 = \beta_1(1 + x) + \beta_2(1 + x^2) + \beta_3(x + x^2) = (\beta_1 + \beta_2) + (\beta_1 + \beta_3)x + (\beta_2 + \beta_3)x^2,$$
so that $\beta_1 + \beta_2 = 1$, $\beta_1 + \beta_3 = 4$ and $\beta_2 + \beta_3 = -1$. Hence $(\beta_1, \beta_2, \beta_3) = (3, -2, 1)$. If we write $\mathcal{B} = \{u_1, u_2, u_3\}$, then
$$[u]_{\mathcal{B}} = \begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix}.$$
On the other hand, it is also not too difficult to check that
$$v_1 = 1, \quad v_2 = 1 + x, \quad v_3 = 1 + x + x^2$$
form a basis of $P_2$. Also $u = \gamma_1 v_1 + \gamma_2 v_2 + \gamma_3 v_3$, where
$$1 + 4x - x^2 = \gamma_1 + \gamma_2(1 + x) + \gamma_3(1 + x + x^2) = (\gamma_1 + \gamma_2 + \gamma_3) + (\gamma_2 + \gamma_3)x + \gamma_3 x^2,$$
so that $\gamma_1 + \gamma_2 + \gamma_3 = 1$, $\gamma_2 + \gamma_3 = 4$ and $\gamma_3 = -1$. Hence $(\gamma_1, \gamma_2, \gamma_3) = (-3, 5, -1)$. If we write $\mathcal{C} = \{v_1, v_2, v_3\}$, then
$$[u]_{\mathcal{C}} = \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}.$$
Next, note that
$$\begin{aligned} v_1 &= \tfrac{1}{2}u_1 + \tfrac{1}{2}u_2 - \tfrac{1}{2}u_3, \\ v_2 &= u_1, \\ v_3 &= \tfrac{1}{2}u_1 + \tfrac{1}{2}u_2 + \tfrac{1}{2}u_3. \end{aligned}$$
Hence
$$P = (\,[v_1]_{\mathcal{B}} \ [v_2]_{\mathcal{B}} \ [v_3]_{\mathcal{B}}\,) = \begin{pmatrix} 1/2 & 1 & 1/2 \\ 1/2 & 0 & 1/2 \\ -1/2 & 0 & 1/2 \end{pmatrix}.$$
To verify that $[u]_{\mathcal{B}} = P[u]_{\mathcal{C}}$, note that
$$\begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1/2 & 1 & 1/2 \\ 1/2 & 0 & 1/2 \\ -1/2 & 0 & 1/2 \end{pmatrix} \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}.$$
8.6. Kernel and Range
Consider first of all a euclidean linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$. Suppose that $A$ is the standard matrix for $T$. Then the range of the transformation $T$ is given by
$$R(T) = \{T(x) : x \in \mathbb{R}^n\} = \{Ax : x \in \mathbb{R}^n\}.$$
It follows that $R(T)$ is the set of all linear combinations of the columns of the matrix $A$, and is therefore the column space of $A$. On the other hand, the set
$$\{x \in \mathbb{R}^n : Ax = 0\}$$
is the nullspace of $A$.
Recall that the sum of the dimension of the nullspace of $A$ and the dimension of the column space of $A$ is equal to the number of columns of $A$. This is known as the Rank-nullity theorem. The purpose of this section is to extend this result to the setting of linear transformations. To do this, we need the following generalization of the idea of the nullspace and the column space.
Definition. Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ into a real vector space $W$. Then the set
$$\ker(T) = \{u \in V : T(u) = 0\}$$
is called the kernel of $T$, and the set
$$R(T) = \{T(u) : u \in V\}$$
is called the range of $T$.
Example 8.6.1. For a euclidean linear transformation $T$ with standard matrix $A$, we have shown that $\ker(T)$ is the nullspace of $A$, while $R(T)$ is the column space of $A$.
Example 8.6.2. Suppose that $T : V \to W$ is the zero transformation. Clearly we have $\ker(T) = V$ and $R(T) = \{0\}$.
Example 8.6.3. Suppose that $T : V \to V$ is the identity operator on $V$. Clearly we have $\ker(T) = \{0\}$ and $R(T) = V$.
Example 8.6.4. Suppose that $T : \mathbb{R}^2 \to \mathbb{R}^2$ is orthogonal projection onto the $x_1$-axis. Then $\ker(T)$ is the $x_2$-axis, while $R(T)$ is the $x_1$-axis.
Example 8.6.5. Suppose that $T : \mathbb{R}^n \to \mathbb{R}^n$ is one-to-one. Then $\ker(T) = \{0\}$ and $R(T) = \mathbb{R}^n$, in view of Proposition 8E.
Example 8.6.6. Consider the linear transformation $T : V \to W$, where $V$ denotes the vector space of all real valued functions differentiable everywhere in $\mathbb{R}$, where $W$ denotes the space of all real valued functions defined in $\mathbb{R}$, and where $T(f) = f'$ for every $f \in V$. Then $\ker(T)$ is the set of all differentiable functions with derivative 0, and so is the set of all constant functions in $\mathbb{R}$.
Example 8.6.7. Consider the linear transformation $T : V \to \mathbb{R}$, where $V$ denotes the vector space of all real valued functions Riemann integrable over the interval $[0, 1]$, and where
$$T(f) = \int_0^1 f(x)\,\mathrm{d}x$$
for every $f \in V$. Then $\ker(T)$ is the set of all Riemann integrable functions in $[0, 1]$ with zero mean, while $R(T) = \mathbb{R}$.
PROPOSITION 8K. Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ into a real vector space $W$. Then $\ker(T)$ is a subspace of $V$, while $R(T)$ is a subspace of $W$.
Proof. Since $T(0) = 0$, it follows that $0 \in \ker(T) \subseteq V$ and $0 \in R(T) \subseteq W$. For any $u, v \in \ker(T)$, we have
$$T(u + v) = T(u) + T(v) = 0 + 0 = 0,$$
so that $u + v \in \ker(T)$. Suppose further that $c \in \mathbb{R}$. Then
$$T(cu) = cT(u) = c0 = 0,$$
so that $cu \in \ker(T)$. Hence $\ker(T)$ is a subspace of $V$. Suppose next that $w, z \in R(T)$. Then there exist $u, v \in V$ such that $T(u) = w$ and $T(v) = z$. Hence
$$T(u + v) = T(u) + T(v) = w + z,$$
so that $w + z \in R(T)$. Suppose further that $c \in \mathbb{R}$. Then
$$T(cu) = cT(u) = cw,$$
so that $cw \in R(T)$. Hence $R(T)$ is a subspace of $W$. $\square$
To complete this section, we prove the following generalization of the Rank-nullity theorem.
PROPOSITION 8L. Suppose that $T : V \to W$ is a linear transformation from an $n$-dimensional real vector space $V$ into a real vector space $W$. Then
$$\dim\ker(T) + \dim R(T) = n.$$
Proof. Suppose first of all that $\dim\ker(T) = n$. Then $\ker(T) = V$, and so $R(T) = \{0\}$, and the result follows immediately. Suppose next that $\dim\ker(T) = 0$, so that $\ker(T) = \{0\}$. If $\{v_1, \ldots, v_n\}$ is a basis of $V$, then it follows that $T(v_1), \ldots, T(v_n)$ are linearly independent in $W$, for otherwise there exist $c_1, \ldots, c_n \in \mathbb{R}$, not all zero, such that
$$c_1 T(v_1) + \ldots + c_n T(v_n) = 0,$$
so that $T(c_1 v_1 + \ldots + c_n v_n) = 0$, a contradiction since $c_1 v_1 + \ldots + c_n v_n \neq 0$. On the other hand, elements of $R(T)$ are linear combinations of $T(v_1), \ldots, T(v_n)$. Hence $\dim R(T) = n$, and the result again follows immediately. We may therefore assume that $\dim\ker(T) = r$, where $1 \leq r < n$. Let $\{v_1, \ldots, v_r\}$ be a basis of $\ker(T)$. This basis can be extended to a basis $\{v_1, \ldots, v_r, v_{r+1}, \ldots, v_n\}$ of $V$. It suffices to show that
$$T(v_{r+1}), \ldots, T(v_n) \qquad (4)$$
is a basis of $R(T)$. Suppose that $u \in V$. Then there exist $\beta_1, \ldots, \beta_n \in \mathbb{R}$ such that
$$u = \beta_1 v_1 + \ldots + \beta_r v_r + \beta_{r+1} v_{r+1} + \ldots + \beta_n v_n,$$
so that
$$T(u) = \beta_1 T(v_1) + \ldots + \beta_r T(v_r) + \beta_{r+1} T(v_{r+1}) + \ldots + \beta_n T(v_n) = \beta_{r+1} T(v_{r+1}) + \ldots + \beta_n T(v_n).$$
It follows that (4) spans $R(T)$. It remains to prove that its elements are linearly independent. Suppose that $c_{r+1}, \ldots, c_n \in \mathbb{R}$ and
$$c_{r+1} T(v_{r+1}) + \ldots + c_n T(v_n) = 0. \qquad (5)$$
We need to show that
$$c_{r+1} = \ldots = c_n = 0. \qquad (6)$$
By linearity, it follows from (5) that $T(c_{r+1} v_{r+1} + \ldots + c_n v_n) = 0$, so that
$$c_{r+1} v_{r+1} + \ldots + c_n v_n \in \ker(T).$$
Hence there exist $c_1, \ldots, c_r \in \mathbb{R}$ such that
$$c_{r+1} v_{r+1} + \ldots + c_n v_n = c_1 v_1 + \ldots + c_r v_r,$$
so that
$$c_1 v_1 + \ldots + c_r v_r - c_{r+1} v_{r+1} - \ldots - c_n v_n = 0.$$
Since $\{v_1, \ldots, v_n\}$ is a basis of $V$, it follows that $c_1 = \ldots = c_r = c_{r+1} = \ldots = c_n = 0$, so that (6) holds. This completes the proof. $\square$
Remark. We sometimes say that $\dim R(T)$ and $\dim\ker(T)$ are respectively the rank and the nullity of the linear transformation $T$.
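For a euclidean linear transformation the rank and nullity can be read off its standard matrix. The following Python sketch (assuming numpy, and not part of the original notes) checks the Rank-nullity theorem for the standard matrix of Example 8.1.1, viewed as a map from $\mathbb{R}^5$ to $\mathbb{R}^3$.

import numpy as np

# Standard matrix of the transformation in Example 8.1.1.
A = np.array([[2, 3, 5,  7, -9],
              [0, 3, 4,  0,  2],
              [1, 0, 3, -2,  0]])

rank = np.linalg.matrix_rank(A)       # dim R(T), the dimension of the column space
nullity = A.shape[1] - rank           # dim ker(T), by the Rank-nullity theorem
print(rank, nullity, rank + nullity)  # 3 2 5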
8.7. Inverse Linear Transformations
In this section, we generalize some of the ideas first discussed in Section 8.3.
Definition. A linear transformation $T : V \to W$ from a real vector space $V$ into a real vector space $W$ is said to be one-to-one if for every $u', u'' \in V$, we have $u' = u''$ whenever $T(u') = T(u'')$.
The result below follows immediately from our definition.
PROPOSITION 8M. Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ into a real vector space $W$. Then $T$ is one-to-one if and only if $\ker(T) = \{0\}$.
Proof. ($\Rightarrow$) Clearly $0 \in \ker(T)$. Suppose that $\ker(T) \neq \{0\}$. Then there exists a non-zero $v \in \ker(T)$. It follows that $T(v) = T(0)$, and so $T$ is not one-to-one.
($\Leftarrow$) Suppose that $\ker(T) = \{0\}$. Given any $u', u'' \in V$, we have
$$T(u') - T(u'') = T(u' - u'') = 0$$
if and only if $u' - u'' = 0$; in other words, if and only if $u' = u''$. $\square$
We have the following generalization of Proposition 8E.
PROPOSITION 8N. Suppose that $T : V \to V$ is a linear operator on a finite-dimensional real vector space $V$. Then the following statements are equivalent:
(a) The linear operator $T$ is one-to-one.
(b) We have $\ker(T) = \{0\}$.
(c) The range of $T$ is $V$; in other words, $R(T) = V$.
Proof. The equivalence of (a) and (b) is established by Proposition 8M. The equivalence of (b) and (c) follows from Proposition 8L. $\square$
Suppose that $T : V \to W$ is a one-to-one linear transformation from a real vector space $V$ into a real vector space $W$. Then for every $w \in R(T)$, there exists exactly one $u \in V$ such that $T(u) = w$. We can therefore define a transformation $T^{-1} : R(T) \to V$ by writing $T^{-1}(w) = u$, where $u \in V$ is the unique vector satisfying $T(u) = w$.
PROPOSITION 8P. Suppose that $T : V \to W$ is a one-to-one linear transformation from a real vector space $V$ into a real vector space $W$. Then $T^{-1} : R(T) \to V$ is a linear transformation.
Proof. Suppose that $w, z \in R(T)$. Then there exist $u, v \in V$ such that $T^{-1}(w) = u$ and $T^{-1}(z) = v$. It follows that $T(u) = w$ and $T(v) = z$, so that $T(u + v) = T(u) + T(v) = w + z$, whence
$$T^{-1}(w + z) = u + v = T^{-1}(w) + T^{-1}(z).$$
Suppose further that $c \in \mathbb{R}$. Then $T(cu) = cw$, so that
$$T^{-1}(cw) = cu = cT^{-1}(w).$$
This completes the proof. $\square$
We also have the following result concerning compositions of linear transformations, which requires no further proof, in view of our knowledge concerning inverse functions.
PROPOSITION 8Q. Suppose that $V, W, U$ are real vector spaces. Suppose further that $T_1 : V \to W$ and $T_2 : W \to U$ are one-to-one linear transformations. Then
(a) the linear transformation $T_2 \circ T_1 : V \to U$ is one-to-one; and
(b) $(T_2 \circ T_1)^{-1} = T_1^{-1} \circ T_2^{-1}$.
8.8. Matrices of General Linear Transformations
Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ to a real vector space $W$. Suppose further that the vector spaces $V$ and $W$ are finite dimensional, with $\dim V = n$ and $\dim W = m$. We shall show that if we make use of a basis $\mathcal{B}$ of $V$ and a basis $\mathcal{C}$ of $W$, then it is possible to describe $T$ indirectly in terms of some matrix $A$. The main idea is to make use of coordinate matrices relative to the bases $\mathcal{B}$ and $\mathcal{C}$.
Let us recall some discussion in Section 8.5. Suppose that $\mathcal{B} = \{v_1, \ldots, v_n\}$ is a basis of $V$. Then every vector $v \in V$ can be written uniquely as a linear combination
$$v = \beta_1 v_1 + \ldots + \beta_n v_n, \quad\text{where } \beta_1, \ldots, \beta_n \in \mathbb{R}. \qquad (7)$$
The matrix
$$[v]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} \qquad (8)$$
is the coordinate matrix of $v$ relative to the basis $\mathcal{B}$.
Consider now a transformation $\phi : V \to \mathbb{R}^n$, where $\phi(v) = [v]_{\mathcal{B}}$ for every $v \in V$. The proof of the following result is straightforward.
PROPOSITION 8R. Suppose that the real vector space $V$ has basis $\mathcal{B} = \{v_1, \ldots, v_n\}$. Then the transformation $\phi : V \to \mathbb{R}^n$, where $\phi(v) = [v]_{\mathcal{B}}$ satisfies (7) and (8) for every $v \in V$, is a one-to-one linear transformation, with range $R(\phi) = \mathbb{R}^n$. Furthermore, the inverse linear transformation $\phi^{-1} : \mathbb{R}^n \to V$ is also one-to-one, with range $R(\phi^{-1}) = V$.
Suppose next that $\mathcal{C} = \{w_1, \ldots, w_m\}$ is a basis of $W$. Then we can define a linear transformation $\psi : W \to \mathbb{R}^m$, where $\psi(w) = [w]_{\mathcal{C}}$ for every $w \in W$, in a similar way. We now have the following diagram of linear transformations.
[Diagram: $T : V \to W$ along the top, with $\phi : V \to \mathbb{R}^n$ and $\psi : W \to \mathbb{R}^m$ as the vertical maps, and $\psi \circ T \circ \phi^{-1} : \mathbb{R}^n \to \mathbb{R}^m$ along the bottom.]
Clearly the composition
$$S = \psi \circ T \circ \phi^{-1} : \mathbb{R}^n \to \mathbb{R}^m$$
is a euclidean linear transformation, and can therefore be described in terms of a standard matrix $A$. Our task is to determine this matrix $A$ in terms of $T$ and the bases $\mathcal{B}$ and $\mathcal{C}$.
We know from Proposition 8A that
$$A = (\,S(e_1) \ \ldots \ S(e_n)\,),$$
where $\{e_1, \ldots, e_n\}$ is the standard basis for $\mathbb{R}^n$. For every $j = 1, \ldots, n$, we have
$$S(e_j) = (\psi \circ T \circ \phi^{-1})(e_j) = \psi(T(\phi^{-1}(e_j))) = \psi(T(v_j)) = [T(v_j)]_{\mathcal{C}}.$$
It follows that
$$A = (\,[T(v_1)]_{\mathcal{C}} \ \ldots \ [T(v_n)]_{\mathcal{C}}\,). \qquad (9)$$
Definition. The matrix $A$ given by (9) is called the matrix for the linear transformation $T$ with respect to the bases $\mathcal{B}$ and $\mathcal{C}$.
We now have the following diagram of linear transformations.
[Diagram: $T : V \to W$ along the top, $S : \mathbb{R}^n \to \mathbb{R}^m$ along the bottom, with $\phi : V \to \mathbb{R}^n$ on the left and $\psi^{-1} : \mathbb{R}^m \to W$ on the right.]
Hence we can write $T$ as the composition
$$T = \psi^{-1} \circ S \circ \phi : V \to W.$$
For every $v \in V$, we have the following:
$$v \ \overset{\phi}{\longmapsto}\ [v]_{\mathcal{B}} \ \overset{S}{\longmapsto}\ A[v]_{\mathcal{B}} \ \overset{\psi^{-1}}{\longmapsto}\ \psi^{-1}(A[v]_{\mathcal{B}}).$$
More precisely, if $v = \beta_1 v_1 + \ldots + \beta_n v_n$, then
$$[v]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} \quad\text{and}\quad A[v]_{\mathcal{B}} = A\begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} = \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_m \end{pmatrix},$$
say, and so $T(v) = \psi^{-1}(A[v]_{\mathcal{B}}) = \gamma_1 w_1 + \ldots + \gamma_m w_m$. We have proved the following result.
PROPOSITION 8S. Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ into a real vector space $W$. Suppose further that $V$ and $W$ are finite dimensional, with bases $\mathcal{B}$ and $\mathcal{C}$ respectively, and that $A$ is the matrix for the linear transformation $T$ with respect to the bases $\mathcal{B}$ and $\mathcal{C}$. Then for every $v \in V$, we have $T(v) = w$, where $w \in W$ is the unique vector satisfying $[w]_{\mathcal{C}} = A[v]_{\mathcal{B}}$.
Remark. In the special case when $V = W$, the linear transformation $T : V \to W$ is a linear operator on $V$. Of course, we may choose a basis $\mathcal{B}$ for the domain $V$ of $T$ and a basis $\mathcal{C}$ for the codomain $V$ of $T$. In the case when $T$ is the identity linear operator, we often choose $\mathcal{B} \neq \mathcal{C}$ since this represents a change of basis. In the case when $T$ is not the identity operator, we often choose $\mathcal{B} = \mathcal{C}$ for the sake of convenience; we then say that $A$ is the matrix for the linear operator $T$ with respect to the basis $\mathcal{B}$.
Example 8.8.1. Consider an operator $T : P_3 \to P_3$ on the real vector space $P_3$ of all polynomials with real coefficients and degree at most 3, where for every polynomial $p(x)$ in $P_3$, we have $T(p(x)) = xp'(x)$, the product of $x$ with the formal derivative $p'(x)$ of $p(x)$. The reader is invited to check that $T$ is a linear operator. Now consider the basis $\mathcal{B} = \{1, x, x^2, x^3\}$ of $P_3$. The matrix for $T$ with respect to $\mathcal{B}$ is given by
$$A = (\,[T(1)]_{\mathcal{B}} \ [T(x)]_{\mathcal{B}} \ [T(x^2)]_{\mathcal{B}} \ [T(x^3)]_{\mathcal{B}}\,) = (\,[0]_{\mathcal{B}} \ [x]_{\mathcal{B}} \ [2x^2]_{\mathcal{B}} \ [3x^3]_{\mathcal{B}}\,) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$
Suppose that $p(x) = 1 + 2x + 4x^2 + 3x^3$. Then
$$[p(x)]_{\mathcal{B}} = \begin{pmatrix} 1 \\ 2 \\ 4 \\ 3 \end{pmatrix} \quad\text{and}\quad A[p(x)]_{\mathcal{B}} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 4 \\ 3 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \\ 8 \\ 9 \end{pmatrix},$$
so that $T(p(x)) = 2x + 8x^2 + 9x^3$. This can be easily verified by noting that
$$T(p(x)) = xp'(x) = x(2 + 8x + 9x^2) = 2x + 8x^2 + 9x^3.$$
In general, if $p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3$, then
$$[p(x)]_{\mathcal{B}} = \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} \quad\text{and}\quad A[p(x)]_{\mathcal{B}} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} 0 \\ p_1 \\ 2p_2 \\ 3p_3 \end{pmatrix},$$
so that $T(p(x)) = p_1 x + 2p_2 x^2 + 3p_3 x^3$. Observe that
$$T(p(x)) = xp'(x) = x(p_1 + 2p_2 x + 3p_3 x^2) = p_1 x + 2p_2 x^2 + 3p_3 x^3,$$
verifying our result.
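Working in coordinates, the matrix of Example 8.8.1 can be assembled programmatically. The Python sketch below (assuming numpy, and not part of the original notes) represents a polynomial by its coordinate vector with respect to $\{1, x, x^2, x^3\}$ and builds the matrix column by column.

import numpy as np

def T_coeffs(c):
    # c = (p0, p1, p2, p3) are coordinates w.r.t. {1, x, x^2, x^3};
    # x * p'(x) has coordinates (0, p1, 2*p2, 3*p3).
    return np.array([0, c[1], 2 * c[2], 3 * c[3]])

# Matrix for T with respect to B: columns are the images of the basis vectors.
A = np.column_stack([T_coeffs(e) for e in np.eye(4)])
print(A)                              # diagonal matrix with entries 0, 1, 2, 3
print(A @ np.array([1, 2, 4, 3]))     # (0, 2, 8, 9), i.e. 2x + 8x^2 + 9x^3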
Example 8.8.2. Consider the linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$, given by $T(x_1, x_2) = (2x_1 + x_2, x_1 + 3x_2)$ for every $(x_1, x_2) \in \mathbb{R}^2$. Consider also the basis $\mathcal{B} = \{(1, 0), (1, 1)\}$ of $\mathbb{R}^2$. Then the matrix for $T$ with respect to $\mathcal{B}$ is given by
$$A = (\,[T(1, 0)]_{\mathcal{B}} \ [T(1, 1)]_{\mathcal{B}}\,) = (\,[(2, 1)]_{\mathcal{B}} \ [(3, 4)]_{\mathcal{B}}\,) = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix}.$$
Suppose that $(x_1, x_2) = (3, 2)$. Then
$$[(3, 2)]_{\mathcal{B}} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \quad\text{and}\quad A[(3, 2)]_{\mathcal{B}} = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 9 \end{pmatrix},$$
so that $T(3, 2) = -(1, 0) + 9(1, 1) = (8, 9)$. This can be easily verified directly. In general, we have
$$[(x_1, x_2)]_{\mathcal{B}} = \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} \quad\text{and}\quad A[(x_1, x_2)]_{\mathcal{B}} = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 - 2x_2 \\ x_1 + 3x_2 \end{pmatrix},$$
so that $T(x_1, x_2) = (x_1 - 2x_2)(1, 0) + (x_1 + 3x_2)(1, 1) = (2x_1 + x_2, x_1 + 3x_2)$.
Example 8.8.3. Suppose that $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation. Suppose further that $\mathcal{B}$ and $\mathcal{C}$ are the standard bases for $\mathbb{R}^n$ and $\mathbb{R}^m$ respectively. Then the matrix for $T$ with respect to $\mathcal{B}$ and $\mathcal{C}$ is given by
$$A = (\,[T(e_1)]_{\mathcal{C}} \ \ldots \ [T(e_n)]_{\mathcal{C}}\,) = (\,T(e_1) \ \ldots \ T(e_n)\,),$$
so it follows from Proposition 8A that $A$ is simply the standard matrix for $T$.
Suppose now that $T_1 : V \to W$ and $T_2 : W \to U$ are linear transformations, where the real vector spaces $V, W, U$ are finite dimensional, with respective bases $\mathcal{B} = \{v_1, \ldots, v_n\}$, $\mathcal{C} = \{w_1, \ldots, w_m\}$ and $\mathcal{D} = \{u_1, \ldots, u_k\}$. We then have the following diagram of linear transformations.
[Diagram: $T_1 : V \to W$ and $T_2 : W \to U$ along the top, $S_1 : \mathbb{R}^n \to \mathbb{R}^m$ and $S_2 : \mathbb{R}^m \to \mathbb{R}^k$ along the bottom, with $\phi : V \to \mathbb{R}^n$, $\psi : W \to \mathbb{R}^m$ and $\sigma : U \to \mathbb{R}^k$ joining the two rows.]
Here $\sigma : U \to \mathbb{R}^k$, where $\sigma(u) = [u]_{\mathcal{D}}$ for every $u \in U$, is a linear transformation, and
$$S_1 = \psi \circ T_1 \circ \phi^{-1} : \mathbb{R}^n \to \mathbb{R}^m \quad\text{and}\quad S_2 = \sigma \circ T_2 \circ \psi^{-1} : \mathbb{R}^m \to \mathbb{R}^k$$
are euclidean linear transformations. Suppose that $A_1$ and $A_2$ are respectively the standard matrices for $S_1$ and $S_2$, so that they are respectively the matrix for $T_1$ with respect to $\mathcal{B}$ and $\mathcal{C}$ and the matrix for $T_2$ with respect to $\mathcal{C}$ and $\mathcal{D}$. Clearly
$$S_2 \circ S_1 = \sigma \circ T_2 \circ T_1 \circ \phi^{-1} : \mathbb{R}^n \to \mathbb{R}^k.$$
It follows that $A_2 A_1$ is the standard matrix for $S_2 \circ S_1$, and so is the matrix for $T_2 \circ T_1$ with respect to the bases $\mathcal{B}$ and $\mathcal{D}$. To summarize, we have the following result.
PROPOSITION 8T. Suppose that $T_1 : V \to W$ and $T_2 : W \to U$ are linear transformations, where the real vector spaces $V, W, U$ are finite dimensional, with bases $\mathcal{B}, \mathcal{C}, \mathcal{D}$ respectively. Suppose further that $A_1$ is the matrix for the linear transformation $T_1$ with respect to the bases $\mathcal{B}$ and $\mathcal{C}$, and that $A_2$ is the matrix for the linear transformation $T_2$ with respect to the bases $\mathcal{C}$ and $\mathcal{D}$. Then $A_2 A_1$ is the matrix for the linear transformation $T_2 \circ T_1$ with respect to the bases $\mathcal{B}$ and $\mathcal{D}$.
Example 8.8.4. Consider the linear operator $T_1 : P_3 \to P_3$, where for every polynomial $p(x)$ in $P_3$, we have $T_1(p(x)) = xp'(x)$. We have already shown that the matrix for $T_1$ with respect to the basis $\mathcal{B} = \{1, x, x^2, x^3\}$ of $P_3$ is given by
$$A_1 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$
Consider next the linear operator $T_2 : P_3 \to P_3$, where for every polynomial $q(x) = q_0 + q_1 x + q_2 x^2 + q_3 x^3$ in $P_3$, we have
$$T_2(q(x)) = q(1 + x) = q_0 + q_1(1 + x) + q_2(1 + x)^2 + q_3(1 + x)^3.$$
We have $T_2(1) = 1$, $T_2(x) = 1 + x$, $T_2(x^2) = 1 + 2x + x^2$ and $T_2(x^3) = 1 + 3x + 3x^2 + x^3$, so that the matrix for $T_2$ with respect to $\mathcal{B}$ is given by
$$A_2 = (\,[T_2(1)]_{\mathcal{B}} \ [T_2(x)]_{\mathcal{B}} \ [T_2(x^2)]_{\mathcal{B}} \ [T_2(x^3)]_{\mathcal{B}}\,) = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Consider now the composition $T = T_2 \circ T_1 : P_3 \to P_3$. Let $A$ denote the matrix for $T$ with respect to $\mathcal{B}$. By Proposition 8T, we have
$$A = A_2 A_1 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 1 & 4 & 9 \\ 0 & 0 & 2 & 9 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$
Suppose that $p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3$. Then
$$[p(x)]_{\mathcal{B}} = \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} \quad\text{and}\quad A[p(x)]_{\mathcal{B}} = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 1 & 4 & 9 \\ 0 & 0 & 2 & 9 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} p_1 + 2p_2 + 3p_3 \\ p_1 + 4p_2 + 9p_3 \\ 2p_2 + 9p_3 \\ 3p_3 \end{pmatrix},$$
so that $T(p(x)) = (p_1 + 2p_2 + 3p_3) + (p_1 + 4p_2 + 9p_3)x + (2p_2 + 9p_3)x^2 + 3p_3 x^3$. We can check this directly by noting that
$$T(p(x)) = T_2(T_1(p(x))) = T_2(p_1 x + 2p_2 x^2 + 3p_3 x^3) = p_1(1 + x) + 2p_2(1 + x)^2 + 3p_3(1 + x)^3 = (p_1 + 2p_2 + 3p_3) + (p_1 + 4p_2 + 9p_3)x + (2p_2 + 9p_3)x^2 + 3p_3 x^3.$$
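The matrix product in Example 8.8.4 is easy to confirm with a short Python sketch (assuming numpy, and not part of the original notes).

import numpy as np

A1 = np.diag([0, 1, 2, 3])        # matrix for T1(p) = x p'(x) w.r.t. {1, x, x^2, x^3}
A2 = np.array([[1, 1, 1, 1],
               [0, 1, 2, 3],
               [0, 0, 1, 3],
               [0, 0, 0, 1]])     # matrix for T2(q) = q(1 + x)
print(A2 @ A1)                    # the matrix for T2 composed with T1, as in Example 8.8.4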
Example 8.8.5. Consider the linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$, given by $T(x_1, x_2) = (2x_1 + x_2, x_1 + 3x_2)$ for every $(x_1, x_2) \in \mathbb{R}^2$. We have already shown that the matrix for $T$ with respect to the basis $\mathcal{B} = \{(1, 0), (1, 1)\}$ of $\mathbb{R}^2$ is given by
$$A = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix}.$$
Consider the linear operator $T^2 = T \circ T : \mathbb{R}^2 \to \mathbb{R}^2$. By Proposition 8T, the matrix for $T^2$ with respect to $\mathcal{B}$ is given by
$$A^2 = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} = \begin{pmatrix} 0 & -5 \\ 5 & 15 \end{pmatrix}.$$
Suppose that $(x_1, x_2) \in \mathbb{R}^2$. Then
$$[(x_1, x_2)]_{\mathcal{B}} = \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} \quad\text{and}\quad A^2[(x_1, x_2)]_{\mathcal{B}} = \begin{pmatrix} 0 & -5 \\ 5 & 15 \end{pmatrix} \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} = \begin{pmatrix} -5x_2 \\ 5x_1 + 10x_2 \end{pmatrix},$$
so that $T^2(x_1, x_2) = -5x_2(1, 0) + (5x_1 + 10x_2)(1, 1) = (5x_1 + 5x_2, 5x_1 + 10x_2)$. The reader is invited to check this directly.
A simple consequence of Propositions 8N and 8T is the following result concerning inverse linear transformations.
PROPOSITION 8U. Suppose that $T : V \to V$ is a linear operator on a finite dimensional real vector space $V$ with basis $\mathcal{B}$. Suppose further that $A$ is the matrix for the linear operator $T$ with respect to the basis $\mathcal{B}$. Then $T$ is one-to-one if and only if $A$ is invertible. Furthermore, if $T$ is one-to-one, then $A^{-1}$ is the matrix for the inverse linear operator $T^{-1} : V \to V$ with respect to the basis $\mathcal{B}$.
Proof. Simply note that $T$ is one-to-one if and only if the system $Ax = 0$ has only the trivial solution $x = 0$. The last assertion follows easily from Proposition 8T, since if $A'$ denotes the matrix for the inverse linear operator $T^{-1}$ with respect to $\mathcal{B}$, then we must have $A'A = I$, the matrix for the identity operator $T^{-1} \circ T$ with respect to $\mathcal{B}$. $\square$
Example 8.8.6. Consider the linear operator $T : P_3 \to P_3$, where for every $q(x) = q_0 + q_1 x + q_2 x^2 + q_3 x^3$ in $P_3$, we have
$$T(q(x)) = q(1 + x) = q_0 + q_1(1 + x) + q_2(1 + x)^2 + q_3(1 + x)^3.$$
We have already shown that the matrix for $T$ with respect to the basis $\mathcal{B} = \{1, x, x^2, x^3\}$ is given by
$$A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
This matrix is invertible, so it follows that $T$ is one-to-one. Furthermore, it can be checked that
$$A^{-1} = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Suppose that $p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3$. Then
$$[p(x)]_{\mathcal{B}} = \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} \quad\text{and}\quad A^{-1}[p(x)]_{\mathcal{B}} = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} p_0 - p_1 + p_2 - p_3 \\ p_1 - 2p_2 + 3p_3 \\ p_2 - 3p_3 \\ p_3 \end{pmatrix},$$
so that
$$T^{-1}(p(x)) = (p_0 - p_1 + p_2 - p_3) + (p_1 - 2p_2 + 3p_3)x + (p_2 - 3p_3)x^2 + p_3 x^3 = p_0 + p_1(x - 1) + p_2(x^2 - 2x + 1) + p_3(x^3 - 3x^2 + 3x - 1) = p_0 + p_1(x - 1) + p_2(x - 1)^2 + p_3(x - 1)^3 = p(x - 1).$$
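The inverse matrix in Example 8.8.6 can be confirmed numerically; the following Python sketch (assuming numpy, and not part of the original notes) also applies it to the coordinates of a sample polynomial, which by the example should produce the coordinates of $p(x - 1)$.

import numpy as np

A = np.array([[1, 1, 1, 1],
              [0, 1, 2, 3],
              [0, 0, 1, 3],
              [0, 0, 0, 1]])
A_inv = np.linalg.inv(A)
print(np.round(A_inv))        # the inverse matrix claimed in Example 8.8.6

# Coordinates of a sample polynomial, here p(x) = 1 + 2x + 4x^2 + 3x^3.
p = np.array([1, 2, 4, 3])
print(np.round(A_inv @ p))    # coordinates of p(x - 1) with respect to {1, x, x^2, x^3}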
8.9. Change of Basis
Suppose that $V$ is a finite dimensional real vector space, with one basis $\mathcal{B} = \{v_1, \ldots, v_n\}$ and another basis $\mathcal{B}' = \{u_1, \ldots, u_n\}$. Suppose that $T : V \to V$ is a linear operator on $V$. Let $A$ denote the matrix for $T$ with respect to the basis $\mathcal{B}$, and let $A'$ denote the matrix for $T$ with respect to the basis $\mathcal{B}'$. If $v \in V$ and $T(v) = w$, then
$$[w]_{\mathcal{B}} = A[v]_{\mathcal{B}} \qquad (10)$$
and
$$[w]_{\mathcal{B}'} = A'[v]_{\mathcal{B}'}. \qquad (11)$$
We wish to find the relationship between $A'$ and $A$.
Recall Proposition 8J, that if
$$P = (\,[u_1]_{\mathcal{B}} \ \ldots \ [u_n]_{\mathcal{B}}\,)$$
denotes the transition matrix from the basis $\mathcal{B}'$ to the basis $\mathcal{B}$, then
$$[v]_{\mathcal{B}} = P[v]_{\mathcal{B}'} \quad\text{and}\quad [w]_{\mathcal{B}} = P[w]_{\mathcal{B}'}. \qquad (12)$$
Note that the matrix $P$ can also be interpreted as the matrix for the identity operator $I : V \to V$ with respect to the bases $\mathcal{B}'$ and $\mathcal{B}$. It is easy to see that the matrix $P$ is invertible, and
$$P^{-1} = (\,[v_1]_{\mathcal{B}'} \ \ldots \ [v_n]_{\mathcal{B}'}\,)$$
denotes the transition matrix from the basis $\mathcal{B}$ to the basis $\mathcal{B}'$, and can also be interpreted as the matrix for the identity operator $I : V \to V$ with respect to the bases $\mathcal{B}$ and $\mathcal{B}'$.
[w]
B
= P
1
[w]
B
= P
1
A[v]
B
= P
1
AP[v]
B
.
Comparing this with (11), we conclude that
P
1
AP = A

. (13)
This implies that
A = PA

P
1
. (14)
Remark. We can use the notation
A = [T]
B
and A

= [T]
B

to denote that A and A

are the matrices for T with respect to the basis B and with respect to the basis
B

respectively. We can also write


P = [I]
B,B

to denote that P is the transition matrix from the basis B

to the basis B, so that


P
1
= [I]
B

,B
.
Then (13) and (14) become respectively

[I]_{B',B}[T]_B[I]_{B,B'} = [T]_{B'}    and    [I]_{B,B'}[T]_{B'}[I]_{B',B} = [T]_B.

We have proved the following result.

PROPOSITION 8V. Suppose that T : V → V is a linear operator on a finite dimensional real vector space V, with bases B = {v_1, ..., v_n} and B' = {u_1, ..., u_n}. Suppose further that A and A' are the matrices for T with respect to the basis B and with respect to the basis B' respectively. Then

P^{-1}AP = A'    and    A = PA'P^{-1},

where

P = ( [u_1]_B  ...  [u_n]_B )

denotes the transition matrix from the basis B' to the basis B.

Remarks. (1) We have the following picture.
[Diagram: a commutative square. Along the top, T sends v to w; along the bottom, A' maps [v]_{B'} to [w]_{B'} and A maps [v]_B to [w]_B, while the transition matrices P and P^{-1} pass between B'-coordinates and B-coordinates, corresponding to the identity operator I on V.]
(2) The idea can be extended to the case of linear transformations T : V → W from a finite dimensional real vector space into another, with a change of basis in V and a change of basis in W.

Example 8.9.1. Consider the vector space P_3 of all polynomials with real coefficients and degree at most 3, with bases B = {1, x, x^2, x^3} and B' = {1, 1 + x, 1 + x + x^2, 1 + x + x^2 + x^3}. Consider also the linear operator T : P_3 → P_3, where for every polynomial p(x) = p_0 + p_1x + p_2x^2 + p_3x^3, we have T(p(x)) = (p_0 + p_1) + (p_1 + p_2)x + (p_2 + p_3)x^2 + (p_0 + p_3)x^3. Let A denote the matrix for T with respect to the basis B. Then T(1) = 1 + x^3, T(x) = 1 + x, T(x^2) = x + x^2 and T(x^3) = x^2 + x^3, and so

A = ( [T(1)]_B  [T(x)]_B  [T(x^2)]_B  [T(x^3)]_B ) = ( 1 1 0 0 )
                                                     ( 0 1 1 0 )
                                                     ( 0 0 1 1 )
                                                     ( 1 0 0 1 ).

Next, note that the transition matrix from the basis B' to the basis B is given by

P = ( [1]_B  [1 + x]_B  [1 + x + x^2]_B  [1 + x + x^2 + x^3]_B ) = ( 1 1 1 1 )
                                                                   ( 0 1 1 1 )
                                                                   ( 0 0 1 1 )
                                                                   ( 0 0 0 1 ).
It can be checked that

P^{-1} = ( 1 -1  0  0 )
         ( 0  1 -1  0 )
         ( 0  0  1 -1 )
         ( 0  0  0  1 ),

and so

A' = P^{-1}AP = ( 1 -1  0  0 ) ( 1 1 0 0 ) ( 1 1 1 1 )   (  1  1  0  0 )
                ( 0  1 -1  0 ) ( 0 1 1 0 ) ( 0 1 1 1 ) = (  0  1  1  0 )
                ( 0  0  1 -1 ) ( 0 0 1 1 ) ( 0 0 1 1 )   ( -1 -1  0  0 )
                ( 0  0  0  1 ) ( 1 0 0 1 ) ( 0 0 0 1 )   (  1  1  1  2 )

is the matrix for T with respect to the basis B'. It follows that

T(1) = 1 - (1 + x + x^2) + (1 + x + x^2 + x^3) = 1 + x^3,
T(1 + x) = 1 + (1 + x) - (1 + x + x^2) + (1 + x + x^2 + x^3) = 2 + x + x^3,
T(1 + x + x^2) = (1 + x) + (1 + x + x^2 + x^3) = 2 + 2x + x^2 + x^3,
T(1 + x + x^2 + x^3) = 2(1 + x + x^2 + x^3) = 2 + 2x + 2x^2 + 2x^3.

These can be verified directly.
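The change-of-basis computation A' = P^{-1}AP also lends itself to a direct machine check; the following sketch (Python with numpy, illustrative only) reproduces the matrix A' obtained above.

```python
import numpy as np

A = np.array([[1, 1, 0, 0],                 # matrix for T with respect to B = {1, x, x^2, x^3}
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)

P = np.array([[1, 1, 1, 1],                 # transition matrix from B' to B
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

A_prime = np.linalg.inv(P) @ A @ P
print(np.round(A_prime))
# rows: ( 1  1  0  0), ( 0  1  1  0), (-1 -1  0  0), ( 1  1  1  2), as computed above
```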
8.10. Eigenvalues and Eigenvectors

Definition. Suppose that T : V → V is a linear operator on a finite dimensional real vector space V. Then any real number λ ∈ R is called an eigenvalue of T if there exists a non-zero vector v ∈ V such that T(v) = λv. This non-zero vector v ∈ V is called an eigenvector of T corresponding to the eigenvalue λ.

The purpose of this section is to show that the problem of eigenvalues and eigenvectors of the linear operator T can be reduced to the problem of eigenvalues and eigenvectors of the matrix for T with respect to any basis B of V. The starting point of our argument is the following theorem, the proof of which is left as an exercise.

PROPOSITION 8W. Suppose that T : V → V is a linear operator on a finite dimensional real vector space V, with bases B and B'. Suppose further that A and A' are the matrices for T with respect to the basis B and with respect to the basis B' respectively. Then
(a) det A = det A';
(b) A and A' have the same rank;
(c) A and A' have the same characteristic polynomial;
(d) A and A' have the same eigenvalues; and
(e) the dimension of the eigenspace of A corresponding to an eigenvalue λ is equal to the dimension of the eigenspace of A' corresponding to λ.
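These invariants can be observed concretely on the matrices A and A' of Example 8.9.1; a minimal sketch (Python with numpy, illustrative only) follows.

```python
import numpy as np

A = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]], dtype=float)
P = np.array([[1, 1, 1, 1], [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]], dtype=float)
A_prime = np.linalg.inv(P) @ A @ P                                    # the matrix for T w.r.t. B'

print(np.isclose(np.linalg.det(A), np.linalg.det(A_prime)))           # (a) equal determinants
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A_prime))     # (b) equal ranks
print(np.allclose(np.poly(A), np.poly(A_prime)))                      # (c) same characteristic polynomial
```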
We also state without proof the following result.

PROPOSITION 8X. Suppose that T : V → V is a linear operator on a finite dimensional real vector space V. Suppose further that A is the matrix for T with respect to a basis B of V. Then
(a) the eigenvalues of T are precisely the eigenvalues of A; and
(b) a vector u ∈ V is an eigenvector of T corresponding to an eigenvalue λ if and only if the coordinate matrix [u]_B is an eigenvector of A corresponding to the eigenvalue λ.

Suppose now that A is the matrix for a linear operator T : V → V on a finite dimensional real vector space V with respect to a basis B = {v_1, ..., v_n}. If A can be diagonalized, then there exists an invertible matrix P such that

P^{-1}AP = D

is a diagonal matrix. Furthermore, the columns of P are eigenvectors of A, and so are the coordinate matrices of eigenvectors of T with respect to the basis B. In other words,

P = ( [u_1]_B  ...  [u_n]_B ),

where B' = {u_1, ..., u_n} is a basis of V consisting of eigenvectors of T. Furthermore, P is the transition matrix from the basis B' to the basis B. It follows that the matrix for T with respect to the basis B' is given by

D = ( λ_1         )
    (      ⋱      )
    (         λ_n ),

where λ_1, ..., λ_n are the eigenvalues of T.
Example 8.10.1. Consider the vector space P_2 of all polynomials with real coefficients and degree at most 2, with basis B = {1, x, x^2}. Consider also the linear operator T : P_2 → P_2, where for every polynomial p(x) = p_0 + p_1x + p_2x^2, we have T(p(x)) = (5p_0 - 2p_1) + (6p_1 + 2p_2 - 2p_0)x + (2p_1 + 7p_2)x^2. Then T(1) = 5 - 2x, T(x) = -2 + 6x + 2x^2 and T(x^2) = 2x + 7x^2, so that the matrix for T with respect to the basis B is given by

A = ( [T(1)]_B  [T(x)]_B  [T(x^2)]_B ) = (  5 -2  0 )
                                         ( -2  6  2 )
                                         (  0  2  7 ).

It is a simple exercise to show that the matrix A has eigenvalues 3, 6, 9, with corresponding eigenvectors

x_1 = (  2 )      x_2 = (  2 )      x_3 = ( -1 )
      (  2 )            ( -1 )            (  2 )
      ( -1 )            (  2 )            (  2 ),

so that writing

P = (  2  2 -1 )
    (  2 -1  2 )
    ( -1  2  2 ),

we have

P^{-1}AP = ( 3 0 0 )
           ( 0 6 0 )
           ( 0 0 9 ).

Now let B' = {p_1(x), p_2(x), p_3(x)}, where

[p_1(x)]_B = (  2 )      [p_2(x)]_B = (  2 )      [p_3(x)]_B = ( -1 )
             (  2 )                   ( -1 )                   (  2 )
             ( -1 )                   (  2 )                   (  2 ).

Then P is the transition matrix from the basis B' to the basis B, and D = P^{-1}AP is the matrix for T with respect to the basis B'. Clearly p_1(x) = 2 + 2x - x^2, p_2(x) = 2 - x + 2x^2 and p_3(x) = -1 + 2x + 2x^2. Note now that

T(p_1(x)) = T(2 + 2x - x^2) = 6 + 6x - 3x^2 = 3p_1(x),
T(p_2(x)) = T(2 - x + 2x^2) = 12 - 6x + 12x^2 = 6p_2(x),
T(p_3(x)) = T(-1 + 2x + 2x^2) = -9 + 18x + 18x^2 = 9p_3(x).
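Since A here is symmetric, the eigenvalues and the diagonalization can also be confirmed numerically; the sketch below (Python with numpy, illustrative only) does this.

```python
import numpy as np

A = np.array([[ 5, -2,  0],
              [-2,  6,  2],
              [ 0,  2,  7]], dtype=float)      # matrix for T with respect to B = {1, x, x^2}

P = np.array([[ 2,  2, -1],
              [ 2, -1,  2],
              [-1,  2,  2]], dtype=float)      # columns are the coordinate vectors of p_1, p_2, p_3

print(np.linalg.eigvalsh(A))                   # [3. 6. 9.], the eigenvalues of A (and of T)
print(np.round(np.linalg.inv(P) @ A @ P, 10))  # the diagonal matrix diag(3, 6, 9)
```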
Problems for Chapter 8

1. Consider the transformation T : R^3 → R^4, given by

T(x_1, x_2, x_3) = (x_1 + x_2 + x_3, x_2 + x_3, 3x_1 + x_2, 2x_2 + x_3)

for every (x_1, x_2, x_3) ∈ R^3.
a) Find the standard matrix A for T.
b) By reducing A to row echelon form, determine the dimension of the kernel of T and the dimension of the range of T.

2. Consider a linear operator T : R^3 → R^3 with standard matrix

A = ( 1 2 3 )
    ( 2 1 3 )
    ( 1 3 2 ).

Let {e_1, e_2, e_3} denote the standard basis for R^3.
a) Find T(e_j) for every j = 1, 2, 3.
b) Find T(2e_1 + 5e_2 + 3e_3).
c) Is T invertible? Justify your assertion.

3. Consider the linear operator T : R^2 → R^2 with standard matrix

A = ( 1 1 )
    ( 0 1 ).

a) Find the image under T of the line x_1 + 2x_2 = 3.
b) Find the image under T of the circle x_1^2 + x_2^2 = 1.

4. For each of the following, determine whether the given transformation is linear:
a) T : V → R, where V is a real inner product space and T(u) = ‖u‖.
b) T : M_{2,2}(R) → M_{2,3}(R), where B ∈ M_{2,3}(R) is fixed and T(A) = AB.
c) T : M_{3,4}(R) → M_{4,3}(R), where T(A) = A^t.
d) T : P_2 → P_2, where T(p_0 + p_1x + p_2x^2) = p_0 + p_1(2 + x) + p_2(2 + x)^2.
e) T : P_2 → P_2, where T(p_0 + p_1x + p_2x^2) = p_0 + p_1x + (p_2 + 1)x^2.

5. Suppose that T : R^3 → R^3 is a linear transformation satisfying the conditions T(1, 0, 0) = (2, 4, 1), T(1, 1, 0) = (3, 0, 2) and T(1, 1, 1) = (1, 4, 6).
a) Evaluate T(5, 3, 2).
b) Find T(x_1, x_2, x_3) for every (x_1, x_2, x_3) ∈ R^3.

6. Suppose that T : R^3 → R^3 is orthogonal projection onto the x_1x_2-plane.
a) Find the standard matrix A for T.
b) Find A^2.
c) Show that T ∘ T = T.

7. Consider the bases B = {u_1, u_2, u_3} and C = {v_1, v_2, v_3} of R^3, where u_1 = (2, 1, 1), u_2 = (2, 1, 1), u_3 = (1, 2, 1), v_1 = (3, 1, 5), v_2 = (1, 1, 3) and v_3 = (1, 0, 2).
a) Find the transition matrix from the basis C to the basis B.
b) Find the transition matrix from the basis B to the basis C.
c) Show that the matrices in parts (a) and (b) are inverses of each other.
d) Compute the coordinate matrix [u]_C, where u = (5, 8, 5).
e) Use the transition matrix to compute the coordinate matrix [u]_B.
f) Compute the coordinate matrix [u]_B directly and compare it to your answer in part (e).
8. Consider the bases B = {p_1, p_2} and C = {q_1, q_2} of P_1, where p_1 = 2, p_2 = 3 + 2x, q_1 = 6 + 3x and q_2 = 10 + 2x.
a) Find the transition matrix from the basis C to the basis B.
b) Find the transition matrix from the basis B to the basis C.
c) Show that the matrices in parts (a) and (b) are inverses of each other.
d) Compute the coordinate matrix [p]_C, where p = 4 + x.
e) Use the transition matrix to compute the coordinate matrix [p]_B.
f) Compute the coordinate matrix [p]_B directly and compare it to your answer in part (e).

9. Let V be the real vector space spanned by the functions f_1 = sin x and f_2 = cos x.
a) Show that g_1 = 2 sin x + cos x and g_2 = 3 cos x form a basis of V.
b) Find the transition matrix from the basis C = {g_1, g_2} to the basis B = {f_1, f_2} of V.
c) Compute the coordinate matrix [f]_C, where f = 2 sin x - 5 cos x.
d) Use the transition matrix to compute the coordinate matrix [f]_B.
e) Compute the coordinate matrix [f]_B directly and compare it to your answer in part (d).

10. Let P be the transition matrix from a basis C to another basis B of a real vector space V. Explain why P is invertible.

11. For each of the following linear transformations T, find ker(T) and R(T), and verify the Rank-nullity theorem:
a) T : R^3 → R^3, with standard matrix

A = ( 1 1 3 )
    ( 5 6 4 )
    ( 7 4 2 ).

b) T : P_3 → P_2, where T(p(x)) = p'(x), the formal derivative.
c) T : P_1 → R, where T(p(x)) = ∫_0^1 p(x) dx.

12. For each of the following, determine whether the linear operator T : R^n → R^n is one-to-one. If so, find also the inverse linear operator T^{-1} : R^n → R^n:
a) T(x_1, x_2, x_3, ..., x_n) = (x_2, x_1, x_3, ..., x_n)
b) T(x_1, x_2, x_3, ..., x_n) = (x_2, x_3, ..., x_n, x_1)
c) T(x_1, x_2, x_3, ..., x_n) = (x_2, x_2, x_3, ..., x_n)

13. Consider the operator T : R^2 → R^2, where T(x_1, x_2) = (x_1 + kx_2, x_2) for every (x_1, x_2) ∈ R^2. Here k ∈ R is fixed.
a) Show that T is a linear operator.
b) Show that T is one-to-one.
c) Find the inverse linear operator T^{-1} : R^2 → R^2.

14. Consider the linear transformation T : P_2 → P_1, where T(p_0 + p_1x + p_2x^2) = (p_0 + p_2) + (2p_0 + p_1)x for every polynomial p_0 + p_1x + p_2x^2 in P_2.
a) Find the matrix A for T with respect to the bases {1, x, x^2} and {1, x}.
b) Find T(2 + 3x + 4x^2) by using the matrix A.
c) Use the matrix A to recover the formula T(p_0 + p_1x + p_2x^2) = (p_0 + p_2) + (2p_0 + p_1)x.

15. Consider the linear operator T : R^2 → R^2, where T(x_1, x_2) = (x_1 - x_2, x_1 + x_2) for every (x_1, x_2) ∈ R^2.
a) Find the matrix A for T with respect to the basis {(1, 1), (1, 0)} of R^2.
b) Use the matrix A to recover the formula T(x_1, x_2) = (x_1 - x_2, x_1 + x_2).
c) Is T one-to-one? If so, use the matrix A to find the inverse linear operator T^{-1} : R^2 → R^2.
16. Consider the real vector space V of all real sequences x = (x_1, x_2, x_3, ...) such that the series ∑_{n=1}^∞ x_n is convergent.
a) Show that the transformation T : V → R, given by

T(x) = ∑_{n=1}^∞ x_n

for every x ∈ V, is a linear transformation.
b) Is the linear transformation T one-to-one? If so, give a proof. If not, find two distinct vectors x, y ∈ V such that T(x) = T(y).

17. Suppose that T_1 : R^2 → R^2 and T_2 : R^2 → R^2 are linear operators such that

T_1(x_1, x_2) = (x_1 + x_2, x_1 - x_2)    and    T_2(x_1, x_2) = (2x_1 + x_2, x_1 - 2x_2)

for every (x_1, x_2) ∈ R^2.
a) Show that T_1 and T_2 are one-to-one.
b) Find the formulas for T_1^{-1}, T_2^{-1} and (T_2 ∘ T_1)^{-1}.
c) Verify that (T_2 ∘ T_1)^{-1} = T_1^{-1} ∘ T_2^{-1}.

18. Consider the transformation T : P_1 → R^2, where T(p(x)) = (p(0), p(1)) for every polynomial p(x) in P_1.
a) Find T(1 - 2x).
b) Show that T is a linear transformation.
c) Show that T is one-to-one.
d) Find T^{-1}(2, 3), and sketch its graph.

19. Suppose that V and W are finite dimensional real vector spaces with dim V > dim W. Suppose further that T : V → W is a linear transformation. Explain why T cannot be one-to-one.

20. Suppose that

A = ( 1 3 1 )
    ( 2 0 5 )
    ( 6 2 4 )

is the matrix for a linear operator T : P_2 → P_2 with respect to the basis B = {p_1(x), p_2(x), p_3(x)} of P_2, where p_1(x) = 3x + 3x^2, p_2(x) = 1 + 3x + 2x^2 and p_3(x) = 3 + 7x + 2x^2.
a) Find [T(p_1(x))]_B, [T(p_2(x))]_B and [T(p_3(x))]_B.
b) Find T(p_1(x)), T(p_2(x)) and T(p_3(x)).
c) Find a formula for T(p_0 + p_1x + p_2x^2).
d) Use the formula in part (c) to compute T(1 + x^2).

21. Suppose that B = {v_1, v_2, v_3, v_4} is a basis for a real vector space V. Suppose that T : V → V is a linear operator, with T(v_1) = v_2, T(v_2) = v_4, T(v_3) = v_1 and T(v_4) = v_3.
a) Find the matrix for T with respect to the basis B.
b) Is T one-to-one? If so, describe its inverse.
22. Let P_k denote the vector space of all polynomials with real coefficients and degree at most k. Consider P_2 with basis B = {1, x, x^2} and P_3 with basis C = {1, x, x^2, x^3}. We define T_1 : P_2 → P_3 and T_2 : P_3 → P_2 as follows. For every polynomial p(x) = a_0 + a_1x + a_2x^2 in P_2, we have T_1(p(x)) = xp(x) = a_0x + a_1x^2 + a_2x^3. For every polynomial q(x) in P_3, we have T_2(q(x)) = q'(x), the formal derivative of q(x) with respect to the variable x.
a) Show that T_1 : P_2 → P_3 and T_2 : P_3 → P_2 are linear transformations.
b) Find T_1(1), T_1(x), T_1(x^2), and compute the matrix A_1 for T_1 : P_2 → P_3 with respect to the bases B and C.
c) Find T_2(1), T_2(x), T_2(x^2), T_2(x^3), and compute the matrix A_2 for T_2 : P_3 → P_2 with respect to the bases C and B.
d) Let T = T_2 ∘ T_1. Find T(1), T(x), T(x^2), and compute the matrix A for T : P_2 → P_2 with respect to the basis B. Verify that A = A_2A_1.

23. Suppose that T : V → V is a linear operator on a real vector space V with basis B. Suppose that for every v ∈ V, we have

[T(v)]_B = ( x_1 - x_2 + x_3 )        and        [v]_B = ( x_1 )
           ( x_1 + x_2       )                           ( x_2 )
           ( x_1 - x_2       )                           ( x_3 ).

a) Find the matrix for T with respect to the basis B.
b) Is T one-to-one? If so, describe its inverse.

24. For each of the following, let V be the subspace with basis B = {f_1(x), f_2(x), f_3(x)} of the space of all real valued functions defined on R. Let T : V → V be defined by T(f(x)) = f'(x) for every function f(x) in V. Find the matrix for T with respect to the basis B:
a) f_1(x) = 1, f_2(x) = sin x, f_3(x) = cos x
b) f_1(x) = e^{2x}, f_2(x) = xe^{2x}, f_3(x) = x^2e^{2x}

25. Let P_2 denote the vector space of all polynomials with real coefficients and degree at most 2, with basis B = {1, x, x^2}. Consider the linear operator T : P_2 → P_2, where for every polynomial p(x) = a_0 + a_1x + a_2x^2 in P_2, we have T(p(x)) = p(2x + 1) = a_0 + a_1(2x + 1) + a_2(2x + 1)^2.
a) Find T(1), T(x), T(x^2), and compute the matrix A for T with respect to the basis B.
b) Use the matrix A to compute T(3 + x + 2x^2).
c) Check your calculations in part (b) by computing T(3 + x + 2x^2) directly.
d) What is the matrix for T ∘ T : P_2 → P_2 with respect to the basis B?
e) Consider a new basis B' = {1 + x, 1 + x^2, x + x^2} of P_2. Using a change of basis matrix, compute the matrix for T with respect to the basis B'.
f) Check your answer in part (e) by computing the matrix directly.

26. Consider the linear operator T : P_1 → P_1, where for every polynomial p(x) = p_0 + p_1x in P_1, we have T(p(x)) = p_0 + p_1(x + 1). Consider also the bases B = {6 + 3x, 10 + 2x} and B' = {2, 3 + 2x} of P_1.
a) Find the matrix for T with respect to the basis B.
b) Use Proposition 8V to compute the matrix for T with respect to the basis B'.

27. Suppose that V and W are finite dimensional real vector spaces. Suppose further that B and B' are bases for V, and that C and C' are bases for W. Show that for any linear transformation T : V → W, we have

[I]_{C',C}[T]_{C,B}[I]_{B,B'} = [T]_{C',B'}.

28. Prove Proposition 8W.

29. Prove Proposition 8X.
30. For each of the following linear transformations T : R^3 → R^3, find a basis B of R^3 such that the matrix for T with respect to the basis B is a diagonal matrix:
a) T(x_1, x_2, x_3) = (x_2 + x_3, x_1 + x_3, x_1 + x_2)
b) T(x_1, x_2, x_3) = (4x_1 + x_3, 2x_1 + 3x_2 + 2x_3, x_1 + 4x_3)

31. Consider the linear operator T : P_2 → P_2, where

T(p_0 + p_1x + p_2x^2) = (p_0 - 6p_1 + 12p_2) + (13p_1 - 30p_2)x + (9p_1 - 20p_2)x^2.

a) Find the eigenvalues of T.
b) Find a basis B of P_2 such that the matrix for T with respect to B is a diagonal matrix.