
Linear Transformations

Lecture 4, June 21, 2012


1 Introduction
Today we're going to discuss linear transformations. These are the functions between vector
spaces that preserve vector addition and scalar multiplication, the two fundamental opera-
tions of vector spaces. So far, we're just discussing vector spaces of the form R^n for some
natural number n (Euclidean spaces), and so the linear transformations we'll be studying
(for now) will be of the form T : R^k → R^n, but soon we will consider more general vector
spaces and linear transformations (starting in Chapter 4).
Our plan for today is as follows. First, we'll motivate linear transformations by looking
at matrix transformations, which came up while looking at systems of linear equations.
Then we'll define linear transformations and observe that in fact there's a close relationship
between matrices and linear transformations. We'll consider geometrically some examples of
certain types of linear transformations. Finally, we'll consider the two functional properties
of one-to-one and onto and characterize which linear transformations have these properties.
2 Matrix Transformations
Recall that a system of linear equations can be expressed as a matrix equation Ax = b,
where A is a matrix, b is a column vector, and x is a column vector of variables. A solution
to the system is some actual vector x such that Ax = b. Another way to express this is
functionally. Suppose that A is an n×k matrix. Then x ↦ Ax is a function which takes in
a vector x ∈ R^k as input, and spits out a vector Ax ∈ R^n as output. This function x ↦ Ax
is called a matrix transformation. A solution of the system Ax = b is a vector that, when
input into this transformation, becomes b.
For example, consider the system Ax = b where

A = \begin{pmatrix} 1 & 2 & 1 \\ 1 & 0 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.

Then the matrix transformation x ↦ Ax is a function with domain R^3 and codomain R^2, and
a solution to the system is just some vector \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} such that when it is used as input, the
matrix transformation produces \begin{pmatrix} 1 \\ 2 \end{pmatrix} as output.
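
If you want to experiment, here is a minimal numpy sketch of this example (numpy is our choice of tool, not part of the lecture; the matrix entries are taken as printed above). It treats x ↦ Ax as an ordinary function and produces one solution of Ax = b:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [1.0, 0.0, 1.0]])   # an n x k matrix with n = 2, k = 3
b = np.array([1.0, 2.0])

# The matrix transformation x |-> Ax: input in R^3, output in R^2.
def T(x):
    return A @ x

# lstsq returns a solution of Ax = b (the minimum-norm one) whenever
# the system is consistent, as it is here.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                      # one particular solution
print(np.allclose(T(x), b))   # True: this x really is a solution
```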
Here's a little puzzle for you. Let A be some 2×2 matrix. Suppose we knew that

A \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 5 \\ 3 \end{pmatrix} \qquad and \qquad A \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.

Is it possible for us to figure out A \begin{pmatrix} 0 \\ 3 \end{pmatrix}? Yes, it is. Note that

\begin{pmatrix} 1 \\ 2 \end{pmatrix} + \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 3 \end{pmatrix}.

Then we can reason as follows:

A \begin{pmatrix} 0 \\ 3 \end{pmatrix} = A \left( \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \begin{pmatrix} -1 \\ 1 \end{pmatrix} \right) = A \begin{pmatrix} 1 \\ 2 \end{pmatrix} + A \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 5 \\ 3 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 6 \\ 2 \end{pmatrix}
The property we're using of matrix multiplication that I want to point out in this reasoning
is A(x + y) = Ax + Ay. We summarize this property in words by saying that matrix mul-
tiplication distributes over vector addition, or that matrix transformations preserve vector
addition. You should be familiar from elementary school with how multiplication distributes
over addition (e.g., 7 · (100 + 9) = 7 · 100 + 7 · 9), and this is the same idea. It doesn't matter
whether you add first and then multiply, or multiply and then add. Or, expressed in terms of
transformations: it doesn't matter whether you add first and then apply the matrix trans-
formation, or apply the matrix transformation and then add. Matrix transformations also
satisfy a similar property involving scalar multiplication. Specifically, A(cx) = c(Ax) for any
scalar c. We say that matrix transformations preserve scalar multiplication. For example,

A \begin{pmatrix} 2 \\ 2 \end{pmatrix} = 2A \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
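
Here is the puzzle again as a small numpy sketch (numpy and the variable names are ours): it recovers A(0, 3)^T from the two given values without ever knowing A, by solving for the coefficients that express (0, 3)^T in terms of (1, 2)^T and (−1, 1)^T:

```python
import numpy as np

# Known inputs and the corresponding known outputs.
v1, v2 = np.array([1.0, 2.0]), np.array([-1.0, 1.0])
Av1, Av2 = np.array([5.0, 3.0]), np.array([1.0, -1.0])

target = np.array([0.0, 3.0])

# Solve c1*v1 + c2*v2 = target for the coefficients c1, c2.
c = np.linalg.solve(np.column_stack([v1, v2]), target)

# By linearity, A(c1*v1 + c2*v2) = c1*Av1 + c2*Av2.
result = c[0] * Av1 + c[1] * Av2
print(c)       # [1. 1.]
print(result)  # [6. 2.]
```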
3 Linear Transformations
A linear transformation from R^k to R^n is a function T : R^k → R^n such that for every x, y ∈ R^k
and c ∈ R we have T(x + y) = T(x) + T(y) and T(cx) = cT(x). In words, T preserves vector
addition and scalar multiplication. Matrix transformations x ↦ Ax are examples of linear
transformations. Are there other examples? Well, yes and no. It turns out that every
linear transformation from R^k to R^n can be represented by an n×k matrix. However, this
does not imply that linear transformations and matrix transformations are exactly the same
thing. First of all, later we will be interested in linear transformations whose domain and
codomain are not Euclidean. E.g., the derivative is a linear function from the vector space
of (infinitely) differentiable functions to itself. Also, as we'll see later, more than one matrix
can represent the same linear transformation (albeit with respect to a different coordinate
system). With all that said, though, there is a unique standard matrix associated with
every linear transformation between Euclidean spaces.
Suppose that T : R^k → R^n is linear. Is T given by matrix multiplication for some matrix
A? I.e., is there a matrix A such that T(x) = Ax for every x ∈ R^k? Well, if there is such
a matrix, then we had better, in particular, have T(e_i) = Ae_i for each i = 1, ..., k. (Here e_i denotes
the vector (0, ..., 0, 1, 0, ..., 0) with 0s everywhere except in the i-th spot.) But observe that
Ae_i is simply the i-th column of A. Thus, the desired matrix has to be, if anything works,

\begin{pmatrix} T(e_1) & T(e_2) & \cdots & T(e_k) \end{pmatrix}
Well, let's check that this works. Let x = \begin{pmatrix} x_1 \\ \vdots \\ x_k \end{pmatrix} ∈ R^k be any vector. Note that x =
x_1 e_1 + \cdots + x_k e_k. Repeatedly using the fact that T preserves vector addition and scalar
multiplication yields
\begin{aligned}
T(x) &= T(x_1 e_1 + x_2 e_2 + \cdots + x_k e_k) \\
     &= x_1 T(e_1) + x_2 T(e_2) + \cdots + x_k T(e_k) \\
     &= \begin{pmatrix} T(e_1) & T(e_2) & \cdots & T(e_k) \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_k \end{pmatrix} \\
     &= Ax
\end{aligned}
So we see that this choice of A actually does work. So, given any linear transformation T, to
find the standard matrix associated to it, you just need to calculate T at each of the standard
basis vectors e_i and make the matrix with the results of these calculations as columns.
4 Geometric Examples
Let's think geometrically about some linear transformations on the plane, i.e. from R^2 to R^2.
Rotation (by any angle about the origin) is linear because it doesn't matter if you scale
first and then rotate, or rotate first and then scale, and similarly it doesn't matter if you
add first and then rotate, or rotate first and then add: you get to the same place. What
is the standard matrix for rotation by θ radians? Remember, we just need to figure out
where the standard basis vectors e_i are sent. Well, rotating (1, 0) by θ puts it at (cos θ, sin θ);
recall that cos and sin are defined in exactly this way. Similarly, (0, 1) gets sent to
(cos(θ + π/2), sin(θ + π/2)), which can also be written (−sin θ, cos θ). Thus the standard
matrix is

\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.

For example, if θ = π/2, then the standard matrix is

\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}
Horizontal and vertical contractions and expansions are also linear. I.e.,

\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}

scales horizontally by a factor of k, and

\begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}

scales vertically by a factor of k. If k > 1 we have an
expansion; if 0 < k < 1 we have a contraction. If k = 0, we actually call it a projection. If
−1 < k < 0 we have a reflection and a contraction, while if k < −1 we have a reflection and
an expansion.
Really, the only other kind of linear transformation on the plane (in a sense to be made
precise later) is a shear. The standard matrix for a horizontal shear is

\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix},

and for a vertical shear

\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}.

These transformations fix one of the axes and then slide points in a
direction parallel to the fixed axis. The amount any point slides in this way is determined
by how far that point is from the fixed axis.
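
To make the sliding concrete, here is a small numpy sketch of a horizontal shear with k = 2 (the value is our choice, just for illustration): points on the x-axis stay fixed, and every other point slides horizontally by k times its height:

```python
import numpy as np

k = 2.0
H = np.array([[1.0, k],
              [0.0, 1.0]])  # horizontal shear

print(H @ np.array([3.0, 0.0]))  # [3. 0.]: on the x-axis, fixed
print(H @ np.array([3.0, 1.0]))  # [5. 1.]: height 1, slides by k*1 = 2
print(H @ np.array([3.0, 2.0]))  # [7. 2.]: height 2, slides by k*2 = 4
```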
One thing to keep in mind when visualizing linear transformations on the plane is that
parallelograms are always mapped to parallelograms, in the sense that given four points
x, x + a, x + b, x + a + b in the form of a parallelogram, the image of these four points,
T(x), T(x + a), T(x + b), T(x + a + b), i.e. T(x), T(x) + T(a), T(x) + T(b), T(x) + T(a) + T(b),
is also in the form of a parallelogram. Sometimes, though, as in the case of projection, things
get flattened out and so the parallelogram becomes degenerate.
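
We can spot-check the parallelogram claim numerically; in this sketch the shear matrix and the corner points are arbitrary choices of ours:

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # a horizontal shear, as above

x = np.array([1.0, 1.0])
a = np.array([2.0, 0.0])
b = np.array([0.0, 3.0])

corners = [x, x + a, x + b, x + a + b]
images = [T @ p for p in corners]

# Four points p, p+u, p+v, p+u+v form a parallelogram exactly when the
# fourth corner equals the second plus the third minus the first.
print(np.allclose(images[3], images[1] + images[2] - images[0]))  # True
```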
5 One-to-one, Onto
Recall that a function f : A → B is called one-to-one (also: injective) if for all a, a′ ∈ A,
f(a) = f(a′) implies a = a′. Another way to put this is that a ≠ a′ implies f(a) ≠ f(a′). In
words, distinct elements of the domain are mapped to distinct elements in the codomain.
Questions: Is projection onto the x-axis one-to-one? No, because there are points in the
domain that are smooshed to become the same point in the codomain. Is rotation by 90
degrees one-to-one? Yes, because every point has a unique point that rotates to it (to find
that point, rotate backwards by 90 degrees). Note that in the first case more than one point
is mapped to 0, and in the second case, only 0 rotates to 0.
In general, if T is a linear transformation, then T is one-to-one iff 0 is the only vector T
maps to 0. It's always the case that T(0) = 0 for any linear transformation; our claim is
that if 0 is the only thing that works, then T is 1-1, and vice versa. Let's prove it. Suppose
that T is 1-1. Then if T(x) = 0, then x must equal 0, lest T not be 1-1. Now for the other
direction. Suppose that T(x) = 0 implies x = 0. Let T(v) = T(w). We want to show that
v = w. Well, T(v) − T(w) = 0 and so, by the linearity of T, T(v − w) = 0. Thus v − w = 0,
and so v = w.
We can say a bit more about this characterization in the case of Euclidean spaces. Let
T : R^k → R^n be a linear transformation. Let A be the standard matrix for T. I.e., T(x) = Ax
for every x ∈ R^k. Then T is 1-1 iff T(x) = 0 implies x = 0, iff Ax = 0 implies x = 0, iff
the columns of A are linearly independent, iff there is a pivot in every column of the reduced
echelon form of A (i.e. there are no free variables). Since the columns of A are T(e_1), ..., T(e_k),
we could also say that T is 1-1 iff {T(e_1), ..., T(e_k)} is linearly independent.
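
In numpy, the pivot condition can be checked through the rank: the columns of A are linearly independent exactly when rank(A) equals the number of columns. A sketch (matrix_rank is standard numpy; an actual pivot count would come from row reduction):

```python
import numpy as np

def is_one_to_one(A):
    # T(x) = Ax is 1-1 iff the columns of A are linearly independent,
    # i.e. iff rank(A) equals the number of columns (a pivot in every column).
    return np.linalg.matrix_rank(A) == A.shape[1]

# Rotation by 90 degrees: 1-1, as argued above.
print(is_one_to_one(np.array([[0.0, -1.0], [1.0, 0.0]])))   # True

# Projection onto the x-axis: not 1-1.
print(is_one_to_one(np.array([[1.0, 0.0], [0.0, 0.0]])))    # False
```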
Recall that a function f : A → B is called onto (also: surjective) if for every b ∈ B there
is some a ∈ A such that f(a) = b. I.e., everything in the codomain is obtained as output
under some appropriate input. In other words, the range equals the codomain.
In the case where T is a linear transformation of Euclidean spaces and A is the standard matrix
for T, then T is onto iff Ax = b is consistent for every b ∈ R^n, iff the columns of A span R^n,
iff there is a pivot in every row of the reduced echelon form of A.
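
The matching check for onto, in the same sketchy numpy style as the one-to-one test above: a pivot in every row amounts to rank(A) equaling the number of rows:

```python
import numpy as np

def is_onto(A):
    # T(x) = Ax is onto iff the columns of A span R^n,
    # i.e. iff rank(A) equals the number of rows (a pivot in every row).
    return np.linalg.matrix_rank(A) == A.shape[0]

# A 2x3 matrix with two pivots: maps R^3 onto R^2 (but is not 1-1).
print(is_onto(np.array([[1.0, 2.0, 1.0],
                        [1.0, 0.0, 1.0]])))  # True
```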