Author: William C Brown

© All Rights Reserved



A Second Course

in Linear Algebra

WILLIAM C. BROWN

Michigan State University

East Lansing, Michigan

A Wiley-Interscience Publication

JOHN WILEY & SONS

New York • Chichester • Brisbane • Toronto • Singapore

To Linda

All rights reserved. Published simultaneously in Canada.

Reproduction or translation of any part of this work beyond that permitted by Section 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc.

Library of Congress Cataloging-in-Publication Data:

Brown, William C. (William Clough), 1943-
A second course in linear algebra.
"A Wiley-Interscience publication."
Bibliography: p.
Includes index.
1. Algebras, Linear. I. Title.
QA184.B765 1987 512'.5 87-23117
ISBN 0-471-62602-3

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

Preface

For the past two years, I have been teaching a first-year graduate-level course in linear algebra and analysis. My basic aim in this course has been to prepare students for graduate-level work. This book consists mainly of the linear algebra in my lectures. The topics presented here are those that I feel are most important for students intending to do advanced work in such areas as algebra, analysis, topology, and applied mathematics.

Normally, a student interested in mathematics, engineering, or the physical sciences will take a one-term course in linear algebra, usually at the junior level. In such a course, a student will first be exposed to the theory of matrices, vector spaces, determinants, and linear transformations. Often, this is the first place where a student is required to do a mathematical proof. It has been my experience that students who have had only one such linear algebra course in their undergraduate training are ill prepared to do advanced-level work. I have written this book specifically for those students who will need more linear algebra than is normally covered in a one-term junior-level course.

This text is aimed at seniors and beginning graduate students who have had at least one course in linear algebra. The text has been designed for a one-quarter or semester course at the senior or first-year graduate level. It is assumed that the reader is familiar with such animals as functions, matrices, determinants, and elementary set theory. The presentation of the material in this text is deliberately formal, consisting mainly of theorems and proofs, very much in the spirit of a graduate-level course.

The reader will note that many familiar ideas are discussed in Chapter I. I urge the reader not to skip this chapter. The topics are familiar, but my approach, as well as the notation I use, is more sophisticated than a junior-level course.

The remaining chapters contain material that is usually only touched upon (if at all) in a one-term course. I urge the reader to study these chapters carefully.

Having written five chapters for this book, I obviously feel that the reader should study all five parts of the text. However, time considerations often demand that a student or instructor do less. A shorter but adequate course could consist of Chapter I, Sections 1-6, Chapter II, Sections 1 and 2, and Chapters III and V. If the reader is willing to accept a few facts about extending scalars, then Chapters III, IV, and V can be read with no reference to Chapter II. Hence, a still shorter course could consist of Chapter I, Sections 1-6 and Chapters III and V.

It is my firm belief that any second course in linear algebra ought to contain material on tensor products and their functorial properties. For this reason, I urge the reader to follow the first version of a short course if time does not permit a complete reading of the text. It is also my firm belief that the basic linear algebra needed to understand normed linear vector spaces and real inner product spaces should not be divorced from the intrinsic topology and analysis involved. I have therefore presented the material in Chapter IV and the first half of Chapter V in the same spirit as many analysis texts on the subject. My original lecture notes on normed linear vector spaces and (real) inner product spaces were based on Loomis and Sternberg's classic text Advanced Calculus. Although I have made many changes in my notes for this book, I would still like to take this opportunity to acknowledge my debt to these authors and their fine text for my current presentation of this material.

One final word about notation is in order here. All important definitions are clearly displayed in the text with a number. Notation for specific ideas (e.g., ℕ for the set of natural numbers) is introduced in the main body of the text as needed. Once a particular notation is introduced, it will be used (with only a few exceptions) with the same meaning throughout the rest of the text. A glossary of notation has been provided at the back of the book for the reader's convenience.

WILLIAM C. BROWN

East Lansing, Michigan
September 1987

Contents

Chapter I. Linear Algebra 1
1. Definitions and Examples of Vector Spaces 1
2. Bases and Dimension 8
3. Linear Transformations 17
4. Products and Direct Sums 30
5. Quotient Spaces and the Isomorphism Theorems 38
6. Duals and Adjoints 46
7. Symmetric Bilinear Forms 53

Chapter II. Multilinear Algebra 59
1. Multilinear Maps and Tensor Products 59
2. Functorial Properties of Tensor Products 68
3. Alternating Maps and Exterior Powers 83
4. Symmetric Maps and Symmetric Powers 94

Chapter III. Canonical Forms of Matrices 98
1. Preliminaries on Fields 98
2. Minimal and Characteristic Polynomials 105
3. Eigenvalues and Eigenvectors 117
4. The Jordan Canonical Form 132
5. The Real Jordan Canonical Form 141
6. The Rational Canonical Form 159

Chapter IV. Normed Linear Vector Spaces 171
1. Basic Definitions and Examples 171
2. Product Norms and Equivalence 180
3. Sequential Compactness and the Equivalence of Norms 186
4. Banach Spaces 200

Chapter V. Inner Product Spaces 206
1. Real Inner Product Spaces 206
2. Self-adjoint Transformations 221
3. Complex Inner Product Spaces 236
4. Normal Operators 243

References 259

Chapter I

Linear Algebra

In this book, the symbol F will denote an arbitrary field. A field is defined as follows:

Definition 1.1: A nonempty set F together with two functions (x, y) → x + y and (x, y) → xy from F × F to F is called a field if the following nine axioms are satisfied:

F1. x + y = y + x for all x, y ∈ F.
F2. x + (y + z) = (x + y) + z for all x, y, z ∈ F.
F3. There exists a unique element 0 ∈ F such that x + 0 = x for all x ∈ F.
F4. For every x ∈ F, there exists a unique element −x ∈ F such that x + (−x) = 0.
F5. xy = yx for all x, y ∈ F.
F6. x(yz) = (xy)z for all x, y, z ∈ F.
F7. There exists a unique element 1 ≠ 0 in F such that x1 = x for all x ∈ F.
F8. For every x ≠ 0 in F, there exists a unique y ∈ F such that xy = 1.
F9. x(y + z) = xy + xz for all x, y, z ∈ F.
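For a small finite example, the nine axioms can be checked exhaustively by machine. The following sketch (the helper names and the choice p = 5 are ours, not the text's) verifies F1-F9 for the integers modulo 5, the field 𝔽ₚ constructed in Example 1.3 below:

```python
# Brute-force check of axioms F1-F9 for Z/pZ (the field F_p of Example 1.3).
# Illustrative sketch only; names and the choice p = 5 are ours.
p = 5
F = range(p)
add = lambda x, y: (x + y) % p   # the operation "x (+) y" defined modulo p
mul = lambda x, y: (x * y) % p   # the operation "x . y" defined modulo p

# F1, F5: commutativity; F2, F6: associativity; F9: distributivity
assert all(add(x, y) == add(y, x) for x in F for y in F)
assert all(mul(x, y) == mul(y, x) for x in F for y in F)
assert all(add(x, add(y, z)) == add(add(x, y), z) for x in F for y in F for z in F)
assert all(mul(x, mul(y, z)) == mul(mul(x, y), z) for x in F for y in F for z in F)
assert all(mul(x, add(y, z)) == add(mul(x, y), mul(x, z)) for x in F for y in F for z in F)
# F3, F4: additive identity and inverses; F7, F8: multiplicative identity and inverses
assert all(add(x, 0) == x and add(x, (p - x) % p) == 0 for x in F)
assert all(mul(x, 1) == x for x in F)
assert all(any(mul(x, y) == 1 for y in F) for x in F if x != 0)
print("F_%d satisfies F1-F9" % p)
```

Note that F8 fails for composite moduli (e.g., 2 has no multiplicative inverse modulo 4), which is why p must be prime in Example 1.3.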

Strictly speaking, a field is an ordered triple (F, (x, y) → x + y, (x, y) → xy) satisfying axioms F1-F9 above. The map from F × F to F given by (x, y) → x + y is called addition, and the map (x, y) → xy is called multiplication. When referring to some field (F, (x, y) → x + y, (x, y) → xy), references to addition and multiplication are dropped from the notation, and the letter F is used to denote both the set and the two maps satisfying axioms F1-F9. Although this procedure is somewhat ambiguous, it causes no confusion in concrete situations.

In our first example below, we introduce some notation that we shall use throughout the rest of this book.

Example 1.2: We shall let ℚ denote the set of rational numbers, ℝ the set of real numbers, and ℂ the set of complex numbers. With the usual addition and multiplication, ℚ, ℝ, and ℂ are all fields with ℚ ⊆ ℝ ⊆ ℂ. □

The fields in Example 1.2 are all infinite in the sense that the cardinal number attached to the underlying set in question is infinite. Finite fields are very important in linear algebra as well. Much of coding theory is done over finite algebraic extensions of the field 𝔽ₚ described in Example 1.3 below.

Example 1.3: Let ℤ denote the set of integers with the usual addition x + y and multiplication xy inherited from ℚ. Let p be a positive prime in ℤ and set 𝔽ₚ = {0, 1, ..., p − 1}. 𝔽ₚ becomes a (finite) field if we define addition ⊕ and multiplication · modulo p. Thus, for elements x, y ∈ 𝔽ₚ, there exist unique integers k, z ∈ ℤ such that x + y = kp + z with z ∈ 𝔽ₚ. We define x ⊕ y to be z. Similarly, x · y = w, where xy = k′p + w and 0 ≤ w < p.

The reader can easily check that (𝔽ₚ, ⊕, ·) satisfies axioms F1-F9. Thus, 𝔽ₚ is a finite field of cardinality p. □

Except for some results in Section 7, the definitions and theorems in Chapter I are completely independent of the field F. Hence, we shall assume that F is an arbitrary field and study vector spaces over F.

Definition 1.4: A vector space V over F is a nonempty set together with two functions, (α, β) → α + β from V × V to V (called addition) and (x, α) → xα from F × V to V (called scalar multiplication), which satisfy the following axioms:

V1. α + β = β + α for all α, β ∈ V.
V2. α + (β + γ) = (α + β) + γ for all α, β, γ ∈ V.
V3. There exists an element 0 ∈ V such that 0 + α = α for all α ∈ V.
V4. For every α ∈ V, there exists a β ∈ V such that α + β = 0.
V5. (xy)α = x(yα) for all x, y ∈ F, and α ∈ V.
V6. x(α + β) = xα + xβ for all x ∈ F, and α, β ∈ V.
V7. (x + y)α = xα + yα for all x, y ∈ F, and α ∈ V.
V8. 1α = α for all α ∈ V.

As with fields, we should make the comment that a vector space over F is really a triple (V, (α, β) → α + β, (x, α) → xα) consisting of a nonempty set V together with two functions from V × V to V and F × V to V satisfying axioms V1-V8. There may be many different ways to endow a given set V with the structure of a vector space over F. Nevertheless, we shall drop any reference to addition and scalar multiplication when no confusion can arise and just use the notation V to indicate a given vector space over F.

If V is a vector space over F, then the elements of V will be called vectors and the elements of F scalars. We assume the reader is familiar with the elementary arithmetic in V, and, thus, we shall use freely such expressions as −α, α − β, and α₁ + ··· + αₙ when dealing with vectors in V. Let us review some well-known examples of vector spaces.

Example 1.5: Let ℕ = {1, 2, 3, ...} denote the set of natural numbers. For each n ∈ ℕ, we have the vector space Fⁿ = {(x₁, ..., xₙ) | xᵢ ∈ F} consisting of all n-tuples of elements from F. Vector addition and scalar multiplication are defined componentwise by (x₁, ..., xₙ) + (y₁, ..., yₙ) = (x₁ + y₁, ..., xₙ + yₙ) and x(x₁, ..., xₙ) = (xx₁, ..., xxₙ). In particular, when n = 1, we see F itself is a vector space over F. □

If A and B are two sets, let us denote the set of functions from A to B by B^A. Thus, B^A = {f: A → B | f is a function}. In Example 1.5, Fⁿ can be viewed as the set of functions from {1, 2, ..., n} to F. Thus, α = (x₁, ..., xₙ) ∈ Fⁿ is identified with the function g_α ∈ F^{1,...,n} given by g_α(i) = xᵢ for i = 1, ..., n. These remarks suggest the following generalization of Example 1.5.

Example 1.6: Let V be a vector space over F and A an arbitrary set. Then the set V^A consisting of all functions from A to V becomes a vector space over F when we define addition and scalar multiplication pointwise. Thus, if f, g ∈ V^A, f + g is the function from A to V defined by (f + g)(a) = f(a) + g(a) for all a ∈ A. For x ∈ F and f ∈ V^A, xf is defined by (xf)(a) = x(f(a)). □

If A is a finite set of cardinality n in Example 1.6, then we shall shorten our notation for the vector space V^A and simply write Vⁿ. In particular, if V = F, then Vⁿ = Fⁿ and we recover the example in 1.5.

Example 1.7: We shall denote the set of m × n matrices (aᵢⱼ) with coefficients aᵢⱼ ∈ F by Mₘ×ₙ(F). The usual addition of matrices (aᵢⱼ) + (bᵢⱼ) = (aᵢⱼ + bᵢⱼ) and scalar multiplication x(aᵢⱼ) = (xaᵢⱼ) make Mₘ×ₙ(F) a vector space over F. □

Note that our choice of notation implies that Fⁿ and M₁×ₙ(F) are the same vector space. Although we now have two different notations for the same vector space, this redundancy is useful and will cause no confusion in the sequel.

Example 1.8: We shall let F[X] denote the set of all polynomials in an indeterminate X over F. Thus, a typical element in F[X] is a finite sum of the form aₙXⁿ + aₙ₋₁Xⁿ⁻¹ + ··· + a₀. Here n ∈ ℕ ∪ {0}, and a₀, ..., aₙ ∈ F. The usual notions of adding two polynomials and multiplying a polynomial by a constant, which the reader is familiar with from the elementary calculus, make sense over any field F. These operations give F[X] the structure of a vector space over F. □

Many interesting examples of vector spaces come from analysis. Here are some typical examples.

Example 1.9: Let I be an interval (closed, open, or half open) in ℝ. We shall let C(I) denote the set of all continuous, real valued functions on I. If k ∈ ℕ, we shall let Cᵏ(I) denote those f ∈ C(I) that are k-times differentiable on the interior of I. Then C(I) ⊇ C¹(I) ⊇ C²(I) ⊇ ···. These sets are all vector spaces over ℝ when endowed with the usual pointwise addition (f + g)(x) = f(x) + g(x), x ∈ I, and scalar multiplication (yf)(x) = y(f(x)). □

Example 1.10: Let A = [a₁, b₁] × ··· × [aₙ, bₙ] ⊆ ℝⁿ be a closed rectangle. We shall let ℛ(A) denote the set of all real valued functions on A that are Riemann integrable. Clearly ℛ(A) is a vector space over ℝ when addition and scalar multiplication are defined as in Example 1.9. □

We conclude our list of examples with a vector space, which we shall study carefully in Chapter III.

Example 1.11: Consider the following system of linear differential equations:

f′₁ = a₁₁f₁ + ··· + a₁ₙfₙ
 ⋮
f′ₙ = aₙ₁f₁ + ··· + aₙₙfₙ

Here f₁, ..., fₙ ∈ C¹(I), where I is some open interval in ℝ, f′ⱼ denotes the derivative of fⱼ, and the aᵢⱼ are scalars in ℝ. Set A = (aᵢⱼ) ∈ Mₙ×ₙ(ℝ). A is called the matrix of the system. If B is any matrix, we shall let Bᵗ denote the transpose of B. Set f = (f₁, ..., fₙ)ᵗ. We may think of f as a function from {1, ..., n} to C¹(I), that is, f ∈ C¹(I)ⁿ. With this notation, our system of differential equations becomes f′ = Af. The set of solutions to our system is V = {f ∈ C¹(I)ⁿ | f′ = Af}. Clearly, V is a vector space over ℝ if we define addition and scalar multiplication componentwise as in Example 1.9. □

Now suppose V is a vector space over F. One rich source of vector spaces associated with V is the set of subspaces of V. Recall the following definition:

Definition 1.12: A nonempty subset W of V is a subspace of V if W is a vector space under the same vector addition and scalar multiplication as for V.

Thus, a subset W of V is a subspace if W is closed under the operations of V. For example, C([a, b]), Cᵏ([a, b]), ℝ[X], and ℛ([a, b]) are all subspaces of ℝ^[a,b].

If we have a collection 𝒮 = {Wᵢ | i ∈ Δ} of subspaces of V, then there are some obvious ways of forming new subspaces from 𝒮. We gather these constructions together in the following example:

Example 1.13: Let 𝒮 = {Wᵢ | i ∈ Δ} be an indexed collection of subspaces of V. In what follows, the indexing set Δ of 𝒮 can be finite or infinite. Certainly the intersection, ∩_{i∈Δ} Wᵢ, of the subspaces in 𝒮 is a subspace of V. The set of all finite sums of vectors from ∪_{i∈Δ} Wᵢ is also a subspace of V. We shall denote this subspace by Σ_{i∈Δ} Wᵢ. Thus, Σ_{i∈Δ} Wᵢ = {Σ_{i∈Δ} αᵢ | αᵢ ∈ Wᵢ for all i ∈ Δ}. Here and throughout the rest of this book, if Δ is infinite, then the notation Σ_{i∈Δ} αᵢ means that all αᵢ are zero except possibly for finitely many i ∈ Δ. If Δ is finite, then without any loss of generality, we can assume Δ = {1, ..., n} for some n ∈ ℕ. (If Δ = ∅, then Σ_{i∈Δ} Wᵢ = (0).) We shall then write Σ_{i∈Δ} Wᵢ = W₁ + ··· + Wₙ.

If 𝒮 has the property that for every i, j ∈ Δ there exists a k ∈ Δ such that Wᵢ ∪ Wⱼ ⊆ Wₖ, then clearly ∪_{i∈Δ} Wᵢ is a subspace of V. □

In general, the union of two subspaces of V is not a subspace of V. In fact, if W₁ and W₂ are subspaces of V, then W₁ ∪ W₂ is a subspace if and only if W₁ ⊆ W₂ or W₂ ⊆ W₁. This fact is easy to prove and is left as an exercise. In our first theorem, we discuss one more important fact about unions.

Theorem 1.14: Let V be a vector space over an infinite field F. Then V cannot be the union of a finite number of proper subspaces.

Proof: Suppose V = W₁ ∪ ··· ∪ Wₙ, where W₁, ..., Wₙ are proper subspaces of V. We shall show that this equation is impossible. We remind the reader that a subspace W of V is proper if W ≠ V. Thus, V − W ≠ ∅ for a proper subspace W of V.

We may assume without loss of generality that W₁ ⊄ W₂ ∪ ··· ∪ Wₙ. Let α ∈ W₁ − ∪_{i=2}ⁿ Wᵢ. Let β ∈ V − W₁. Since F is infinite, and neither α nor β is zero, Δ = {α + xβ | x ∈ F} is an infinite subset of V. Since there are only finitely many subspaces Wᵢ, there exists a j ∈ {1, ..., n} such that Δ ∩ Wⱼ is infinite.

Suppose j ∈ {2, ..., n}. Then there exist two nonzero scalars x, x′ ∈ F such that x ≠ x′, and α + xβ, α + x′β ∈ Wⱼ. Since Wⱼ is a subspace, (x′ − x)α = x′(α + xβ) − x(α + x′β) ∈ Wⱼ. Since x′ − x ≠ 0, we conclude α ∈ Wⱼ. But this is contrary to our choice of α ∉ W₂ ∪ ··· ∪ Wₙ. Thus, j = 1.

Now if j = 1, then again there exist two nonzero scalars x, x′ ∈ F such that x ≠ x′, and α + xβ, α + x′β ∈ W₁. Then (x − x′)β = (α + xβ) − (α + x′β) ∈ W₁. Since x − x′ ≠ 0, β ∈ W₁. This is impossible since β was chosen in V − W₁. We conclude that V cannot be equal to the union of W₁, ..., Wₙ. This completes the proof of Theorem 1.14. □

If F is finite, then Theorem 1.14 is false in general. For example, let V = (𝔽₂)². Then V = W₁ ∪ W₂ ∪ W₃, where W₁ = {(0, 0), (1, 1)}, W₂ = {(0, 0), (0, 1)}, and W₃ = {(0, 0), (1, 0)}.
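The counterexample over 𝔽₂ is small enough to confirm by brute force. A minimal sketch (the variable and helper names are ours) checks that each Wᵢ is a proper subspace of (𝔽₂)² and that their union is all of V:

```python
# Verify the counterexample for F = F_2: (F_2)^2 is the union of three
# proper subspaces, so Theorem 1.14 genuinely needs F to be infinite.
V = {(a, b) for a in (0, 1) for b in (0, 1)}   # the four vectors of (F_2)^2
W1 = {(0, 0), (1, 1)}
W2 = {(0, 0), (0, 1)}
W3 = {(0, 0), (1, 0)}

def is_subspace(W):
    # Over F_2 the only scalars are 0 and 1, so closure under
    # componentwise addition mod 2 is all that needs checking.
    return all(((x0 + y0) % 2, (x1 + y1) % 2) in W
               for (x0, x1) in W for (y0, y1) in W)

assert all(is_subspace(W) and W != V for W in (W1, W2, W3))
assert W1 | W2 | W3 == V
print("V = W1 + W2 + W3 as a union of proper subspaces")
```

The same search would fail over any field with more than two elements: the line {α + xβ | x ∈ F} used in the proof then meets some Wⱼ in two distinct points.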

Any subset S of a vector space V determines a subspace L(S) = ∩{W | W a subspace of V, W ⊇ S}. We shall call L(S) the linear span of S. Clearly, L(S) is the smallest subspace of V containing S. Thus, in Example 1.13, for instance, L(∪_{i∈Δ} Wᵢ) = Σ_{i∈Δ} Wᵢ.

Let 𝒫(V) denote the set of all subsets of V. If 𝒮(V) denotes the set of all subspaces of V, then 𝒮(V) ⊆ 𝒫(V), and we have a natural function L: 𝒫(V) → 𝒮(V), which sends a subset S ∈ 𝒫(V) to its linear span L(S) ∈ 𝒮(V). Clearly, L is a surjective map whose restriction to 𝒮(V) is the identity. We conclude this section with a list of the more important properties of the function L(·).

Theorem 1.15: The function L: 𝒫(V) → 𝒮(V) satisfies the following properties:

(a) For S ∈ 𝒫(V), L(S) is the subspace of V consisting of all finite linear combinations of vectors from S. Thus, L(S) = {x₁α₁ + ··· + xₙαₙ | n ∈ ℕ, xᵢ ∈ F, αᵢ ∈ S}.
(b) If S₁ ⊆ S₂, then L(S₁) ⊆ L(S₂).
(c) If α ∈ L(S), then there exists a finite subset S′ ⊆ S such that α ∈ L(S′).
(d) S ⊆ L(S) for all S ∈ 𝒫(V).
(e) For every S ∈ 𝒫(V), L(L(S)) = L(S).
(f) If β ∈ L(S ∪ {α}) and β ∉ L(S), then α ∈ L(S ∪ {β}). Here α, β ∈ V and S ∈ 𝒫(V).

Proof: Properties (a)-(e) follow directly from the definition of the linear span. We prove (f). If β ∈ L(S ∪ {α}) − L(S), then β is a finite linear combination of vectors from S ∪ {α}. Furthermore, α must occur with a nonzero coefficient in any such linear combination. Otherwise, β ∈ L(S). Thus, there exist vectors α₁, ..., αₙ ∈ S and nonzero scalars x₁, ..., xₙ, xₙ₊₁ ∈ F such that β = x₁α₁ + ··· + xₙαₙ + xₙ₊₁α. Since xₙ₊₁ ≠ 0, we can write α as a linear combination of β and α₁, ..., αₙ. Namely, α = xₙ₊₁⁻¹β − xₙ₊₁⁻¹x₁α₁ − ··· − xₙ₊₁⁻¹xₙαₙ. Thus, α ∈ L(S ∪ {β}). □

EXERCISES FOR SECTION 1

(1) Complete the details in Example 1.3 and argue that (𝔽ₚ, ⊕, ·) is a field.

(2) Let ℝ(X) = {f(X)/g(X) | f, g ∈ ℝ[X] and g ≠ 0} denote the set of rational functions on ℝ. Show that ℝ(X) is a field under the usual definitions of addition, f/g + h/k = (kf + gh)/gk, and multiplication, (f/g)(h/k) = fh/gk. ℝ(X) is called the field of rational functions over ℝ. Does F(X) make sense for any field F?

(3) Set F = {a + b√−5 | a, b ∈ ℚ}. Show that F is a subfield of ℂ, that is, F is a field under complex addition and multiplication. Show that {a + b√−5 | a, b integers} is not a subfield of ℂ.

(4) Let I be an open interval in ℝ. Let a ∈ I. Let Vₐ = {f ∈ ℝ^I | f has a derivative at a}. Show that Vₐ is a subspace of ℝ^I.

(5) The vector space ℝ^ℕ is just the set of all sequences {aᵢ} = (a₁, a₂, a₃, ...) with aᵢ ∈ ℝ. What are vector addition and scalar multiplication here?

(6) Show that the following sets are subspaces of ℝ^ℕ:
(a) W₁ = {{aᵢ} ∈ ℝ^ℕ | lim_{i→∞} aᵢ = 0}.
(b) W₂ = {{aᵢ} ∈ ℝ^ℕ | {aᵢ} is a bounded sequence}.
(c) W₃ = {{aᵢ} ∈ ℝ^ℕ | Σ_{i=1}^∞ aᵢ² < ∞}.

(7) Let (a₁, ..., aₙ) ∈ Fⁿ − (0). Show that {(x₁, ..., xₙ) ∈ Fⁿ | Σ_{i=1}ⁿ aᵢxᵢ = 0} is a proper subspace of Fⁿ.

(8) Identify all subspaces of ℝ². Find two subspaces W₁ and W₂ of ℝ² such that W₁ ∪ W₂ is not a subspace.

(9) Let V be a vector space over F. Suppose W₁ and W₂ are subspaces of V. Show that W₁ ∪ W₂ is a subspace of V if and only if W₁ ⊆ W₂ or W₂ ⊆ W₁.

(10) Consider the following subsets of ℝ[X]:
(a) W₁ = {f ∈ ℝ[X] | f(0) = 0}.
(b) W₂ = {f ∈ ℝ[X] | 2f(0) = f(1)}.
(c) W₃ = {f ∈ ℝ[X] | the degree of f ≤ n}.
(d) W₄ = {f ∈ ℝ[X] | f(t) = f(1 − t) for all t ∈ ℝ}.
In which of these cases is Wᵢ a subspace of ℝ[X]?

(11) Let K, L, and M be subspaces of a vector space V. Suppose K ⊇ L. Prove Dedekind's modular law: K ∩ (L + M) = L + (K ∩ M).

(12) Let V = ℝ³. Show that δ₁ = (1, 0, 0) is not in the linear span of α, β, and γ, where α = (1, 1, 1), β = (0, 1, −1), and γ = (1, 0, 2).

(13) If S₁ and S₂ are subsets of a vector space V, show that L(S₁ ∪ S₂) = L(S₁) + L(S₂).

(14) Let S be any subset of ℝ[X] ⊆ ℝ^ℝ. Show that eˣ ∉ L(S).

(15) Let αᵢ = (aᵢ₁, aᵢ₂) ∈ F² for i = 1, 2. Show that F² = L({α₁, α₂}) if and only if the determinant of the 2 × 2 matrix M = (aᵢⱼ) is nonzero. Generalize this result to Fⁿ.

(16) Generalize Example 1.8 to n + 1 variables X₀, ..., Xₙ. The resulting vector space over F is called the ring of polynomials in n + 1 variables (over F). It is denoted F[X₀, ..., Xₙ]. Show that this vector space is spanned by the monomials X₀^m₀ ··· Xₙ^mₙ as (m₀, ..., mₙ) ranges over (ℕ ∪ {0})ⁿ⁺¹.
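Exercises 12 and 15 both reduce to rank computations: a vector lies in a span exactly when adjoining it does not raise the rank, and n vectors span Fⁿ exactly when the matrix they form has full rank (equivalently, nonzero determinant). A sketch over ℚ using exact rational arithmetic (the helper `rank` and the variable names are ours, not the text's):

```python
from fractions import Fraction

def rank(mat):
    # Gaussian elimination over Q; returns the number of pivot rows.
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Exercise 12: alpha, beta, gamma span only a plane in R^3 (rank 2), and
# adjoining delta_1 = (1, 0, 0) raises the rank, so delta_1 is not in L({alpha, beta, gamma}).
alpha, beta, gamma, delta1 = (1, 1, 1), (0, 1, -1), (1, 0, 2), (1, 0, 0)
assert rank([alpha, beta, gamma]) == 2
assert rank([alpha, beta, gamma, delta1]) == 3

# Exercise 15: two vectors span F^2 exactly when the 2 x 2 matrix they form
# has nonzero determinant, i.e., full rank.
a1, a2 = (2, 3), (1, 5)
assert a1[0] * a2[1] - a1[1] * a2[0] != 0 and rank([a1, a2]) == 2
```

Working over `Fraction` rather than floats keeps the elimination exact, which matters here since rank is discontinuous in the entries.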

(17) A polynomial f ∈ F[X₀, ..., Xₙ] is said to be homogeneous of degree d if f is a finite linear combination of monomials X₀^m₀ ··· Xₙ^mₙ of degree d (i.e., m₀ + ··· + mₙ = d). Show that the set of homogeneous polynomials of degree d is a subspace of F[X₀, ..., Xₙ]. Show that any polynomial f can be written uniquely as a finite sum of homogeneous polynomials.

(18) Let V = {A ∈ Mₙ×ₙ(F) | A = Aᵗ}. Show that V is a subspace of Mₙ×ₙ(F). V is the subspace of symmetric matrices of Mₙ×ₙ(F).

(19) Let W = {A ∈ Mₙ×ₙ(F) | Aᵗ = −A}. Show that W is a subspace of Mₙ×ₙ(F). W is the subspace of all skew-symmetric matrices in Mₙ×ₙ(F).

(20) Let W be a subspace of V, and let α, β ∈ V. Set A = α + W and B = β + W. Show that A = B or A ∩ B = ∅.

2. BASES AND DIMENSION

Before proceeding with the main results of this section, let us recall a few facts from set theory. If A is any set, we shall denote the cardinality of A by |A|. Thus, A is a finite set if and only if |A| < ∞. If A is not finite, we shall write |A| = ∞. The only fact from cardinal arithmetic that we shall need in this section is the following:

2.1: Let A and B be sets, and suppose |A| = ∞. If for each x ∈ A, we have some finite set Aₓ ⊆ B, then |A| ≥ |∪_{x∈A} Aₓ|.

A proof of 2.1 can be found in any standard text in set theory (e.g., [1]), and, consequently, we omit it.

A relation R on a set A is any subset of the crossed product A × A. Suppose R is a relation on a set A. If x, y ∈ A and (x, y) ∈ R, then we shall say x relates to y and write x ∼ y. Thus, x ∼ y ⇔ (x, y) ∈ R. We shall use the notation (A, ∼) to indicate the composite notion of a set A and a relation R ⊆ A × A. This notation is a bit ambiguous since the symbol ∼ has no reference to R in it. However, the use of ∼ will always be clear from the context. In fact, the only relation R we shall systematically exploit in this section is the inclusion relation ⊆ among subsets of 𝒫(V) [V some vector space over a field F].

A set A is said to be partially ordered if A has a relation R ⊆ A × A such that (1) x ∼ x for all x ∈ A, (2) if x ∼ y and y ∼ x, then x = y, and (3) if x ∼ y and y ∼ z, then x ∼ z. A typical example of a partially ordered set is 𝒫(V) together with the relation A ∼ B if and only if A ⊆ B. If (A, ∼) is a partially ordered set, and A₁ ⊆ A, then we say A₁ is totally ordered if for any two elements x, y ∈ A₁, we have at least one of the relations x ∼ y or y ∼ x. If (A, ∼) is a partially ordered set, and A₁ ⊆ A, then an element x ∈ A is called an upper bound for A₁ if y ∼ x for all y ∈ A₁. Finally, an element x ∈ (A, ∼) is a maximal element of A if x ∼ y implies x = y.

We say a partially ordered set (A, ∼) is inductive if every totally ordered subset of A has an upper bound in A. The crucial point about inductive sets is the following result, which is called Zorn's lemma:

2.2: If a partially ordered set (A, ∼) is inductive, then a maximal element of A exists.

We shall not give a proof of Zorn's lemma here. The interested reader may consult [3, p. 33] for more details.

Now suppose V is an arbitrary vector space over a field F. Let S be a subset of V.

Definition 2.3: S is linearly dependent over F if there exists a finite subset {α₁, ..., αₙ} ⊆ S and nonzero scalars x₁, ..., xₙ ∈ F such that x₁α₁ + ··· + xₙαₙ = 0. S is linearly independent (over F) if S is not linearly dependent.

Thus, if S is linearly independent, then whenever Σ_{i=1}ⁿ xᵢαᵢ = 0 with {α₁, ..., αₙ} ⊆ S and {x₁, ..., xₙ} ⊆ F, then x₁ = ··· = xₙ = 0. Note that our definition implies the empty set ∅ is linearly independent over F. When considering questions of dependence, we shall drop the words "over F" whenever F is clear from the context. It should be obvious, however, that if more than one field is involved, a given set S could be dependent over one field and independent over another. The following example makes this clear.

Example 2.4: Suppose V = ℝ, the field of real numbers. Let F₁ = ℚ, and F₂ = ℝ. Then V is a vector space over both F₁ and F₂. Let S = {α₁ = 1, α₂ = √2}. S is a set of two vectors in V. Using the fact that every integer factors uniquely into a product of primes, one sees easily that S is independent over F₁. But, clearly S is dependent over F₂ since (√2)α₁ + (−1)α₂ = 0. □

Definition 2.5: A subset S of V is called a basis of V if S is linearly independent over F and L(S) = V.

If S is a basis of a vector space V, then every nonzero vector α ∈ V can be written uniquely in the form α = x₁α₁ + ··· + xₙαₙ, where {α₁, ..., αₙ} ⊆ S and x₁, ..., xₙ are nonzero scalars in F. Every vector space has a basis. In fact, any given linearly independent subset S of V can be expanded to a basis.

Theorem 2.6: Let V be a vector space over F, and suppose S is a linearly independent subset of V. Then there exists a basis B of V such that B ⊇ S.

Proof: Let 𝒮 denote the set of all independent subsets of V that contain S. Thus, 𝒮 = {A ∈ 𝒫(V) | A ⊇ S and A is linearly independent over F}. We note that 𝒮 ≠ ∅ since S ∈ 𝒮. We partially order 𝒮 by inclusion. Thus, for A₁, A₂ ∈ 𝒮,

10 LINEAR ALGEBRA BASES AND DIMENSION 11

At - A2 if and only if At s;;; A2. The fact that (9', s;;;) is a partially ordered set is Theorem 2.11: Let V be a vector space over F, and suppose V = L(S). Then S

clear. contains a basis of V.

Suppose f!r = {A1 1i e !:J.} is an indexed collection of elements from 9' that

form a totally ordered subset of 9'. We show f!r has an upper bound. Set Proof: IfS =¢or {0}, then V = (0). In this case,¢ is a basis ofV contained inS.

A= UteAA,. aearly, Ae~(V). s s;;; A, and A, s;;; A for all ie!:J.. If A fails to be So, we can sume S contains a nonzero vector a:. Let 9' = {A s;;; S 1A linearly

linearly independent, then there exists a finite subset {IX 1, .•• , 1X0 } s;;; A and independent over F}. Clearly, {a:} e9'. Partially order [I' by inclusion. If

nonzero scalars x1 , ••• , X0 e F such that Xt1X 1 + · · · + X 0 1Xn = 0. Since f!r is totally ff ={Ad ie!:J.} is a totally ordered subset of 9', then U 1s4A1 is an upper bound

ordered, there exists an index i0 e !:J. such that {IX!> ••• , 1X0 } s;;; A10• But then A,o is for f!r in 9'. Thus, (tl', s;;;) is inductive. Applying 2.2, we see that 9' has a

dependent, which is impossible since A~ae9'. We conclude that A is linearly maximal element B.

independent, and, consequently, Ae9'. Thus, f!r has an upper bound A in 9'. We claim B is a basis for V. Since Betl', B s;;; Sand B is linearly independent

Since f!r was arbitrary, we can now conclude that (9', s;;;) is an inductive set. over F. If L(B) = V, then B is a basis of V, and the proof is complete. Suppose

Applying 2.2, we see that 9' has a maximal element B. Since Be 9', B ;;;! S and B L(B) -:/: V. Then S ¢ L(B), for otherwise V = L(S) s;;; L(L(B)) = L(B). Hence there

is linearly independent. We claim that B is in fact a basis of V. To prove this assertion, we need only argue L(B) = V. Suppose L(B) ≠ V. Then there exists a vector α ∈ V − L(B). Since α ∉ L(B), the set B ∪ {α} is clearly linearly independent. But then B ∪ {α} ∈ 𝒮, and B ∪ {α} is strictly larger than B. This is contrary to the maximality of B in 𝒮. Thus, L(B) = V, and B is a basis of V containing S. □

Let us look at a few concrete examples of bases before continuing.

Example 2.7: The empty set ∅ is a basis for the zero subspace (0) of any vector space V. If we regard a field F as a vector space over itself, then any nonzero element α of F forms a basis of F. □

Example 2.8: Suppose V = F^n, n ∈ ℕ. For each i = 1, ..., n, let δ_i = (0, ..., 1, ..., 0). Thus, δ_i is the n-tuple whose entries are all zero except for a 1 in the ith position. Set δ = {δ_1, ..., δ_n}. Since (x_1, ..., x_n) = x_1δ_1 + ··· + x_nδ_n, we see δ is a basis of F^n. We shall call δ the canonical (standard) basis of F^n. □

Example 2.9: Let V = M_{m×n}(F). For any i = 1, ..., m, and j = 1, ..., n, let e_{ij} denote the m × n matrix whose entries are all zero except for a 1 in the (i, j)th position. Since (a_{ij}) = Σ_{i,j} a_{ij}e_{ij}, we see B = {e_{ij} | 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for V. The elements e_{ij} in B are called the matrix units of M_{m×n}(F). □

Example 2.10: Let V = F[X]. Let B denote the set of all monic monomials in X. Thus, B = {1 = X^0, X, X^2, ...}. Clearly, B is a basis of F[X]. □

A specific basis for the vector space C^k(I) in Example 1.9 is hard to write down. However, since ℝ[X] ⊆ C^k(I), Theorem 2.6 guarantees that one basis of C^k(I) contains the monomials 1, X, X^2, ....

Theorem 2.6 says that any linearly independent subset of V can be expanded to a basis of V. There is a companion result, which we shall need in Section 3. Namely, if some subset S of V spans V, then S contains a basis of V.

Theorem 2.11: Suppose S spans V. Then S contains a basis of V.

Proof: Let 𝒮 denote the set of all linearly independent subsets of S, and let B be a maximal element of 𝒮. Suppose L(B) ≠ V. Since L(S) = V, there exists a vector β ∈ S − L(B). Clearly, B ∪ {β} is linearly independent over F. Thus, B ∪ {β} ∈ 𝒮. But β ∉ L(B) implies β ∉ B. Hence, B ∪ {β} is strictly larger than B in 𝒮. Since B is maximal, this is a contradiction. Therefore, L(B) = V and our proof is complete. □

A given vector space V has many different bases. For example, δ_x = {(0, ..., x, ..., 0) + δ_i | i = 1, ..., n} is clearly a basis for F^n for any x ≠ −1 in F. What all bases of V have in common is their cardinality. We prove this fact in our next theorem.

Theorem 2.12: Let V be a vector space over F, and suppose B_1 and B_2 are two bases of V. Then |B_1| = |B_2|.

Proof: We divide this proof into two cases.

CASE 1: Suppose V has a basis B that is finite.

In this case, we shall argue |B_1| = |B| = |B_2|. Suppose B = {α_1, ..., α_n}. It clearly suffices to show |B_1| = n. We suppose |B_1| ≠ n and derive a contradiction. There are two possibilities to consider here. Either |B_1| = m < n or |B_1| > n. Let us first suppose B_1 = {β_1, ..., β_m} with m < n. Since β_1 ∈ L(B), β_1 = x_1α_1 + ··· + x_nα_n. At least one x_i here is nonzero since β_1 ≠ 0. Relabeling the α_i if need be, we can assume x_1 ≠ 0. Since B is linearly independent over F, we conclude that β_1 ∈ L({α_2, ..., α_n} ∪ {α_1}) − L({α_2, ..., α_n}). It now follows from Theorem 1.15(f) that α_1 ∈ L({β_1, α_2, ..., α_n}). Since {α_1, ..., α_n} is linearly independent over F, and β_1 ∉ L({α_2, ..., α_n}), we see that {β_1, α_2, ..., α_n} is linearly independent over F. Since α_1 ∈ L({β_1, α_2, ..., α_n}), V = L({β_1, α_2, ..., α_n}). Thus, {β_1, α_2, ..., α_n} is a basis of V.

Now we can repeat this argument m times. We get after possibly some permutation of the α_i that {β_1, ..., β_m, α_{m+1}, ..., α_n} is a basis of V. But {β_1, ..., β_m} is already a basis of V. Thus, α_{m+1} ∈ L({β_1, ..., β_m}). This implies {β_1, ..., β_m, α_{m+1}, ..., α_n} is linearly dependent, which is a contradiction. Thus, |B_1| cannot be less than n.
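Before treating the remaining possibility |B_1| > n, the finite case just argued can be checked by brute force in a tiny example. The sketch below (an illustration of ours, not from the text; the helper names are invented) enumerates every basis of V = 𝔽_2², using the fact that a subset is a basis exactly when it spans V and is linearly independent, i.e., its span has 2^(size) elements. Every basis found has exactly 2 = dim V elements, as Theorem 2.12 predicts.

```python
# Brute-force illustration of Theorem 2.12 for V = F_2^2 (a sketch, not
# the text's argument): every basis of V has exactly dim V = 2 elements.
from itertools import combinations, product

def span_f2(vecs, n=2):
    """All F_2-linear combinations of the given vectors in F_2^n."""
    return {
        tuple(sum(c * v[k] for c, v in zip(coeffs, vecs)) % 2 for k in range(n))
        for coeffs in product([0, 1], repeat=len(vecs))
    }

V = set(product([0, 1], repeat=2))
nonzero = [v for v in V if any(v)]

# A subset is a basis iff it spans V and its span has 2^(size) elements
# (the latter is linear independence over F_2).
bases = [
    s for r in range(1, 4) for s in combinations(nonzero, r)
    if span_f2(s) == V and len(span_f2(s)) == 2 ** len(s)
]

assert len(bases) == 3 and all(len(b) == 2 for b in bases)
```

The three bases found are exactly the three 2-element subsets of the nonzero vectors of 𝔽_2²; no basis of any other size exists.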

12 LINEAR ALGEBRA

BASES AND DIMENSION 13

Now suppose |B_1| > n (|B_1| could be infinite here). By an argument similar to that given above, we can exchange n vectors of B_1 with α_1, ..., α_n. Thus, we construct a basis of V of the form B ∪ S, where S is some nonempty subset of B_1. But B is already a basis of V. Since S ≠ ∅, B ∪ S must then be linearly dependent. This is impossible. Thus, if V has a basis consisting of n vectors, then any basis of V has cardinality n, and the proof of the theorem is complete in Case 1.

CASE 2: Suppose no basis of V is finite.

In this case, both B_1 and B_2 are infinite sets. Let α ∈ B_1. Since B_2 is a basis of V, there exists a unique, finite subset A_α ⊆ B_2 such that α ∈ L(A_α) and α ∉ L(A′) for any proper subset A′ of A_α. Thus, we have a well-defined function φ: B_1 → 𝒫(B_2) given by φ(α) = A_α. Since B_1 is infinite, we may apply 2.1 and conclude that |B_1| ≥ |⋃_{α∈B_1} A_α|. Since α ∈ L(A_α) for all α ∈ B_1, V = L(⋃_{α∈B_1} A_α). Thus ⋃_{α∈B_1} A_α is a subset of B_2 that spans all of V. Since B_2 is a basis of V, we conclude ⋃_{α∈B_1} A_α = B_2. In particular, |B_1| ≥ |B_2|. Reversing the roles of B_1 and B_2 gives |B_2| ≥ |B_1|. This completes the proof of Theorem 2.12. □

We shall call the common cardinality of any basis of V the dimension of V. We shall write dim V for the dimension of V. If we want to stress what field we are over, then we shall use the notation dim_F V for the dimension of the F-vector space V. Thus, dim V = |B|, where B is any basis of V when the base field F is understood.

Let us check the dimensions of some of our previous examples. In Example 2.4, dim_{𝔽_2} V = 1, and dim_F(V) = |ℝ|, the cardinality of ℝ. In Example 2.7, dim(0) = 0. In Example 2.8, dim F^n = n. In Example 2.9, dim M_{m×n}(F) = mn. In Example 2.10, dim V = |ℕ|, the cardinality of ℕ.

If the dimension of a vector space V is infinite, as in Examples 2.4 and 2.10, we shall usually make no attempt to distinguish which cardinal number gives dim V. Instead, we shall merely write dim V = ∞. If V has a finite basis {α_1, ..., α_n}, we shall call V a finite-dimensional vector space and write dim V < ∞, or, more precisely, dim V = n < ∞. Thus, for example, dim_ℝ C^k(I) = ∞, whereas dim_ℝ ℝ^n = n < ∞. In our next theorem, we gather together some of the more elementary facts about dim V.

Theorem 2.13: Let V be a vector space over F.

(a) If W is a subspace of V, then dim W ≤ dim V.
(b) If V is finite dimensional and W is a subspace of V such that dim W = dim V, then W = V.
(c) If W is a subspace of V, then there exists a subspace W′ of V such that W + W′ = V and W ∩ W′ = (0).
(d) If V is finite dimensional and W_1 and W_2 are subspaces of V, then dim(W_1 + W_2) + dim(W_1 ∩ W_2) = dim W_1 + dim W_2.

Proof: It follows from Theorem 2.6 that any basis of a subspace W of V can be enlarged to a basis of V. This immediately proves (a) and (b). Suppose W is a subspace of V. Let B be a basis of W. By Theorem 2.6, there exists a basis C of V such that B ⊆ C. Let W′ = L(C − B). Since C = B ∪ (C − B), V = L(C) = L(B) + L(C − B) = W + W′. Since C is linearly independent and B ∩ (C − B) = ∅, L(B) ∩ L(C − B) = (0). Thus, W ∩ W′ = (0), and the proof of (c) is complete.

To prove (d), let B_0 = {α_1, ..., α_n} be a basis of W_1 ∩ W_2. If W_1 ∩ W_2 = (0), then we take B_0 to be the empty set ∅. We can enlarge B_0 to a basis B_1 = {α_1, ..., α_n, β_1, ..., β_m} of W_1. We can also enlarge B_0 to a basis B_2 = {α_1, ..., α_n, γ_1, ..., γ_p} of W_2. Thus, dim W_1 ∩ W_2 = n, dim W_1 = n + m, and dim W_2 = n + p. We claim that B = {α_1, ..., α_n, β_1, ..., β_m, γ_1, ..., γ_p} is a basis of W_1 + W_2. Clearly L(B) = W_1 + W_2. We need only argue B is linearly independent. Suppose Σ_{i=1}^n x_iα_i + Σ_{i=1}^m y_iβ_i + Σ_{j=1}^p z_jγ_j = 0 for some x_i, y_i, z_j ∈ F. Then Σ_{j=1}^p z_jγ_j ∈ W_1 ∩ W_2 = L({α_1, ..., α_n}). Thus, Σ_{j=1}^p z_jγ_j = Σ_{i=1}^n w_iα_i for some w_i ∈ F. Since B_2 is a basis of W_2, we conclude that z_1 = ··· = z_p = 0. Since B_1 is a basis of W_1, x_1 = ··· = x_n = y_1 = ··· = y_m = 0. In particular, B is linearly independent. Thus, dim(W_1 + W_2) = |B| = n + m + p, and the proof of (d) follows. □

A few comments about Theorem 2.13 are in order here. Part (d) is true whether V is finite dimensional or not. The proof is the same as that given above when dim(W_1 + W_2) < ∞. If dim(W_1 + W_2) = ∞, then either W_1 or W_2 is an infinite-dimensional subspace with the same dimension as W_1 + W_2. Thus, the result is still true but rather uninteresting.

If V is not finite dimensional, then (b) is false in general. A simple example illustrates this point.

Example 2.14: Let V = F[X], and let W be the subspace of V consisting of all even polynomials. Thus, W = {Σ a_iX^{2i} | a_i ∈ F}. A basis of W is clearly all even powers of X. Thus, dim V = dim W, but W ≠ V. □

The subspace W′ of V constructed in part (c) of Theorem 2.13 is called a complement of W. Note that W′ is not in general unique. For example, if V = ℝ² and W = L((1, 0)), then any subspace of the form L((a, b)) with b ≠ 0 is a complement of W.

Finally, part (d) of Theorem 2.13 has a simple extension to finitely many subspaces W_1, ..., W_k of V. We record this extension as a corollary.

Corollary 2.15: Let V be a finite-dimensional vector space of dimension n. Suppose W_1, ..., W_k are subspaces of V. For each i = 1, ..., k, set f_i = n − dim W_i. Then

(a) dim(W_1 ∩ ··· ∩ W_k) = n − Σ_{j=1}^k f_j + Σ_{j=1}^{k−1} {n − dim((W_1 ∩ ··· ∩ W_j) + W_{j+1})}.


(b) dim(W_1 ∩ ··· ∩ W_k) ≥ n − Σ_{j=1}^k f_j.
(c) dim(W_1 ∩ ··· ∩ W_k) = n − Σ_{j=1}^k f_j if and only if for all i = 1, ..., k, W_i + (∩_{j≠i} W_j) = V.

Proof: Part (a) follows from Theorem 2.13(d) by induction. Parts (b) and (c) are easy consequences of (a). We leave the technical details for an exercise at the end of this section. □

Before closing this section, let us develop some useful notation concerning bases. Suppose V is a finite-dimensional vector space over F. If ℬ = {α_1, ..., α_n} is a basis of V, then we have a natural function [·]_ℬ: V → M_{n×1}(F) defined as follows.

Definition 2.16: If ℬ = {α_1, ..., α_n} is a basis of V, then [β]_ℬ = (x_1, ..., x_n)^t ∈ M_{n×1}(F) if and only if Σ_{i=1}^n x_iα_i = β.

Since ℬ is a basis of V, the representation of a given vector β as a linear combination of α_1, ..., α_n is unique. Thus, Definition 2.16 is unambiguous. The function [·]_ℬ: V → M_{n×1}(F) is clearly bijective and preserves vector addition and scalar multiplication. Consequently, [xβ + yδ]_ℬ = x[β]_ℬ + y[δ]_ℬ for all x, y ∈ F and β, δ ∈ V. The column vector [β]_ℬ is often called the ℬ skeleton of β.

Suppose ℬ = {α_1, ..., α_n} and δ = {δ_1, ..., δ_n} are two bases of V. Then there is a simple relationship between the ℬ and δ skeletons of a given vector β. Let M(ℬ, δ) denote the n × n matrix whose columns are defined by the following equation:

2.17: M(ℬ, δ) = ([δ_1]_ℬ | ··· | [δ_n]_ℬ)

In equation 2.17, the ith column of M(ℬ, δ) is the n × 1 matrix [δ_i]_ℬ. Multiplication by M(ℬ, δ) induces a map from M_{n×1}(F) to M_{n×1}(F) that connects the ℬ and δ skeletons. Namely:

Theorem 2.18: M(ℬ, δ)[β]_δ = [β]_ℬ for all β ∈ V.

Proof: Let us denote the ith column of any matrix M by Col_i(M). Then for each i = 1, ..., n, we have M(ℬ, δ)[δ_i]_δ = M(ℬ, δ)(0, ..., 0, 1, 0, ..., 0)^t = Col_i(M(ℬ, δ)) = [δ_i]_ℬ. Thus, the theorem is correct for β ∈ δ.

Now we have already noted that [·]_δ and [·]_ℬ preserve vector addition and scalar multiplication. So does multiplication by M(ℬ, δ) as a map on M_{n×1}(F). Since any β ∈ V is a linear combination of the vectors in δ, we conclude that M(ℬ, δ)[β]_δ = [β]_ℬ for every β ∈ V. □

The matrix M(ℬ, δ) defined in 2.17 is called the change of basis matrix (between ℬ and δ). It is often convenient to think of Theorem 2.18 in terms of the following commutative diagram:

2.19:        [·]_δ
         V ----------> M_{n×1}(F)
          \                |
     [·]_ℬ \               | M(ℬ, δ)·
            v              v
             M_{n×1}(F) <--+

By a diagram, we shall mean a collection of vector spaces and maps (represented by arrows) between these spaces. A diagram is said to be commutative if any two sequences of maps (i.e., composites of functions in the diagram) that originate at the same space and end at the same space are equal. Thus, 2.19 is commutative if and only if the two paths from V to M_{n×1}(F), clockwise and counterclockwise, are the same maps. This is precisely what Theorem 2.18 says.

Most of the maps or functions that we shall encounter in the diagrams in this book will be linear transformations. We take up the formal study of linear transformations in Section 3.

EXERCISES FOR SECTION 2

(1) Let V_n = {f(X) ∈ F[X] | degree f ≤ n}. Show that each V_n is a finite-dimensional subspace of F[X] of dimension n + 1. Since F[X] = ⋃_{n=1}^∞ V_n, observe that Theorem 1.14 is false when the word "finite" is taken out of the theorem.

(2) Let V_n be as in Exercise 1 with F = 𝔽_2. Find a basis of V_5 containing 1 + x and x² + x + 1.

(3) Show that any set of nonzero polynomials in F[X], no two of which have the same degree, is linearly independent over F.

(4) Let V = {(a_1, a_2, ...) ∈ ℝ^ℕ | a_i = 0 for all i sufficiently large}. Show that V is an infinite-dimensional subspace of ℝ^ℕ. Find a basis for V.

(5) Prove Theorem 2.13(d) when dim(W_1 + W_2) = ∞.

(6) Prove Corollary 2.15.

(7) Find the dimension of the subspace V = L({α, β, γ, δ}) ⊆ ℝ⁴, where α = (1, 2, 1, 0), β = (−1, 1, −4, 3), γ = (2, 3, 3, −1), and δ = (0, 1, −1, 1).

(8) Compute the following dimensions:
(a) dim_ℝ(ℂ).
(b) dim_ℚ(ℝ).
(c) dim_ℚ(F), where F is the field given in Exercise 3 of Section 1.


(9) Suppose V is an n-dimensional vector space over the finite field 𝔽_p. Argue that V is a finite set and find |V|.

(10) Suppose V is a vector space over a field F for which |V| > 2. Show that V has more than one basis.

(11) Let F be a subfield of the field F′. This means that the operations of addition and multiplication on F′ when restricted to F make F a field.
(a) Show that F′ is a vector space over F.
(b) Suppose dim_F F′ = n. Let V be an m-dimensional vector space over F′. Show that V is an mn-dimensional vector space over F.

(12) Show that dim(V^n) = n dim(V).

(13) Return to the space V_n in Exercise 1. Let p_i(X) = Σ_{j=0}^n a_{ji}X^j for i = 1, ..., r. Set A = (a_{ji}) ∈ M_{(n+1)×r}(F). Show that the dimension of L({p_1, ..., p_r}) is precisely the rank of A.

(14) Show that the dimension of the subspace of homogeneous polynomials of degree d in F[X_0, ..., X_n] is the binomial coefficient (n+d choose d).

(15) Find the dimensions of the vector spaces in Exercises 18 and 19 of Section 1.

(16) Let A ∈ M_{m×n}(F). Set CS(A) = {AX | X ∈ M_{n×1}(F)}. CS(A) is called the column space of A. Set NS(A) = {X ∈ M_{n×1}(F) | AX = 0}. NS(A) is called the null space of A. Show that CS(A) is a subspace of M_{m×1}(F), and NS(A) is a subspace of M_{n×1}(F). Show that dim(CS(A)) + dim(NS(A)) = n.

(17) With the same notation as in Exercise 16, show the linear system AX = B has a solution if and only if dim(CS(A)) = dim(CS(A | B)). Here B ∈ M_{m×1}(F), and (A | B) is the m × (n + 1) augmented matrix obtained from A by adjoining the column B.

(18) Suppose V and W are two vector spaces over a field F such that |V| = |W|. Is dim V = dim W?

(19) Consider the set W of 2 × 2 matrices of the form [matrix display not captured in the scan] and the set Y of 2 × 2 matrices of the form [matrix display not captured in the scan]. Show that W and Y are subspaces of M_{2×2}(F) and compute the numbers dim(W), dim(Y), dim(W + Y), and dim(W ∩ Y).

3. LINEAR TRANSFORMATIONS

Let V and W be vector spaces over a field F.

Definition 3.1: A function T: V → W is called a linear transformation (linear map, homomorphism) if T(xα + yβ) = xT(α) + yT(β) for all x, y ∈ F and α, β ∈ V.

Before we state any general theorems about linear transformations, let us consider a few examples.

Example 3.2: The map that sends every vector in V to 0 ∈ W is clearly a linear map. We shall call this map the zero map and denote it by 0. If T: V → W and S: W → Z are linear transformations, then clearly the composite map ST: V → Z is a linear transformation. □

Example 3.3: If V is finite dimensional with basis α = {α_1, ..., α_n}, then [·]_α: V → M_{n×1}(F) is a linear transformation that is bijective. □

Example 3.4: Taking the transpose, A → A^t, is clearly a linear map from M_{m×n}(F) → M_{n×m}(F). □

Example 3.5: Suppose V = M_{m×n}(F) and A ∈ M_{m×m}(F). Then multiplication by A (necessarily on the left) induces a linear transformation T_A: V → V given by T_A(B) = AB for all B ∈ V. □

Examples 3.3 and 3.5 show that the commutative diagram in 2.19 consists of linear transformations.

Example 3.6: Suppose V = C^k(I) with k ≥ 2. Then ordinary differentiation f → f′ is a linear transformation from C^k(I) to C^{k−1}(I). □

Example 3.7: Suppose V = F[X]. We can formally define a derivative f → f′ on V as follows: If f(X) = Σ_{i=0}^n a_iX^i, then f′(X) = Σ_{i=1}^n ia_iX^{i−1}. The reader can easily check that this map, which is called the canonical derivative on F[X], is a linear transformation. □

Example 3.8: Suppose V = 𝒞(A) as in Example 1.10. Then T(f) = ∫_A f is a linear transformation from V to ℝ. □

Having enough examples in hand to proceed, let us introduce a name for the collection of all linear transformations from V to W.

Definition 3.9: Let V and W be vector spaces over F. The set of all linear transformations from V to W will be denoted by Hom_F(V, W).
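The canonical derivative of Example 3.7 is easy to realize in code. The sketch below (our own illustration, not from the text) stores a polynomial as its coefficient list [a_0, a_1, ..., a_n] over the integers and checks the linearity required by Definition 3.1 on two sample polynomials.

```python
# A sketch of the canonical derivative of Example 3.7 on F[X], with a
# polynomial stored as its coefficient list [a_0, a_1, ..., a_n].
def derivative(f):
    """Formal derivative: sum a_i X^i  ->  sum i*a_i X^(i-1).
    A constant polynomial maps to the zero polynomial [0]."""
    return [i * a for i, a in enumerate(f)][1:] or [0]

def add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def scale(x, f):
    return [x * a for a in f]

f = [1, 0, 3]        # 1 + 3X^2
g = [0, 2, 0, 5]     # 2X + 5X^3
# linearity as in Definition 3.1: (2f + 3g)' == 2f' + 3g'
assert derivative(add(scale(2, f), scale(3, g))) == add(scale(2, derivative(f)), scale(3, derivative(g)))
```

The same check works over any field whose elements support `+` and `*` in Python, e.g., `fractions.Fraction` for ℚ.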


When the base field F is clear from the context, we shall often write Hom(V, W) instead of Hom_F(V, W). Thus, Hom(V, W) is the subset of the vector space W^V (Example 1.6) consisting of all linear transformations from V to W. If T, S ∈ Hom(V, W) and x, y ∈ F, then the function xT + yS ∈ W^V is in fact a linear transformation. For if a, b ∈ F and α, β ∈ V, then (xT + yS)(aα + bβ) = xT(aα + bβ) + yS(aα + bβ) = xaT(α) + xbT(β) + yaS(α) + ybS(β) = a(xT(α) + yS(α)) + b(xT(β) + yS(β)) = a(xT + yS)(α) + b(xT + yS)(β). Therefore, xT + yS ∈ Hom(V, W). We have proved the following theorem:

Theorem 3.10: Hom(V, W) is a subspace of W^V. □

Since any T ∈ Hom(V, W) has the property that T(0) = 0, we see that Hom(V, W) is always a proper subspace of W^V whenever W ≠ (0).

At this point, it is convenient to introduce the following terminology.

Definition 3.11: Let T ∈ Hom(V, W). Then,

(a) ker T = {α ∈ V | T(α) = 0}.
(b) Im T = {T(α) ∈ W | α ∈ V}.
(c) T is injective (monomorphism, 1 − 1) if ker T = (0).
(d) T is surjective (epimorphism, onto) if Im T = W.
(e) T is bijective (isomorphism) if T is both injective and surjective.
(f) We say V and W are isomorphic and write V ≅ W if there exists an isomorphism T ∈ Hom(V, W).

The set ker T is called the kernel of T and is clearly a subspace of V. Im T is called the image of T and is a subspace of W. Before proceeding further, let us give a couple of important examples of isomorphisms between vector spaces.

Example 3.12: M_{n×1}(F) ≅ M_{1×n}(F) via the transpose A → A^t. We have already mentioned that F^n = M_{1×n}(F). Thus, all three of the vector spaces M_{n×1}(F), M_{1×n}(F), and F^n are isomorphic to each other. □

Example 3.13: Suppose V is a finite-dimensional vector space over F. Then every basis α = {α_1, ..., α_n} of V determines a linear transformation T(α): V → F^n given by T(α)(β) = (x_1, ..., x_n) if and only if Σ_{i=1}^n x_iα_i = β. T(α) is just the composite of the coordinate map [·]_α: V → M_{n×1}(F) and the transpose M_{n×1}(F) → M_{1×n}(F) = F^n. Since both of these maps are isomorphisms, we see T(α) is an isomorphism. □

The transpose sends column vectors to row vectors. For this reason, we give a formal name to the isomorphism T(α) introduced in Example 3.13.

Definition 3.14: Let V be a finite-dimensional vector space over F. If α is a basis of V, then (·)_α: V → F^n is the linear transformation defined by (β)_α = ([β]_α)^t for all β ∈ V.

Thus, (β)_α = T(α)(β) for all β ∈ V. We can now state the following theorem, whose proof is given in Example 3.13:

Theorem 3.15: Let V be a finite-dimensional vector space over F and suppose dim V = n. Then every basis α of V determines an isomorphism (·)_α: V → F^n. □

We now have two isomorphisms [·]_α: V → M_{n×1}(F) and (·)_α: V → F^n for every choice of basis α of a (finite-dimensional) vector space V. We shall be careful to distinguish between these two maps although they only differ by an isomorphism from M_{n×1}(F) to M_{1×n}(F). Notationally, F^n is easier to write than M_{n×1}(F), and so most of our subsequent theorems will be written using the map (·)_α. With this in mind, let us reinterpret the commutative diagram given in 2.19.

If A is any n × n matrix with coefficients in F, then A induces a linear transformation S_A: F^n → F^n given by the following equation:

3.16: S_A(ξ) = (Aξ^t)^t for all ξ ∈ F^n

Using the notation in Example 3.5, we see S_A is the linear transformation that makes the following diagram commutative:

3.17:
     M_{n×1}(F) ---T_A---> M_{n×1}(F)
         |                     |
       (·)^t                 (·)^t
         v                     v
        F^n -------S_A------> F^n

The vertical arrows in 3.17 are isomorphisms. Clearly, T_A is an isomorphism if and only if A is invertible. Thus, S_A is an isomorphism if and only if A is invertible.

We shall replace the notation S_A (or T_A) with A^t (or A) and simply write

     F^n ---A^t---> F^n    or    M_{n×1}(F) ---A---> M_{n×1}(F)

for these maps.
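Definition 3.11 and the invertibility remark above can be made concrete with a small computation (an illustration of ours, not from the text). The singular matrix A below kills the column vector (2, −1)^t, so ker T_A ≠ (0) and T_A is not injective; its image is the line spanned by (1, 2)^t, so T_A is not surjective either, in agreement with the fact that S_A and T_A are isomorphisms exactly when A is invertible.

```python
# Kernel and image of T_A (Definition 3.11) for a singular 2 x 2 matrix,
# working with exact integer arithmetic over Q.
def mat_vec(A, v):
    """Multiply the matrix A (a list of rows) by the column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2], [2, 4]]                       # det A = 0, so A is not invertible
assert mat_vec(A, [2, -1]) == [0, 0]       # (2, -1)^t lies in ker T_A
assert mat_vec(A, [1, 0]) == [1, 2]        # Im T_A contains (1, 2)^t
assert mat_vec(A, [0, 1]) == [2, 4]        # ... and (2, 4)^t = 2 * (1, 2)^t
```

Since ker T_A is the line L((2, −1)^t) and Im T_A is the line L((1, 2)^t), both the kernel and the image here are 1-dimensional.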


Now suppose α and δ are two bases of a finite-dimensional vector space V of dimension n. If we combine diagrams 2.19 and 3.17, we have the following commutative diagram:

3.18:      [·]_δ                  (·)^t
       V --------> M_{n×1}(F) -----------> F^n
        \              |                    |
   [·]_α \         M(α, δ)·            M(α, δ)^t
          v            v                    v
          M_{n×1}(F) --------(·)^t-------> F^n

Since (·)_α = ([·]_α)^t and (·)_δ = ([·]_δ)^t, we get the following corollary to Theorem 3.15:

Corollary 3.19: Suppose V is a finite-dimensional vector space of dimension n over F. If α and δ are two bases of V, then the following diagram is commutative:

3.20:       (·)_δ
        V --------> F^n
         \            |
    (·)_α \           | M(α, δ)^t
           v          v
            F^n <-----+

Proof: We have already noted from 3.18 that 3.20 is commutative. Both (·)_α and (·)_δ are isomorphisms from Theorem 3.15. We need only argue M(α, δ) is an invertible matrix. Then the map M(α, δ)^t = S_{M(α,δ)}: F^n → F^n is an isomorphism. Now change of basis matrices M(α, δ) are always invertible. This follows from Theorem 2.18. For any β ∈ V, we have M(δ, α)M(α, δ)[β]_δ = M(δ, α)[β]_α = [β]_δ. This equation easily implies M(δ, α)M(α, δ) = I_n, the n × n identity matrix. □

In our next theorem, we shall need an isomorphic description of the vector space V^n introduced in Example 1.6.

Example 3.21: In this example, we construct a vector space isomorphic to V^n. Let V be a vector space over F, and let n ∈ ℕ. Consider the Cartesian product V × ··· × V (n times) = {(α_1, ..., α_n) | α_i ∈ V}. Clearly, V × ··· × V is a vector space over F when we define vector addition and scalar multiplication by (α_1, ..., α_n) + (β_1, ..., β_n) = (α_1 + β_1, ..., α_n + β_n), and x(α_1, ..., α_n) = (xα_1, ..., xα_n).

Suppose A is any finite set with |A| = n. We can without any loss of generality assume A = {1, ..., n}. Then V^A = V^n. There is a natural isomorphism T: V × ··· × V → V^n given by T((α_1, ..., α_n)) = f ∈ V^n, where f(i) = α_i for all i = 1, ..., n. The fact that T is an isomorphism is an easy exercise, which we leave to the reader. □

In what follows, we shall identify V × ··· × V and V^A with |A| = n and write just V^n to represent any one of these spaces. Using this notation, we have the following theorem:

Theorem 3.22: Let V and W be vector spaces over F, and suppose V is finite dimensional. Let dim V = n.

(a) If α = {α_1, ..., α_n} is a basis of V, then for every (β_1, ..., β_n) ∈ W^n, there exists a unique T ∈ Hom(V, W) such that T(α_i) = β_i for i = 1, ..., n.
(b) Every basis α of V determines an isomorphism Ψ(α): Hom(V, W) → W^n.

Proof: (a) Let α = {α_1, ..., α_n} be a basis for V. Then (·)_α ∈ Hom(V, F^n) is an isomorphism. Let β = (β_1, ..., β_n) ∈ W^n. The n-tuple β determines a linear transformation L_β ∈ Hom(F^n, W) given by L_β((x_1, ..., x_n)) = Σ_{i=1}^n x_iβ_i. The fact that L_β is a linear transformation is obvious. Set T = L_β(·)_α. Then T ∈ Hom(V, W) and T(α_i) = β_i for all i = 1, ..., n. The fact that T is the only linear transformation from V to W for which T(α_i) = β_i for i = 1, ..., n is an easy exercise left to the reader.

(b) Fix a basis α = {α_1, ..., α_n} of V. Define Ψ(α): Hom(V, W) → W^n by Ψ(α)(T) = (T(α_1), ..., T(α_n)). The fact that Ψ(α) is a linear transformation is easy to check. By (a), Ψ(α) is bijective, with inverse χ: W^n → Hom(V, W) given by χ((β_1, ..., β_n)) = L_β(·)_α. Here β = (β_1, ..., β_n). Hence, Ψ(α) is an isomorphism. □

Theorem 3.22(a) implies that a given linear transformation T ∈ Hom(V, W) is completely determined by its values on a basis α of V. This remark is true whether V is finite dimensional or infinite dimensional. To define a linear transformation T from V to W, we need only define T on some basis B = {α_i | i ∈ A} of V and then extend the definition of T linearly to all of L(B) = V. Thus, if T(α_i) = β_i for all i ∈ A, then T(Σ_{i∈A} x_iα_i) is defined to be Σ_{i∈A} x_iβ_i. These remarks provide a proof of the following generalization of Theorem 3.22(a):

3.23: Let V and W be vector spaces over F and suppose B = {α_i | i ∈ A} is a basis of V. If {β_i | i ∈ A} is any subset of W, then there exists a unique T ∈ Hom(V, W) such that T(α_i) = β_i for all i ∈ A. □
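Theorem 3.22(a) and 3.23 translate directly into code (a sketch of ours, not from the text): choosing images b1, b2 for the standard basis of ℚ² pins down a unique linear map into ℚ³, namely T(x, y) = x·b1 + y·b2, which is exactly L_β composed with (·)_α.

```python
# A linear map is determined by its values on a basis (Theorem 3.22(a)):
# with the standard basis of Q^2 and chosen images b1, b2 in Q^3, the
# unique extension is T(x, y) = x*b1 + y*b2.
def make_T(b1, b2):
    """Return the unique linear map Q^2 -> Q^3 with T(e1) = b1, T(e2) = b2."""
    return lambda v: tuple(v[0] * p + v[1] * q for p, q in zip(b1, b2))

T = make_T((1, 0, 2), (0, 1, -1))
assert T((1, 0)) == (1, 0, 2)      # T(alpha_1) = beta_1
assert T((0, 1)) == (0, 1, -1)     # T(alpha_2) = beta_2
assert T((3, 2)) == (3, 2, 4)      # the extension by linearity
```

Uniqueness is visible here: once the two images are fixed, the formula leaves no freedom in the value of T at any other vector.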


Now suppose V and W are both finite-dimensional vector spaces over F. Let dim V = n and dim W = m. If α = {α_1, ..., α_n} is a basis of V and β = {β_1, ..., β_m} is a basis of W, then the pair (α, β) determines a linear transformation Γ(α, β): Hom(V, W) → M_{m×n}(F) defined by the following equation:

3.24: Γ(α, β)(T) = ([T(α_1)]_β | ··· | [T(α_n)]_β)

In equation 3.24, T ∈ Hom(V, W), and Γ(α, β)(T) is the m × n matrix whose ith column is the m × 1 matrix [T(α_i)]_β. If T_1, T_2 ∈ Hom(V, W) and x, y ∈ F, then

Γ(α, β)(xT_1 + yT_2) = ([xT_1(α_1) + yT_2(α_1)]_β | ··· | [xT_1(α_n) + yT_2(α_n)]_β)
    = (x[T_1(α_1)]_β + y[T_2(α_1)]_β | ··· | x[T_1(α_n)]_β + y[T_2(α_n)]_β)
    = x([T_1(α_1)]_β | ··· | [T_1(α_n)]_β) + y([T_2(α_1)]_β | ··· | [T_2(α_n)]_β)
    = xΓ(α, β)(T_1) + yΓ(α, β)(T_2)

Thus Γ(α, β) is indeed a linear transformation from Hom(V, W) to M_{m×n}(F).

Suppose T ∈ ker Γ(α, β). Then Γ(α, β)(T) = 0. In particular, [T(α_i)]_β = 0 for all i = 1, ..., n. But then 0 = ([T(α_i)]_β)^t = (T(α_i))_β, and Theorem 3.15 implies T(α_i) = 0. Thus, T = 0, and we conclude that Γ(α, β) is an injective linear transformation.

Γ(α, β) is surjective as well. To see this, let A = (x_{ij}) ∈ M_{m×n}(F). Let γ_i = Σ_{j=1}^m x_{ji}β_j for i = 1, ..., n. Then {γ_1, ..., γ_n} ⊆ W, and [γ_i]_β = (x_{1i}, ..., x_{mi})^t = Col_i(A) for all i = 1, ..., n. It follows from Theorem 3.22 that there exists a (necessarily unique) T ∈ Hom(V, W) such that T(α_i) = γ_i for i = 1, ..., n. Thus, Γ(α, β)(T) = A and Γ(α, β) is surjective. We have now proved the first statement in the following theorem:

Theorem 3.25: Let V and W be finite-dimensional vector spaces over F of dimensions n and m, respectively. Let α be a basis of V and β a basis of W. Then the map Γ(α, β): Hom(V, W) → M_{m×n}(F) defined by equation 3.24 is an isomorphism. For every T ∈ Hom(V, W), the following diagram is commutative:

3.26:         T
        V --------> W
        |           |
     (·)_α       (·)_β
        v           v
       F^n -------> F^m
         Γ(α, β)(T)^t

Proof: We need only argue that the diagram in 3.26 is commutative. Using the same notation as in 3.17, we have the following diagram:

3.27:              T
        V -----------------> W
        |                    |
      [·]_α                [·]_β
        v                    v
    M_{n×1}(F) ----------> M_{m×1}(F)
        |    Γ(α, β)(T)·     |
      (·)^t                (·)^t
        v                    v
       F^n ---------------> F^m
             Γ(α, β)(T)^t

Since all the maps in 3.27 are linear and the bottom square commutes, we need only check [·]_β T = Γ(α, β)(T)[·]_α on a basis of V. Then the top square of 3.27 is commutative, and the commutativity of 3.26 follows. For any α_i ∈ α, we have ([·]_β T)(α_i) = [T(α_i)]_β = Col_i(Γ(α, β)(T)) = Γ(α, β)(T)(0, ..., 1, ..., 0)^t = (Γ(α, β)(T)[·]_α)(α_i). □

Γ(α, β)(T) is called the matrix representation of the linear transformation T with respect to the bases α and β. Since the vertical arrows in 3.26 and Γ(α, β) are isomorphisms, V, W, Hom(V, W), and T are often identified with F^n, F^m, M_{m×n}(F), and A = Γ(α, β)(T). Thus, the distinction between a linear transformation and a matrix is often blurred in the literature.

The matrix representation Γ(α, β)(T) of T of course depends on the particular bases α and β chosen. It is an easy matter to keep track of how Γ(α, β)(T) changes with α and β.

Theorem 3.28: Let V and W be finite-dimensional vector spaces over F of dimensions n and m, respectively. Suppose α and α′ are two bases of V and β and β′ two bases of W. Then for every T ∈ Hom(V, W), we have

3.29: Γ(α′, β′)(T) = M(β′, β)Γ(α, β)(T)M(α′, α)^{-1}

Proof: Before proving equation 3.29, we note that M(β′, β) (and M(α′, α)) is the m × m (and n × n) change of basis matrix given in equation 2.17. We have already noted that change of basis matrices are invertible, and consequently all the terms in equation 3.29 make sense.
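Before completing the proof, equation 3.29 can be checked numerically on a small example (an illustration of ours, in the case V = W = ℚ², β = α, β′ = α′, where 3.29 reads B = PAP^{-1} with P = M(α′, α)). Here α is the standard basis, α′ = {(1, 1), (0, 1)}, and T is the map whose standard matrix is A below; B was computed directly from the α′-coordinates of T applied to the α′ vectors.

```python
# Checking 3.29 in the case V = W = Q^2: B = P A P^{-1}, P = M(a', a),
# for the basis a' = {(1,1), (0,1)} and the map T with standard matrix A.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 1], [0, 3]]       # Gamma(a, a)(T): T in the standard basis a
P = [[1, 0], [-1, 1]]      # M(a', a): converts a-skeletons to a'-skeletons
P_inv = [[1, 0], [1, 1]]   # M(a, a'): its columns are the vectors of a'
B = [[3, 1], [0, 2]]       # Gamma(a', a')(T), computed directly from T(a'_j)

assert mat_mul(P, P_inv) == [[1, 0], [0, 1]]          # P and P_inv are inverse
assert mat_mul(mat_mul(P, A), P_inv) == B             # equation 3.29
```

Note that A and B here are similar but not equal: the representation of T genuinely depends on the basis chosen.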


To see that 3.29 is in fact a valid equation, we merely combine the commutative diagrams 2.19 and 3.27. Consider the following diagram:

3.30:
                      Γ(α, β)(T)·
      M_{n×1}(F) --------------------> M_{m×1}(F)
         |  ^             (3)             ^  |
         |  | [·]_α                 [·]_β |  |
  M(α′,α)·  |                             |  M(β′,β)·
         | (1)  V ---------T---------> W (2) |
         |  |                             |  |
         |  | [·]_{α′}           [·]_{β′} |  |
         v  v             (4)             v  v
      M_{n×1}(F) --------------------> M_{m×1}(F)
                      Γ(α′, β′)(T)·

The diagram 3.30 is made up of four parts, which we have labeled (1), (2), (3), and (4). By Theorem 2.18, diagrams (1) and (2) are commutative. By Theorem 3.25, diagrams (3) and (4) are commutative. It follows that the entire diagram 3.30 is commutative. In particular, M(β′, β)Γ(α, β)(T) = Γ(α′, β′)(T)M(α′, α). Solving this equation for Γ(α′, β′)(T) gives 3.29. □

Recall that two m × n matrices A, B ∈ M_{m×n}(F) are said to be equivalent if there exist invertible matrices P ∈ M_{m×m}(F) and Q ∈ M_{n×n}(F) such that A = PBQ. Equation 3.29 says that a given matrix representation Γ(α, β)(T) of T relative to a pair of bases (α, β) changes to an equivalent matrix when we replace (α, β) by new bases (α′, β′). This leads to the following question: What is the simplest representation of a given linear transformation T? If we set A = Γ(α, β)(T), then we are asking, What is the simplest matrix B equivalent to A?

Recalling a few facts from elementary matrix theory gives us an easy answer to that question. Any invertible matrix P is a product, P = E_t ··· E_1, of elementary matrices E_1, ..., E_t. PA = E_t(···(E_1A)···) is the m × n matrix obtained from A by performing the elementary row operations on A represented by E_1, ..., E_t. Similarly, (PA)Q is the m × n matrix obtained from PA by performing a finite number of elementary column operations on PA. Let us denote the rank of any m × n matrix A (i.e., the number of linearly independent rows or columns of A) by rk(A). If rk(A) = s, then we can clearly find invertible matrices P and Q such that

    PAQ = ( I_s | 0 )
          (  0  | 0 )

Here our notation

    ( I_s | 0 )
    (  0  | 0 )

means PAQ will have the s × s identity matrix, I_s, in its upper left-hand corner and zeros everywhere else.

If we apply these remarks to our situation in Theorem 3.28, we get the following corollary:

Corollary 3.31: Let V and W be finite-dimensional vector spaces over F of dimensions n and m, respectively. Let α and β be bases of V and W. Let T ∈ Hom(V, W), and set A = Γ(α, β)(T). If rk(A) = s, then there exist bases α′ and β′ of V and W, respectively, such that

3.32: Γ(α′, β′)(T) = ( I_s | 0 )
                     (  0  | 0 )    □

There is another representation problem that naturally arises when considering Theorem 3.28. Suppose V = W. If α is a basis of V, then any T ∈ Hom(V, V) is represented in terms of α by an n × n matrix A = Γ(α, α)(T). If we change α to a new basis α′ of V, then the representation of T changes to B = Γ(α′, α′)(T). Equation 3.29 implies that B = PAP^{-1}, where P = M(α′, α). Recall that two n × n matrices A and B are similar if there exists an invertible n × n matrix P such that B = PAP^{-1}. Thus, different representations of the same T ∈ Hom(V, V) with respect to different bases of V are similar matrices.

Now we can ask, What is the simplest representation of T? If we choose any basis α of V and set A = Γ(α, α)(T), then our question becomes, What is the simplest matrix B similar to A? That question is not so easy to answer as the previous equivalence problem. We shall present some solutions to this question in Chapter III of this book.

Theorem 3.25 implies that dim Hom(V, W) = (dim V)(dim W) when V and W are finite dimensional. In our next theorem, we gather together some miscellaneous facts about linear transformations and the dim(·) function.

Theorem 3.33: Let V and W be vector spaces over F and suppose T ∈ Hom(V, W). Then

(a) If T is surjective, dim V ≥ dim W.
(b) If dim V = dim W < ∞, then T is an isomorphism if and only if either T is injective or T is surjective.
(c) dim(Im T) + dim(ker T) = dim V.

Proof: (a) follows immediately from Theorem 2.11. In (b), if T is an isomorphism, then T is both injective and surjective. Suppose T is injective, and n = dim V = dim W. Let α = {α_1, ..., α_n} be a basis of V. Since T is injective, Tα = {T(α_1), ..., T(α_n)} is a linearly independent set in W. Then dim W = n implies Tα is a basis of W. In particular, W = L(Tα) = T(L(α)) = T(V). Thus, T is surjective, and hence, an isomorphism.

Suppose T is surjective. If α = {α_1, ..., α_n} is a basis of V, then


W = T(V) = T(L(α)) = L(Tα). By Theorem 2.11, Tα contains a basis of W. Since dim W = n, Tα is a basis of W. Now let α ∈ ker T. Write α = Σ x_iα_i. Then 0 = Σ x_iT(α_i). Since Tα is a basis of W, x_1 = ··· = x_n = 0. Thus, α = 0 and T is injective. This completes the proof of (b).

We prove (c) in the case that dim V = n < ∞. The infinite-dimensional case is left as an exercise at the end of this section. Let α = {α_1, ..., α_r} be a basis of ker T. We take r = 0, and α = ∅ if T is injective. By Theorem 2.6, we can expand α to a basis Δ = {α_1, ..., α_r, β_1, ..., β_s} of V. Here r + s = n. We complete the proof of (c) by arguing that Tβ = {T(β_1), ..., T(β_s)} is a basis of Im T.

Suppose δ ∈ Im T. Then δ = T(γ) for some γ ∈ V. Since V = L(Δ), γ = x_1α_1 + ··· + x_rα_r + y_1β_1 + ··· + y_sβ_s for some x_i, y_i ∈ F. Applying T to this equation gives δ ∈ L(Tβ). Thus, Tβ spans Im T.

Suppose Σ_{i=1}^s y_iT(β_i) = 0 for some y_i ∈ F. Then Σ_{i=1}^s y_iβ_i ∈ ker T. Thus, Σ_{i=1}^s y_iβ_i = Σ_{i=1}^r x_iα_i for some x_i ∈ F. Since Δ is a basis of V, we conclude that x_1 = ··· = x_r = y_1 = ··· = y_s = 0. In particular, {T(β_1), ..., T(β_s)} is linearly independent. Thus, Tβ is a basis of Im T, and the proof of (c) is complete. □

We finish this section with a generalization of Theorem 3.33(c). We shall need the following definition.

Definition 3.34: By a chain complex C of vector spaces over F, we shall mean an infinite sequence {V_i} of vector spaces V_i, one for each integer i ∈ ℤ, together with a sequence {d_i} of linear transformations, d_i ∈ Hom(V_i, V_{i−1}) for each i ∈ ℤ, such that d_i d_{i+1} = 0 for all i ∈ ℤ.

We usually draw a chain complex as an infinite sequence of spaces and maps as follows:

3.35: C: ··· → V_{i+1} --d_{i+1}--> V_i --d_i--> V_{i−1} → ···

If a chain complex C has only finitely many nonzero terms, then we can change notation and write C as

3.36: C: 0 → V_n --d_n--> V_{n−1} → ··· --d_1--> V_0 → 0

It is understood here that all other vector spaces and maps not explicitly appearing in 3.36 are zero.

A chain complex C is said to be exact if Im d_{i+1} = ker d_i for every i ∈ ℤ.

Let us consider an important example.

Example 3.38: Let V and W be vector spaces over F, and let T ∈ Hom(V, W). Then

    C: 0 → ker T --i--> V --T--> Im T → 0

is an exact chain complex. Here i denotes the inclusion of ker T into V. □

We can generalize Example 3.38 slightly as follows:

Definition 3.39: By a short exact sequence, we shall mean an exact chain complex C of the following form:

3.40: C: 0 → V_2 --d_2--> V_1 --d_1--> V_0 → 0

Thus, the example in 3.38 is a short exact sequence with V_2 = ker T, d_2 = i, V_1 = V, etc. Clearly, a chain complex C of the form depicted in 3.40 is a short exact sequence if and only if d_2 is injective, d_1 is surjective, and Im d_2 = ker d_1. Theorem 3.33(c) implies that if C is a short exact sequence, then dim V_2 − dim V_1 + dim V_0 = 0. We can now prove the following generalization of this result:

Theorem 3.41: Suppose

    C: 0 → V_n --d_n--> V_{n−1} → ··· --d_1--> V_0 → 0

is an exact chain complex. Then Σ_{i=0}^n (−1)^i dim V_i = 0.

Proof: The chain complex C can be decomposed into the following short exact sequences:

    C_i: 0 → ker d_i → V_i --d_i--> Im d_i → 0,    i = 1, ..., n,

where, by exactness, ker d_i = Im d_{i+1} for i < n and ker d_n = (0).

If we now apply Theorem 3.33(c) to each C_i and add the results (recall that exactness gives ker d_i = Im d_{i+1}, so the sum telescopes), we get Σ_{i=0}^{n} (−1)^i dim V_i = 0. □

EXERCISES FOR SECTION 3

(1) Let V and W be vector spaces over F.
(a) Show that the Cartesian product V × W = {(α, β) | α ∈ V, β ∈ W} is a vector space under componentwise addition and scalar multiplication.
(b) Compute dim(V × W) when V and W are finite dimensional.
(c) Suppose T: V → W is a function. Show that T ∈ Hom(V, W) if and only if the graph G_T = {(α, T(α)) ∈ V × W | α ∈ V} of T is a subspace of V × W.

(2) Let T ∈ Hom(V, W) and S ∈ Hom(W, V). Prove the following statements:
(a) If ST is surjective, then S is surjective.
(b) If ST is injective, then T is injective.
(c) If ST = I_V (the identity map on V) and TS = I_W, then T is an isomorphism.
(d) If V and W have the same finite dimension n, then ST = I_V implies T is an isomorphism. Similarly, TS = I_W implies T is an isomorphism.

(3) Show that Exercise 2(d) is false in general. (Hint: Let V = W be the vector space in Exercise 4 of Section 2.)

(4) Show that F^m ≅ F^n implies n = m.

(5) Let T ∈ Hom(V, V). If T is not injective, show there exists a nonzero S ∈ Hom(V, V) with TS = 0. If T is not surjective, show there exists a nonzero S ∈ Hom(V, V) such that ST = 0.

(6) In the proof of Corollary 3.19, we claimed that M(α, γ)M(γ, α)[β]_α = [β]_α for all β ∈ V implies M(α, γ)M(γ, α) = I_n. Give a proof of this fact.

(7) ... if and only if A is an invertible matrix. Give a proof of this fact.

(8) Show that Theorem 3.33(c) is correct for any vector spaces V and W. Some knowledge of cardinal arithmetic is needed for this exercise.

(9) Let T ∈ Hom(V, V). Show that T² = 0 if and only if there exist two subspaces M and N of V such that
(a) M + N = V.
(b) M ∩ N = (0).
(c) T(N) = 0.
(d) T(M) ⊆ N.

(10) Let T ∈ Hom(V, V) be an involution, that is, T² = I_V. Show that there exist two subspaces M and N of V such that
(a) M + N = V.
(b) M ∩ N = (0).
(c) T(α) = α for every α ∈ M.
(d) T(α) = −α for every α ∈ N.
In Exercise 10, we assume 2 ≠ 0 in F. If F = F_2, are there subspaces M and N satisfying (a)-(d)?

(11) Let T ∈ Hom_F(V, V). If f(X) = a_n X^n + ··· + a_1 X + a_0 ∈ F[X], then f(T) = a_n T^n + ··· + a_1 T + a_0 I_V ∈ Hom(V, V). Show that dim_F V = m < ∞ implies there exists a nonzero polynomial f(X) ∈ F[X] such that f(T) = 0.

(12) If S, T ∈ Hom_F(V, F) such that S(α) = 0 implies T(α) = 0, prove that T = xS for some x ∈ F.

(13) Let W be a subspace of V with m = dim W ≤ dim V = n < ∞. Let Z = {T ∈ Hom(V, V) | T(α) = 0 for all α ∈ W}. Show that Z is a subspace of Hom(V, V) and compute its dimension.

(14) Suppose V is a finite-dimensional vector space over F, and let S, T ∈ Hom_F(V, V). If ST = I_V, show there exists a polynomial f(X) ∈ F[X] such that S = f(T).

(15) Use two appropriate diagrams as in 3.27 to prove the following theorem: Let V, W, Z be finite-dimensional vector spaces of dimensions n, m, and p, respectively. Let α, β, and γ be bases of V, W, and Z. If T ∈ Hom(V, W) and S ∈ Hom(W, Z), then Γ(α, γ)(ST) = Γ(β, γ)(S)Γ(α, β)(T).

(16) Suppose

C: ··· → V_{i+1} --d_{i+1}--> V_i --d_i--> ··· → V_1 --d_1--> V_0 → 0

and

C′: ··· → V′_{i+1} --d′_{i+1}--> V′_i → ··· → V′_1 --d′_1--> V′_0 → 0

are chain complexes with C exact. Let T_0 ∈ Hom_F(V_0, V′_0). Show that there exists T_i ∈ Hom_F(V_i, V′_i) such that T_{i−1} d_i = d′_i T_i for all i = 1, .... The collection of linear transformations {T_i} is called a chain map from C to C′.

(17) Suppose C = {(V_i, d_i) | i ∈ Z} and C′ = {(V′_i, d′_i) | i ∈ Z} are two chain complexes. Let T = {T_i}_{i∈Z} be a chain map from C to C′. Thus, T_i: V_i → V′_i, and T_{i−1} d_i = d′_i T_i for all i ∈ Z. For each i ∈ Z, set V″_i = V_{i−1} × V′_i = {(α, β) | α ∈ V_{i−1}, β ∈ V′_i}. Define a map d″_i: V″_i → V″_{i−1} by

d″_i(α, β) = (−d_{i−1}(α), T_{i−1}(α) + d′_i(β)). Show that C″ = {(V″_i, d″_i) | i ∈ Z} is a chain complex. The complex C″ is called the mapping cone of T.

(18) Use Theorem 3.33(c) to give another proof of Exercise 16 in Section 2.

(19) Find a T ∈ Hom_R(C, C) that is not C-linear.

(20) Let V be a finite-dimensional vector space over F. Suppose T ∈ Hom_F(V, V) such that dim(Im(T²)) = dim(Im(T)). Show that Im(T) ∩ ker(T) = {0}.

(21) The special case of equation 3.29 where V = W, β = α, and β′ = α′ is very important. Write out all the matrices and verify equation 3.29 in the following example: T: R³ → R³ is the linear transformation given by T(δ_1) = 2δ_1 + 2δ_2 + δ_3, T(δ_2) = δ_1 + 3δ_2 − δ_3, and T(δ_3) = −δ_1 + 2δ_2. Let α = {α_1, α_2, α_3}, where α_1 = (1, 2, 1), α_2 = (1, 0, −1), and α_3 = (0, 1, −1). Compute Γ(δ, δ)(T), Γ(α, α)(T), and the change of bases matrices in 3.29.

(22) Let V be a finite-dimensional vector space over F. Suppose TS = ST for every S ∈ Hom_F(V, V). Show that T = x I_V for some x ∈ F.

(23) Let A, B ∈ M_{n×n}(F) with at least one of these matrices nonsingular. Show that AB and BA are similar. Does this remain true if both A and B are singular?

4. PRODUCTS AND DIRECT SUMS

Let {V_i | i ∈ Δ} be a collection of vector spaces over a common field F. Our indexing set Δ may be finite or infinite. We define the product ∏_{i∈Δ} V_i of the V_i as follows:

Definition 4.1: ∏_{i∈Δ} V_i = {f: Δ → ∪_{i∈Δ} V_i | f is a function with f(i) ∈ V_i for all i ∈ Δ}.

We can give the set ∏_{i∈Δ} V_i the structure of a vector space (over F) by defining addition and scalar multiplication pointwise. Thus, if f, g ∈ ∏_{i∈Δ} V_i, then f + g is defined by (f + g)(i) = f(i) + g(i). If f ∈ ∏_{i∈Δ} V_i and x ∈ F, then xf is defined by (xf)(i) = x(f(i)). The fact that ∏_{i∈Δ} V_i is a vector space with these operations is straightforward. Henceforth, the symbol ∏_{i∈Δ} V_i will denote the vector space whose underlying set is given in 4.1 and whose vector operations are pointwise addition and scalar multiplication.

Suppose V = ∏_{i∈Δ} V_i is a product. It is sometimes convenient to identify a given vector f ∈ V with its set of values {f(i) | i ∈ Δ}. f(i) is called the ith coordinate of f, and we think of f as the "Δ-tuple" (f(i))_{i∈Δ}. Addition and scalar multiplication in V are given in terms of Δ-tuples as follows: (f(i))_{i∈Δ} + (g(i))_{i∈Δ} = (f(i) + g(i))_{i∈Δ}, and x(f(i))_{i∈Δ} = (xf(i))_{i∈Δ}. This particular viewpoint is especially fruitful when |Δ| = n < ∞. In this case, we can assume Δ = {1, 2, ..., n}. Each f ∈ V is then identified with the n-tuple (f(1), ..., f(n)). When |Δ| = n, we shall use the notation V_1 × ··· × V_n instead of ∏_{i∈Δ} V_i. Thus, the examples given in 1.5, 3.21, and Exercise 1 of Section 3 are all special cases of finite products. Example 1.5 is a product in which every V_i is the same vector space V.

If V = ∏_{i∈Δ} V_i, then for every pair of indices (p, q) ∈ Δ × Δ, there exist linear transformations π_p ∈ Hom(V, V_p) and θ_q ∈ Hom(V_q, V) defined as follows:

Definition 4.2:
(a) π_p: V → V_p is given by π_p(f) = f(p) for all f ∈ V.
(b) θ_q: V_q → V is given by

θ_q(α)(i) = α if i = q, and θ_q(α)(i) = 0 if i ≠ q.

In Definition 4.2(b), α ∈ V_q. θ_q(α) is that function in V whose only nonzero value is α taken on at i = q. The fact that π_p and θ_q are linear transformations is obvious. Our next theorem lists some of the interesting properties these two sets of maps have.

Theorem 4.3: Let V = ∏_{i∈Δ} V_i. Then
(a) π_p θ_p = I_{V_p}, the identity map on V_p, for all p ∈ Δ.
(b) π_p θ_q = 0 for all p ≠ q in Δ.
(c) If Δ is finite, Σ_{p∈Δ} θ_p π_p = I_V, the identity map on V.
(d) π_p is surjective and θ_p is injective for all p ∈ Δ.
(e) Let W be a second vector space over F. A function T: W → V is a linear transformation if and only if π_p T ∈ Hom(W, V_p) for all p ∈ Δ.
(f) The vector space V together with the set {π_p | p ∈ Δ} of linear transformations satisfies the following universal mapping property: Suppose W is any vector space over F and {T_p ∈ Hom(W, V_p) | p ∈ Δ} a set of linear transformations. Then there exists a unique T ∈ Hom(W, V) such that for every p ∈ Δ the following diagram is commutative:

4.4: [commutative triangle] T: W → V with π_p: V → V_p and T_p: W → V_p satisfying π_p T = T_p.

Proof: (a), (b), and (c) follow immediately from the definitions. π_p is surjective and θ_p is injective since π_p θ_p = I_{V_p}. Thus, (d) is clear. As for (e), we need only argue that T is linear provided π_p T is linear for all p ∈ Δ. Let α, β ∈ W and x, y ∈ F.

Then for every p ∈ Δ, we have π_p(T(xα + yβ)) = π_p T(xα + yβ) = x π_p T(α) + y π_p T(β) = π_p(xT(α) + yT(β)). Now it is clear from our definitions that two functions f, g ∈ V are equal if and only if π_p(f) = π_p(g) for all p ∈ Δ. Consequently, T(xα + yβ) = xT(α) + yT(β), and T is linear.

Finally, we come to the proof of (f). We shall have no use for this fact in this text. We mention this result only because in general category theory products are defined as the unique object satisfying the universal mapping property given in (f). The map T: W → V making 4.4 commute is given by T(α) = (T_i(α))_{i∈Δ}. We leave the details for an exercise at the end of this section. □

The map π_p: V → V_p in Definition 4.2(a) is called the pth projection or pth coordinate map of V. The map θ_q: V_q → V is often called the qth injection of V_q into V. These maps can be used to analyze linear transformations to and from products. We begin first with the case where |Δ| < ∞.

Theorem 4.5: Suppose V = V_1 × ··· × V_n is a finite product of vector spaces, and let W be another vector space. If T_i ∈ Hom(W, V_i) for i = 1, ..., n, then there exists a unique T ∈ Hom(W, V_1 × ··· × V_n) such that π_i T = T_i for all i = 1, ..., n.

Proof: Set T = Σ_{i=1}^{n} θ_i T_i and apply Theorem 4.3. □

As an immediate corollary to Theorem 4.5, we get the following result:

Corollary 4.6: If |Δ| = n < ∞, then Hom(W, ∏_{i∈Δ} V_i) ≅ ∏_{i∈Δ} Hom(W, V_i).

Proof: Define a map Ψ: Hom(W, V_1 × ··· × V_n) → Hom(W, V_1) × ··· × Hom(W, V_n) by Ψ(T) = (π_1 T, ..., π_n T). One easily checks that Ψ is an injective, linear transformation. Theorem 4.5 implies Ψ is surjective. □

We have a similar result for products in the first slot of Hom.

Theorem 4.7: Suppose V = V_1 × ··· × V_n is a finite product of vector spaces, and let W be another vector space. If T_i ∈ Hom(V_i, W) for i = 1, ..., n, then there exists a unique T ∈ Hom(V_1 × ··· × V_n, W) such that T θ_i = T_i for all i = 1, ..., n.

Proof: Set T = Σ_{i=1}^{n} T_i π_i and apply Theorem 4.3. □

Corollary 4.8: If |Δ| = n < ∞, then Hom(∏_{i∈Δ} V_i, W) ≅ ∏_{i∈Δ} Hom(V_i, W).

Proof: Define a map Ψ: Hom(∏_{i∈Δ} V_i, W) → ∏_{i∈Δ} Hom(V_i, W) by Ψ(T) = (T θ_1, ..., T θ_n). Again the reader can easily check that Ψ is an injective, linear transformation. Theorem 4.7 implies that Ψ is surjective. □

Suppose V = V_1 × ··· × V_n is a finite product of vector spaces over F. Let B_i be a basis of V_i, i = 1, ..., n. We can think of the vectors in V as n-tuples (α_1, ..., α_n) with α_i ∈ V_i. For any i and α ∈ V_i, θ_i(α) = (0, ..., α, ..., 0). Thus, θ_i(α) is the n-tuple of V that is zero everywhere except for an α in the ith slot. Since θ_i is injective, θ_i: V_i ≅ θ_i(V_i). In particular, θ_i(B_i) is a basis of the subspace θ_i(V_i). Since θ_i(B_i) ∩ L(∪_{j≠i} θ_j(B_j)) = (0), B = ∪_{i=1}^{n} θ_i(B_i) is a linearly independent set. Clearly, V = Σ_{i=1}^{n} θ_i(V_i). Consequently, B is a basis of V. We have now proved the following theorem:

Theorem 4.9: Let V = V_1 × ··· × V_n be a finite product of vector spaces. If B_i is a basis of V_i, i = 1, ..., n, then B = ∪_{i=1}^{n} θ_i(B_i) is a basis of V. In particular, if each V_i is finite dimensional, then so is V. In this case, we have dim V = Σ_{i=1}^{n} dim V_i. □

At this point, let us say a few words about our last three theorems when |Δ| = ∞. Corollary 4.6 is true for any indexing set Δ. The map Ψ(T) = (π_i T)_{i∈Δ} is an injective, linear transformation as before. We cannot use Theorem 4.5 to conclude Ψ is surjective, since Σ_{i∈Δ} θ_i T_i makes no sense when |Δ| = ∞. However, we can argue directly that Ψ is surjective. Let (T_i)_{i∈Δ} ∈ ∏_{i∈Δ} Hom(W, V_i). Define T ∈ Hom(W, ∏_{i∈Δ} V_i) by T(α) = (T_i(α))_{i∈Δ}. Clearly Ψ(T) = (T_i)_{i∈Δ}. Thus, we have the following generalization of 4.6:

4.10: For any indexing set Δ, Hom(W, ∏_{i∈Δ} V_i) ≅ ∏_{i∈Δ} Hom(W, V_i).

In general, Corollary 4.8 is false when |Δ| = ∞. For example, if W = F and V_i = F for all i ∈ Δ, then the reader can easily see that |Hom_F(∏_{i∈Δ} F, F)| > |∏_{i∈Δ} F| when Δ is infinite. Since Hom_F(F, F) ≅ F, we see that Hom(∏_{i∈Δ} F, F) cannot be isomorphic to ∏_{i∈Δ} Hom(F, F).

If V = ∏_{i∈Δ} V_i with |Δ| = ∞ and B_i is a basis of V_i, then ∪_{i∈Δ} θ_i(B_i) is a linearly independent subset of V. But in general, V ≠ Σ_{i∈Δ} θ_i(V_i). For a concrete example, consider V = R^N in Exercise 5 of Section 1. Thus, ∪_{i∈Δ} θ_i(B_i) is not in general a basis for V. In particular, Theorem 4.9 is false when |Δ| = ∞.

Let us again suppose V = ∏_{i∈Δ} V_i with Δ an arbitrary set. There is an important subspace of V that we wish to study.

Definition 4.11: Let ⊕_{i∈Δ} V_i = {f ∈ ∏_{i∈Δ} V_i | f(i) = 0 except possibly for finitely many i ∈ Δ}.

Clearly ⊕_{i∈Δ} V_i is a subspace of V under pointwise addition and scalar multiplication. In terms of Δ-tuples, the vector f = (α_i)_{i∈Δ} lies in ⊕_{i∈Δ} V_i if and only if there exists some finite subset Δ_0 (possibly empty) of Δ such that α_i = 0 for all i ∈ Δ − Δ_0. If |Δ| < ∞, then ⊕_{i∈Δ} V_i = ∏_{i∈Δ} V_i. If |Δ| = ∞, then ⊕_{i∈Δ} V_i is usually a proper subspace of V. Consider the following example:

Example 4.12: Let F = R, Δ = N, and V_i = R for all i ∈ Δ. Then the N-tuple (1)_{i∈N}, that is, the function f: N → ∪_{i∈N} R given by f(i) = 1 for all i ∈ N, is a vector in V = ∏_{i∈N} R but not a vector in ⊕_{i∈N} R. □

The vector space ⊕_{i∈Δ} V_i is called the direct sum of the V_i. It is also called the subdirect product of the V_i and written ∐_{i∈Δ} V_i. In this text, we shall consistently use the notation ⊕_{i∈Δ} V_i to indicate the direct sum of the V_i. If |Δ| = n < ∞, then we can assume Δ = {1, 2, ..., n}. In this case we shall write V_1 ⊕ ··· ⊕ V_n or ⊕_{i=1}^{n} V_i instead of ⊕_{i∈Δ} V_i. Thus, V_1 ⊕ ··· ⊕ V_n, ⊕_{i=1}^{n} V_i, V_1 × ··· × V_n, ∏_{i∈Δ} V_i, and ⊕_{i∈Δ} V_i are all the same space when |Δ| = n < ∞.

Since ⊕_{i∈Δ} V_i = Σ_{i∈Δ} θ_i(V_i), our comments after 4.10 imply the following theorem:

Theorem 4.13: Suppose V = ⊕_{i∈Δ} V_i is the direct sum of vector spaces V_i. Let B_i be a basis of V_i. Then B = ∪_{i∈Δ} θ_i(B_i) is a basis of V. □

The subspace ⊕_{i∈Δ} V_i constructed in Definition 4.11 is sometimes called the external direct sum of the V_i because the vector spaces {V_i | i ∈ Δ} a priori have no relationship to each other. We finish this section with a construction that is often called an internal direct sum.

Suppose V is a vector space over F. Let {V_i | i ∈ Δ} be a collection of subspaces of V. Here our indexing set Δ may be finite or infinite. We can construct the (external) direct sum ⊕_{i∈Δ} V_i of the V_i as in Definition 4.11 and consider the natural linear transformation S: ⊕_{i∈Δ} V_i → V given by S((α_i)_{i∈Δ}) = Σ_{i∈Δ} α_i. Since (α_i)_{i∈Δ} ∈ ⊕_{i∈Δ} V_i, only finitely many of the α_i are nonzero. Therefore, Σ_{i∈Δ} α_i is a well-defined finite sum in V. Thus, S is well defined and clearly linear.

Definition 4.14: Let {V_i | i ∈ Δ} be a collection of subspaces of V. We say these subspaces are independent if the linear transformation S: ⊕_{i∈Δ} V_i → V defined above is injective.

Note Im S = Σ_{i∈Δ} V_i. Thus, the subspaces V_i, i ∈ Δ, are independent if and only if ⊕_{i∈Δ} V_i ≅ Σ_{i∈Δ} V_i via S. A simple example of independent subspaces is provided by Theorem 2.13(c).

Example 4.15: Let V be a vector space over F and W a subspace of V. Let W′ be any complement of W. Then W, W′ are independent. The direct sum W ⊕ W′ is just the product W × W′, and S: W × W′ → W + W′ is given by S((α, β)) = α + β. If (α, β) ∈ ker S, then α + β = 0. But W ∩ W′ = 0. Therefore, α = −β ∈ W ∩ W′ implies α = β = 0. Thus, S is injective, and W, W′ are independent. □

In our next theorem, we collect a few simple facts about independent subspaces.

Theorem 4.16: Let {V_i | i ∈ Δ} be a collection of subspaces of V. Then the following statements are equivalent:
(a) The V_i, i ∈ Δ, are independent.
(b) Every vector α ∈ Σ_{i∈Δ} V_i can be written uniquely in the form α = Σ_{i∈Δ} α_i with α_i ∈ V_i for all i ∈ Δ.
(b′) If Σ_{i∈Δ} α_i = 0 with α_i ∈ V_i, then α_i = 0 for all i ∈ Δ.
(c) For every j ∈ Δ, V_j ∩ (Σ_{i≠j} V_i) = (0).

Proof: In statements (b) and (b′), Σ_{i∈Δ} α_i means α_i = 0 for all but possibly finitely many i ∈ Δ. It is obvious that (b) and (b′) are equivalent. So, we argue that (a) and (b′) are equivalent, and that (b′) and (c) are equivalent.

Suppose the V_i are independent. If Σ_{i∈Δ} α_i = 0 with α_i ∈ V_i for all i ∈ Δ, then S((α_i)_{i∈Δ}) = Σ_{i∈Δ} α_i = 0. Since S is injective, we conclude that α_i = 0 for all i ∈ Δ. Thus, (a) implies (b′). Similarly, (b′) implies (a).

Suppose we assume (b′). Fix j ∈ Δ. Let α ∈ V_j ∩ (Σ_{i≠j} V_i). Then α = α_j for some α_j ∈ V_j, and α = Σ_{i∈Δ−{j}} α_i for some α_i ∈ V_i. As usual, all the α_i here are zero except possibly for finitely many indices i ≠ j. Thus, 0 = Σ_{i∈Δ−{j}} α_i + (−1)α_j. (b′) then implies α_i = α_j = 0 for all i ∈ Δ − {j}. In particular, α = 0, and (c) is established.

Suppose we assume (c). Let Σ_{i∈Δ} α_i = 0 with α_i ∈ V_i. If every α_i = 0, there is nothing to prove. Suppose some α_i, say α_j, is not zero. Then α_j = −Σ_{i≠j} α_i ∈ V_j ∩ (Σ_{i≠j} V_i) implies V_j ∩ (Σ_{i≠j} V_i) ≠ 0. This is contrary to our assumption. Thus, (c) implies (b′), and our proof is complete. □

If {V_i | i ∈ Δ} is a collection of independent subspaces of V such that Σ_{i∈Δ} V_i = V, then we say V is the internal direct sum of the V_i. In this case, V ≅ ⊕_{i∈Δ} V_i via S, and we often just identify V with ⊕_{i∈Δ} V_i. If |Δ| = n < ∞, we shall simply write V = V_1 ⊕ ··· ⊕ V_n when V is an internal direct sum of subspaces V_1, ..., V_n.

The reader will note that there is no difference in notation between an external direct sum and an internal direct sum. This deliberate ambiguity will cause no real confusion in the future.

Finally, suppose V = V_1 ⊕ ··· ⊕ V_n is an internal direct sum of independent subspaces V_1, ..., V_n. Then by Theorem 4.16(b), every vector α ∈ V can be written uniquely in the form α = α_1 + ··· + α_n with α_i ∈ V_i. Thus, the map P_i: V → V, which sends α to α_i, is a well-defined function. Theorem 4.16(b) implies that each P_i is a linear transformation such that Im P_i = V_i. We give formal names to these maps P_1, ..., P_n.

Definition 4.17: Let V = V_1 ⊕ ··· ⊕ V_n be the internal direct sum of independent subspaces V_1, ..., V_n. For each i = 1, ..., n, the linear transformation P_i defined above is called the ith projection map of V relative to the decomposition V_1 ⊕ ··· ⊕ V_n.

Our next theorem is an immediate consequence of Theorem 4.16(b).

Theorem 4.18: Let V = V_1 ⊕ ··· ⊕ V_n be an internal direct sum of the independent subspaces V_1, ..., V_n. Suppose P_1, ..., P_n ∈ Hom(V, V) are the associated projection maps. Then
(a) P_i P_j = 0 if i ≠ j.
(b) P_i P_i = P_i.
(c) Σ_{i=1}^{n} P_i = I_V, the identity map on V. □

Theorem 4.18 says that every internal direct sum decomposition V = V_1 ⊕ ··· ⊕ V_n determines a set {P_1, ..., P_n} of pairwise orthogonal [4.18(a)] idempotents [4.18(b)] whose sum is I_V [4.18(c)] in the algebra of endomorphisms ℰ(V) = Hom_F(V, V). Let us take this opportunity to define some of the words in our last sentence.

Definition 4.19: By an associative algebra A over F, we shall mean a vector space (A, (α, β) → α + β, (x, α) → xα) over F together with a second function (α, β) → αβ from A × A to A satisfying the following axioms:

A1. α(βγ) = (αβ)γ for all α, β, γ ∈ A.
A2. α(β + γ) = αβ + αγ for all α, β, γ ∈ A.
A3. (β + γ)α = βα + γα for all α, β, γ ∈ A.
A4. x(αβ) = (xα)β = α(xβ) for all α, β ∈ A, x ∈ F.
A5. There exists an element 1 ∈ A such that 1α = α1 = α for all α ∈ A.

We have seen several examples of (associative) algebras in this book already. Any field F is an associative algebra over F. M_{n×n}(F) and F[X] with the usual multiplication of matrices or polynomials are algebras over F. If V is any vector space over F, then Hom_F(V, V) becomes an (associative) algebra over F when we define the product of two linear transformations T_1 and T_2 to be their composite T_1 T_2. Clearly axioms A1-A5 are satisfied. Here 1 is the identity map from V to V. Linear transformations from V to V are called endomorphisms of V. The algebra ℰ(V) = Hom_F(V, V) is called the algebra of endomorphisms of V.

Suppose A is any algebra over F. An element α ∈ A is idempotent if αα = α. In F or F[X], for example, the only idempotents are 0 and 1. In M_{n×n}(F), e_{11}, ..., e_{nn} are all idempotents different from 0 or 1. Idempotents {α_1, ..., α_n} in an algebra A are said to be pairwise orthogonal if α_i α_j = 0 whenever i ≠ j. Thus, {e_{11}, ..., e_{nn}} is a set of pairwise orthogonal idempotents in M_{n×n}(F).

Theorem 4.18 says that every internal direct sum decomposition V = V_1 ⊕ ··· ⊕ V_n determines a set of pairwise orthogonal idempotents whose sum is 1 in ℰ(V). Our last theorem of this section is the converse of this result.

Theorem 4.20: Let V be a vector space over F, and suppose {P_1, ..., P_n} is a set of pairwise orthogonal idempotents in ℰ(V) such that P_1 + ··· + P_n = 1. Let V_i = Im P_i. Then V = V_1 ⊕ ··· ⊕ V_n.

Proof: We must show V = V_1 + ··· + V_n and V_j ∩ (Σ_{i≠j} V_i) = 0. Let α ∈ V and set α_i = P_i(α). Then α = 1(α) = (P_1 + ··· + P_n)(α) = P_1(α) + ··· + P_n(α) = α_1 + ··· + α_n. Since α_i ∈ Im P_i = V_i, we conclude V ⊆ V_1 + ··· + V_n. Thus, V = V_1 + ··· + V_n.

Fix j, and suppose δ ∈ V_j ∩ (Σ_{i≠j} V_i). Then δ = P_j(β) = Σ_{i≠j} P_i(β_i) for some β ∈ V and β_i ∈ V (i ≠ j). Then δ = P_j(β) = P_j P_j(β) = P_j(Σ_{i≠j} P_i(β_i)) = Σ_{i≠j} P_j P_i(β_i) = 0. Thus, V_j ∩ (Σ_{i≠j} V_i) = (0), and the proof is complete. □

EXERCISES FOR SECTION 4

(1) Let B = {δ_i | i ∈ Δ} be a basis of V. Show that V is the internal direct sum of {Fδ_i | i ∈ Δ}.

(2) Show Hom_F(⊕_{i∈Δ} V_i, W) ≅ ∏_{i∈Δ} Hom_F(V_i, W).

(3) Give a careful proof of Theorem 4.3(f).

(4) Let V = V_1 × ··· × V_n, and for each i = 1, ..., n, set T_i = θ_i π_i. Show that {T_1, ..., T_n} is a set of pairwise orthogonal idempotents in ℰ(V) whose sum is 1.

(5) Let V = V_1 × ··· × V_n. Show that V has a collection of subspaces {W_1, ..., W_n} such that V_i ≅ W_i for i = 1, ..., n and V = ⊕_{i=1}^{n} W_i.

(6) Give a combined version of Corollaries 4.6 and 4.8 by showing directly that ψ: Hom_F(V_1 × ··· × V_n, W_1 × ··· × W_m) → ∏_{i=1}^{n} ∏_{j=1}^{m} Hom(V_i, W_j) given by ψ(T) = (π_j T θ_i)_{i=1,...,n, j=1,...,m} is an isomorphism.

(7) Suppose V = V_1 ⊕ ··· ⊕ V_n. Let T ∈ Hom(V, V) such that T(V_i) ⊆ V_i for all i = 1, ..., n. Find a basis α of V such that Γ(α, α)(T) is a block diagonal matrix with diagonal blocks M_1, ..., M_n, where M_i describes the action of T on V_i.

(8) If X, Y, Z are subspaces of V such that X ⊕ Y = X ⊕ Z = V, is Y = Z? Is Y ≅ Z?

(9) Find three subspaces V_1, V_2, V_3 of V = F[X] such that V = V_1 ⊕ V_2 ⊕ V_3.

(10) If V = V_1 + V_2, show that there exists a subspace W of V such that W ⊆ V_2 and V = V_1 ⊕ W.

(11) Let A be an algebra over F. A linear transformation T ∈ Hom_F(A, A) is called an algebra homomorphism if T(αβ) = T(α)T(β) for all α, β ∈ A. Exhibit a nontrivial algebra homomorphism on the algebras F[X] and M_{n×n}(F).

(12) Suppose V is a vector space over F. Let S: V → V be an isomorphism of V. Show that the map T → S⁻¹TS is an algebra homomorphism of ℰ(V) which is one to one and onto.

(13) Let F be a field. Show that the vector space V = F (over F) is not the direct sum of any two proper subspaces.

(14) An algebra A over F is said to be commutative if αβ = βα for all α, β ∈ A. Suppose V is a vector space over F such that dim V > 1. Show that ℰ(V) is not commutative.

(15) Suppose V is a vector space over F. Let T ∈ ℰ(V) be idempotent. Show V = ker(T) ⊕ Im(T).

(16) Let V be a vector space over F, and let T ∈ ℰ(V). If T³ = T, show that V = V_0 ⊕ V_1 ⊕ V_2 where the V_i are subspaces of V with the following properties: α ∈ V_0 ⇒ T(α) = 0, α ∈ V_1 ⇒ T(α) = α, and α ∈ V_2 ⇒ T(α) = −α. In this exercise, assume 2 ≠ 0 in F.

(17) Suppose V is a finite-dimensional vector space over F. If T ∈ ℰ(V) is nonzero, show there exists an S ∈ ℰ(V) such that ST is a nonzero idempotent of ℰ(V).

(18) Suppose T ∈ ℰ(V) is not zero and not an isomorphism of V. Prove there is an S ∈ ℰ(V) such that ST = 0, but TS ≠ 0.

(19) Suppose V is a finite-dimensional vector space over F with subspaces W_1, ..., W_k. Suppose V = W_1 + ··· + W_k, and dim(V) = Σ_{i=1}^{k} dim(W_i). Show that V = W_1 ⊕ ··· ⊕ W_k.

5. QUOTIENT SPACES AND THE ISOMORPHISM THEOREMS

In this section, we construct quotient spaces and prove the isomorphism theorems. Before doing that, we need to consider equivalence relations. Suppose A is a nonempty set and R ⊆ A × A is a relation on A. The reader will recall from Section 2 that we used the notation x ~ y to mean (x, y) ∈ R. The relation ~ is called an equivalence relation if the following conditions are satisfied:

5.1: (a) x ~ x for all x ∈ A.
(b) If x ~ y, then y ~ x for all x, y ∈ A.
(c) If x ~ y and y ~ z, then x ~ z for all x, y, z ∈ A.

A relation satisfying 5.1(a) is called reflexive. If 5.1(b) is satisfied, the relation is said to be symmetric. A relation satisfying 5.1(c) is said to be transitive. Thus, an equivalence relation is a reflexive, symmetric relation that is transitive.

Example 5.2: Let A = Z, and suppose p is a positive prime. Define a relation ≡ (congruence mod p) on A by x ≡ y if and only if p | x − y. The reader can easily check that ≡ is an equivalence relation on Z. □

The equivalence relation introduced in Example 5.2 is called a congruence, and we shall borrow the symbol ≡ to indicate a general equivalence relation. Thus, if R ⊆ A × A is an equivalence relation on A and (x, y) ∈ R, then we shall write x ≡ y. We shall be careful in the rest of this text to use the symbol ≡ only when dealing with an equivalence relation.

Now suppose ≡ is an equivalence relation on a set A. For each x ∈ A, we set x̄ = {y ∈ A | y ≡ x}. x̄ is a subset of A containing x. x̄ is called the equivalence class of x. The function from A to 𝒫(A) given by x → x̄ satisfies the following properties:

5.3: (a) x ∈ x̄.
(b) x̄ = ȳ if and only if x ≡ y.
(c) For any x, y ∈ A, either x̄ = ȳ or x̄ ∩ ȳ = ∅.
(d) A = ∪_{x∈A} x̄.

The proofs of the statements in 5.3 are all easy consequences of the definitions. If we examine Example 5.2 again, we see Z is the disjoint union of the p equivalence classes 0̄, 1̄, ..., (p − 1)‾. It follows from 5.3(c) and 5.3(d) that any equivalence relation on a set A divides A into a disjoint union of equivalence classes. The reader probably has noted that the equivalence classes {0̄, 1̄, ..., (p − 1)‾} of Z inherit an addition and multiplication from Z and form the field F_p discussed in Example 1.3. This is a common phenomenon in algebra. The set of equivalence classes on a set A often inherits some algebraic operations from A itself. This type of inheritance of algebraic structure is particularly fruitful in the study of vector spaces.

Let V be a vector space over a field F, and suppose W is a subspace of V. The subspace W determines an equivalence relation ≡ on V defined as follows:

5.4: α ≡ β if α − β ∈ W.

Let us check that the relation ≡ defined in 5.4 is reflexive, symmetric, and transitive. Clearly, α ≡ α. If α ≡ β, then α − β ∈ W. Since W is a subspace, β − α ∈ W. Therefore, β ≡ α. Suppose α ≡ β and β ≡ γ. Then α − β, β − γ ∈ W. Again, since W is a subspace, α − γ = (α − β) + (β − γ) ∈ W, and, thus, α ≡ γ. So, indeed ≡ is an equivalence relation on V. The reader should realize that the equivalence relation ≡ depends on the subspace W. We have deliberately suppressed any reference to W in the symbol ≡ to simplify notation. This will cause no confusion in the sequel.

Definition 5.5: Let W be a subspace of V, and let ≡ denote the equivalence relation defined in 5.4. If α ∈ V, then the equivalence class of α will be denoted by ᾱ. The set of all equivalence classes {ᾱ | α ∈ V} will be denoted by V/W.
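The relation in 5.4 can be experimented with numerically. In the following sketch, the choice V = R² and W = span{(1, 1)} is an illustrative assumption (it is not an example from the text); each coset α + W is identified by a canonical representative, the unique element of the coset with first coordinate 0.

```python
# V = R^2, W = span{(1, 1)}:  alpha ≡ beta  iff  alpha - beta lies in W.
def in_W(v):
    # (x, y) lies in span{(1, 1)} exactly when x == y
    x, y = v
    return abs(x - y) < 1e-12

def equiv(a, b):
    return in_W((a[0] - b[0], a[1] - b[1]))

def rep(a):
    # canonical representative of the coset a + W:
    # subtract x*(1,1) to make the first coordinate 0
    x, y = a
    return (0.0, y - x)

a, b = (2.0, 5.0), (1.0, 4.0)      # a - b = (1, 1), which lies in W
assert equiv(a, b)                  # so a ≡ b
assert rep(a) == rep(b)             # same equivalence class: a-bar = b-bar
assert not equiv(a, (0.0, 0.0))     # a does not lie in the coset 0 + W = W
assert equiv(a, a) and equiv(b, a)  # reflexive and symmetric spot checks
```

Two vectors have the same representative exactly when their difference lies in W, which is the content of 5.3(b) for this relation.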

Thus, ᾱ = {β ∈ V | β ≡ α} and V/W = {ᾱ | α ∈ V}. Note that the elements in V/W are subsets of V. Hence V/W consists of a collection of elements from 𝒫(V).

Definition 5.6: If W is a subspace of V and α ∈ V, then the subset α + W = {α + γ | γ ∈ W} is called a coset of W.

Clearly, β ∈ α + W if and only if α − β ∈ W. Thus, the coset α + W is the same set as the equivalence class ᾱ of α under ≡. So, V/W is the set of all cosets of W. In particular, the equivalence class ᾱ of α has a nice geometric interpretation. ᾱ = α + W is the translate of the subspace W through the vector α.

Let us pause for a second and discuss the other names that some of these objects have. A coset α + W is also called an affine subspace or flat of V. We shall not use the word "flat" again in this text, but we want to introduce formally the set of affine subspaces of V.

Definition 5.7: The set of all affine subspaces of V will be denoted 𝒜(V).

Thus, A ∈ 𝒜(V) if and only if A = α + W for some subspace W ⊆ V and some α ∈ V. Note that an affine subspace A = α + W is not a subspace of V unless α ∈ W. Thus, we must be careful to use the word "affine" when considering elements in 𝒜(V). Since 𝒜(V) consists of all cosets of all subspaces of V, V/W ⊆ 𝒜(V) ⊆ 𝒫(V), and these inclusions are usually strict.

The set V/W is called the quotient of V by W and is read "V mod W". We shall see shortly that V/W inherits a vector space structure from V. Before discussing this point, we gather together some of the more useful properties of affine subspaces in general.

Theorem 5.8: Let V be a vector space over F, and let 𝒜(V) denote the set of all affine subspaces of V.
(a) If {A_i | i ∈ Δ} is an indexed collection of affine subspaces in 𝒜(V), then either ∩_{i∈Δ} A_i = ∅ or ∩_{i∈Δ} A_i ∈ 𝒜(V).
(b) If A, B ∈ 𝒜(V), then A + B ∈ 𝒜(V).
(c) If A ∈ 𝒜(V) and x ∈ F, then xA ∈ 𝒜(V).
(d) If A ∈ 𝒜(V) and T ∈ Hom(V, V′), then T(A) ∈ 𝒜(V′).
(e) If A′ ∈ 𝒜(V′) and T ∈ Hom(V, V′), then T⁻¹(A′) is either empty or an affine subspace of V.

Proof: The proofs of (b)-(e) are all straightforward. In (e), T⁻¹(A′) = {α ∈ V | T(α) ∈ A′}. We give a proof of (a) only. Suppose A_i = α_i + W_i for each i ∈ Δ. Here W_i is a subspace of V and α_i a vector in V. Suppose ∩_{i∈Δ} A_i ≠ ∅. Let β ∈ ∩_{i∈Δ} A_i. Then for each i ∈ Δ, β = α_i + γ_i with γ_i ∈ W_i. But then β + W_i = α_i + W_i, and ∩_{i∈Δ} A_i = ∩_{i∈Δ} (β + W_i).

We claim that ∩_{i∈Δ} (β + W_i) = β + (∩_{i∈Δ} W_i). Clearly, β + (∩_{i∈Δ} W_i) ⊆ ∩_{i∈Δ} (β + W_i), so let α ∈ ∩_{i∈Δ} (β + W_i). Then, for i ≠ j, α = β + δ_i = β + δ_j with δ_i ∈ W_i and δ_j ∈ W_j. But then δ_i = δ_j, and α ∈ β + (W_i ∩ W_j). Thus, ∩_{i∈Δ} (β + W_i) ⊆ β + (∩_{i∈Δ} W_i). Therefore, ∩_{i∈Δ} (β + W_i) = β + (∩_{i∈Δ} W_i). Since β + (∩_{i∈Δ} W_i) ∈ 𝒜(V), the proof of (a) is complete. □

We can generalize Theorem 5.8(d) one step further by introducing the concept of an affine map between two vector spaces. If α ∈ V, then by translation through α, we shall mean the function S_α: V → V given by S_α(β) = α + β. Any coset α + W is just S_α(W) for the translation S_α. Note that when α ≠ 0, S_α is not a linear transformation.

Definition 5.9: Let V and V′ be two vector spaces over a field F. A function f: V → V′ is called an affine transformation if f = S_α T for some T ∈ Hom_F(V, V′) and some α ∈ V′. The set of all affine transformations from V to V′ will be denoted Aff_F(V, V′).

Clearly, Hom_F(V, V′) ⊆ Aff_F(V, V′) ⊆ (V′)^V. Theorem 5.8(d) can be restated as follows:

Theorem 5.10: If A ∈ 𝒜(V) and f ∈ Aff_F(V, V′), then f(A) ∈ 𝒜(V′). □

Let us now return to the special subset V/W of 𝒜(V). The cosets of W can be given the structure of a vector space. We first define a binary operation ∔ on V/W by the following formula:

5.11: ᾱ ∔ β̄ = (α + β)‾

In equation 5.11, α and β are vectors in V and ᾱ and β̄ are their corresponding equivalence classes. ᾱ ∔ β̄ is defined to be the equivalence class that contains α + β. We note that our definition of ᾱ ∔ β̄ depends only on the equivalence classes ᾱ and β̄ and not on the particular elements α ∈ ᾱ and β ∈ β̄ (used to form the right-hand side of 5.11). To see this, suppose α_1 ∈ ᾱ and β_1 ∈ β̄. Then α_1 − α and β_1 − β are in W. Therefore, (α_1 + β_1) − (α + β) ∈ W, and (α_1 + β_1)‾ = (α + β)‾. Thus, ∔: V/W × V/W → V/W is a well-defined function. The reader can easily check that (V/W, ∔) satisfies axioms V1-V4 of Definition 1.4. 0̄ is the zero element of V/W, and (−α)‾ is the inverse of ᾱ under ∔. The function ∔ is called addition on V/W, and, henceforth, we shall simply write + for this operation. Thus, ᾱ + β̄ = (α + β)‾ defines the operation of vector addition on V/W.

We can define scalar multiplication on V/W by the following formula:

5.12: x ᾱ = (xα)‾

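The well-definedness arguments behind equations 5.11 and 5.12 can be checked numerically. The following Python/NumPy sketch (an illustration only, not part of the text; the orthogonal-projection representative is just a computational device for picking one vector out of each coset) takes W = span{w} ⊆ ℝ³ and verifies that coset addition and scalar multiplication do not depend on the chosen representatives.

```python
import numpy as np

w = np.array([1.0, 1.0, 0.0])          # W = span{w} ⊆ ℝ³

def rep(v):
    """Canonical representative of the coset v + W: the component of v
    orthogonal to w.  Any two vectors in the same coset give the same value."""
    return v - (v @ w) / (w @ w) * w

a = np.array([2.0, -1.0, 3.0])
b = np.array([0.5, 4.0, -2.0])

# Different representatives of the same coset agree:
assert np.allclose(rep(a), rep(a + 7.3 * w))

# Coset addition (5.11) and scalar multiplication (5.12) are well defined:
assert np.allclose(rep(a) + rep(b), rep(a + b))
assert np.allclose(3.0 * rep(a), rep(3.0 * a))
```

Since `rep` is linear and kills W, it factors through V/W exactly as the natural map Π does.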

In equation 5.12, x ∈ F and ᾱ ∈ V/W. Again we observe that if α₁ ∈ ᾱ, then (xα₁)¯ = (xα)¯. Thus (x, ᾱ) → (xα)¯ is a well-defined function from F × V/W to V/W. The reader can easily check that scalar multiplication satisfies axioms V5-V8 in Definition 1.4. Thus, (V/W, (ᾱ, β̄) → ᾱ + β̄, (x, ᾱ) → xᾱ) is a vector space over F. We shall refer to this vector space in the future as simply V/W.

Equations 5.11 and 5.12 imply that the natural map Π: V → V/W given by Π(α) = ᾱ is a linear transformation. Clearly, Π is surjective and has kernel W. Thus, if i: W → V denotes the inclusion of W into V, then we have the following short exact sequence:

5.13: 0 → W —i→ V —Π→ V/W → 0

Theorem 5.14: Let W be a subspace of V. Then dim V = dim W + dim V/W. □

We shall finish this section on quotients with three theorems that are collectively known as the isomorphism theorems. These theorems appear in various forms all over mathematics and are very useful.

Theorem 5.15 (First Isomorphism Theorem): Let T ∈ Hom_F(V, V'), and suppose W is a subspace of V for which T(W) = 0. Let Π: V → V/W be the natural map. Then there exists a unique T̄ ∈ Hom_F(V/W, V') such that the following diagram commutes:

5.16:
         T
     V ─────→ V'
      \      ↗
     Π \    / T̄
        ↘  /
        V/W

Proof: We define T̄ by T̄(ᾱ) = T(α). Again, we remind the reader that ᾱ is a subset of V containing α. To ensure that our definition of T̄ makes sense, we must argue that T(α₁) = T(α) for any α₁ ∈ ᾱ. If α₁ ∈ ᾱ, then α₁ − α ∈ W. Since T is zero on W, we get T(α₁) = T(α). Thus, our definition of T̄(ᾱ) depends only on the coset ᾱ and not on any particular representative of ᾱ. Since T̄(xᾱ + yβ̄) = T̄((xα + yβ)¯) = T(xα + yβ) = xT(α) + yT(β) = xT̄(ᾱ) + yT̄(β̄), we see T̄ ∈ Hom(V/W, V'). T̄Π(α) = T̄(ᾱ) = T(α), and so 5.16 commutes. Only the uniqueness of T̄ remains to be proved.

If T' ∈ Hom(V/W, V') is another map for which T'Π = T, then T̄ = T' on Im Π. But Π is surjective. Therefore, T̄ = T'. □

Corollary 5.17: Suppose T ∈ Hom_F(V, V'). Then Im T ≅ V/ker T.

Proof: We can view T as a surjective, linear transformation from V to Im T. Applying Theorem 5.15, we get a unique linear transformation T̄: V/ker T → Im T for which the following diagram is commutative:

5.18:
         T
     V ─────→ Im T
      \       ↗
     Π \     / T̄
        ↘   /
       V/ker T

In 5.18, Π is the natural map from V to V/ker T. We claim T̄ is an isomorphism. Since T̄Π = T and T: V → Im T is surjective, T̄ is surjective. Suppose ᾱ ∈ ker T̄. Then T(α) = T̄Π(α) = T̄(ᾱ) = 0. Thus, α ∈ ker T. But then Π(α) = 0̄. Thus, ᾱ = 0̄, and T̄ is injective. □

The second isomorphism theorem deals with multiple quotients. Suppose W is a subspace of V and consider the natural projection Π: V → V/W. If W' is a subspace of V containing W, then Π(W') is a subspace of V/W. Hence, we can form the quotient space (V/W)/Π(W'). By Corollary 5.17, Π(W') is isomorphic to W'/W. Thus we may rewrite (V/W)/Π(W') as (V/W)/(W'/W).

Theorem 5.19 (Second Isomorphism Theorem): Suppose W ⊆ W' are subspaces of V. Then (V/W)/(W'/W) ≅ V/W'.

Proof: Let Π: V → V/W and Π': V/W → (V/W)/Π(W') be the natural projections. Set T = Π'Π: V → (V/W)/Π(W'). Since Π and Π' are both surjective, T is a surjective, linear transformation. Clearly, W' ⊆ ker T. Let α ∈ ker T. Then 0 = Π'Π(α). Thus, ᾱ = Π(α) ∈ Π(W'). Let β ∈ W' such that Π(β) = Π(α). Then Π(β − α) = 0. Thus, β − α ∈ ker Π = W ⊆ W'. In particular, α ∈ W'. We have now proved that ker T = W'. Applying Corollary 5.17, we have (V/W)/Π(W') = Im T ≅ V/ker T = V/W'. □

The third isomorphism theorem deals with sums and quotients.

Theorem 5.20 (Third Isomorphism Theorem): Suppose W and W' are subspaces of V. Then (W + W')/W ≅ W'/(W ∩ W').


Proof: Let Π: W + W' → (W + W')/W be the natural projection. The inclusion map of W' into W + W', when composed with Π, gives us a linear transformation T: W' → (W + W')/W. Since the kernel of Π is W, ker T = W ∩ W'. We claim T is surjective. To see this, consider a typical element γ ∈ (W + W')/W. γ is a coset of W of the form γ = δ + W with δ ∈ W + W'. Thus, δ = α + β with α ∈ W and β ∈ W'. But α + W = W. So, γ = δ + W = (β + α) + W = β + W. In particular, T(β) = β + W = γ, and T is surjective. By Corollary 5.17, (W + W')/W = Im T ≅ W'/ker T = W'/(W ∩ W'). □

We close this section with a typical application of the isomorphism theorems. Suppose V is an internal direct sum of subspaces V₁, ..., Vₙ. Thus, V = V₁ ⊕ ⋯ ⊕ Vₙ. Since Vᵢ ∩ (Σ_{j≠i} Vⱼ) = (0), Theorem 5.20 implies V/Vᵢ = (Vᵢ + Σ_{j≠i} Vⱼ)/Vᵢ ≅ (Σ_{j≠i} Vⱼ)/(Vᵢ ∩ (Σ_{j≠i} Vⱼ)) = (Σ_{j≠i} Vⱼ)/(0) = V₁ ⊕ ⋯ ⊕ V̂ᵢ ⊕ ⋯ ⊕ Vₙ. Here the little hat (^) above Vᵢ means Vᵢ is not present in this sum.

EXERCISES FOR SECTION 5

(1) Suppose f ∈ Hom_F(V, F). If f ≠ 0, show V/ker f ≅ F.

(2) Let T ∈ Hom(V, V) and suppose T(α) = α for all α ∈ W, a subspace of V.
(a) Show that T induces a map S ∈ Hom(V/W, V/W).
(b) If S is the identity map on V/W, show that R = T − I_V has the property that R² = 0.
(c) Conversely, suppose T = I_V + R with R ∈ Hom(V, V) and R² = 0. Show that there exists a subspace W of V such that T is the identity on W and the induced map S is the identity on V/W.

(3) A subspace W of V is said to have finite codimension n if dim V/W = n. If W has finite codimension, we write codim W < ∞. Show that if W₁ and W₂ have finite codimension in V, then so does W₁ ∩ W₂. Show codim(W₁ ∩ W₂) ≤ codim W₁ + codim W₂.

(4) In Exercise 3, suppose V is finite dimensional and codim W₁ = codim W₂. Show that dim(W₁/W₁ ∩ W₂) = dim(W₂/W₁ ∩ W₂).

(5) Let T ∈ Hom(V, V'), and suppose T is surjective. Set K = ker T. Show there exists a one-to-one, inclusion-preserving correspondence between the subspaces of V' and the subspaces of V containing K.

(6) Let T ∈ Hom(V, V'), and let K = ker T. Show that all vectors of V that have the same image under T belong to the same coset of V/K.

(7) Suppose W is a finite-dimensional subspace of V such that V/W is finite dimensional. Show V must be finite dimensional.

(8) Let V be a finite-dimensional vector space. If W is a subspace with dim W = dim V − 1, then the cosets of W are called hyperplanes in V. Suppose S is an affine subspace of V and H = α + W is a hyperplane. Show that if H ∩ S = ∅, then S ⊆ β + W for some β ∈ V.

(9) If S = α + W ∈ 𝒜(V), we define dim S = dim W. Suppose V is finite dimensional, H a hyperplane in V, and S ∈ 𝒜(V). Show that S ∩ H ≠ ∅ implies dim(S ∩ H) = dim S − 1. Assume S ⊄ H.

(10) Let S ∈ 𝒜(V) with dim S = m − 1. Show that S = {Σᵢ₌₁ᵐ xᵢαᵢ | Σᵢ₌₁ᵐ xᵢ = 1} for some choice of m vectors α₁, ..., αₘ ∈ V.

(11) Suppose C = {(Vᵢ, dᵢ) | i ∈ ℤ} is a chain complex. For each i ∈ ℤ, set Hᵢ(C) = ker dᵢ/Im dᵢ₊₁. Hᵢ(C) is called the ith homology of C.
(a) Show that C is exact if and only if Hᵢ(C) = 0 for all i ∈ ℤ.
(b) Let C = {(Vᵢ, dᵢ) | i ∈ ℤ} and C' = {(Vᵢ', dᵢ') | i ∈ ℤ} be two chain complexes. Show that any chain map T = {Tᵢ}_{i∈ℤ}: C → C' induces a linear transformation T̄ᵢ: Hᵢ(C) → Hᵢ(C') such that T̄ᵢ(α + Im dᵢ₊₁) = Tᵢ(α) + Im d'ᵢ₊₁.
(c) Suppose C: 0 → Vₙ —dₙ→ V_{n−1} → ⋯ → V₁ → 0 is a finite chain complex. Show that Σ (−1)ⁱ dim Hᵢ(C) = Σ (−1)ⁱ dim Vᵢ. Here each Vᵢ is assumed finite dimensional.

(12) Suppose V is an n-dimensional vector space over F, and W₁, ..., W_k are subspaces of codimension eᵢ = n − dim(Wᵢ). Let Sᵢ = αᵢ + Wᵢ for i = 1, ..., k. If S₁ ∩ ⋯ ∩ S_k = ∅, show dim(W₁ ∩ ⋯ ∩ W_k) > n − Σᵢ₌₁ᵏ eᵢ.

(13) Use Exercise 12 to prove the following assertion: Let S₁ = α₁ + W₁ and S₂ = α₂ + W₂ be two cosets of dimension k [i.e., dim(Wᵢ) = k]. Show that S₁ and S₂ are parallel (i.e., W₁ = W₂) if and only if S₁ and S₂ are contained in a coset of dimension k + 1 and have empty intersection.

(14) In ℝ³, show that the intersection of two nonparallel planes (i.e., cosets of dimension 2) is a line (i.e., a coset of dimension 1). The same problem makes sense in any three-dimensional vector space V.

(15) Let S₁, S₂, and S₃ be planes in ℝ³ such that S₁ ∩ S₂ ∩ S₃ = ∅, but no two Sᵢ are parallel. Show that the lines S₁ ∩ S₂, S₁ ∩ S₃, and S₂ ∩ S₃ are parallel.

(16) Let f(X) = Xⁿ + a_{n−1}X^{n−1} + ⋯ + a₀ ∈ ℝ[X]. Let W = {pf | p ∈ ℝ[X]}. Show that W is a subspace of ℝ[X]. Show that dim(ℝ[X]/W) = n. (Hint: Use the division algorithm in ℝ[X].)

(17) In Theorem 5.15, if T is surjective and W = ker T, then T̄ is an isomorphism [prove!]. In particular, S = (T̄)⁻¹ is a well-defined map from V' to V/W. Show that the process of indefinite integration is an example of such a map S.
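Two of the dimension counts above lend themselves to quick numerical checks. The Python/NumPy sketch below (an illustration, not part of the text; the matrices are arbitrary choices) verifies Theorem 5.14 applied to W = ker T — equivalently, dim Im T = dim V − dim ker T from Corollary 5.17 — and the claim of Exercise (14) that two nonparallel planes through 0 in ℝ³ meet in a line.

```python
import numpy as np

# Corollary 5.17 at the level of dimensions: Im T ≅ V/ker T, so for a
# matrix A representing T: ℝ⁴ → ℝ³, rank(A) + dim ker(A) = dim V = 4.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])      # third row = row1 + row2, so rank 2
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank               # dim ker T, by Theorem 5.14
assert (rank, nullity) == (2, 2)

# Exercise (14): planes W_i = {v : n_i · v = 0} with nonparallel normals n_i.
N = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, -1.0]])
assert 3 - np.linalg.matrix_rank(N) == 1  # the intersection is a line
```

The same computation with k stacked normals illustrates the codimension bound of Exercise (12).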


6. DUALS AND ADJOINTS

Let V be a vector space over F.

Definition 6.1: V* = Hom_F(V, F) is called the dual of V.

If V is a finite-dimensional vector space over F, then it follows from Theorem 3.25 that V* is finite dimensional with dim V* = dim V. We record this fact with a different proof in 6.2.

Theorem 6.2: Let V be finite dimensional. Then dim V* = dim V.

Proof: Let α = {α₁, ..., αₙ} be a basis of V, and define elements αᵢ* ∈ V* by αᵢ* = πᵢ ∘ [·]_α. Here [·]_α: V → Fⁿ is the isomorphism determined by the basis α, and πᵢ: Fⁿ → F is the natural projection onto the ith coordinate of Fⁿ. Thus, if ξ = x₁α₁ + ⋯ + xₙαₙ ∈ V, then αᵢ* is given by

6.3: αᵢ*(x₁α₁ + ⋯ + xₙαₙ) = xᵢ

We claim that α* = {α₁*, ..., αₙ*} is a basis of V*. Suppose Σᵢ₌₁ⁿ yᵢαᵢ* = 0. Let j ∈ {1, ..., n}. Then equation 6.3 implies 0 = (Σᵢ₌₁ⁿ yᵢαᵢ*)(αⱼ) = Σᵢ₌₁ⁿ yᵢαᵢ*(αⱼ) = yⱼ. Thus, y₁ = ⋯ = yₙ = 0, and α* is linearly independent over F. If T ∈ V*, then T = Σᵢ₌₁ⁿ T(αᵢ)αᵢ*. This last equation follows immediately from 6.3. Thus, L(α*) = V*, and α* is a basis of V*. In particular, dim V* = |α*| = n = dim V. □

The basis α* = {α₁*, ..., αₙ*} constructed in 6.3 is called the dual basis of α. Thus every basis α of a finite-dimensional vector space V has a corresponding dual basis α* of V*. Furthermore, V ≅ V* under the linear map T which sends every αᵢ ∈ α to the corresponding αᵢ* ∈ α*.

If V is not finite dimensional over F, then the situation is quite different. Theorem 6.2 is false when dim V = ∞. If dim V = ∞, then dim V* > dim V. Instead of proving that fact, we shall content ourselves with an example.

Example 6.4: Let V = ⊕ᵢ₌₁^∞ F; that is, V is the direct sum of the vector spaces {Vᵢ = F | i ∈ ℕ}. It follows from Exercise 2 of Section 4 that V* ≅ Hom_F(⊕ᵢ₌₁^∞ F, F) ≅ Πᵢ₌₁^∞ F. From Theorem 4.13, we know that dim V = |ℕ|. A simple counting exercise will convince the reader that dim V* = dim(Πᵢ₌₁^∞ F) is strictly larger than |ℕ|. □

Before stating our next result, we need the following definition:

Definition 6.5: Let V, V', and W be vector spaces over F, and let ω: V × V' → W be a function. We call ω a bilinear map if for all α ∈ V, ω(α, ·) ∈ Hom_F(V', W) and for all β ∈ V', ω(·, β) ∈ Hom_F(V, W).

Thus, a function ω: V × V' → W is a bilinear map if and only if ω(xα₁ + yα₂, β) = xω(α₁, β) + yω(α₂, β) and ω(α, xβ₁ + yβ₂) = xω(α, β₁) + yω(α, β₂) for all x, y ∈ F, α, α₁, α₂ ∈ V, and β, β₁, β₂ ∈ V'. If V is any vector space over F, there is a natural bilinear map ω: V × V* → F given by

6.6: ω(α, T) = T(α)

In equation 6.6, α ∈ V and T ∈ V*. The fact that ω is a bilinear map is obvious. ω determines a natural, injective, linear transformation ψ: V → V** in the following way. If α ∈ V, set ψ(α) = ω(α, ·). Thus, for any T ∈ V*, ψ(α)(T) = ω(α, T) = T(α). If x, y ∈ F, α, β ∈ V, and T ∈ V*, then ψ(xα + yβ)(T) = ω(xα + yβ, T) = xω(α, T) + yω(β, T) = (xψ(α) + yψ(β))(T). Consequently, ψ ∈ Hom_F(V, V**). To see that ψ is injective, we need to generalize equation 6.3. Suppose α = {αᵢ | i ∈ Δ} is a basis of V (finite or infinite). Then for every i ∈ Δ we can define a dual transformation αᵢ* ∈ V* as follows: For each nonzero vector α ∈ V, there exists a unique finite subset Δ(α) = {j₁, ..., jₙ} ⊆ Δ such that α = x_{j₁}α_{j₁} + ⋯ + x_{jₙ}α_{jₙ}. Here x_{j₁}, ..., x_{jₙ} are all nonzero scalars in F. We then define αᵢ*(α) = x_{jₖ} if i = jₖ for some k = 1, ..., n. If i ∉ Δ(α), we set αᵢ*(α) = 0. If α = 0, we of course define αᵢ*(α) = 0. Clearly αᵢ* ∈ V*, and

αᵢ*(αⱼ) = 1 if i = j, and αᵢ*(αⱼ) = 0 if i ≠ j.

Now if α ∈ ker ψ, then T(α) = 0 for all T ∈ V*. In particular, αᵢ*(α) = 0 for all i ∈ Δ. This clearly implies α = 0, and, thus, ψ is injective.

We note in passing that the set α* = {αᵢ* | i ∈ Δ} ⊆ V*, which we have just constructed above, is clearly linearly independent over F. If dim V < ∞, this is just the dual basis of V* coming from α. If dim V = ∞, then α* does not span V*, and, therefore, cannot be called a dual basis. At any rate, we have proved the first part of the following theorem:

Theorem 6.7: Let V be a vector space over F and suppose ω: V × V* → F is the bilinear map given in equation 6.6. Then the map ψ: V → V** given by ψ(α) = ω(α, ·) is an injective linear transformation. If dim V < ∞, then ψ is a natural isomorphism.


Proof: Only the last sentence in Theorem 6.7 remains to be proved. If dim V < ∞, then Theorem 6.2 implies dim V = dim V** < ∞. Since ψ is injective, our result follows from Theorem 3.33(b). □

The word "natural" in Theorem 6.7 has a precise meaning in category theory, but here we mean only that the isomorphism ψ: V → V** is independent of any choice of bases in V and V**. The word "natural" when applied to an isomorphism ψ: V → V** also means certain diagrams must be commutative. See Exercise 4 at the end of this section for more details. We had noted previously that when dim V < ∞, then V ≅ V*. This type of isomorphism is not natural, since it is constructed by first picking a basis α = {α₁, ..., αₙ} of V and then mapping αᵢ to αᵢ* in V*.

The bilinear map ω: V × V* → F can also be used to set up certain correspondences between 𝒫(V) and 𝒫(V*).

Definition 6.8: If A is any subset of V, let A⊥ = {β ∈ V* | ω(α, β) = 0 for all α ∈ A}.

Thus A⊥ is precisely the set of all vectors in V* that vanish on A. It is easy to see that A⊥ is in fact a subspace of V*. We have a similar definition for subsets of V*.

Definition 6.9: If A is a subset of V*, let A⊥ = {α ∈ V | ω(α, β) = 0 for all β ∈ A}.

Thus, if A ⊆ V*, then A⊥ is the set of all vectors in V that are zero under the maps in A. Clearly, A⊥ is a subspace of V for any A ⊆ V*.

Theorem 6.10: Let A and B be subsets of V (or V*).

(a) A ⊆ B implies A⊥ ⊇ B⊥.
(b) L(A)⊥ = A⊥.
(c) (A ∪ B)⊥ = A⊥ ∩ B⊥.
(d) A ⊆ A⊥⊥.
(e) If W is a subspace of a finite-dimensional vector space V, then dim V = dim W + dim W⊥.

Proof: (a)-(d) are straightforward, and we leave their proofs as exercises. We prove (e). Let {α₁, ..., αₘ} be a basis of W. We extend this set to a basis α = {α₁, ..., αₘ, α_{m+1}, ..., αₙ} of V. Thus, dim W = m and dim V = n. Let α* = {α₁*, ..., αₙ*} be the dual basis of α. We complete the proof of (e) by arguing that {α*_{m+1}, ..., αₙ*} is a basis of W⊥.

If m + 1 ≤ j ≤ n, then αⱼ*(αᵢ) = 0 for i = 1, ..., m. In particular, α*_{m+1}, ..., αₙ* ∈ W⊥. Since {α*_{m+1}, ..., αₙ*} ⊆ α*, {α*_{m+1}, ..., αₙ*} is linearly independent over F. We must show L({α*_{m+1}, ..., αₙ*}) = W⊥. Let β ∈ W⊥. Then β = Σᵢ₌₁ⁿ cᵢαᵢ*. Since α₁, ..., αₘ ∈ W, we have for any j = 1, ..., m, 0 = β(αⱼ) = (Σᵢ₌₁ⁿ cᵢαᵢ*)(αⱼ) = Σᵢ₌₁ⁿ cᵢαᵢ*(αⱼ) = cⱼ. Thus, β ∈ L({α*_{m+1}, ..., αₙ*}). □

If T ∈ Hom_F(V, W), then T determines a linear transformation T* ∈ Hom_F(W*, V*), which we call the adjoint of T.

Definition 6.11: Let T ∈ Hom_F(V, W). Then T* ∈ Hom_F(W*, V*) is the linear transformation defined by T*(f) = fT for all f ∈ W*.

Since the composite V —T→ W —f→ F of the linear transformations f and T is again a linear map from V to F, we see T*(f) ∈ V*. If x, y ∈ F and f₁, f₂ ∈ W*, then T*(xf₁ + yf₂) = (xf₁ + yf₂)T = x(f₁T) + y(f₂T) = xT*(f₁) + yT*(f₂). Thus, T* is a linear transformation from W* to V*.

Theorem 6.12: Let V and W be vector spaces over F. The map T → T* from Hom_F(V, W) → Hom_F(W*, V*) is an injective linear transformation. If V and W are finite dimensional, then this map is an isomorphism.

Proof: Let χ: Hom(V, W) → Hom(W*, V*) be defined by χ(T) = T*. Our comments above imply χ is a well-defined function. Suppose x, y ∈ F, T₁, T₂ ∈ Hom(V, W), and f ∈ W*. Then χ(xT₁ + yT₂)(f) = (xT₁ + yT₂)*(f) = f(xT₁ + yT₂) = x(fT₁) + y(fT₂) = xT₁*(f) + yT₂*(f) = (xT₁* + yT₂*)(f) = (xχ(T₁) + yχ(T₂))(f). Thus, χ(xT₁ + yT₂) = xχ(T₁) + yχ(T₂), and χ is a linear transformation.

Suppose T ∈ ker χ. Then for every f ∈ W*, 0 = χ(T)(f) = T*(f) = fT. Now if we follow the same argument given in the proof of Theorem 6.7, we know that if β is a nonzero vector in W, then there exists an f ∈ W* such that f(β) ≠ 0. Thus, fT = 0 for all f ∈ W* implies Im T = (0). Therefore, T = 0, and χ is injective.

Now suppose V and W are finite dimensional. Then Theorems 6.2 and 3.25 imply dim{Hom_F(V, W)} = dim{Hom_F(W*, V*)}. Since χ is injective, Theorem 3.33(b) implies χ is an isomorphism. □

We note in passing that forming the adjoint of a product is the product of the adjoints in the opposite order. More specifically, suppose T ∈ Hom_F(V, W) and S ∈ Hom_F(W, Z). Then ST ∈ Hom_F(V, Z). If f ∈ Z*, then (ST)*(f) = f(ST) = (fS)T = T*(fS) = T*(S*(f)) = T*S*(f). Thus, we get equation 6.13:

6.13: (ST)* = T*S*

The connection between adjoints and Theorem 6.10 is easily described.

Theorem 6.14: Let T ∈ Hom_F(V, W). Then

(a) (Im T*)⊥ = ker T.
(b) ker T* = (Im T)⊥.


Proof: (a) Let α ∈ (Im T*)⊥, and suppose ω: V × V* → F is the bilinear map defined in equation 6.6. Then ω(α, Im T*) = 0. Thus, for all f ∈ W*, 0 = ω(α, T*(f)) = ω(α, fT) = fT(α) = f(T(α)). But we have seen that f(T(α)) = 0 for all f ∈ W* implies T(α) = 0. Thus, α ∈ ker T. Conversely, if α ∈ ker T, then 0 = f(T(α)) = ω(α, T*(f)), and α ∈ (Im T*)⊥.

(b) Suppose f ∈ ker T*. Then 0 = T*(f) = fT. In particular, f(T(α)) = 0 for all α ∈ V. Therefore, 0 = ω(T(α), f), and f ∈ (Im T)⊥. Thus, ker T* ⊆ (Im T)⊥. The steps in this proof are easily reversed, and so (Im T)⊥ ⊆ ker T*. □

Theorem 6.14 has an interesting corollary. If T ∈ Hom_F(V, W), let us define the rank of T, rk{T}, to be dim(Im T). Then we have the following:

Corollary 6.15: Let V and W be finite-dimensional vector spaces over F, and let T ∈ Hom_F(V, W). Then rk{T} = rk{T*}.

Proof: The following integers are all equal:

rk{T} = dim(Im T) = dim V − dim(ker T)   [Theorem 3.33(c)]
      = dim V − dim{(Im T*)⊥}            [Theorem 6.14]
      = dim V* − dim{(Im T*)⊥}           [Theorem 6.2]
      = dim(Im T*)                       [Theorem 6.10(e)]
      = rk{T*}. □

Corollary 6.15 has a familiar interpretation when we switch to matrices. If α is any basis of V and β any basis of W, then Theorem 3.25 implies rk{T} = rk(Γ(α, β)(T)). Let A = Γ(α, β)(T). In Theorem 6.16 below, we shall show that the matrix representation of T*: W* → V* with respect to β* and α* is given by the transpose of A. Thus, Γ(β*, α*)(T*) = Aᵗ. In particular, Corollary 6.15 is the familiar statement that a matrix A and its transpose Aᵗ have the same rank.

Theorem 6.16: Let V and W be finite-dimensional vector spaces over F. Suppose α and β are bases of V and W, respectively. Let α* and β* be the corresponding dual bases in V* and W*. Then for all T ∈ Hom_F(V, W), we have

6.17: Γ(β*, α*)(T*) = (Γ(α, β)(T))ᵗ

Proof: Suppose α = {α₁, ..., αₙ} and β = {β₁, ..., βₘ}. Set A = Γ(α, β)(T). Then A = (a_{ij}) ∈ M_{m×n}(F), and from 3.24, we have T(αⱼ) = Σᵢ₌₁ᵐ a_{ij}βᵢ for all j = 1, ..., n. Γ(β*, α*)(T*) is the n × m matrix representing T* with respect to the bases β* and α* in the sense of 3.24.

The transpose of A is the n × m matrix Aᵗ = (b_{pq}), where b_{pq} = a_{qp} for all p = 1, ..., n and q = 1, ..., m. It follows from 3.24 that Γ(β*, α*)(T*) = Aᵗ provided that the following equation is true:

6.18: T*(β_q*) = Σ_{p=1}^n b_{pq} α_p*   for all q = 1, ..., m

Fix q = 1, ..., m. To show that T*(β_q*) and Σ_{p=1}^n b_{pq}α_p* are the same vector in V*, it suffices to show that these two maps agree on the basis α of V. For any r = 1, ..., n, (T*(β_q*))(α_r) = β_q*(T(α_r)) = β_q*(Σᵢ₌₁ᵐ a_{ir}βᵢ) = Σᵢ₌₁ᵐ a_{ir}β_q*(βᵢ) = a_{qr}. On the other hand, (Σ_{p=1}^n b_{pq}α_p*)(α_r) = Σ_{p=1}^n b_{pq}α_p*(α_r) = b_{rq} = a_{qr}. Thus, equation 6.18 is established, and the proof of Theorem 6.16 is complete. □

EXERCISES FOR SECTION 6

(2) Let V and W be finite-dimensional vector spaces over F with bases α and β, respectively. Suppose T ∈ Hom_F(V, W). Show that rk{T} = rk(Γ(α, β)(T)).

(3) Let 0 ≠ β ∈ V and f ∈ V* − (0). Define T: V → V by T(α) = f(α)β. A function defined in this way is called a dyad.
(a) Show T ∈ Hom(V, V) such that dim(Im T) = 1.
(b) If S ∈ Hom(V, V) such that dim(Im S) = 1, show that S is a dyad.
(c) If T is a dyad on V, show that T* is a dyad on V*.

(4) Let V and W be finite-dimensional vector spaces over F. Let ψ_V: V → V** and ψ_W: W → W** be the isomorphisms given in Theorem 6.7. Show that for every T ∈ Hom(V, W) the following diagram is commutative:

             T
      V ─────────→ W
      │            │
  ψ_V │            │ ψ_W
      ↓            ↓
      V** ───────→ W**
             T**
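For V = Fⁿ and W = Fᵐ with the standard bases, a linear map is a matrix and, by Theorem 6.16, its adjoint is represented by the transpose. The Python/NumPy sketch below (an illustration only; the particular matrices are arbitrary choices) checks the dual-basis relation of 6.3, equation 6.13 in the form (ST)ᵗ = TᵗSᵗ, and Corollary 6.15.

```python
import numpy as np

# Dual basis (6.3): with the basis vectors αj as the columns of B, the dual
# functionals αi* are the rows of B⁻¹, so that αi*(αj) = δij.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
Bstar = np.linalg.inv(B)                 # rows are α1*, α2*, α3*
assert np.allclose(Bstar @ B, np.eye(3))

# Equation 6.13, (ST)* = T*S*, reads (ST)ᵗ = Tᵗ Sᵗ for matrices.
T = np.arange(12.0).reshape(4, 3)        # T: ℝ³ → ℝ⁴
S = np.arange(8.0).reshape(2, 4)         # S: ℝ⁴ → ℝ²
assert np.allclose((S @ T).T, T.T @ S.T)

# Corollary 6.15: a matrix and its transpose have the same rank.
assert np.linalg.matrix_rank(T) == np.linalg.matrix_rank(T.T)
```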


(5) Let A = {f₁, ..., fₙ} ⊆ V*. Show A⊥ = ∩ᵢ₌₁ⁿ ker fᵢ.

(6) Let A = {f₁, ..., fₙ} ⊆ V* and suppose g ∈ V* such that g vanishes on A⊥. Show g ∈ L(A). [Hint: First assume dim(V) < ∞; then use Exercise 3 of Section 5 for the general case.]

(7) Let V and W be finite-dimensional vector spaces over F, and let ω: V × W → F be an arbitrary bilinear map. Let T: V → W* and S: W → V* be defined from ω as follows: T(α)(β) = ω(α, β) and S(β)(α) = ω(α, β). Show that S = T* if we identify W with W** via ψ_W.

(8) Show that (V/W)* ≅ W⊥.

(9) Let V be a finite-dimensional vector space over F. Let W = V ⊕ V*. Show that the map (α, β) → (β, α) is an isomorphism between W and W*.

(10) If 0 → V —S→ W —T→ Z → 0 is a short exact sequence of vector spaces over F, show that 0 → Z* —T*→ W* —S*→ V* → 0 is exact.

(11) Let {Wᵢ | i ∈ ℤ} be a sequence of vector spaces over F. Suppose for each i ∈ ℤ, we have a linear transformation eᵢ ∈ Hom_F(Wᵢ, Wᵢ₊₁). Then D = {(Wᵢ, eᵢ) | i ∈ ℤ} is called a cochain complex if eᵢ₊₁eᵢ = 0 for all i ∈ ℤ. D is said to be exact if Im eᵢ = ker eᵢ₊₁ for all i ∈ ℤ.
(a) If C = {(Cᵢ, dᵢ) | i ∈ ℤ} is a chain complex, show that C* = {(Cᵢ*, eᵢ = dᵢ₊₁*) | i ∈ ℤ} is a cochain complex.
(b) If C is exact, show that C* is also exact.

(12) Prove that (V₁ ⊕ ⋯ ⊕ Vₙ)* ≅ V₁* ⊕ ⋯ ⊕ Vₙ*.

(13) Let V be a finite-dimensional vector space over F with basis α = {α₁, ..., αₙ}. Define T: Fⁿ → V by T(x₁, ..., xₙ) = Σᵢ₌₁ⁿ xᵢαᵢ. Show that T*(f) = (f(α₁), ..., f(αₙ)) for all f ∈ V*. Here you will need to identify (Fⁿ)* with Fⁿ in a natural way.

(14) Let {zₖ}ₖ₌₀^∞ be a sequence of complex numbers. Define a map T: ℂ[X] → ℂ by T(Σₖ aₖXᵏ) = Σₖ aₖzₖ. Show that T ∈ (ℂ[X])*. Show that every T ∈ (ℂ[X])* is given by such a sequence.

(15) Let V = ℝ[X]. Which of the following functions on V are elements in V*:
(a) T(p) = ∫₀¹ p(X) dX.
(b) T(p) = ∫₀¹ p(X)² dX.
(c) T(p) = ∫₀¹ X²p(X) dX.
(d) T(p) = dp/dX.
(e) T(p) = dp/dX|_{X=0}.

(16) Suppose F is a finite field (e.g., 𝔽_p). Let V be a vector space over F of dimension n. For every m ≤ n, show the number of subspaces of V of dimension m is precisely the same as the number of subspaces of V of dimension n − m.

(17) An important linear functional on M_{n×n}(F) is the trace map Tr: M_{n×n}(F) → F defined by Tr(A) = Σᵢ₌₁ⁿ aᵢᵢ, where A = (a_{ij}). Show that Tr(·) ∈ (M_{n×n}(F))*.

(18) In Exercise 17, show Tr(AB) = Tr(BA) for all A, B ∈ M_{n×n}(F).

(19) Let m, n ∈ ℕ. Let f₁, ..., fₘ ∈ (Fⁿ)*. Define T: Fⁿ → Fᵐ by T(α) = (f₁(α), ..., fₘ(α)). Show that T ∈ Hom_F(Fⁿ, Fᵐ). Show that every T ∈ Hom_F(Fⁿ, Fᵐ) is given in this way for some f₁, ..., fₘ.

(20) Let V be a finite-dimensional vector space over ℂ. Suppose α₁, ..., αₙ are distinct, nonzero vectors in V. Show there exists a T ∈ V* such that T(αₖ) ≠ 0 for all k = 1, ..., n.

7. SYMMETRIC BILINEAR FORMS

In this last section of Chapter I, we discuss symmetric bilinear forms on a vector space V. Unlike the first six sections, the nature of the base field F is important here. In our main theorems, we shall assume V is a finite-dimensional vector space over the reals ℝ.

Let V be a vector space over an arbitrary field F.

Definition 7.1: By a bilinear form ω on V, we shall mean any bilinear map ω: V × V → F. We say ω is symmetric if ω(α, β) = ω(β, α) for all α, β ∈ V.

Example 7.2: The standard example to keep in mind here is the form ω((x₁, ..., xₙ), (y₁, ..., yₙ)) = Σᵢ₌₁ⁿ xᵢyᵢ. Clearly, ω is a symmetric, bilinear form on Fⁿ. □

Suppose ω is a bilinear form on a finite-dimensional vector space V. Then for every basis α = {α₁, ..., αₙ} of V, we can define an n × n matrix M(ω, α) ∈ M_{n×n}(F) whose (i, j)th entry is given by (M(ω, α))_{ij} = ω(αᵢ, αⱼ). In terms of the usual coordinate map [·]_α: V → M_{n×1}(F), ω is then given by the following equation:

7.3: ω(ξ, η) = ([ξ]_α)ᵗ M(ω, α) [η]_α
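Equation 7.3 reduces the evaluation of ω to a matrix product, and the trace functional of Exercises (17)-(18) is easy to experiment with in the same setting. A small Python/NumPy check (illustrative only; the particular matrices are arbitrary choices):

```python
import numpy as np

# Equation 7.3: ω(ξ, η) = [ξ]ᵗ M(ω, α) [η]; with M symmetric, ω is symmetric.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 3.0],
              [0.0, 3.0, -1.0]])
omega = lambda u, v: u @ M @ v
xi = np.array([1.0, 2.0, -1.0])
eta = np.array([0.0, 1.0, 4.0])
assert np.isclose(omega(xi, eta), omega(eta, xi))

# Exercises (17)-(18): Tr is a linear functional, and Tr(AB) = Tr(BA).
A = np.arange(9.0).reshape(3, 3)
C = np.eye(3) + 1.0
assert np.isclose(np.trace(A @ C), np.trace(C @ A))
assert np.isclose(np.trace(2.0 * A + C), 2.0 * np.trace(A) + np.trace(C))
```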


Clearly, ω is symmetric if and only if M(ω, α) is a symmetric matrix.

Definition 7.4: Suppose ω is a bilinear form on V. The function q: V → F defined by q(ξ) = ω(ξ, ξ) is called the quadratic form associated with ω.

If V is finite dimensional with basis α = {α₁, ..., αₙ}, then equation 7.3 implies q(ξ) = ([ξ]_α)ᵗ M(ω, α) [ξ]_α = Σᵢ,ⱼ₌₁ⁿ a_{ij}xᵢxⱼ. Here (x₁, ..., xₙ)ᵗ = [ξ]_α and (a_{ij}) = M(ω, α). Thus, q(ξ) is a quadratic homogeneous polynomial in the coordinates x₁, ..., xₙ of ξ. That fact explains why q is called a quadratic form on V. In Example 7.2, for instance, q((x₁, ..., xₙ)) = Σᵢ₌₁ⁿ xᵢ².

At this point, a natural question arises. Suppose ω is a symmetric, bilinear form on a finite-dimensional vector space V. Can we choose a basis α of V so that the representation of ω in equation 7.3 is as simple as possible? What would the corresponding quadratic form q look like in this representation? We shall give answers to both of these questions when F = ℝ. For a more general treatment, we refer the reader to [2].

For the rest of this section, we assume V is a finite-dimensional vector space over ℝ. Let ω be a symmetric, bilinear form on V.

Definition 7.5: A basis α = {α₁, ..., αₙ} of V is said to be ω-orthonormal if

(a) ω(αᵢ, αⱼ) = 0 whenever i ≠ j, and
(b) ω(αᵢ, αᵢ) ∈ {−1, 0, 1} for all i = 1, ..., n.

In Example 7.2, for instance, the canonical basis α = {δᵢ = (0, ..., 1, ..., 0) | i = 1, ..., n} is an ω-orthonormal basis of ℝⁿ. Our first theorem in this section guarantees ω-orthonormal bases exist.

Theorem 7.6: Let V be a finite-dimensional vector space over ℝ and suppose ω is a symmetric, bilinear form on V. Then V has an ω-orthonormal basis.

Proof: We argue by induction on n = dim V. If n = 0, the theorem is trivial. So, suppose n = 1. Then any nonzero vector of V is a basis of V. If ω(α, α) = 0 for every α ∈ V, then any nonzero vector of V is an ω-orthonormal basis. Suppose there exists a β ∈ V such that ω(β, β) ≠ 0. Then c = |ω(β, β)|^{−1/2} is a positive scalar in ℝ, and {cβ} is an ω-orthonormal basis of V. Thus, we have established the result for all vector spaces of dimension 1 over ℝ.

Suppose n > 1, and we have proved the theorem for any vector space over ℝ of dimension less than n. Since ω is symmetric, we have

7.7: ω(ξ, η) = (1/2)[q(ξ + η) − q(ξ) − q(η)]   for all ξ, η ∈ V

In equation 7.7, q is the quadratic form associated with ω. Now if ω(α, α) = q(α) = 0 for all α ∈ V, then 7.7 implies ω is identically zero. In this case, any basis of V is an ω-orthonormal basis. Thus, we can assume there exists a nonzero vector β ∈ V such that ω(β, β) ≠ 0. As in the case n = 1, we can then adjust β by a scalar multiple if need be and find an αₙ ≠ 0 in V such that ω(αₙ, αₙ) ∈ {−1, 1}.

Next define a linear transformation f ∈ V* by f(ξ) = ω(αₙ, ξ). Since f(αₙ) = ω(αₙ, αₙ) ≠ 0, f is a nonzero map. Set N = ker f. Since f ≠ 0 and dim_ℝ ℝ = 1, f is surjective. Thus, Corollary 5.17 implies V/N ≅ ℝ. In particular, Theorem 5.14 implies dim N = dim V − 1. ω when restricted to N is clearly a symmetric bilinear form. Hence our induction hypothesis implies N has an ω-orthonormal basis {α₁, ..., α_{n−1}}.

We claim α = {α₁, ..., α_{n−1}, αₙ} is an ω-orthonormal basis of V. Since f(αₙ) ≠ 0, αₙ ∉ N. In particular, α is linearly independent over ℝ. Since dim_ℝ(V) = n, α is a basis of V. Conditions (a) and (b) of Definition 7.5 are satisfied for {α₁, ..., α_{n−1}} since this set is an ω-orthonormal basis of N. Since N = ker f, ω(αₙ, αᵢ) = 0 for i = 1, ..., n − 1. Thus, α is an ω-orthonormal basis of V, and the proof of Theorem 7.6 is complete. □

The existence of ω-orthonormal bases of V answers our first question about representing ω. Suppose α = {α₁, ..., αₙ} is an ω-orthonormal basis of V. Then the matrix M(ω, α) is just an n × n diagonal matrix, diag(q(α₁), ..., q(αₙ)), with q(αᵢ) = ω(αᵢ, αᵢ) ∈ {−1, 0, 1}. If ξ, η ∈ V with [ξ]_α = (x₁, ..., xₙ)ᵗ and [η]_α = (y₁, ..., yₙ)ᵗ, then equation 7.3 implies ω(ξ, η) = Σᵢ₌₁ⁿ xᵢyᵢq(αᵢ). By reordering the elements of α if need be, we can assume α = {α₁, ..., α_p} ∪ {α_{p+1}, ..., α_{p+m}} ∪ {α_{p+m+1}, ..., α_{p+m+r}}, where

7.8: q(αᵢ) = 1 for i = 1, ..., p;  q(αᵢ) = −1 for i = p + 1, ..., p + m;  q(αᵢ) = 0 for i = p + m + 1, ..., p + m + r

The vector space V then decomposes into the direct sum V = V₋₁ ⊕ V₀ ⊕ V₁, where V₋₁ = L({α_{p+1}, ..., α_{p+m}}), V₀ = L({α_{p+m+1}, ..., α_{p+m+r}}), and V₁ = L({α₁, ..., α_p}).

Our quadratic form q is positive on V₁ − (0), zero on V₀, and negative on V₋₁ − (0). For example, suppose β ∈ V₋₁ − (0). Then β = x₁α_{p+1} + ⋯ + xₘα_{p+m} for some x₁, ..., xₘ ∈ ℝ. Thus, q(β) = ω(β, β) = Σᵢ₌₁ᵐ xᵢ²q(α_{p+i}). Since β ≠ 0, some xᵢ is nonzero. Since q(α_{p+i}) = −1 for all i = 1, ..., m, we see q(β) < 0.

The subspaces V₋₁, V₀, and V₁ are pairwise ω-orthogonal in the sense that ω(Vᵢ, Vⱼ) = 0 whenever i, j ∈ {−1, 0, 1} and i ≠ j. Thus, any ω-orthonormal basis α of V decomposes V into a direct sum V = V₋₁ ⊕ V₀ ⊕ V₁ of pairwise ω-orthogonal subspaces Vᵢ. The sign of the associated quadratic form q is constant


An important fact here is that the dimensions of these three subspaces, p, m, and r, depend only on ω and not on the particular ω-orthonormal basis 𝔤 chosen.

Lemma 7.9: Suppose 𝔅 = {β_1, ..., β_n} is a second ω-orthonormal basis of V, and let V = W_{-1} ⊕ W_0 ⊕ W_1 be the corresponding decomposition of V. Then dim W_j = dim V_j for j = −1, 0, 1.

Proof: W_{-1} is the subspace of V spanned by those β_i for which q(β_i) = −1. Let α ∈ W_{-1} ∩ (V_0 + V_1). If α ≠ 0, then q(α) < 0 since α ∈ W_{-1}. But α ∈ V_0 + V_1 implies q(α) ≥ 0, which is impossible. Thus, α = 0. So, W_{-1} ∩ (V_0 + V_1) = (0). By expanding the basis of W_{-1} if need be, we can then construct a subspace P of V such that W_{-1} ⊆ P, and P ⊕ (V_0 + V_1) = V. Thus, from Theorem 4.9, we have dim(W_{-1}) ≤ dim P = dim V − dim V_0 − dim V_1 = dim(V_{-1}). Therefore, dim(W_{-1}) ≤ dim(V_{-1}). Reversing the roles of the W_i and V_i in this proof gives dim(V_{-1}) ≤ dim(W_{-1}). Thus, dim(W_{-1}) = dim(V_{-1}). A similar proof shows dim(W_1) = dim(V_1). Then dim(W_0) = dim(V_0) by Theorem 4.9. This completes the proof of Lemma 7.9. □

Let us agree when discussing ω-orthonormal bases 𝔤 of V always to order the basis elements α_i ∈ 𝔤 according to equation 7.8. Then Lemma 7.9 implies that the integers p, m, and r do not depend on 𝔤 but only on ω. In particular, the following definition makes sense.

Definition 7.10: p − m is called the signature of q. p + m is called the rank of q.

We have now proved the following theorem:

Theorem 7.11: Let ω be a symmetric, bilinear form on a finite-dimensional vector space V over ℝ. Then there exist integers m and p such that if 𝔤 = {α_1, ..., α_n} is any ω-orthonormal basis of V and [ξ]_𝔤 = (x_1, ..., x_n)^t, then q(ξ) = Σ_{i=1}^{p} x_i² − Σ_{i=p+1}^{p+m} x_i². □

A quadratic form q, associated with some symmetric bilinear form ω on V, is said to be definite if q(ξ) = 0 implies ξ = 0. For instance, in Example 7.2, q((x_1, ..., x_n)) = Σ_{i=1}^{n} x_i² is definite when F = ℝ. If F = ℂ, then q is not definite since, for example, q((1, i, 0, ..., 0)) = 0.

If q is a definite quadratic form on a finite-dimensional vector space V over ℝ, then Theorem 7.11 implies q(ξ) > 0 for all ξ ∈ V − (0) or q(ξ) < 0 for all ξ ∈ V − (0). In general, we say a quadratic form q is positive definite if q(ξ) > 0 for all ξ ∈ V − (0). We say q is negative definite if q(ξ) < 0 for all ξ ∈ V − (0).

Definition 7.12: Let V be a vector space over ℝ. A symmetric, bilinear form ω on V whose associated quadratic form is positive definite is called an inner product on V.

Note in our definition that we do not require that V be finite dimensional. We finish this section with a few examples of inner products.

Example 7.13: Let V = ℝ^n, and define ω as in Example 7.2. □

Example 7.14: Let V = ⊕_{i=1}^{∞} ℝ, and define ω by ω((x_1, x_2, ...), (y_1, y_2, ...)) = Σ_{i=1}^{∞} x_i y_i. Since both sequences {x_i} and {y_i} are eventually zero, ω is well defined and is clearly an inner product on V. □

Example 7.15: Let V = C([a, b]). Define ω(f, g) = ∫_a^b f(x)g(x) dx. Clearly, ω is an inner product on V. □

We shall come back to the study of inner products in Chapter V.

EXERCISES FOR SECTION 7

(1) In our proof of Lemma 7.9, we used the following fact: If W and W′ are subspaces of V such that W ∩ W′ = (0), then there exists a complement of W′ that contains W. Give a proof of this fact.

(2) Let V = M_{m×n}(F), and let C ∈ M_{m×m}(F). Define a map ω: V × V → F by the formula ω(A, B) = Tr(A^t C B). Show that ω is a bilinear form. Is ω symmetric?

(3) Let V = M_{n×n}(F). Define a map ω: V × V → F by ω(A, B) = n Tr(AB) − Tr(A) Tr(B). Show that ω is a bilinear form. Is ω symmetric?

(4) Exhibit a bilinear form on ℝ^n that is not symmetric.

(5) Find a symmetric bilinear form on ℂ^n whose associated quadratic form is positive definite.

(6) Describe explicitly all symmetric bilinear forms on ℝ³.

(7) Describe explicitly all skew-symmetric bilinear forms on ℝ³. A bilinear form ω is skew-symmetric if ω(α, β) = −ω(β, α).

(8) Let ω: V × V → F be a bilinear form on a finite-dimensional vector space V. Show that the following conditions are equivalent:
(a) {α ∈ V | ω(α, β) = 0 for all β ∈ V} = (0).
(b) {α ∈ V | ω(β, α) = 0 for all β ∈ V} = (0).
(c) The matrix of ω relative to 𝔤 is nonsingular for any basis 𝔤 of V.
We say ω is nondegenerate if ω satisfies the conditions listed above.

(9) Suppose ω: V × V → F is a nondegenerate, bilinear form on a finite-dimensional vector space V. Let W be a subspace of V. Set W⊥ = {α ∈ V | ω(α, β) = 0 for all β ∈ W}. Show that V = W ⊕ W⊥.
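The integral inner product of Example 7.15 can also be explored numerically. A minimal sketch, with a composite Simpson rule standing in for exact integration (the helper name `omega` is mine, not the book's):

```python
# Numerical version of ω(f, g) = ∫_a^b f(x) g(x) dx on C([a, b]).
def omega(f, g, a, b, n=1000):
    """Composite Simpson approximation of the integral of f*g over [a, b] (n even)."""
    h = (b - a) / n
    total = f(a) * g(a) + f(b) * g(b)
    for i in range(1, n):
        x = a + i * h
        total += (4 if i % 2 else 2) * f(x) * g(x)
    return total * h / 3

f = lambda x: x            # a nonzero element of C([0, 1])
one = lambda x: 1.0

assert abs(omega(f, f, 0.0, 1.0) - 1/3) < 1e-9     # ∫_0^1 x² dx = 1/3
assert abs(omega(f, one, 0.0, 1.0) - 1/2) < 1e-9   # ∫_0^1 x dx = 1/2
assert omega(f, f, 0.0, 1.0) > 0                   # q(f) > 0 for f ≠ 0
```

Symmetry and bilinearity are inherited from the integral; positive definiteness reflects the fact that f² ≥ 0, and a continuous f that is not identically zero is nonzero on a whole subinterval.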


(10) With the same hypotheses as in Exercise 9, suppose f ∈ V*. Prove that there exists an α ∈ V such that f(β) = ω(α, β) for all β ∈ V.

(11) Suppose ω: V × V → F is a bilinear form on V. Let W_1 and W_2 be subspaces of V. Show that (W_1 + W_2)⊥ = W_1⊥ ∩ W_2⊥. If ω is nondegenerate, prove that (W_1 ∩ W_2)⊥ = W_1⊥ + W_2⊥.

(12) Let ω be a nondegenerate, bilinear form on a finite-dimensional vector space V. Let ω′ be any bilinear form on V. Show there exists a unique T ∈ Hom_F(V, V) such that ω′(α, β) = ω(T(α), β) for all α, β ∈ V. Show that ω′ is nondegenerate if and only if T is bijective.

(13) With the same hypotheses as in Exercise 12, show that for every T ∈ Hom_F(V, V) there exists a unique T′ ∈ Hom_F(V, V) such that ω(T(α), β) = ω(α, T′(β)) for all α, β ∈ V.

(14) Let Bil(V) denote the set of all bilinear forms on the vector space V. Define addition in Bil(V) by (ω + ω′)(α, β) = ω(α, β) + ω′(α, β), and scalar multiplication by (xω)(α, β) = xω(α, β). Prove that Bil(V) is a vector space over F with these definitions. What is the dimension of Bil(V) when V is finite dimensional?

(15) Find an ω-orthonormal basis for ℝ² when ω is given by ω((x_1, y_1), (x_2, y_2)) = x_1 y_2 + x_2 y_1.

(16) Argue that ω(f, g) = ∫_a^b f(x)g(x) dx is an inner product on C([a, b]).

(17) Let V = {p(X) ∈ ℝ[X] | deg(p) ≤ 5}. Suppose ω: V × V → ℝ is given by ω(f, g) = ∫_0^5 f(x)g(x) dx. Find an ω-orthonormal basis of V.

(18) Let V be the subspace of C([−π, π]) spanned by the functions 1, sin(x), cos(x), sin(2x), cos(2x), ..., sin(nx), cos(nx). Find an ω-orthonormal basis of V where ω is the inner product given in Exercise 16.

Chapter II

Multilinear Algebra

1. MULTILINEAR MAPS AND TENSOR PRODUCTS

In Chapter I, we dealt mainly with functions of one variable between vector spaces. Those functions were linear in that variable and were called linear transformations. In this chapter, we examine functions of several variables between vector spaces. If such a function is linear in each of its variables, then the function is called a multilinear mapping. Along with any theory of multilinear maps comes a sequence of universal mapping problems whose solutions are the fundamental ideas in multilinear algebra. In this and the next few sections, we shall give a careful explanation of the principal constructions of the subject matter. Applications of the ideas discussed here will abound throughout the rest of the book.

Let us first give a careful definition of a multilinear mapping. As usual, F will denote an arbitrary field. Suppose V_1, ..., V_n and V are vector spaces over F. Let φ: V_1 × ··· × V_n → V be a function from the finite product V_1 × ··· × V_n to V. We had seen in Section 4 of Chapter I that a typical vector in V_1 × ··· × V_n is an n-tuple (α_1, ..., α_n) with α_i ∈ V_i. Thus, we can think of φ as a function of the n variable vectors α_1, ..., α_n.

If for each i = 1, ..., n, we have

(a) φ(α_1, ..., α_i + α_i′, ..., α_n) = φ(α_1, ..., α_i, ..., α_n) + φ(α_1, ..., α_i′, ..., α_n), and

(b) φ(α_1, ..., xα_i, ..., α_n) = xφ(α_1, ..., α_i, ..., α_n),

then we say φ is a multilinear mapping.
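Conditions (a) and (b) say that φ is linear in each slot separately, which is weaker than being a linear map on the product space V_1 × ··· × V_n. A quick numeric check with a familiar bilinear map, the 2×2 determinant on ℝ² × ℝ² (the example and helper names are mine, not the text's):

```python
# φ(α, β) = determinant of the 2x2 matrix with rows α and β.
def phi(a, b):
    return a[0] * b[1] - a[1] * b[0]

def add(a, b):   return [x + y for x, y in zip(a, b)]
def scale(c, a): return [c * x for x in a]

a, a2, b = [1.0, 2.0], [3.0, -1.0], [4.0, 5.0]

# (a) additivity in the first variable:
assert phi(add(a, a2), b) == phi(a, b) + phi(a2, b)
# (b) homogeneity in the first variable:
assert phi(scale(7.0, a), b) == 7.0 * phi(a, b)
# But φ is not linear on the product: scaling BOTH slots by 2 multiplies φ by 4.
assert phi(scale(2.0, a), scale(2.0, b)) == 4.0 * phi(a, b)
```

The last assertion is the key distinction: a multilinear map of n variables picks up one scalar factor per slot, so φ(xα_1, ..., xα_n) = xⁿ φ(α_1, ..., α_n) rather than x φ(α_1, ..., α_n).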
