
3.IV.

Matrix Operations

• In Section 3.III, matrices were defined in terms of linear maps.


• Operations on matrices will be defined to represent operations on linear maps.
• Matrix algebra is a linear algebra ( M, + , · ; R ).

3.IV.1. Sums and Scalar Products


3.IV.2. Matrix Multiplication
3.IV.3. Mechanics of Matrix Multiplication
3.IV.4. Inverses
3.IV.1. Sums and Scalar Products

Definition: Sum of Maps of the Same Domain & Codomain


The sum of maps f, g : V → W is another map
h = f + g : V → W defined by v ↦ f(v) + g(v).

Definition 1.3: Matrix Addition & Scalar Multiplication


Let A = ( a_ij ) and B = ( b_ij ) be m×n matrices.
Then C = A + B is an m×n matrix with entries c_ij = a_ij + b_ij.
And C = αA, with α ∈ R, is an m×n matrix with entries c_ij = α a_ij.

Theorem 1.5: Matrix Representations


Let f, g : V → W be maps represented with respect to bases B, D as F = Rep_{B,D}(f) and G = Rep_{B,D}(g).
Then Rep_{B,D}(f + g) = F + G and Rep_{B,D}(α f) = αF, where α ∈ R.

Proof: Do it yourself.
Example 1.1: f, g : R² → R³ with

$$F = \operatorname{Rep}_{B,D}(f) = \begin{pmatrix} 1 & 3 \\ 2 & 0 \\ 1 & 0 \end{pmatrix} \qquad G = \operatorname{Rep}_{B,D}(g) = \begin{pmatrix} 0 & 0 \\ -1 & -2 \\ 2 & 4 \end{pmatrix}$$

Then

$$\operatorname{Rep}_D(f(v)) = \begin{pmatrix} 1 & 3 \\ 2 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} v_1 + 3v_2 \\ 2v_1 \\ v_1 \end{pmatrix} \qquad \operatorname{Rep}_D(g(v)) = \begin{pmatrix} 0 & 0 \\ -1 & -2 \\ 2 & 4 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ -v_1 - 2v_2 \\ 2v_1 + 4v_2 \end{pmatrix}$$

so that

$$\operatorname{Rep}_D((f+g)(v)) = \operatorname{Rep}_D(f(v)) + \operatorname{Rep}_D(g(v)) = \begin{pmatrix} v_1 + 3v_2 \\ v_1 - 2v_2 \\ 3v_1 + 4v_2 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 1 & -2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$$

while

$$F + G = \begin{pmatrix} 1 & 3 \\ 2 & 0 \\ 1 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ -1 & -2 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 1 & -2 \\ 3 & 4 \end{pmatrix}$$

Therefore

$$\operatorname{Rep}_{B,D}(f+g) = \begin{pmatrix} 1 & 3 \\ 1 & -2 \\ 3 & 4 \end{pmatrix} = F + G$$

Exercise 3.IV.1.

1. The trace of a square matrix is the sum of the entries on the main
diagonal (the 1,1 entry plus the 2,2 entry, etc.; we will see the significance of
the trace in Chapter Five).
Show that trace(H + G) = trace(H) + trace(G).
Is there a similar result for scalar multiplication?
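As a quick sanity check on this exercise (not a proof), the two trace identities can be tested on sample matrices; the matrices below are arbitrary illustrations in plain Python:

```python
def trace(m):
    """Sum of the main-diagonal entries of a square matrix."""
    return sum(m[i][i] for i in range(len(m)))

def mat_add(a, b):
    """Entrywise sum, as in Definition 1.3."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def scalar_mul(k, a):
    """Entrywise scalar multiple, as in Definition 1.3."""
    return [[k * x for x in row] for row in a]

H = [[1, 2], [3, 4]]
G = [[5, 6], [7, 8]]

assert trace(mat_add(H, G)) == trace(H) + trace(G)  # 18 == 5 + 13
assert trace(scalar_mul(3, H)) == 3 * trace(H)      # 15 == 3 * 5
```

The second assertion answers the follow-up question: trace(αH) = α·trace(H) holds because each diagonal entry is scaled by α.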
3.IV.2. Matrix Multiplication
Lemma 2.1: A composition of linear maps is linear, i.e.,
if h : V → W and g : W → U are linear maps,
then g ∘ h : V → U is a linear map.

Proof: Composition of linear maps preserves linear combinations.


(see Hefferon, p.215)

Definition 2.3: Matrix-Multiplicative Product


The matrix-multiplicative product of the m×r matrix G and the r×n matrix H is the
m×n matrix P, where

$$p_{ij} = \sum_{k=1}^{r} g_{ik}\, h_{kj} = (\text{row } i \text{ of } G) \cdot (\text{column } j \text{ of } H)$$

i.e., the i, j entry of P = GH comes from row i of G against column j of H:

$$\begin{pmatrix} g_{i1} & g_{i2} & \cdots & g_{ir} \end{pmatrix} \begin{pmatrix} h_{1j} \\ h_{2j} \\ \vdots \\ h_{rj} \end{pmatrix} = p_{ij}$$

In particular, column j of P is G times column j of H: P_{·,j} = G H_{·,j}.
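As a sketch, the defining formula p_ij = Σ_k g_ik h_kj translates directly into code (plain Python, no libraries; the sample matrices are those of Example 2.2 below):

```python
def mat_mul(g, h):
    """Product of an m x r matrix g and an r x n matrix h,
    computed entry by entry from p_ij = sum_k g_ik * h_kj."""
    m, r, n = len(g), len(h), len(h[0])
    assert all(len(row) == r for row in g), "inner dimensions must agree"
    return [[sum(g[i][k] * h[k][j] for k in range(r)) for j in range(n)]
            for i in range(m)]

# the 3x2 matrix G and 2x4 matrix H from Example 2.2
G = [[1, 1], [0, 1], [1, 0]]
H = [[4, 6, 8, 2], [5, 7, 9, 3]]
assert mat_mul(G, H) == [[9, 13, 17, 5], [5, 7, 9, 3], [4, 6, 8, 2]]
```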
Theorem 2.6:
A composition of linear maps is represented by the matrix product of the representatives.

Proof:
Let h : Vⁿ → Wʳ and g : Wʳ → Uᵐ be linear maps with representations

$$\operatorname{Rep}_{B,C}(h) = H \qquad \operatorname{Rep}_{C,D}(g) = G$$

where B, C, and D are bases of V, W, and U, respectively.

Let v ∈ V with $\operatorname{Rep}_B(v) = (v_1, \ldots, v_n)^T$.

If w = h(v) ∈ W, then

$$\operatorname{Rep}_C(w) = \operatorname{Rep}_{B,C}(h)\, \operatorname{Rep}_B(v) = H\, \operatorname{Rep}_B(v)$$

If x = g(w) = g ∘ h(v) ∈ U, then

$$\operatorname{Rep}_D(x) = \operatorname{Rep}_{C,D}(g)\, \operatorname{Rep}_C(w) = G\,\bigl(H\, \operatorname{Rep}_B(v)\bigr)$$

Component i of this vector is

$$\sum_{k=1}^{r} g_{ik} w_k = \sum_{k=1}^{r} g_{ik} \sum_{j=1}^{n} h_{kj} v_j = \sum_{j=1}^{n} \Bigl( \sum_{k=1}^{r} g_{ik} h_{kj} \Bigr) v_j = \sum_{j=1}^{n} (GH)_{ij}\, v_j$$

so $\operatorname{Rep}_D(g \circ h(v)) = (GH)\, \operatorname{Rep}_B(v)$ for every v ∈ V, and therefore

$$\operatorname{Rep}_{B,D}(g \circ h) = GH = \operatorname{Rep}_{C,D}(g)\, \operatorname{Rep}_{B,C}(h)$$

QED

Example 2.2, 2.4:
Let h : R⁴ → R² and g : R² → R³ with bases B of R⁴, C of R², D of R³ so that

$$H = \operatorname{Rep}_{B,C}(h) = \begin{pmatrix} 4 & 6 & 8 & 2 \\ 5 & 7 & 9 & 3 \end{pmatrix} \qquad G = \operatorname{Rep}_{C,D}(g) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \end{pmatrix}$$

Then

$$\operatorname{Rep}_{B,D}(g \circ h) = GH = \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 4 & 6 & 8 & 2 \\ 5 & 7 & 9 & 3 \end{pmatrix} = \begin{pmatrix} 9 & 13 & 17 & 5 \\ 5 & 7 & 9 & 3 \\ 4 & 6 & 8 & 2 \end{pmatrix}$$

Let v ∈ V with $\operatorname{Rep}_B(v) = (v_1, v_2, v_3, v_4)^T$. If w = h(v), then

$$\operatorname{Rep}_C(w) = H\, \operatorname{Rep}_B(v) = \begin{pmatrix} 4v_1 + 6v_2 + 8v_3 + 2v_4 \\ 5v_1 + 7v_2 + 9v_3 + 3v_4 \end{pmatrix}$$

If x = g(w), then

$$\operatorname{Rep}_D(x) = G\, \operatorname{Rep}_C(w) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 4v_1 + 6v_2 + 8v_3 + 2v_4 \\ 5v_1 + 7v_2 + 9v_3 + 3v_4 \end{pmatrix} = \begin{pmatrix} 9v_1 + 13v_2 + 17v_3 + 5v_4 \\ 5v_1 + 7v_2 + 9v_3 + 3v_4 \\ 4v_1 + 6v_2 + 8v_3 + 2v_4 \end{pmatrix}$$

which agrees with

$$\operatorname{Rep}_D(x) = (GH)\, \operatorname{Rep}_B(v) = \begin{pmatrix} 9 & 13 & 17 & 5 \\ 5 & 7 & 9 & 3 \\ 4 & 6 & 8 & 2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{pmatrix}$$
The equality G(H v_B) = (GH) v_B suggests that matrix products are associative.


To be proved in Theorem 2.12 below.
The composition diagram v ↦ h(v) ↦ g(h(v)) is mirrored by the matrices:

$$\operatorname{Rep}_{B,D}(g \circ h) = GH = \operatorname{Rep}_{C,D}(g)\, \operatorname{Rep}_{B,C}(h)$$

Matrix dimensions: m×r times r×n equals m×n.

Example 2.7: Dimension mismatch

This product is not defined, since a 2×3 matrix cannot be multiplied by a 2×2 matrix:

$$\begin{pmatrix} 1 & 2 & 0 \\ 0 & 10 & 1.1 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix}$$
Example 2.9: Matrix multiplication need not commute.

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix} \qquad \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 23 & 34 \\ 31 & 46 \end{pmatrix}$$
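The two products of Example 2.9 can be computed directly; this short check (plain Python) confirms that AB and BA differ:

```python
def mat_mul(g, h):
    """Entrywise product formula from Definition 2.3."""
    return [[sum(g[i][k] * h[k][j] for k in range(len(h)))
             for j in range(len(h[0]))] for i in range(len(g))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

assert mat_mul(A, B) == [[19, 22], [43, 50]]
assert mat_mul(B, A) == [[23, 34], [31, 46]]
assert mat_mul(A, B) != mat_mul(B, A)   # multiplication does not commute
```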

Example 2.10:

$$\begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} \begin{pmatrix} 1 & 2 & 0 \\ 3 & 4 & 0 \end{pmatrix} = \begin{pmatrix} 23 & 34 & 0 \\ 31 & 46 & 0 \end{pmatrix} \qquad \text{while} \qquad \begin{pmatrix} 1 & 2 & 0 \\ 3 & 4 & 0 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} \text{ is not defined.}$$

Theorem 2.12:
Matrix products, if defined, are associative
(FG)H = F(GH)
and distributive over matrix addition
F(G + H) = FG + FH and (G + H)F = GF + HF.

Proof: Hefferon p.219


Via component-summation formulas or the corresponding composite maps.
Exercises 3.IV.2.
1. Represent the derivative map on P_n with respect to B → B where B is the
natural basis ( 1, x, …, x^n ). Show that the product of this matrix with itself is
defined. What map does it represent?
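One way to explore this exercise (a sketch, not the solution): with respect to the natural basis, the derivative map sends x^j to j·x^(j-1), so its matrix has entry j in position (j−1, j); squaring that matrix represents the second derivative.

```python
def deriv_matrix(n):
    """Matrix of d/dx on P_n with respect to the natural basis
    (1, x, ..., x^n): entry (j-1, j) is j, all others are zero."""
    size = n + 1
    return [[j if i == j - 1 else 0 for j in range(size)]
            for i in range(size)]

def mat_mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(len(h)))
             for j in range(len(h[0]))] for i in range(len(g))]

def mat_vec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

D = deriv_matrix(3)                     # acts on P_3
p = [1, 2, 3, 4]                        # 1 + 2x + 3x^2 + 4x^3
assert mat_vec(D, p) == [2, 6, 12, 0]   # derivative: 2 + 6x + 12x^2
D2 = mat_mul(D, D)                      # square matrix, so D*D is defined
assert mat_vec(D2, p) == [6, 24, 0, 0]  # second derivative: 6 + 24x
```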

2. ( This will be used in the Matrix Inverses exercises.)


Here is another property of matrix multiplication that might be puzzling at first sight.
(a) Prove that the composition of the projections πx , πy : R3 → R3 onto the x and y
axes is the zero map despite that neither one is itself the zero map.
(b) Prove that the composition of the derivatives d2 /dx2, d3 /dx3 : P4 → P4 is the
zero map despite that neither is the zero map.
(c) Give a matrix equation representing the first fact.
(d) Give a matrix equation representing the second.
When two things multiply to give zero despite that neither is zero, each is said to
be a zero divisor.
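Part (a)/(c) can be previewed numerically: the matrices of the two projections (with respect to the standard basis) multiply to the zero matrix even though neither is zero. A minimal sketch:

```python
def mat_mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(len(h)))
             for j in range(len(h[0]))] for i in range(len(g))]

Px = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]   # projection onto the x axis
Py = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # projection onto the y axis
Z  = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]

# each projection is nonzero, yet their product is the zero matrix
assert mat_mul(Px, Py) == Z and Px != Z and Py != Z
```

So Px and Py are zero divisors in the algebra of 3×3 matrices.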
3. The infinite-dimensional space P of all finite-degree polynomials gives a
memorable example of the non-commutativity of linear maps.
Let d/dx : P → P be the usual derivative and let s : P → P be the shift map

$$a_0 + a_1 x + \cdots + a_n x^n \;\overset{s}{\longmapsto}\; a_0 x + a_1 x^2 + \cdots + a_n x^{n+1}$$

Show that the two maps don't commute:

d/dx ∘ s ≠ s ∘ d/dx

In fact, not only is (d/dx ∘ s) − (s ∘ d/dx) not the zero map, it is the identity map.
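This identity can be checked concretely by representing a polynomial as its coefficient list [a0, …, an]; the sketch below verifies that the commutator returns the original polynomial:

```python
def deriv(p):
    """d/dx on coefficient lists: [a0, a1, a2, ...] -> [a1, 2*a2, ...]."""
    return [j * p[j] for j in range(1, len(p))] or [0]

def shift(p):
    """The shift map s: multiply by x, i.e. prepend a zero constant term."""
    return [0] + p

def sub(p, q):
    """Coefficientwise difference, padding to a common length."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [x - y for x, y in zip(p, q)]

p = [5, -1, 2, 7]                       # 5 - x + 2x^2 + 7x^3
commutator = sub(deriv(shift(p)), shift(deriv(p)))
assert commutator == p                  # (d/dx)∘s - s∘(d/dx) acts as identity
```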
3.IV.3. Mechanics of Matrix Multiplication

(AB)_ij = ( row i of A ) · ( column j of B ):

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 4 & 6 & 8 & 2 \\ 5 & 7 & 9 & 3 \end{pmatrix} = \begin{pmatrix} 9 & 13 & 17 & 5 \\ 5 & 7 & 9 & 3 \\ 4 & 6 & 8 & 2 \end{pmatrix}$$

Lemma 3.7: ( row i of A ) · B = ( row i of AB ). For instance, row 2 of the product above:

$$\begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} 4 & 6 & 8 & 2 \\ 5 & 7 & 9 & 3 \end{pmatrix} = \begin{pmatrix} 5 & 7 & 9 & 3 \end{pmatrix}$$

Likewise, A · ( column j of B ) = ( column j of AB ); for instance, column 1:

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 4 \\ 5 \end{pmatrix} = \begin{pmatrix} 9 \\ 5 \\ 4 \end{pmatrix}$$
Definition 3.2: Unit Matrix
A matrix with all zeroes except for a one in the i, j entry is an i, j unit matrix.

Let U be an i, j unit matrix, then


• U A → copies row j of A into row i of a zero matrix of the appropriate dimensions.
• A U → copies column i of A into column j of a zero matrix of the appropriate dimensions.

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 5 & 6 & 7 \\ 8 & 9 & 4 \end{pmatrix} = \begin{pmatrix} 8 & 9 & 4 \\ 0 & 0 & 0 \end{pmatrix} \qquad \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 5 & 6 & 7 \\ 8 & 9 & 4 \end{pmatrix} = \begin{pmatrix} 2 \cdot 8 & 2 \cdot 9 & 2 \cdot 4 \\ 0 & 0 & 0 \end{pmatrix}$$

$$\begin{pmatrix} 5 & 6 & 7 \\ 8 & 9 & 4 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 5 \\ 0 & 8 \end{pmatrix} \qquad \begin{pmatrix} 5 & 6 & 7 \\ 8 & 9 & 4 \end{pmatrix} \begin{pmatrix} 0 & 2 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 2 \cdot 5 \\ 0 & 2 \cdot 8 \end{pmatrix}$$

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 5 & 6 & 7 \\ 8 & 9 & 4 \end{pmatrix} = \begin{pmatrix} 8 & 9 & 4 \\ 0 & 0 & 0 \\ 5 & 6 & 7 \end{pmatrix}$$
Definition 3.8: Main Diagonal
The main diagonal (or principal diagonal or diagonal) of a square matrix
goes from the upper left to the lower right.

1 0 0
0 1 0 
Definition 3.9: Identity Matrix I n n   I n n  i j   i j
 
 
0 0 1

Amn Inn  Amn  Imm Amn

Definition 3.12: Diagonal Matrix


A diagonal matrix is square and has zeros off the main diagonal.

$$A_{n \times n} = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix} = \operatorname{diag}(a_{11}, a_{22}, \ldots, a_{nn}) \qquad (A_{n \times n})_{ij} = a_{ii}\, \delta_{ij}$$
Definition 3.14: Permutation Matrix
A permutation matrix is square and is all zeros except for a single one in each
row and column.
From the left (right) these matrices permute rows (columns).

$$\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} = \begin{pmatrix} 7 & 8 & 9 \\ 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \qquad \begin{matrix} \text{3rd row to 1st} \\ \text{1st row to 2nd} \\ \text{2nd row to 3rd} \end{matrix}$$

$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 1 \\ 5 & 6 & 4 \\ 8 & 9 & 7 \end{pmatrix} \qquad \begin{matrix} \text{1st column to 3rd} \\ \text{2nd column to 1st} \\ \text{3rd column to 2nd} \end{matrix}$$
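Both permutation effects can be confirmed with a direct computation (a small sketch in plain Python, using the same matrices as above):

```python
def mat_mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(len(h)))
             for j in range(len(h[0]))] for i in range(len(g))]

P = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# from the left, P permutes the rows of A
assert mat_mul(P, A) == [[7, 8, 9], [1, 2, 3], [4, 5, 6]]
# from the right, P permutes the columns of A
assert mat_mul(A, P) == [[2, 3, 1], [5, 6, 4], [8, 9, 7]]
```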
Definition 3.18: Elementary Reduction Matrices
The elementary reduction matrices are obtained from identity matrices with
one Gaussian operation. We denote them:

(1) Scaling: $I \xrightarrow{\,k\rho_i\,} M_i(k)$ with $k \neq 0$, e.g. $M_2(3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

(2) Swap: $I \xrightarrow{\,\rho_i \leftrightarrow \rho_j\,} P_{i,j}$, e.g. $P_{2,3} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$

(3) Row combination: $I \xrightarrow{\,k\rho_i + \rho_j\,} C_{i,j}(k)$, e.g. $C_{2,3}(3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{pmatrix}$

Multiplying from the left applies the corresponding row operation:

$$C_{2,3}(3)\, A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \\ e & f \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \\ 3c + e & 3d + f \end{pmatrix}$$
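The row-combination example above can be replayed on a numeric matrix (an arbitrary illustration in plain Python):

```python
def mat_mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(len(h)))
             for j in range(len(h[0]))] for i in range(len(g))]

C = [[1, 0, 0], [0, 1, 0], [0, 3, 1]]   # C_{2,3}(3): add 3 * row 2 to row 3
A = [[1, 2], [3, 4], [5, 6]]

# rows 1 and 2 unchanged; row 3 becomes 3*(3, 4) + (5, 6) = (14, 18)
assert mat_mul(C, A) == [[1, 2], [3, 4], [14, 18]]
```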
Gauss's method and Gauss-Jordan reduction can be accomplished
by a single matrix that is the product of elementary reduction matrices.

Corollary 3.22:
For any matrix H there are elementary reduction matrices R_1, . . . , R_r
such that R_r R_{r-1} ··· R_1 H is in reduced echelon form.
Exercises 3.IV.3
1. The need to take linear combinations of rows and columns in tables of
numbers arises often in practice. For instance, this is a map of part of Vermont
and New York.

In part because of Lake Champlain,


there are no roads directly connecting
some pairs of towns. For instance,
there is no way to go from Winooski to
Grand Isle without going through
Colchester. (Of course, many other
roads and towns have been left off to
simplify the graph. From top to
bottom of this map is about forty miles.)



(a) The incidence matrix of a map is the square matrix whose i, j entry is the
number of roads from city i to city j. Produce the incidence matrix of this map
(take the cities in alphabetical order).
(b) A matrix is symmetric if it equals its transpose. Show that an incidence
matrix is symmetric. (These are all two-way streets. Vermont doesn’t have
many one-way streets.)
(c) What is the significance of the square of the incidence matrix? The cube?

2. The trace of a square matrix is the sum of the entries on its diagonal (its
significance appears in Chapter Five). Show that trace(GH) = trace(HG).

3. A square matrix is a Markov matrix if each entry is between zero and one
and the sum along each row is one. Prove that a product of Markov matrices
is Markov.
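Before proving Exercise 3 in general, the claim can be tested on a pair of sample Markov matrices (arbitrary illustrations; the tolerance absorbs floating-point rounding):

```python
def mat_mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(len(h)))
             for j in range(len(h[0]))] for i in range(len(g))]

def is_markov(m):
    """Every entry in [0, 1] and every row sums to one."""
    return all(abs(sum(row) - 1.0) < 1e-12 and
               all(0.0 <= x <= 1.0 for x in row) for row in m)

M = [[0.2, 0.8], [0.5, 0.5]]
N = [[0.9, 0.1], [0.3, 0.7]]

assert is_markov(M) and is_markov(N)
assert is_markov(mat_mul(M, N))   # the product is again Markov
```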
3.IV.4. Inverses
Example 4.1:
Let π : R³ → R² be the projection map (x, y, z) ↦ (x, y)
and η : R² → R³ be the embedding (x, y) ↦ (x, y, 0).

The composition π ∘ η : R² → R² is the identity map on R²:
(x, y) ↦ (x, y, 0) ↦ (x, y)
π is a left inverse map of η; η is a right inverse map of π.

The composition η ∘ π : R³ → R³ is not the identity map on R³:
(x, y, z) ↦ (x, y) ↦ (x, y, 0)
π has no left inverse; η has no right inverse.
The zero map has neither left nor right inverse.

Definition 4.2: Left / Right Inverse & Invertible Matrices


A matrix G is a left inverse matrix of the matrix H if GH = I .
It is a right inverse matrix if HG = I .
A matrix H with a two-sided inverse is an invertible matrix.
That two-sided inverse is called the inverse matrix and is denoted H⁻¹.
Lemma 4.3:
If a matrix has both a left inverse and a right inverse then the two are equal.
Proof: The statement holds for the corresponding linear maps.
Theorem 4.4:
A matrix is invertible if and only if it is nonsingular.
Proof: The statement holds for the corresponding linear maps.
Lemma 4.5:
A product of invertible matrices is invertible: (GH)⁻¹ = H⁻¹ G⁻¹.

Proof: The statement holds for the corresponding linear maps.


Lemma 4.8:
A matrix is invertible if and only if it can be written as a product of elementary reduction matrices.
The inverse can be computed by applying to I the same row steps, in the same order,
as are used to Gauss-Jordan reduce the invertible matrix.

Proof: An invertible matrix is row equivalent to I.


Let R be the product of the required elementary reduction matrices, so that RA = I.

Then to solve AX = I, multiply both sides by R:

AX = I → RAX = RI → X = RI = R
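Lemma 4.8 suggests a direct algorithm: apply each row step to A and to I in parallel. A minimal sketch (plain Python; partial pivoting and graceful singularity handling are omitted for brevity):

```python
def inverse(a):
    """Gauss-Jordan inversion: the row operations that reduce a to I
    are applied simultaneously to I, which therefore becomes a^-1.
    Raises StopIteration if no nonzero pivot exists (singular matrix)."""
    n = len(a)
    a = [row[:] for row in a]                                  # work on a copy
    r = [[float(i == j) for j in range(n)] for i in range(n)]  # starts as I
    for col in range(n):
        # swap in a row with a nonzero pivot (a P_{i,j} step)
        piv = next(i for i in range(col, n) if a[i][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        r[col], r[piv] = r[piv], r[col]
        # scale the pivot row so the pivot is 1 (an M_i(k) step)
        k = a[col][col]
        a[col] = [x / k for x in a[col]]
        r[col] = [x / k for x in r[col]]
        # clear the rest of the column (C_{i,j}(k) steps)
        for i in range(n):
            if i != col and a[i][col] != 0:
                f = a[i][col]
                a[i] = [x - f * y for x, y in zip(a[i], a[col])]
                r[i] = [x - f * y for x, y in zip(r[i], r[col])]
    return r

A = [[2.0, 1.0], [1.0, 1.0]]
assert inverse(A) == [[1.0, -1.0], [-1.0, 2.0]]
```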


Example 4.10:

Example 4.11: A Non-Invertible Matrix

Corollary 4.12:
The inverse of a 2×2 matrix exists and equals

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

if and only if ad − bc ≠ 0.
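The closed form of Corollary 4.12 is easy to verify against a direct product; a sketch with a sample matrix of determinant 1:

```python
def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the 2x2 closed form."""
    det = a * d - b * c
    assert det != 0, "matrix is singular"
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(len(h)))
             for j in range(len(h[0]))] for i in range(len(g))]

M = [[2, 1], [5, 3]]                       # ad - bc = 6 - 5 = 1
assert inv2(2, 1, 5, 3) == [[3.0, -1.0], [-5.0, 2.0]]
assert mat_mul(M, inv2(2, 1, 5, 3)) == [[1.0, 0.0], [0.0, 1.0]]
```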
Exercises 3.IV.4

1. Show that the matrix

$$\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$

has infinitely many right inverses. Show also that it has no left inverse.

2. Show that if T is square and if T⁴ is the zero matrix, then

$$(1 - T)^{-1} = 1 + T + T^2 + T^3$$

Generalize.
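The telescoping identity (1 − T)(1 + T + T² + T³) = 1 − T⁴ behind this exercise can be checked on a concrete nilpotent matrix; the shift matrix below is one standard example of a T with T⁴ = 0:

```python
def mat_mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(len(h)))
             for j in range(len(h[0]))] for i in range(len(g))]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

n = 4
I = [[int(i == j) for j in range(n)] for i in range(n)]
T = [[int(j == i + 1) for j in range(n)] for i in range(n)]  # nilpotent shift

T2 = mat_mul(T, T)
T3 = mat_mul(T2, T)
assert mat_mul(T3, T) == [[0] * n for _ in range(n)]         # T^4 = 0

I_minus_T = [[x - y for x, y in zip(ri, rt)] for ri, rt in zip(I, T)]
series = mat_add(mat_add(I, T), mat_add(T2, T3))             # I + T + T^2 + T^3
assert mat_mul(I_minus_T, series) == I                       # so series = (I - T)^-1
```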

3. Prove: if the sum of the elements in each row of a square matrix is k (with k ≠ 0), then the sum of
the elements in each row of the inverse matrix is 1/k.