
Linear Algebra

Shaun Lahert
December 28, 2014

1 Vector Spaces

1.1 Definition

A set V over a field F, with two binary operations defined for all $\vec{u}, \vec{v}, \vec{w} \in V$ and $c, k \in F$ (vector addition $\vec{u} + \vec{v}$ and scalar multiplication $c\vec{u}$), is a vector space if the following axioms hold.

Closure under addition: $\vec{u} + \vec{v} \in V$
Closure under multiplication: $c\vec{u} \in V$
Commutativity: $\vec{u} + \vec{v} = \vec{v} + \vec{u}$
Associativity: $\vec{u} + (\vec{v} + \vec{w}) = (\vec{u} + \vec{v}) + \vec{w}$
Additive Identity: $\vec{u} + \vec{0} = \vec{u} = \vec{0} + \vec{u}$
Additive Inverse: $\vec{u} + (-\vec{u}) = \vec{0} = (-\vec{u}) + \vec{u}$
Distributivity: $c(\vec{u} + \vec{v}) = c\vec{u} + c\vec{v}$
Associativity of Multiplication: $c(k\vec{u}) = (ck)\vec{u}$
Multiplicative Identity: $1\vec{u} = \vec{u} = \vec{u}1$

1.2 Inner Product Space

An inner product space is a vector space with additional structure. This additional structure associates every pair of vectors with a scalar, $\langle \vec{u}, \vec{v} \rangle = c$. An inner product is any function that satisfies the following axioms.

Commutative: $\langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle$
Distributive: $\langle \vec{u} + \vec{v}, \vec{w} \rangle = \langle \vec{u}, \vec{w} \rangle + \langle \vec{v}, \vec{w} \rangle$
Associative: $c\langle \vec{u}, \vec{v} \rangle = \langle c\vec{u}, \vec{v} \rangle = \langle \vec{u}, c\vec{v} \rangle$
Positive Definite: $\langle \vec{u}, \vec{u} \rangle \geq 0$, and $\langle \vec{u}, \vec{u} \rangle = 0$ if and only if $\vec{u} = \vec{0}$
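As a concrete check, the standard dot product on $\mathbb{R}^n$ satisfies these axioms. A minimal NumPy sketch (the vectors and scalar below are arbitrary illustrative values, not taken from the notes):

import numpy as np

u, v, w = np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 4.0])
c = 2.5

# Commutative: <u, v> = <v, u>
assert np.isclose(np.dot(u, v), np.dot(v, u))
# Distributive: <u + v, w> = <u, w> + <v, w>
assert np.isclose(np.dot(u + v, w), np.dot(u, w) + np.dot(v, w))
# Associative (homogeneity): c<u, v> = <cu, v> = <u, cv>
assert np.isclose(c * np.dot(u, v), np.dot(c * u, v))
# Positive definite: <u, u> >= 0, with equality only for the zero vector
assert np.dot(u, u) >= 0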

1.3 Norm

Suppose V is an inner product space; the norm or length of a vector $\vec{u}$ is defined as
$\|\vec{u}\| = \langle \vec{u}, \vec{u} \rangle^{1/2}$

1.4 Distance

The distance between two vectors $\vec{u}, \vec{v}$ is defined as $d(\vec{u}, \vec{v}) = \|\vec{u} - \vec{v}\|$, that is, the norm of their difference.

1.5 Orthogonal Vectors

Two vectors $\vec{u}, \vec{v}$ are said to be orthogonal if $\langle \vec{u}, \vec{v} \rangle = 0$.
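A short NumPy illustration of the norm, distance and orthogonality definitions above, using the dot product as the inner product (the example vectors are arbitrary):

import numpy as np

u = np.array([3.0, 4.0])
v = np.array([4.0, -3.0])

norm_u = np.dot(u, u) ** 0.5           # ||u|| = <u, u>^(1/2) = 5
dist_uv = np.dot(u - v, u - v) ** 0.5  # d(u, v) = ||u - v||
print(norm_u, dist_uv)
print(np.dot(u, v) == 0)               # True: u and v are orthogonal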

1.6 Linear Transformations

Given two vector spaces V, W, a linear transformation is a map from the elements of one vector space to the other which obeys linearity:
$T : V \to W$
with $\vec{u}, \vec{v} \in V$:
$T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$
$T(c\vec{u}) = cT(\vec{u})$

2 Transformation Matrices and Coordinate Vectors

2.1 Assigning a Basis

Choosing a basis $\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$ for an n-dimensional vector space V allows one to construct a coordinate system, in which every vector is assigned a coordinate vector relative to the basis.
The fact that the basis spans V means that all vectors have coordinate vectors, and linear independence of the basis vectors implies that the coordinate vector representations are unique.
In this way, once a basis has been chosen, addition and scalar multiplication of vectors corresponds to addition and scalar multiplication of their coordinate vectors.
Furthermore, if V, W are n- and m-dimensional vector spaces respectively and a basis has been fixed for each, then any linear transformation T can be represented by an $m \times n$ matrix called the transformation matrix of T with respect to these bases.

2.2 Coordinate Vectors

Suppose $\vec{u}$ is a vector in V and that V has basis set $S = \{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$. Then
$\vec{u} = a_1\vec{v}_1 + a_2\vec{v}_2 + \dots + a_n\vec{v}_n = [S][u]_S$
where $[S]$ is the matrix with the basis vectors as its columns and the coordinate vector is
$[u]_S = (a_1, a_2, \dots, a_n)$.
When S is the standard basis $\{\vec{e}_1, \vec{e}_2, \dots, \vec{e}_n\}$, $[u]_e = \vec{u}$.
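A NumPy sketch of finding a coordinate vector: stack the basis vectors as the columns of $[S]$ and solve $[S][u]_S = \vec{u}$ (the basis and vector are made-up examples):

import numpy as np

# Basis S = {v1, v2} of R^2, as columns of the matrix [S]
S = np.column_stack([np.array([1.0, 1.0]), np.array([1.0, -1.0])])
u = np.array([3.0, 1.0])

u_S = np.linalg.solve(S, u)   # coordinate vector [u]_S
print(u_S)                    # [2. 1.]  since u = 2*v1 + 1*v2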

2.3 Changing Basis of Coordinate Vector

$\vec{u}$ is a vector in V and V has two bases:

Old basis: $A = [\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n]$
New basis: $B = [\vec{w}_1, \vec{w}_2, \dots, \vec{w}_n]$

$[u]_{new} = [B]^{-1}[A][u]_{old}$

2.4 Transformation Matrix

$T : \mathbb{R}^n \to \mathbb{R}^m$ with $[u]_e \in \mathbb{R}^n$.
$T(\vec{u}) = A[u]_e$ where A is an $m \times n$ matrix,
$A = [T(\vec{e}_1)\,|\,T(\vec{e}_2)\,|\,\dots\,|\,T(\vec{e}_n)]$
where $\{\vec{e}_1, \vec{e}_2, \dots, \vec{e}_n\}$ is the standard basis.
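A sketch of building a transformation matrix column by column from the images of the standard basis vectors, here for a hypothetical T that rotates $\mathbb{R}^2$ by 90 degrees:

import numpy as np

def T(x):
    # T rotates a vector 90 degrees counter-clockwise: (x, y) -> (-y, x)
    return np.array([-x[1], x[0]])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])   # A = [T(e1) | T(e2)]
u = np.array([2.0, 3.0])
print(np.allclose(A @ u, T(u)))       # True: T(u) = A[u]_e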

2.5 Changing Basis of Transformation Matrix

Let A and B be basis sets of a vector space V and $\vec{u}$ a vector in V. Let $P = [A]^{-1}[B]$.

We have:
$[u]_B = [B]^{-1}[A][u]_A = P^{-1}[u]_A$
and
$[u]_A = [A]^{-1}[B][u]_B = P[u]_B$

Let T be a linear transformation, C its matrix with respect to the basis A, and D its matrix with respect to the basis B:
$[T(\vec{u})]_A = C[u]_A$
and
$[T(\vec{u})]_B = D[u]_B$

$D[u]_B = [C[u]_A]_B = P^{-1}C[u]_A = P^{-1}CP[u]_B$

Finally we have:
$D = P^{-1}CP$
This is only valid for an endomorphism $T : \mathbb{R}^n \to \mathbb{R}^n$.
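A numerical sketch of the similarity relation $D = P^{-1}CP$; the bases A, B and the matrix C below are arbitrary illustrative choices (A is taken to be the standard basis):

import numpy as np

A = np.eye(2)                             # old basis: the standard basis (columns)
B = np.array([[1.0, 1.0], [0.0, 1.0]])    # new basis (columns)
C = np.array([[2.0, 1.0], [0.0, 3.0]])    # matrix of T with respect to basis A

P = np.linalg.inv(A) @ B                  # P = [A]^{-1}[B]
D = np.linalg.inv(P) @ C @ P              # matrix of T with respect to basis B

u_B = np.array([1.0, 2.0])                # some coordinate vector in the B basis
u_A = P @ u_B                             # [u]_A = P[u]_B

# Applying T in B-coordinates agrees with applying it in A-coordinates
# and then converting the result with [B]^{-1}[A]
print(np.allclose(D @ u_B, np.linalg.inv(B) @ A @ (C @ u_A)))   # True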

3 Matrix Properties and Inverses

3.1 Non-Singular Square Matrices (A)

A is invertible.
$A\vec{x} = \vec{0}$ only when $\vec{x}$ is the zero vector.
A reduces to the identity matrix I.
A is a product of elementary row operation matrices (A = LU).
For every row operation there is a corresponding matrix $E_i$ that, when multiplied, does the same thing. Therefore knowing the row operations that reduce A to I is the same as:
$E_3E_2E_1A = I$, then $E_3E_2E_1 = A^{-1}$ and $A = E_1^{-1}E_2^{-1}E_3^{-1}$; these inverse matrices correspond to the inverse row operations.
For every $A\vec{x} = \vec{b}$ there exists exactly one $\vec{x}$ for every $\vec{b}$.
$\det(A) \neq 0$
The dimensions of the Null Space and Left Null Space are zero.
Rank is n.
The column and row vectors form a basis for $\mathbb{R}^n$.
The transformation with matrix A is bijective (an isomorphism of $\mathbb{R}^n$ onto itself).

3.2 Finding Matrix Inverses

There are two main methods of finding the inverse matrix:

1. Augmented Matrix: elimination using Gauss-Jordan,
$[\,A\,|\,I\,] \to [\,I\,|\,A^{-1}\,]$
2. Adjoint:
$A^{-1} = \frac{1}{\det(A)}[\mathrm{adj}\,A]$
$[\mathrm{adj}\,A] = [\mathrm{Cofactor}\,A]^T$
$[\mathrm{Cofactor}\,A]_{ij} = (-1)^{i+j}[\mathrm{Minor}\,A]_{ij}$
The minor matrix of A is the matrix with each $a_{ij}$ entry of A replaced by the determinant of the submatrix obtained after crossing out the ith row and jth column.
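A minimal sketch of the Gauss-Jordan method: row-reduce the augmented matrix $[A\,|\,I]$ until the left block is I, at which point the right block is $A^{-1}$. The 2x2 matrix is an arbitrary invertible example and no pivoting safeguards are included:

import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])
n = A.shape[0]
M = np.hstack([A, np.eye(n)])          # augmented matrix [A | I]

for i in range(n):
    M[i] = M[i] / M[i, i]              # scale the pivot row so the pivot is 1
    for j in range(n):
        if j != i:
            M[j] = M[j] - M[j, i] * M[i]   # eliminate column i from the other rows

A_inv = M[:, n:]                       # the right block is now A^{-1}
print(np.allclose(A_inv, np.linalg.inv(A)))   # True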

3.3 Properties of Inverses

$(AB)^{-1} = B^{-1}A^{-1}$
$(A^n)^{-1} = (A^{-1})^n$
$(cA)^{-1} = \frac{1}{c}(A^{-1})$
$(A^T)^{-1} = (A^{-1})^T$
4 Determinants & Transposes

4.1 Method of Finding Determinants

Cofactor Expansion: expanding along any one row i or column j,

$\det(A) = \sum_{j=1}^{n} a_{ij}C_{ij} = \sum_{i=1}^{n} a_{ij}C_{ij}$

where $C_{ij} = (-1)^{i+j}M_{ij}$ is the cofactor and $M_{ij}$ the minor of entry $a_{ij}$; the first sum expands along row i, the second down column j.

4.2 Properties of Determinants

$\det(AB) = \det(A)\det(B)$
$\det(A^{-1}) = \frac{1}{\det(A)}$
$\det(A^T) = \det(A)$
If A is triangular or diagonal, $\det(A)$ is the product of the entries on the diagonal.
If B is the matrix resulting from multiplying a row or column of A by c, then $\det(B) = c\,\det(A)$. Furthermore $\det(cA) = c^n\det(A)$.
If B is the matrix resulting from a row or column exchange of A, then $\det(B) = -\det(A)$.
If B is the matrix resulting from adding a multiple of one column or row to another, then $\det(B) = \det(A)$.
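A small recursive implementation of the cofactor expansion from 4.1 (expanding along the first row), checked against NumPy and the product property $\det(AB) = \det(A)\det(B)$; the matrices are arbitrary examples:

import numpy as np

def det_cofactor(A):
    # Cofactor expansion along row 0: det(A) = sum_j (-1)^j * a_0j * M_0j
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # cross out row 0, column j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[1.0, 2.0, 3.0], [0.0, 4.0, 5.0], [1.0, 0.0, 6.0]])
B = np.array([[2.0, 0.0, 1.0], [1.0, 3.0, 0.0], [0.0, 1.0, 1.0]])
print(np.isclose(det_cofactor(A), np.linalg.det(A)))                        # True
print(np.isclose(det_cofactor(A @ B), det_cofactor(A) * det_cofactor(B)))   # True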

4.3 Properties of Transposes & Conjugate Transposes

$(A + B)^{T/H} = A^{T/H} + B^{T/H}$
$(AB)^{T/H} = B^{T/H}A^{T/H}$

4.4 Cramer's Rule

If $\det(A) \neq 0$, $A\vec{x} = \vec{b}$ is solved by determinants:

$x_1 = \frac{\det(A_1)}{\det(A)} \quad x_2 = \frac{\det(A_2)}{\det(A)} \quad \dots \quad x_n = \frac{\det(A_n)}{\det(A)}$

The matrix $A_i$ has the ith column of A replaced by $\vec{b}$.
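A direct implementation of Cramer's rule, replacing each column of A by $\vec{b}$ in turn (the system is a made-up 3x3 example with $\det(A) \neq 0$):

import numpy as np

def cramer(A, b):
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                 # replace the ith column of A by b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([3.0, 6.0, 5.0])
print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))   # True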

5 Orthogonality

5.1 Fundamental Subspaces

For an $m \times n$ matrix A:

Row Space: $R(A)$, the space spanned by the linear combinations of the matrix's rows.
Column Space: $C(A)$, the space spanned by the linear combinations of the matrix's columns.
Null Space: $N(A)$, the space of vectors $\vec{x}$ with $A\vec{x} = \vec{0}$.
Left Null Space: $N(A^T)$, the space of vectors $\vec{y}$ with $A^T\vec{y} = \vec{0}$.

$C(A) \perp N(A^T)$
$C(A) + N(A^T) = \mathbb{R}^m$
$\dim(C(A)) + \dim(N(A^T)) = m$
$R(A) \perp N(A)$
$R(A) + N(A) = \mathbb{R}^n$
$\dim(R(A)) + \dim(N(A)) = n$
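A quick numerical check of the dimension relations using the rank (a sketch on an arbitrary 3x4 matrix with one dependent row):

import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])   # third row = first row + second row
m, n = A.shape
r = np.linalg.matrix_rank(A)

print(r)        # 2 = dim(R(A)) = dim(C(A))
print(n - r)    # 2 = dim(N(A))
print(m - r)    # 1 = dim(N(A^T))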

5.2 The complete solution to $A\vec{x} = \vec{b}$

For an $m \times n$ matrix A with rank r:

Rank r is defined as the number of independent rows/columns.
Dependent columns add dimension to n, which adds to the null space $N(A)$ by creating an $\mathbb{R}^n$ bigger than just $R(A)$; similarly, dependent rows add dimension to m, which adds to the left null space $N(A^T)$ by creating an $\mathbb{R}^m$ bigger than just $C(A)$.

r = m and r = n:
A square matrix with no dependent columns or rows. As there is no $\vec{x}$ not in $R(A)$ (injective) and no $\vec{b}$ not in $C(A)$ (surjective), there is exactly one solution for every $A\vec{x} = \vec{b}$.

r = m and r < n:
When r < n you have dependent columns, which means the row space doesn't span $\mathbb{R}^n$ and you have a null space. So there are some $\vec{x}$ which get mapped to zero (not injective). All $\vec{b}$ are still in the column space, however, so there is always a solution.
So $A\vec{x} = \vec{b}$ has infinitely many solutions for $\vec{x}$.

r < m and r = n:
When r < m you have dependent rows, which means the column space doesn't span $\mathbb{R}^m$ and you have a left null space. So there are some $\vec{b}$ which have no solution $\vec{x}$ (not surjective).
So $A\vec{x} = \vec{b}$ has either one or zero solutions.

r < m and r < n:
In this case both the column space and row space fail to span their respective embedding spaces, so there are $\vec{b}$ which have no solution, as they are at least partly in the left null space, and every other $\vec{b}$ in the column space has infinitely many solutions because of the null space.
So $A\vec{x} = \vec{b}$ has either 0 or infinitely many solutions.

By this the domain of A is:
$R(A) + N(A) = \mathbb{R}^n$
The codomain of A is:
$C(A) + N(A^T) = \mathbb{R}^m$
And the image is:
$C(A)$, the orthogonal complement of $N(A^T)$ in $\mathbb{R}^m$.

5.3 Vector Projection

To project a vector $\vec{x}$ onto a line L with $L = \{a\vec{v} \mid a \in \mathbb{R}\}$:

Let c be the scalar coefficient of the projection, $\mathrm{proj}_L\,\vec{x} = c\vec{v}$.
From orthogonality:
$(\vec{x} - \mathrm{proj}_L\,\vec{x}) \cdot \vec{v} = 0$
$(\vec{x} - c\vec{v}) \cdot \vec{v} = 0$
$\vec{x} \cdot \vec{v} - c\,\vec{v} \cdot \vec{v} = 0$
$\vec{x} \cdot \vec{v} = c\,\vec{v} \cdot \vec{v}$
$c = \frac{\vec{x} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}$
$\mathrm{proj}_L\,\vec{x} = c\vec{v} = \frac{\vec{x} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}\,\vec{v}$
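A sketch of projecting a vector onto a line spanned by $\vec{v}$, following the formula above (the vectors are arbitrary):

import numpy as np

x = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])

c = np.dot(x, v) / np.dot(v, v)   # scalar coefficient of the projection
proj = c * v                      # proj_L x
print(proj)                                    # [3. 0.]
print(np.isclose(np.dot(x - proj, v), 0.0))    # the residual is orthogonal to v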

5.4 Subspace Projection

Let $\vec{x}$ be a vector in $\mathbb{R}^n$ and V a subspace of $\mathbb{R}^n$ with basis $\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_m\}$. Any vector in V can be represented as a linear combination of the basis vectors $a_1\vec{v}_1 + a_2\vec{v}_2 + \dots + a_m\vec{v}_m$.
Therefore the projection of $\vec{x}$ onto V can be represented as such a linear combination:

$\mathrm{proj}_V\,\vec{x} = [\vec{v}_1\ \vec{v}_2\ \dots\ \vec{v}_m]\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix} = A\vec{y}_x$

$\mathbb{R}^n$ is composed of V and everything orthogonal to V:
$V + V^{\perp} = \mathbb{R}^n$
$(\vec{x} - \mathrm{proj}_V\,\vec{x}) \in V^{\perp}$
Also, $V = C(A)$ so $V^{\perp} = C(A)^{\perp}$.
The orthogonal complement of the column space is the left null space, $C(A)^{\perp} = N(A^T)$. Therefore:
$V^{\perp} = N(A^T)$
$(\vec{x} - \mathrm{proj}_V\,\vec{x}) \in N(A^T)$

This means that:
$A^T(\vec{x} - \mathrm{proj}_V\,\vec{x}) = 0$
$A^T\vec{x} = A^T\mathrm{proj}_V\,\vec{x}$
As A is the basis matrix of a subspace of $\mathbb{R}^n$, it is a rectangular $n \times \dim(C(A))$ matrix, so it is not invertible; however $A^TA$ is always invertible (the columns of A are a basis, hence independent) and $\mathrm{proj}_V\,\vec{x} = A\vec{y}_x$.
$A^T\vec{x} = A^TA\vec{y}_x$
$(A^TA)^{-1}A^T\vec{x} = \vec{y}_x$
$\mathrm{proj}_V\,\vec{x} = A\vec{y}_x = A(A^TA)^{-1}A^T\vec{x}$
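A sketch of the subspace projection formula $\mathrm{proj}_V\,\vec{x} = A(A^TA)^{-1}A^T\vec{x}$, where the columns of A are a made-up basis of a plane in $\mathbb{R}^3$:

import numpy as np

A = np.column_stack([np.array([1.0, 0.0, 1.0]),
                     np.array([0.0, 1.0, 1.0])])   # basis of the subspace V
x = np.array([1.0, 2.0, 5.0])

P = A @ np.linalg.inv(A.T @ A) @ A.T      # projection matrix onto V = C(A)
proj = P @ x
print(proj)
print(np.allclose(A.T @ (x - proj), 0))   # the residual lies in N(A^T)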

5.5 Least Squares

For $A\vec{x} = \vec{b}$ where $\vec{b}$ lies outside the column space, the best possible approximation of $\vec{x}$ is the solution $\hat{x}$ to $A\hat{x} = \mathrm{proj}_{C(A)}\,\vec{b}$, the projection of $\vec{b}$ onto $C(A)$. Projecting $\vec{b}$ onto $C(A)$:
$\mathrm{proj}_{C(A)}\,\vec{b} = A(A^TA)^{-1}A^T\vec{b}$
$A\hat{x} = A(A^TA)^{-1}A^T\vec{b}$
$\hat{x} = (A^TA)^{-1}A^T\vec{b}$
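A least-squares sketch using the normal equations $\hat{x} = (A^TA)^{-1}A^T\vec{b}$, compared against NumPy's built-in solver (A and b form an arbitrary overdetermined example):

import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])   # 3 equations, 2 unknowns
b = np.array([1.0, 2.0, 2.0])                        # b is not in C(A)

x_hat = np.linalg.inv(A.T @ A) @ A.T @ b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_lstsq))    # True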

5.6 Orthogonal & Orthonormal Matrices

An orthogonal matrix satisfies $Q^TQ = D$, where D is a diagonal matrix.

If orthonormal:
$Q^TQ = I$
$\|Q\vec{x}\| = \|\vec{x}\|$
If square:
$Q^TQ = I = QQ^T$
$Q^T = Q^{-1}$

Changing from an old basis V to an orthogonal basis:

$[u]_Q = \frac{[u]_V \cdot \vec{q}_1}{\|q_1\|^2}\vec{q}_1 + \frac{[u]_V \cdot \vec{q}_2}{\|q_2\|^2}\vec{q}_2 + \dots$

If orthonormal:
$[u]_Q = ([u]_V \cdot \vec{q}_1)\vec{q}_1 + ([u]_V \cdot \vec{q}_2)\vec{q}_2 + \dots$

The projection formula with orthonormal matrices (a subspace with an orthonormal basis) takes the form:
$\mathrm{proj}_Q\,\vec{b} = QQ^T\vec{b}$
This can also be done through vector projection:

$\mathrm{proj}_Q\,\vec{x} = \frac{\vec{x} \cdot \vec{q}_1}{\|q_1\|^2}\vec{q}_1 + \frac{\vec{x} \cdot \vec{q}_2}{\|q_2\|^2}\vec{q}_2 + \dots$

If orthonormal:
$\mathrm{proj}_Q\,\vec{x} = (\vec{x} \cdot \vec{q}_1)\vec{q}_1 + (\vec{x} \cdot \vec{q}_2)\vec{q}_2 + \dots$

The least squares formula with orthonormal matrices takes the form:
$\hat{x} = Q^T\vec{b}$
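A brief check of the orthonormal-case identities above; Q here is a simple rotation matrix, an arbitrary orthonormal (and square) example:

import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthonormal columns
x = np.array([1.0, 2.0])

print(np.allclose(Q.T @ Q, np.eye(2)))                        # Q^T Q = I
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # lengths are preserved
print(np.allclose(Q @ (Q.T @ x), x))                          # QQ^T x = x since C(Q) = R^2 here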

5.7 Gram-Schmidt

To construct an orthogonal basis $\{\vec{q}_1, \vec{q}_2, \dots, \vec{q}_n\}$ given an arbitrary basis $\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$:
First let:
$\vec{q}_1 = \vec{v}_1$
Now let each $\vec{q}_i$ equal its corresponding $\vec{v}_i$ minus the projections of $\vec{v}_i$ onto the previously orthogonalised vectors:

$\vec{q}_2 = \vec{v}_2 - \mathrm{proj}_{\vec{q}_1}\vec{v}_2 = \vec{v}_2 - \frac{\vec{v}_2 \cdot \vec{q}_1}{\|q_1\|^2}\vec{q}_1$

$\vec{q}_3 = \vec{v}_3 - (\mathrm{proj}_{\vec{q}_2}\vec{v}_3 + \mathrm{proj}_{\vec{q}_1}\vec{v}_3) = \vec{v}_3 - \frac{\vec{v}_3 \cdot \vec{q}_2}{\|q_2\|^2}\vec{q}_2 - \frac{\vec{v}_3 \cdot \vec{q}_1}{\|q_1\|^2}\vec{q}_1$

To normalize the vectors, simply divide each by its length.
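A minimal Gram-Schmidt sketch following the recipe above, except that it normalizes each vector as it goes, so every projection coefficient is just a dot product. The orthonormal columns are the Q of the A = QR factorization in 5.8 below; the input basis is arbitrary:

import numpy as np

def gram_schmidt(V):
    # Columns of V are the input basis; returns a matrix with orthonormal columns
    Q = []
    for v in V.T:
        q = v.copy()
        for u in Q:
            q = q - np.dot(v, u) * u      # subtract the projection onto earlier q's
        Q.append(q / np.linalg.norm(q))   # normalize
    return np.column_stack(Q)

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q = gram_schmidt(A)
R = Q.T @ A                               # upper triangular, since q_i . v_j = 0 for i > j
print(np.allclose(Q.T @ Q, np.eye(2)))    # orthonormal columns
print(np.allclose(Q @ R, A))              # A = QR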

5.8 A = QR Factorization

Gram-Schmidt gives a matrix factorization:

$A = QR$

$[\vec{v}_1\ \vec{v}_2\ \vec{v}_3] = [\vec{q}_1\ \vec{q}_2\ \vec{q}_3]\begin{bmatrix} \vec{q}_1 \cdot \vec{v}_1 & \vec{q}_1 \cdot \vec{v}_2 & \vec{q}_1 \cdot \vec{v}_3 \\ 0 & \vec{q}_2 \cdot \vec{v}_2 & \vec{q}_2 \cdot \vec{v}_3 \\ 0 & 0 & \vec{q}_3 \cdot \vec{v}_3 \end{bmatrix}$

5.9 Unitary Matrices

Unitary matrices are the complex equivalent of orthonormal matrices.

$U^HU = I$
$\|U\vec{z}\| = \|\vec{z}\|$
If square:
$U^HU = I = UU^H$
$U^H = U^{-1}$

6 Eigenvalues & Eigenvectors

6.1 Eigenvalue & Eigenvector Properties

Eigenvalues are the values $\lambda$ corresponding to an $n \times n$ matrix A for which $A\vec{v} = \lambda\vec{v}$ for some non-zero vector $\vec{v}$.
To find the $\lambda_i$:
$A\vec{v} = \lambda\vec{v}$
$A\vec{v} - \lambda\vec{v} = 0$
$(A - \lambda I)\vec{v} = 0$
This is only true for non-zero $\vec{v}$ if the null space of $(A - \lambda I)$ is non-trivial, i.e.:
$\det(A - \lambda I) = 0$

This equation gives a characteristic polynomial of degree n which can be solved for the $\lambda_i$.
Then using $(A - \lambda_i I)\vec{v}_i = 0$ you can solve for $\vec{v}_i$.
Properties

The eigenvalues of $A^2$ are $\lambda_i^2$, with the same eigenvectors.
The eigenvalues of $A^{-1}$ are $\frac{1}{\lambda_i}$, with the same eigenvectors.
The sum of the $\lambda_i$ is the trace of A.
The product of the $\lambda_i$ is the determinant.
Projection matrices P have eigenvalues 1, 0.
90° rotation matrices have eigenvalues $i, -i$.
Reflection matrices have eigenvalues $\pm 1$.
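A sketch of computing eigenvalues and eigenvectors numerically and checking two of the properties above; np.linalg.eig handles $\det(A - \lambda I) = 0$ and $(A - \lambda_i I)\vec{v}_i = 0$ for us, and the 2x2 matrix is an arbitrary example:

import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)       # the columns of eigvecs are the v_i

for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))    # A v = lambda v holds for each pair

print(np.isclose(eigvals.sum(), np.trace(A)))         # sum of eigenvalues = trace
print(np.isclose(eigvals.prod(), np.linalg.det(A)))   # product = determinant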

6.2 Diagonalization

$A\vec{v} = \lambda\vec{v}$

$A[\vec{v}_1\ \vec{v}_2\ \vec{v}_3] = [\vec{v}_1\ \vec{v}_2\ \vec{v}_3]\begin{bmatrix} \lambda_1 & & \\ & \lambda_2 & \\ & & \lambda_3 \end{bmatrix}$

Let $[\vec{v}_1\ \vec{v}_2\ \vec{v}_3] = S$ and $\begin{bmatrix} \lambda_1 & & \\ & \lambda_2 & \\ & & \lambda_3 \end{bmatrix} = \Lambda$.

$AS = S\Lambda$
$A = S\Lambda S^{-1}$
A needs n independent eigenvectors to be diagonalizable.
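A numerical check of the diagonalization $A = S\Lambda S^{-1}$ (the same arbitrary matrix as in the sketch above, which has 2 independent eigenvectors):

import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals, S = np.linalg.eig(A)     # S has the eigenvectors as columns
Lam = np.diag(eigvals)            # Lambda is the diagonal matrix of eigenvalues

print(np.allclose(A @ S, S @ Lam))                     # AS = S Lambda
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))      # A = S Lambda S^{-1}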


6.3 Symmetric & Hermitian Matrices

Symmetric matrices are of the form $A = A^T$.
Hermitian matrices are of the form $A = A^H$, where H denotes the conjugate transpose.
They have real eigenvalues and orthogonal eigenvectors.
Diagonalization becomes $A = Q\Lambda Q^T$.
The signs of the pivots match the signs of the eigenvalues.

6.4 Positive Definite Matrices

When a symmetric or Hermitian matrix has positive eigenvalues we call it positive definite.
To test for positive definiteness: $\vec{v}^TA\vec{v} > 0$ for all non-zero $\vec{v}$.

6.5 Similar Matrices

Two matrices A, B are similar if $A = M^{-1}BM$.

A and B share the same eigenvalues.
Eigenvectors are multiplied by $M^{-1}$.
Every matrix is similar to a Jordan matrix.

6.6 SVD Factorization

Any real or complex matrix can be factorized into the form $A = U\Sigma V^{T/H}$, where U is an $m \times m$ unitary matrix, $\Sigma$ is an $m \times n$ diagonal matrix and $V^{T/H}$ is an $n \times n$ unitary matrix.
To find $V^{T/H}$ and $\Sigma$:
$A^{T/H} = V\Sigma U^{T/H}$
$A^{T/H}A = V\Sigma U^{T/H}U\Sigma V^{T/H}$
$A^{T/H}A = V\Sigma^2V^{T/H}$
This is the eigendecomposition of $A^{T/H}A$, which means that the diagonal entries $\sigma_i$ of $\Sigma$ are the square roots of the eigenvalues of $A^{T/H}A$, and V is the matrix of eigenvectors of $A^{T/H}A$.
Similarly, multiplying A on the right by $A^{T/H}$ (forming $AA^{T/H}$) will give you U.
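A short NumPy sketch of the SVD and its relation to the eigendecomposition of $A^TA$ (A is an arbitrary real 3x2 example, so T is used in place of T/H):

import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
U, s, Vt = np.linalg.svd(A)       # A = U Sigma V^T, s holds the singular values

eigvals = np.linalg.eigvalsh(A.T @ A)          # eigenvalues of A^T A (ascending)
print(np.allclose(np.sort(s**2), eigvals))     # sigma_i^2 are the eigenvalues of A^T A

Sigma = np.zeros(A.shape)                      # rebuild the m x n diagonal Sigma
Sigma[:2, :2] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))          # reconstruct A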

