
What is a vector space?

A vector space over a field F is a set V equipped with two binary operations. Elements of V are called vectors. Elements of F are called scalars. (In this course F is either R or C.)
The first operation, called vector addition, assigns to any two vectors v, w ∈ V a unique vector v + w ∈ V. The second operation, called scalar multiplication, assigns to any scalar c ∈ F and any vector v ∈ V a unique vector c·v ∈ V. The axioms are:
Axiom 1. v_1 + (v_2 + v_3) = (v_1 + v_2) + v_3
Axiom 2. v_1 + v_2 = v_2 + v_1
Axiom 3. There exists 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
Axiom 4. For every v ∈ V there exists -v ∈ V, called the additive inverse of v, such that v + (-v) = 0.
Axiom 5. c·(v_1 + v_2) = c·v_1 + c·v_2.
Axiom 6. (c_1 + c_2)·v = c_1·v + c_2·v.
Axiom 7. (c_1 c_2)·v = c_1·(c_2·v).
Axiom 8. 1·v = v.
A vector subspace of a vector space V is a subset W ⊆ V which is closed under the two operations. In other words, W is a vector space with respect to the two operations restricted to vectors in W (the scalars are the same).
What is a linear map?
A linear map between two vector spaces V and W over the same field F is a map L : V → W which respects the two binary operations in the following sense:
1. L(v_1 + v_2) = L(v_1) + L(v_2)
2. L(c·v) = c·L(v)
The Kernel of a linear map L : V → W is defined to be Kern(L) = { v ∈ V | L(v) = 0 }.
The Kernel is a vector subspace of V.
The Range of a linear map L : V → W is Range(L) = { w ∈ W | ∃ v ∈ V such that w = L(v) }.
The Range is a vector subspace of W.
(The symbol | stands for "such that", ∃ for "there exists", ∀ for "for every", ⇒ for "implies", ⟺ for "if and only if", which is sometimes abbreviated iff.)
A linear map L : V → W is said to be invertible if there exists a linear map L^{-1} : W → V such that L∘L^{-1} and L^{-1}∘L are identity maps.
L is invertible if and only if Range(L) = W and Kern(L) = 0.
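As an added illustration (not part of the original notes), the kernel and range of the matrix map x ↦ Ax can be inspected numerically; the matrix below is arbitrary and chosen only for demonstration.

import numpy as np
from scipy.linalg import null_space

# A maps F^3 to F^2; its second row is twice the first, so the rank is 1
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])

# Kern(A) = { x | Ax = 0 }, returned as an orthonormal basis of the null space
K = null_space(A)
print("basis of Kern(A):\n", K)                 # a 3x2 array: the kernel is 2-dimensional

# Range(A) is the column space; its dimension is the rank
print("rank(A) =", np.linalg.matrix_rank(A))    # 1

# A would be invertible as a linear map only if Range(A) = F^2 and Kern(A) = 0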
What is a basis of a vector space?
A set of vectors S ⊆ V is said to be linearly independent if and only if for any finite linear combination Σ_{k=1}^{K} c_k v_k, where the c_k's are scalars and v_k ∈ S, we have
Σ_{k=1}^{K} c_k v_k = 0  ⇒  c_k = 0 for all k = 1, ..., K.
The linear span of a set of vectors S ⊆ V is the vector subspace of V formed by all finite linear combinations Σ_{k=1}^{K} c_k v_k, where the c_k's are scalars, v_k ∈ S and K ∈ N.
A basis of a vector space is a maximal linearly independent set of vectors, or equivalently, a minimal spanning set for V.
A set of vectors is a basis if and only if they are linearly independent and they span V.
All bases have the same number of elements (or cardinality). The dimension of a vector space V, denoted by dim(V), is the cardinality (i.e., the number of elements) of any basis of V.
Given a basis v_1, ..., v_N of a (finite-dimensional) vector space V, any vector v ∈ V can be written in a unique way as a linear combination of the basis elements: v = Σ_{k=1}^{N} c_k v_k. In other words, a basis defines an invertible linear map (or an isomorphism) F^N ≅ V.
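As a concrete added sketch (the basis of R^3 below is arbitrary), the coordinates c_k of a vector in a given basis are obtained by solving a linear system:

import numpy as np

# Columns of B are the basis vectors v_1, v_2, v_3 of R^3 (chosen for illustration)
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
v = np.array([2., 3., 4.])

# v = sum_k c_k v_k is the linear system B c = v
c = np.linalg.solve(B, v)
print("coordinates:", c)
print("check:", B @ c)    # reproduces v; the coordinates are unique because B is invertible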
Dimension Formula
Nomenclature: For any linear map L : V → W,
rank(L) = dim(Range(L))
nullity(L) = dim(Kern(L)).
For any linear map L : V → W between finite dimensional vector spaces,
rank(L) + nullity(L) = dim(V)
An m × n matrix A defines a linear map x ↦ Ax from F^n to F^m. The row rank of A is the dimension of the row space, which is defined to be the span of its row vectors (in F^n), and the column rank is the dimension of the column space, which is the span of its column vectors (in F^m).
For any matrix A the row rank and the column rank are both equal to the rank of A as a linear map.
This implies that the rank of an m × n matrix is ≤ m and also ≤ n.
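A quick numerical check of the dimension formula (an added illustration; the matrix is random):

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))          # a 4 x 6 matrix, so the domain has dimension 6

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]
print(rank, nullity, rank + nullity)     # rank + nullity = 6 = dim of the domain
print(np.linalg.matrix_rank(A.T))        # row rank equals column rank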
Linear Equations
A linear equation is an equation of the form L(v) = w, where L is a linear map L : V → W. In particular, the equation Ax = b, where A is an m × n matrix, is a linear equation. Given w ∈ W, we want to solve for v ∈ V. First we want to know whether a solution exists and if so whether it is unique. After that we will devise techniques (sometimes numerical and algorithmic) to find solutions.
The solution space of the homogeneous equation L(v) = 0 is the Kernel of L, which always contains the trivial solution v = 0.
The equation L(v) = w has a solution iff w ∈ Range(L) and the solution (if it exists) is unique iff Kern(L) = 0.
If L(v_1) = w and L(v_2) = w, then L(v_1 - v_2) = L(v_1) - L(v_2) = w - w = 0. Therefore the difference of any two solutions of L(v) = w is an element of Kern(L) and the solution (if it exists) is unique iff Kern(L) = 0.
Given any particular solution v_p of the inhomogeneous linear equation L(v) = w, all other solutions are obtained by adding all elements of Kern(L), i.e. all solutions of the homogeneous equation L(v) = 0, to v_p. In other words, the solution space of L(v) = w, if non-empty, forms an affine space v_p + Kern(L) ⊆ V. (An example of an affine space in R^3 would be any plane or line that does not necessarily pass through the origin.)
The equation Ax = b, where A is an m × n matrix, has a solution iff b lies in the column space of A (which is the range of A) iff rank(A) = rank(A|b) (the augmented matrix), and the solution (if it exists) is unique iff Kern(A) = 0 iff rank(A) = n.
If rank(A) = r < n, then the solution space, if it is non-empty, is an affine space of dimension n - r. Note that if m < n (more unknowns than equations), then rank(A) ≤ m < n.
If A is an invertible square matrix, then the unique solution of Ax = b is given by x = A^{-1} b.
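The solvability test and the affine structure of the solution set can be checked numerically. The sketch below (an added illustration, with a small hypothetical system having m = 2 < n = 3) compares rank(A) with the rank of the augmented matrix and then produces one particular solution plus a basis of the kernel.

import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 1.],
              [0., 1., 1.]])
b = np.array([3., 2.])

# Solvable iff rank(A) = rank(A|b)
aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))   # 2, 2 -> solvable

# One particular solution (least squares returns an exact solution when b is in the range)
x_p = np.linalg.lstsq(A, b, rcond=None)[0]
print("particular solution:", x_p, "residual:", A @ x_p - b)

# The full solution set is the affine space x_p + Kern(A), here of dimension n - r = 1
print("basis of Kern(A):\n", null_space(A))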
Determinants
The determinant of a square n × n matrix A = (a_{ij}) is defined by the formula:
det(A) = Σ_{(i_1 i_2 ... i_n)} ε_{i_1 i_2 ... i_n} a_{1 i_1} a_{2 i_2} ... a_{n i_n}
where the sum runs over all n! permutations (i_1 i_2 ... i_n) and ε_{i_1 i_2 ... i_n} = ±1 is the sign of the permutation.
The determinant can also be computed (inductively) using the cofactor expansion:
det(A) = Σ_{i=1}^{n} (-1)^{i+j} a_{ij} det(M_{ij})   (expansion along any fixed column j)
       = Σ_{j=1}^{n} (-1)^{i+j} a_{ij} det(M_{ij})   (expansion along any fixed row i)
where M_{ij} is the (n-1) × (n-1) matrix, called a minor, obtained by deleting the i-th row and the j-th column of A.
A is invertible iff det(A) ≠ 0 iff Kern(A) = 0 iff rank(A) = n.
If A is invertible there is a formula for the inverse of A using the cofactors. Let adj(A) be the transpose of the matrix ((-1)^{i+j} det(M_{ij})). Then A^{-1} = (1/det(A)) adj(A).
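A direct (if inefficient) implementation of the cofactor expansion and of the adjugate formula, added here only to illustrate the definitions above; practical libraries compute determinants by elimination instead.

import numpy as np

def det_cofactor(A):
    # Determinant via cofactor expansion along the first row
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        M = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # minor M_{0j}
        total += (-1) ** j * A[0, j] * det_cofactor(M)
    return total

def inverse_adjugate(A):
    # A^{-1} = adj(A) / det(A), with adj(A) the transposed cofactor matrix
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * det_cofactor(M)
    return C.T / det_cofactor(A)

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
print(det_cofactor(A), np.linalg.det(A))    # the two values agree
print(inverse_adjugate(A) @ A)              # approximately the identity matrix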
Characteristic (or even defining) properties of the determinant:
1. The determinant is multilinear in the sense that
det(v_1, ..., v_i + w_i, ..., v_n) = det(v_1, ..., v_i, ..., v_n) + det(v_1, ..., w_i, ..., v_n)
det(v_1, ..., c·v_i, ..., v_n) = c·det(v_1, ..., v_i, ..., v_n)
for any i (here v_1, ..., v_n denote the rows, or equivalently the columns, of the matrix).
2. The determinant changes sign if you switch any two rows or any two
columns.
3. The determinant of the identity matrix is 1.
These properties can be used to manipulate a matrix, for example adding
(or subtracting) any scalar multiple of a row (or column) from any other
row (column resp.) without changing the determinant.
Another key fact about the determinant:
det(AB) = det(A) det(B)
In particular, det(A^{-1} B A) = det(B), if A is invertible.
If A is invertible, then the unique solution x = A^{-1} b of the linear equation Ax = b can be written as (Cramer's Rule):
x_i = det(A(b_i)) / det(A)
where A(b_i) is the matrix obtained from A by replacing the i-th column with the vector b.
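A short added sketch of Cramer's rule (for illustration only; elimination-based solvers are far cheaper for large systems):

import numpy as np

def cramer_solve(A, b):
    # Solve Ax = b for invertible A by Cramer's rule: x_i = det(A(b_i)) / det(A)
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                 # replace the i-th column with b
        x[i] = np.linalg.det(Ai) / detA
    return x

A = np.array([[2., 1.], [1., 3.]])
b = np.array([5., 10.])
print(cramer_solve(A, b), np.linalg.solve(A, b))   # both give [1, 3]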
What is an Inner Product?
An inner product on a vector space V over the field F is a map
⟨·, ·⟩ : V × V → F
that satisfies the following three axioms for all vectors x, y, z ∈ V and all scalars c ∈ F:
1. Symmetry: ⟨x, y⟩ = ⟨y, x⟩ (over F = C this becomes conjugate symmetry, ⟨x, y⟩ = ⟨y, x⟩*, consistent with the standard inner product on C^n below)
2. Linearity in the second argument:
⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩
⟨x, c·y⟩ = c·⟨x, y⟩, for any scalar c ∈ F.
3. Non-degeneracy:
⟨x, y⟩ = 0 for every y iff x = 0
An inner product is called positive definite if
⟨x, x⟩ ≥ 0 for all x, and ⟨x, x⟩ = 0 iff x = 0.
For a positive definite inner product, we define the length of a vector to be |x| = √⟨x, x⟩, and the angle θ between two non-zero vectors x, y is defined by
cos θ = ⟨x, y⟩ / (|x| |y|).
Two vectors x, y are said to be orthogonal if ⟨x, y⟩ = 0.
The standard positive-definite inner products for R^n and for C^n are given by the formulas:
⟨x, y⟩ = Σ_{k=1}^{n} x_k y_k = x^T y
⟨z, w⟩ = Σ_{k=1}^{n} z_k* w_k = (z*)^T w
where * denotes complex conjugation.
A square n × n matrix satisfying the equation A^T A = A A^T = Id (Id is the identity matrix) is called an orthogonal matrix. If A is orthogonal, then ⟨Ax, Ay⟩ = ⟨x, y⟩ for all x, y ∈ R^n, and hence an orthogonal matrix preserves lengths and angles of Euclidean geometry. The set of all orthogonal n × n matrices forms a group, denoted by O(n).
A square matrix satisfying (A*)^T A = A (A*)^T = Id is called a unitary matrix. The set of all unitary n × n matrices forms a group, denoted by U(n).
A square matrix satisfying A^T = A is called symmetric. A square matrix satisfying A^T = -A is called skew-symmetric.
If A is skew-symmetric, then e^A is an orthogonal matrix with determinant = 1. In fact, x(t) = e^{tA} x(0) is the unique solution of the O.D.E.
dx/dt = Ax
(a linear system with constant coefficients), with initial condition x = x(0) at time t = 0.
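An added numerical illustration (using scipy's matrix exponential): a skew-symmetric A generates an orthogonal e^{tA}, which in the plane is simply a rotation.

import numpy as np
from scipy.linalg import expm

# A skew-symmetric matrix, A^T = -A (the rotation generator in the plane)
A = np.array([[0., -1.],
              [1.,  0.]])

t = 0.7
R = expm(t * A)                  # solution operator of dx/dt = Ax
print(np.round(R.T @ R, 10))     # identity: e^{tA} is orthogonal
print(np.linalg.det(R))          # determinant 1
x0 = np.array([1., 0.])
print(R @ x0)                    # x(t) = e^{tA} x(0): x0 rotated by angle t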
A square matrix satisfying the equation (A*)^T = A is called Hermitian. A square matrix satisfying (A*)^T = -A is called skew-Hermitian. If A is Hermitian, then i·A is skew-Hermitian and e^{iA} is a unitary matrix. Moreover ψ(t) = e^{-itH} ψ(0) solves the wave equation i dψ/dt = H ψ.
For spaces of real or complex valued functions defined on an interval [a, b] ⊆ R (which could be the whole real line), we often use the inner product:
⟨f, g⟩ = ∫_a^b f(t)* g(t) w(t) dt
where w(t) is a suitable non-negative weight function, such as e^{-t²/2} on the real line.
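A small added sketch of such a weighted inner product for real-valued functions, evaluated by scipy quadrature with the Gaussian weight mentioned above:

import numpy as np
from scipy.integrate import quad

def inner(f, g, w=lambda t: np.exp(-t**2 / 2)):
    # Weighted inner product <f, g> = integral of f(t) g(t) w(t) dt over the real line
    val, _ = quad(lambda t: f(t) * g(t) * w(t), -np.inf, np.inf)
    return val

print(inner(lambda t: t, lambda t: t**2))     # approximately 0: odd integrand, so t and t^2 are orthogonal
print(inner(lambda t: 1.0, lambda t: 1.0))    # sqrt(2*pi), about 2.5066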
Gram-Schmidt procedure
Given any linearly independent set of vectors x_1, ..., x_k in a vector space with a positive definite inner product, we can recursively define a set of orthonormal vectors:
e_1 = x_1 / |x_1|
e_2 = (x_2 - ⟨x_2, e_1⟩ e_1) / |x_2 - ⟨x_2, e_1⟩ e_1|
e_3 = (x_3 - ⟨x_3, e_1⟩ e_1 - ⟨x_3, e_2⟩ e_2) / |x_3 - ⟨x_3, e_1⟩ e_1 - ⟨x_3, e_2⟩ e_2|
etc., so that we finally arrive at vectors e_1, e_2, ..., e_k which are pairwise orthogonal and of unit length and such that the span of x_1, ..., x_i is the same as the span of e_1, ..., e_i for each i = 1, ..., k. The Gram-Schmidt procedure gives rise to the extremely important QR decomposition in numerical linear algebra. (The other important LU decomposition is related to Gaussian elimination.)
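A minimal added sketch of the procedure for column vectors in R^n with the standard inner product (classical Gram-Schmidt, written for clarity rather than numerical stability; in practice one calls a library QR routine):

import numpy as np

def gram_schmidt(X):
    # Orthonormalize the columns of X (assumed linearly independent)
    n, k = X.shape
    E = np.zeros((n, k))
    for i in range(k):
        v = X[:, i].copy()
        for j in range(i):
            v -= (E[:, j] @ X[:, i]) * E[:, j]   # subtract the projection onto e_j
        E[:, i] = v / np.linalg.norm(v)          # normalize
    return E

X = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
E = gram_schmidt(X)
print(np.round(E.T @ E, 10))                 # identity: the columns are orthonormal
Q, R = np.linalg.qr(X)
print(np.round(np.abs(Q) - np.abs(E), 10))   # matches the Q factor of X up to signs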
What are eigenvalues and eigenvectors?
A linear map L : V → V from a vector space to itself is called an endomorphism. Given a basis of V, an endomorphism is represented by an n × n matrix A, where n = dim(V). Under a change of bases, the matrices are related by a similarity transformation: Ã = B^{-1} A B, where B is the invertible matrix representing the base change.
An eigenvalue of L is a scalar λ, i.e. an element of R or C, such that the linear equation L(v) = λ v has a non-trivial solution v ∈ V, v ≠ 0. Hence λ is an eigenvalue iff Kern(L - λ Id) ≠ 0, where Id is the identity map. If A is the matrix representing L w.r.t. a basis, then λ is an eigenvalue iff λ is a root of the characteristic polynomial:
p(t) = det(A - t I)
where I is the identity matrix. This is a polynomial of degree n = dim(V) and always has n complex roots (counted with multiplicities). The set of all eigenvalues is sometimes called the spectrum of A. Note that the characteristic polynomial is invariant under base change, since det(B^{-1} A B - t I) = det(B^{-1}(A - t I) B) = det(B^{-1}) det(A - t I) det(B) = det(A - t I).
The subspace V_λ = Kern(L - λ Id) of V is called the eigenspace of the eigenvalue λ, and the non-zero elements of V_λ are called eigenvectors corresponding to the eigenvalue λ. It can be shown that eigenvectors corresponding to distinct eigenvalues are linearly independent.
If we can find a basis of eigenvectors (v_1, ..., v_n) for a linear map L : V → V, then we say that the map is diagonalizable, since w.r.t. a basis consisting of eigenvectors the matrix representing L is diagonal, with the eigenvalues as the diagonal entries.
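An added numerical illustration: computing eigenvalues and eigenvectors and checking that the eigenvector basis diagonalizes the matrix (the matrix below is arbitrary but diagonalizable):

import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

# Eigenvalues are the roots of det(A - t I); the columns of V are eigenvectors
evals, V = np.linalg.eig(A)
print(evals)                              # 5 and 2 for this matrix

# With the eigenvector basis as base-change matrix, V^{-1} A V is diagonal
D = np.linalg.inv(V) @ A @ V
print(np.round(D, 10))                    # diagonal, with the eigenvalues on the diagonal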
Theorem:
1. Every symmetric matrix has real eigenvalues and an orthonormal basis of eigenvectors.
2. Every Hermitian matrix has real eigenvalues and a unitary basis of eigenvectors.
In other words, for every symmetric n × n matrix A, there exists an orthogonal matrix O ∈ O(n), whose column vectors are eigenvectors, such that O^{-1} A O = D, where D is a diagonal matrix with the eigenvalues as the diagonal elements. Similarly, for every Hermitian n × n matrix H, there exists a unitary matrix U ∈ U(n), whose column vectors are eigenvectors, such that U^{-1} H U = D, where D is a diagonal matrix whose diagonal entries are real, consisting of all the eigenvalues of H. It is easy to show that the determinant of an orthogonal matrix is ±1 and that of a unitary matrix is a complex number of unit length, so an orthogonal or unitary change of base does not change the volume element.
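A short added check of this theorem for a symmetric matrix, using numpy's symmetric eigensolver:

import numpy as np

S = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])            # symmetric

evals, O = np.linalg.eigh(S)            # eigh is meant for symmetric/Hermitian matrices
print(evals)                            # real eigenvalues
print(np.round(O.T @ O, 10))            # identity: O is orthogonal
print(np.round(O.T @ S @ O, 10))        # diagonal matrix of the eigenvalues (O^{-1} = O^T)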
The trace of a square matrix is defined to be the sum of its diagonal elements. We note that both the trace and the determinant are invariant under change of base, i.e. conjugation:
trace(B^{-1} A B) = trace(A),   det(B^{-1} A B) = det(A)
Moreover we have:
e^{B^{-1} A B} = B^{-1} e^A B
since B^{-1} A^n B = (B^{-1} A B)^n for all n.