
Chapter 20. Vector Spaces and Bases

In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have
looked at lines, planes, hyperplanes, and have seen that there is no limit to this hierarchy of structures.
We have indexed these objects by their dimension in an ad hoc way. It is time to unify these structures
and ideas; this chapter gives a brief introduction to this more abstract viewpoint. Paradoxically, this
abstraction also makes Linear Algebra more applicable to other areas of Mathematics and Science.
In this higher viewpoint, Linear Algebra is the study of vector spaces and linear mappings between
vector spaces.
Definition of a vector space: A vector space is a set V of objects v, called vectors, which can be added and multiplied by scalars t ∈ R, subject to the rules

u + v = v + u,
(u + v) + w = u + (v + w),
t(u + v) = tu + tv,
(t + s)v = tv + sv,
t(sv) = (ts)v,

these holding for all u, v, w ∈ V and s, t ∈ R. There must also be a vector 0 ∈ V with the properties that

0 + v = v,    0v = 0

for all v ∈ V.
Examples of vector spaces:

• Rn is a vector space. Any line, plane, hyperplane, . . . through the origin in Rn is a vector space. Even the set consisting of one vector {0} is a vector space, called the trivial vector space. In fact, any subset of Rn that is closed under addition and scalar multiplication is a vector space; such a vector space is called a subspace of Rn.

• For n ≥ 0, the set Pn of all polynomials c0 + c1x + · · · + cnx^n with real coefficients is a vector space. Here the vectors are polynomials.

• The set V of real-valued functions f(x) satisfying the differential equation f'' + f = 0 is a vector space. For if f and g are solutions, so are f + g and cf, where c is a scalar. Note that V is a subspace of the giant vector space C∞(R) consisting of all infinitely differentiable functions f : R → R. In fact V = E(−1) is an eigenspace for the linear map

d²/dx² : C∞(R) → C∞(R).
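This last example is easy to sanity-check symbolically. The sketch below (Python with SymPy; the symbols c1, c2 stand for arbitrary scalars, and the whole block is an illustration, not part of the notes) confirms that every combination c1 cos x + c2 sin x is sent to its own negative by d²/dx²:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
f = c1 * sp.cos(x) + c2 * sp.sin(x)   # a general element of V

# d^2/dx^2 sends f to -f, so f is an eigenvector with eigenvalue -1.
assert sp.simplify(f.diff(x, 2) + f) == 0
```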

Any nonzero vector space V contains infinitely many vectors. To write them all down, we want to have something like the standard basis e1, . . . , en of Rn. Here, we have

(x1, x2, . . . , xn) = x1e1 + x2e2 + · · · + xnen.    (1)

Thus, any vector x = (x1, x2, . . . , xn) ∈ Rn is a linear combination of the vectors e1, . . . , en. Moreover, this linear combination is unique: Equation (1) is the only way to write x as a combination of the ei.
Definition of a Basis: A basis of a vector space V is a set {v1, . . . , vn} of vectors in V with the property that every vector in V can be uniquely expressed as a linear combination of v1, . . . , vn.
Examples of bases:

• A basis of a line ℓ is a nonzero vector v in ℓ. Every vector in ℓ can be uniquely expressed as cv, for some c ∈ R.

• A basis of a plane P is a pair {u, v} of non-proportional vectors in P. Every vector in P can be uniquely expressed as au + bv, for some a, b in R.

• A basis of R3 is any set of three vectors {u, v, w} in R3 not contained in a plane.

• A basis of a hyperplane H ⊂ R4 is a set of three vectors in H not contained in a plane.

• A basis of R4 is a set of four vectors not contained in a hyperplane.

• For any n ≥ 1, the standard basis {e1, . . . , en} is a basis of Rn. There are many other bases, such as {e1, e1 + e2, . . . , e1 + e2 + · · · + en}; a numerical sketch with this basis follows the list.

• The set {1, x, x², x³, . . . , x^n} is a basis of Pn. This means that any polynomial of degree at most n can be uniquely expressed as a linear combination c0 + c1x + c2x² + · · · + cnx^n.

• The vector space of solutions of the differential equation f'' + f = 0 has basis {cos x, sin x}. In other words, any function satisfying f'' + f = 0 can be uniquely expressed as f = c1 cos x + c2 sin x for some real numbers c1, c2.
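To make the nonstandard basis {e1, e1 + e2, e1 + e2 + e3} of R3 concrete, here is a minimal numerical sketch (Python with NumPy; the test vector x is an arbitrary illustrative choice). Solving the triangular system recovers the unique coefficients:

```python
import numpy as np

# Columns are the basis vectors e1, e1 + e2, e1 + e2 + e3 of R^3.
B = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

x = np.array([5.0, 3.0, 2.0])      # an arbitrary vector to decompose

# Solving B c = x yields the unique coefficients of x in this basis.
c = np.linalg.solve(B, x)
print(c)                           # [2. 1. 2.]: x = 2 e1 + 1 (e1+e2) + 2 (e1+e2+e3)

# Uniqueness is reflected in B being invertible.
assert abs(np.linalg.det(B)) > 1e-12
```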
The above definition of a basis was simple to state, but it often takes time to fully grasp the idea. This is because the phrase "can be uniquely expressed" is actually two conditions in compressed form. We now uncompress them, starting with "can be".
Definition of Span: The span of a set of vectors {v1, . . . , vk} ⊂ V is the set of all linear combinations of the vi, namely

{c1v1 + · · · + ckvk : ci ∈ R}.

We also use span as a verb: {v1, . . . , vk} spans V if every vector in V can be expressed as a linear combination of the vi.

The span of {v1, . . . , vk} is closed under addition and scalar multiplication, hence forms a vector space in its own right.
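Computationally, span membership can be tested by comparing matrix ranks: w lies in span{v1, . . . , vk} exactly when appending w as an extra column does not raise the rank. Below is a small sketch in Python/NumPy; the function name in_span and the sample vectors are made up for this illustration.

```python
import numpy as np

def in_span(vectors, w, tol=1e-10):
    """Return True if w is a linear combination of the given vectors."""
    A = np.column_stack(vectors)              # columns span the subspace
    Aw = np.column_stack(vectors + [w])       # the same columns, plus w
    return np.linalg.matrix_rank(A, tol) == np.linalg.matrix_rank(Aw, tol)

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])

print(in_span([v1, v2], np.array([2.0, 3.0, 5.0])))   # True:  2*v1 + 3*v2
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))   # False: not in the plane
```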
Now to unravel the second part of the definition of Basis: uniqueness.
Definition of Linear Independence: A collection of vectors S in a vector space V is linearly independent
if no vector in S is a linear combination of the other vectors in S. If some vector in S is a linear combination of the others, we say that S is linearly dependent.

Proposition 0.1 A subset S ⊂ V is linearly dependent if and only if there exist vectors v1, . . . , vk in S and nonzero scalars c1, . . . , ck in R such that

c1v1 + c2v2 + · · · + ckvk = 0.

Proof: Suppose there exist vectors vi in S and nonzero scalars ci such that c1v1 + · · · + ckvk = 0. Then

v1 = −(c2/c1)v2 − · · · − (ck/c1)vk,

so v1 is a linear combination of the other vectors v2, . . . , vk and therefore S is linearly dependent.

Conversely, if S is linearly dependent, then we can write some v in S as a linear combination of other vectors v1, . . . , vk in S:

v = c1v1 + · · · + ckvk,

where the ci are nonzero scalars (any terms with zero coefficient may simply be discarded). Then

v − c1v1 − · · · − ckvk = 0

with all coefficients nonzero, since the coefficient of v is 1. □
Example: The three vectors

u = (1, 2, 3),    v = (4, 5, 6),    w = (7, 8, 9)

are linearly dependent because

u − 2v + w = 0.
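A quick numerical confirmation (a NumPy snippet written for this example): the combination really vanishes, and the matrix with rows u, v, w has rank 2 rather than 3.

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
w = np.array([7, 8, 9])

print(u - 2*v + w)                                   # [0 0 0]: the dependence relation
print(np.linalg.matrix_rank(np.array([u, v, w])))    # 2 < 3, so u, v, w are dependent
```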

Definition of a Basis, rephrased: A basis of a vector space V is a subset {v1, . . . , vn} ⊂ V satisfying the two properties:

(i) {v1, . . . , vn} spans V;

(ii) {v1, . . . , vn} is linearly independent.

To check that a given subset {v1, . . . , vn} ⊂ V is a basis of V, you have to check items (i) and (ii). That is,

(i) To show spanning: Take an arbitrary vector v ∈ V, and show that there are scalars c1, . . . , cn such that

v = c1v1 + · · · + cnvn.

(ii) To show linear independence: Suppose you have an equation of the form

c1v1 + · · · + cnvn = 0,

and show this implies c1 = c2 = · · · = cn = 0.

Example 1: Let V be the hyperplane in R4 with equation

x + y + z + w = 0.

The vectors

v1 = e1 − e2,    v2 = e2 − e3,    v3 = e3 − e4

are in V. Let's show that {v1, v2, v3} is a basis of V.

Step (i): To check spanning, let v = (x, y, z, w) be an arbitrary vector in V. We want to find scalars c1, c2, c3 such that

v = c1v1 + c2v2 + c3v3.

Since w = −x − y − z, we have

v = xv1 + (x + y)v2 + (x + y + z)v3,

so c1 = x, c2 = x + y, c3 = x + y + z do the job. This shows that {v1, v2, v3} spans V.
Step (ii): To check linear independence, we suppose there are scalars c1, c2, c3 such that

c1v1 + c2v2 + c3v3 = 0.

Writing both sides out in terms of components, this means

(c1, c2 − c1, c3 − c2, −c3) = (0, 0, 0, 0),

which amounts to the equations

c1 = 0
c2 − c1 = 0
c3 − c2 = 0
−c3 = 0.

The only solution to these equations is c1 = c2 = c3 = 0. This shows that {v1, v2, v3} is linearly independent. We have now shown that {v1, v2, v3} is a basis of V. This means that every vector v ∈ V can be uniquely written as

v = c1v1 + c2v2 + c3v3 = (c1, c2 − c1, c3 − c2, −c3).
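The same verification can be phrased numerically. In the sketch below (Python/NumPy; the random test vector is only an illustrative device), independence appears as the 4 × 3 matrix with columns v1, v2, v3 having rank 3, and spanning appears as the coefficients c1 = x, c2 = x + y, c3 = x + y + z reproducing an arbitrary v in the hyperplane.

```python
import numpy as np

# Columns of A are v1 = e1 - e2, v2 = e2 - e3, v3 = e3 - e4.
A = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], dtype=float)

# Linear independence: the three columns have rank 3.
assert np.linalg.matrix_rank(A) == 3

# Spanning: take a random v = (x, y, z, w) in the hyperplane ...
x, y, z = np.random.randn(3)
v = np.array([x, y, z, -x - y - z])

# ... and check that the coefficients found in the text reproduce it.
c = np.array([x, x + y, x + y + z])
assert np.allclose(A @ c, v)
print("{v1, v2, v3} passes both basis checks")
```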

Example 2: Let us now take the vector space W of vectors (x, y, z, w, t) in R5 satisfying the same equation x + y + z + w = 0. Then the set {v1, v2, v3} of Example 1 is no longer a basis of W: the vector e5 is in W, but e5 is not in the span of {v1, v2, v3}. By the method of Example 1 one can check that the enlarged set {e1 − e2, e2 − e3, e3 − e4, e5} is a basis of W.
Example 3: We have seen that the vector space Pn of polynomials of degree at most n has the basis {1, x, x², x³, . . . , x^n}. This may be the most obvious basis of Pn, but for many purposes it is not the best one. For numerical integration, one prefers to use instead the basis

{P0, P1, P2, . . . , Pn}

consisting of Legendre polynomials. These are the unique polynomials satisfying the conditions

deg Pk = k,    Pk(1) = 1,    ∫_{−1}^{1} Pk Pℓ dx = 0 if k ≠ ℓ.

The first few Legendre polynomials are

P0 = 1,    P1 = x,    P2 = (1/2)(3x² − 1),    P3 = (1/2)(5x³ − 3x),    (2)

and a general formula for them is

Pk(x) = (1/(2 · 4 · · · (2k))) d^k/dx^k [(x² − 1)^k].

All we need to know from this formula is that

Pk(x) = ak x^k + (lower powers of x).
Let's show that {P0, P1, P2, . . . , Pn} is a basis of Pn.

Step (i): The span Ln of {P0, P1, P2, . . . , Pn} is a vector space, consisting of all linear combinations of the Pk, and we want to show that Ln = Pn. We first note that 1 = P0 and x = P1, so 1 and x are in Ln. Next,

P2 = (3/2)x² − 1/2,

and both P2 and 1 are in Ln, so (3/2)x² ∈ Ln, so x² ∈ Ln. Likewise,

P3 = a3x³ + (a linear combination of 1, x, x²),

and since 1, x, x² and P3 are in Ln we have x³ ∈ Ln. In general,

Pk = ak x^k + (a linear combination of 1, x, . . . , x^(k−1)),

and up to this point we will have shown 1, x, . . . , x^(k−1) ∈ Ln, along with having Pk ∈ Ln by definition, so we get x^k ∈ Ln, and continue like this, until we have all powers 1, x, . . . , x^n in Ln. Now, since a general polynomial Q ∈ Pn is a linear combination of 1, x, . . . , x^n, we have Q ∈ Ln as well. This proves spanning.
Step (ii): To show linear independence, suppose c0, c1, . . . , cn are scalars such that

c0P0 + c1P1 + · · · + cnPn = 0

is the zero polynomial. Since deg Pn = n and deg Pk < n for k < n, it follows that x^n appears in cnPn and nowhere else. Since the coefficient of x^n must cancel out, we must have cn = 0. The same argument now shows cn−1 = 0, etc., so all ck = 0 and this proves that {P0, P1, P2, . . . , Pn} is linearly independent and is therefore a basis of Pn.
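These facts are easy to check symbolically. The sketch below uses Python with SymPy, whose built-in legendre function matches the normalization Pk(1) = 1 used here; the sample polynomial q = x³ + x is an arbitrary choice for the illustration.

```python
import sympy as sp

x = sp.symbols('x')
n = 3
P = [sp.legendre(k, x) for k in range(n + 1)]    # P0, P1, P2, P3

# Check Pk(1) = 1 and orthogonality on [-1, 1].
for k in range(n + 1):
    assert P[k].subs(x, 1) == 1
    for l in range(k):
        assert sp.integrate(P[k] * P[l], (x, -1, 1)) == 0

# Express a sample polynomial in the Legendre basis by matching coefficients.
q = x**3 + x                                     # an arbitrary element of P3
c = sp.symbols('c0:4')
combo = sum(ci * Pi for ci, Pi in zip(c, P))
sol = sp.solve(sp.Poly(combo - q, x).all_coeffs(), c)
print(sol)   # {c0: 0, c1: 8/5, c2: 0, c3: 2/5}, i.e. q = (8/5)P1 + (2/5)P3
```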
Example 4: Let V be the vector space of solutions to the differential equation

f'' + af' + bf = 0,

where a and b are constants. Let λ and µ be the roots of the polynomial x² + ax + b, and assume for simplicity that λ ≠ µ. I claim that {e^(λx), e^(µx)} is a basis of V.
Let D : C∞(R) → C∞(R) be the linear map of the vector space of infinitely differentiable functions given by Df = f'. The eigenvectors of D with eigenvalue λ are the solutions of the equation Df = λf, namely f = ke^(λx), where k is a constant. In other words, e^(λx) spans ker(D − λ). Since x² + ax + b = (x − λ)(x − µ), we have

f'' + af' + bf = (D² + aD + b)f = (D − λ)(D − µ)f = (D − µ)(D − λ)f.

If f'' + af' + bf = 0 then (D − µ)f ∈ ker(D − λ) and vice versa, so

(D − µ)f = k1e^(λx)    and    (D − λ)f = k2e^(µx),

for some constants k1 and k2. Solving these two equations for Df we get

(λ − µ)Df = λk1e^(λx) − µk2e^(µx).

Integrating both sides, we get

f = (1/(λ − µ)) (k1e^(λx) − k2e^(µx)).

(One can also get this directly, with no constant of integration to worry about, by simply subtracting the two displayed equations.) This shows that {e^(λx), e^(µx)} spans V.


We now show linear independence. Suppose

c1e^(λx) + c2e^(µx) = 0

for some constants c1 and c2. Differentiating, we get

λc1e^(λx) + µc2e^(µx) = 0.

Multiplying the first equation by µ and subtracting we get

c1(µ − λ)e^(λx) = 0.

Since λ ≠ µ this forces c1 = 0. Then c2e^(µx) = 0 so c2 = 0. This proves that {e^(λx), e^(µx)} is linearly independent and is therefore a basis of the solution space V.
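As a sanity check, the claim can be verified symbolically for concrete coefficients. In the SymPy sketch below, the values a = −3, b = 2 (so λ = 1, µ = 2) are illustrative choices, not taken from the text.

```python
import sympy as sp

x = sp.symbols('x')
a, b = -3, 2                              # x^2 - 3x + 2 = (x - 1)(x - 2)
lam, mu = sp.solve(x**2 + a*x + b, x)     # roots: lam = 1, mu = 2

# Each exponential solves f'' + a f' + b f = 0.
for r in (lam, mu):
    f = sp.exp(r * x)
    assert sp.simplify(f.diff(x, 2) + a * f.diff(x) + b * f) == 0

# Linear independence: the Wronskian is (mu - lam) e^((lam + mu) x), never zero.
W = sp.Matrix([[sp.exp(lam * x), sp.exp(mu * x)],
               [lam * sp.exp(lam * x), mu * sp.exp(mu * x)]]).det()
assert sp.simplify(W) != 0
```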

Exercise 20.1 Find a basis of the hyperplane in R4 with equation

x + 2y + 3z + 4w = 0.

Exercise 20.2 Let V be the vector space of solutions of the differential equation f'' − 2f' + f = 0.

(a) Show that the functions e^x and xe^x belong to V.

(b) Show that the functions e^x and xe^x are linearly independent.¹
Exercise 20.3 Let Pn be the vector space of polynomials of degree at most n. Let p0, p1, . . . , pn be polynomials with deg(pi) = i. Show that {p0, p1, . . . , pn} is a basis of Pn.
Exercise 20.4 On Pn, we have an analogue of the dot product, given by

⟨p, q⟩ = ∫_{−1}^{1} p(x)q(x) dx.

We say p and q are orthogonal if ⟨p, q⟩ = 0. In this problem you may use without proof the fact that distinct Legendre polynomials Pk and Pℓ are orthogonal: ⟨Pk, Pℓ⟩ = 0 if k ≠ ℓ.

(a) Suppose f ∈ Pn is orthogonal to each Legendre polynomial Pk, for all 0 ≤ k ≤ n. Show that f = 0.

(b) We know that any f ∈ Pn may be uniquely expressed as a linear combination of the Pk. Show that this unique linear combination is given by

f(x) = Σ_{k=0}^{n} (⟨f, Pk⟩ / ⟨Pk, Pk⟩) Pk(x).

[Hint: Let g be the difference of the two sides and show that ⟨g, Pk⟩ = 0 for all 0 ≤ k ≤ n. Then invoke part (a).]

Exercise 20.5 Let v1, . . . , vk be nonzero vectors in Rn which are pairwise orthogonal with respect to the dot product. Prove that {v1, . . . , vk} is linearly independent.

¹ In fact {e^x, xe^x} is a basis of V, but that is harder to prove.
