
Chapter 2

VECTOR SPACES
§1. Definition and examples

The notion of vector space is central to the study of linear algebra and represents one of the most important algebraic structures used in various branches of mathematics, as well as in the applied disciplines.
1.1 Definition. A non-empty set V is called a vector space (linear space) over the field K (a K-vector space) if the following conditions are satisfied:
I. (V, +) is an abelian group, namely:
a) (x + y) + z = x + (y + z), ∀ x, y, z ∈ V
b) ∃ 0 ∈ V such that ∀ x ∈ V, x + 0 = 0 + x = x
c) ∀ x ∈ V, ∃ -x ∈ V, x + (-x) = (-x) + x = 0
d) ∀ x, y ∈ V, x + y = y + x

II. The external composition law φ : K × V → V, φ(α, x) = αx, satisfies the following axioms:
a) α(x + y) = αx + αy
b) (α + β)x = αx + βx
c) α(βx) = (αβ)x
d) 1·x = x, ∀ α, β ∈ K, ∀ x, y ∈ V.
Conditions I and II represent the vector space axioms over the field K.
The elements of V are called vectors, the elements of the field K are called scalars, and the external composition law is called scalar multiplication.
If the commutative field K is the field of real numbers R or of complex numbers C, we speak of a real vector space, respectively a complex vector space.
Most of the time we will work with vector spaces over the field of real numbers and will simply call them "vector spaces"; in the other cases we will indicate the scalar field.

If we denote by 0V the null vector of the additive group V and by 0K the null scalar, then from the axioms defining the vector space V over the field K we obtain the following properties:
1.2 Corollary. If V is a vector space over the field K, then for ∀ x ∈ V, ∀ α ∈ K the following properties hold:
1) 0K x = 0V
2) α 0V = 0V
3) (-1) x = -x.

Proof: 1) Using axioms IIb and IId we have 0K x = (0K + 0K)x = 0K x + 0K x, hence 0K x = 0V.
2) Taking into account Ib and IIa, α 0V = α(0V + 0V) = α 0V + α 0V, from which we obtain α 0V = 0V.
3) From the additive group axioms of the field K, consequence 1) and axioms IIb, IId and Ic, we have x + (-1)x = [1 + (-1)]x = 0K x = 0V, from which we obtain (-1)x = -x.

Examples
1° Let K be a commutative field. Considering the additive abelian structure of the field K, the set K itself is a K-vector space. Moreover, if K' ⊂ K is a subfield, then K is a K'-vector space. The set of complex numbers C can be seen as a C-vector space, an R-vector space, or a Q-vector space.
2° The set Kn = K × K × ... × K, where K is a commutative field, is a K-vector space, called the arithmetic (standard) space, with respect to the operations: ∀ x, y ∈ Kn, ∀ α ∈ K, x = (x1, x2, ..., xn), y = (y1, y2, ..., yn),
x + y := (x1 + y1, x2 + y2, ..., xn + yn)
αx := (αx1, αx2, ..., αxn)

3° The set Mm×n(K) of m × n matrices is a K-vector space with respect to the operations A + B := (aij + bij) and αA := (αaij), ∀ A = (aij), B = (bij) ∈ Mm×n(K), ∀ α ∈ K.

4° The set K[X] of polynomials with coefficients in the field K is a K-vector space with respect to the operations:
f + g := (a0 + b0, a1 + b1, ...), αf := (αa0, αa1, ...),
∀ f = (a0, a1, ...), g = (b0, b1, ...) ∈ K[X], ∀ α ∈ K.

5° The set of solutions of a homogeneous linear system of equations forms a vector space over the field K of the coefficients of the system. The solutions of a system of m equations in n unknowns, seen as elements of Kn (n-tuples), can be added and multiplied by scalars following the sum and scalar product defined on Kn.
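This closure of the solution set under the operations of Kn can be verified numerically; a sketch with NumPy, for an illustrative homogeneous system of m = 2 equations in n = 3 unknowns (the matrix A below is an arbitrary example):

```python
import numpy as np

# Illustrative homogeneous system A x = 0, with 2 equations and 3 unknowns.
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  3.0]])

s1 = np.array([7.0, -3.0, 1.0])   # a particular solution: A @ s1 = 0
s2 = 2.5 * s1                     # a scalar multiple is again a solution
assert np.allclose(A @ s1, 0) and np.allclose(A @ s2, 0)

# Sums and scalar multiples of solutions are again solutions:
assert np.allclose(A @ (s1 + s2), 0)
assert np.allclose(A @ (4.0 * s1), 0)
```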
6° The set V3 of free vectors of the point space of elementary geometry is an R-vector space.
To build this set we consider the geometric space E3 and the set M = E3 × E3 = {(A, B) | A, B ∈ E3}. The elements of the set M are called bipoints or oriented segments and will be denoted by AB. The point A is called the origin and B the extremity of the segment AB. When the origin and the extremity coincide we obtain the null segment (A, A). The line determined by the points A and B is called the support line of the segment AB. Two oriented segments have the same direction if their support lines are parallel or coincide.
Two non-zero oriented segments AB and CD with the same direction have the same sense if their extremities lie in the same half-plane determined by the line joining their origins (fig. 1).
fig. 1
The length (module, or norm) of an oriented segment AB is defined as the geometric length of the non-oriented segment [AB], namely the distance from the point A to the point B, and is denoted by |AB| (or ||AB||). The null segment has length zero.
On the set M we introduce the equipollence relation "~". Two oriented segments AB and CD are called equipollent if they have the same direction, the same sense and the same length (fig. 2).
fig. 2
It is easy to verify that the equipollence relation is an equivalence relation on the set M (it is reflexive, symmetric and transitive).
The set of equivalence classes with respect to this relation,
M/~ = {AB | A, B ∈ E3} = V3,
defines the set of free vectors of the geometric space E3. The equivalence class of the oriented segment AB is denoted by AB = v and is called a free vector; an oriented segment AB ∈ v is a representative of the free vector v at the point A. The direction, sense and length, which are common to all elements of an equivalence class, define the direction, sense and length of the free vector. For the length of a free vector we use the notation |v| or ||v||. The free vector of length zero is called the null vector and is denoted by 0. A free vector of length one is called a unit vector, or versor.
Two free vectors u and v are equal, u = v, if their representatives are two equipollent oriented segments.
Two free vectors with the same direction are called collinear vectors. Two collinear vectors with the same length and opposite sense are called opposite vectors.
Three free vectors are called coplanar if their corresponding oriented segments are parallel to a plane.
The set V3 can be organized as an abelian additive group.
If the free vectors u and v are represented by the oriented segments AB and AC, then the vector represented by the oriented segment AD, where D is the fourth vertex of the parallelogram built on AB and AC, defines the sum of the vectors u and v, denoted w = u + v (fig. 3).
fig. 3
The rule which defines the sum of two free vectors u and v is called the parallelogram rule (or the triangle rule).
The sum of two free vectors, "+" : V3 × V3 → V3, (u, v) ↦ u + v, is a well-defined internal composition law (it does not depend on the choice of representatives). The abelian additive group axioms are easy to verify.
The external composition law
φ : R × V3 → V3, φ(α, v) = αv,
where the vector αv has the same direction as v, the same sense as v if α > 0, the opposite sense if α < 0, and ||αv|| = |α|·||v||, satisfies the axioms of group II from the definition of the vector space.
In conclusion, the set of free vectors is a real vector space.

§ 2. Vector subspaces

Let V be a vector space over the field K.

2.1 Definition. A non-empty subset U ⊂ V is called a vector subspace of V if the algebraic operations of V induce on U a structure of K-vector space.

2.2 Theorem. If U is a subset of the K-vector space V, then the following statements are equivalent:
1° U is a vector subspace of V;
2° ∀ x, y ∈ U, ∀ α ∈ K we have
a) x + y ∈ U
b) αx ∈ U;
3° ∀ x, y ∈ U, ∀ α, β ∈ K ⇒ αx + βy ∈ U.

Proof:
1° ⇒ 2°: if U ⊂ V is a subspace, then for ∀ x, y ∈ U we have x + y ∈ U, and αx ∈ U for ∀ α ∈ K, because the two operations induce on the subset U a vector space structure.
2° ⇒ 3°: ∀ x, y ∈ U, ∀ α, β ∈ K, by b) we have αx ∈ U and βy ∈ U, hence by a) αx + βy ∈ U.
3° ⇒ 1°: for ∀ x, y ∈ U, taking α = 1, β = -1, it results that x - y ∈ U, which proves that U ⊂ V is an abelian subgroup. On the other hand, for ∀ x ∈ U, ∀ α ∈ K, taking β = 0 we obtain αx ∈ U, and the axioms II from the definition of the vector space are verified immediately, so the subset U ⊂ V possesses a vector space structure.

Examples
1° The set {0} ⊂ V is a subspace of V, called the null subspace of V. Any subspace different from the vector space V and from the null subspace {0} is called a proper subspace.
2° The set of symmetric (respectively antisymmetric) matrices of order n is a subspace of the set of square matrices of order n.
3° The set of polynomials with real coefficients of degree ≤ n, Rn[X] = {f ∈ R[X] | deg f ≤ n}, represents a vector subspace of the vector space of polynomials with real coefficients.
4° The subsets
Rx = {(x, 0) | x ∈ R} ⊂ R2 and Ry = {(0, y) | y ∈ R} ⊂ R2
are vector subspaces of the arithmetic space R2. More generally, the set of points of any line passing through the origin of the space R2 determines a vector subspace. These vector subspaces represent the solution sets of certain homogeneous linear equations in two unknowns.

2.3 Proposition. Let V1 and V2 be two subspaces of the K-vector space V. The subsets V1 ∩ V2 ⊂ V and V1 + V2 = {v ∈ V | v = v1 + v2, v1 ∈ V1, v2 ∈ V2} ⊂ V are vector subspaces.

Proof. For ∀ x, y ∈ V1 ∩ V2 we have x, y ∈ V1 and x, y ∈ V2; since V1 and V2 are vector subspaces of V, it results that for ∀ α, β ∈ K we have αx + βy ∈ V1 and αx + βy ∈ V2, so αx + βy ∈ V1 ∩ V2. Using Theorem 2.2 we obtain the first part of the proposition.
If u = u1 + u2 ∈ V1 + V2 and v = v1 + v2 ∈ V1 + V2, then for ∀ α, β ∈ K,
αu + βv = α(u1 + u2) + β(v1 + v2) = (αu1 + βv1) + (αu2 + βv2).
Since V1 and V2 are vector subspaces, αu1 + βv1 ∈ V1 and αu2 + βv2 ∈ V2, q.e.d.

Observation. The union V1 ∪ V2 ⊂ V is in general not a vector subspace.

Example. The vector subspaces Rx and Ry defined in example 4° verify the relations:
Rx ∩ Ry = {0} and Rx + Ry = R2.
Indeed, if (x, y) ∈ Rx ∩ Ry, then (x, y) ∈ Rx and (x, y) ∈ Ry, hence y = 0 and x = 0, which proves that the subspace Rx ∩ Ry consists only of the null vector.
For ∀ (x, y) ∈ R2 there exist (x, 0) ∈ Rx and (0, y) ∈ Ry such that (x, y) = (x, 0) + (0, y), which proves that R2 ⊂ Rx + Ry. The reverse inclusion is obvious.
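The decomposition R2 = Rx + Ry with Rx ∩ Ry = {0} can be illustrated with a short NumPy sketch (the sample vector is arbitrary):

```python
import numpy as np

def split(v):
    """Decompose (x, y) into its components from Rx and Ry."""
    x, y = v
    return np.array([x, 0.0]), np.array([0.0, y])

v = np.array([3.0, -2.0])
vx, vy = split(v)
assert np.allclose(vx + vy, v)      # (x, y) = (x, 0) + (0, y)
# A vector lying in both Rx and Ry must have y = 0 and x = 0,
# so Rx ∩ Ry contains only the null vector.
```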

2.4 Proposition. Let V1, V2 ⊂ V be two vector subspaces and v ∈ V1 + V2. The decomposition v = v1 + v2 is unique if and only if V1 ∩ V2 = {0}.

Proof: The necessity of the condition is proved by reductio ad absurdum. Suppose V1 ∩ V2 ≠ {0}; then there exists v ∈ V1 ∩ V2, v ≠ 0, which can be written both as v = 0 + v and as v = v + 0, contradicting the uniqueness of the decomposition; hence V1 ∩ V2 = {0}.
To prove the sufficiency of the condition, assume v = v1 + v2 = v'1 + v'2. Since v1, v'1 ∈ V1 and v2, v'2 ∈ V2, the vector u = v1 - v'1 = v'2 - v2 is contained in V1 ∩ V2. Since V1 ∩ V2 = {0}, it results that v1 = v'1 and v2 = v'2.
If V1 and V2 are two vector subspaces of the vector space V and V1 ∩ V2 = {0}, then the sum V1 + V2 is called a direct sum and is denoted by V1 ⊕ V2. Moreover, if V1 ⊕ V2 = V, then V1 and V2 are called supplementary subspaces. If V1 ⊂ V is a given vector subspace and there exists a unique subspace V2 ⊂ V such that V = V1 ⊕ V2, then V2 is called the algebraic complement of the subspace V1.

Example. The vector subspaces Rx and Ry, satisfying the properties Rx ∩ Ry = {0} and Rx + Ry = R2, are supplementary vector subspaces, and the arithmetic space R2 can be represented in the form R2 = Rx ⊕ Ry. Consequently, any vector (x, y) ∈ R2 can be written in a unique way as the sum of the vectors (x, 0) ∈ Rx and (0, y) ∈ Ry, namely (x, y) = (x, 0) + (0, y).

Observation. The notions of sum and direct sum can be extended to a finite number of terms.
2.5 Definition. Let V be a vector space over the field K and S a non-empty subset of it. A vector v ∈ V of the form
v = α1 x1 + α2 x2 + ... + αp xp, αi ∈ K, xi ∈ S (2.1)
is called a finite linear combination of elements of S.

2.6 Theorem. If S is a non-empty subset of V, then the set of all finite linear combinations of elements of S, denoted L(S) or <S>, is a vector subspace of V, called the subspace generated by the set S, or the linear span of S.

Proof: We apply Theorem 2.2: for ∀ x = Σ_{i=1}^{p} λi xi, y = Σ_{j=1}^{q} μj yj ∈ L(S) and ∀ α, β ∈ K,
αx + βy = α Σ_{i=1}^{p} λi xi + β Σ_{j=1}^{q} μj yj = Σ_{i=1}^{p} (αλi) xi + Σ_{j=1}^{q} (βμj) yj
represents a finite linear combination of elements of S, so αx + βy ∈ L(S).

2.7 Consequence. If V1 and V2 are two vector subspaces of the vector space V, then L(V1 ∪ V2) = V1 + V2.

2.8 Definition. A subset S ⊂ V is called a system of generators for the vector space V if the subspace generated by the subset S coincides with V, that is L(S) = V.

If the subset S = {x1, x2, ..., xn} is finite and for any vector v ∈ V there exist λi ∈ K, i = 1, ..., n, such that v = Σ_{i=1}^{n} λi xi, we say that the vector space V is finitely generated.
A generalization of the notion of vector subspace is given by the notion of linear variety.

2.9 Definition. A subset L ⊂ V is called a linear variety in the vector space V if there exists a vector x0 ∈ L such that the set VL = {v = x - x0 | x ∈ L} is a vector subspace of V.

The subspace VL is called the director subspace of the linear variety L.

Example. Consider the standard vector space R2 endowed with the coordinate axes xOy (fig. 4). Let L be a line passing through the point x0 = (a0, b0) ∈ L. The point v = x - x0 = (a - a0, b - b0), ∀ x = (a, b) ∈ L, lies on the line parallel to L ⊂ R2 which passes through the origin.
fig. 4

Finally, the subset of points of the vector space R2 situated on any line (L) of the plane represents a linear variety having as director subspace the line which passes through the origin and is parallel to the line (L).
A vector subspace is a particular case of linear variety: it is the linear variety of the vector space V which contains the null vector of the vector space V (x0 = 0).
Let V be a K-vector space and S = {x1, x2, ..., xp} ⊂ V a subset.

2.10 Definition. The subset of vectors S = {x1, x2, ..., xp} ⊂ V is called linearly independent (free, or: the vectors x1, x2, ..., xp are linearly independent) if the equality α1 x1 + α2 x2 + ... + αp xp = 0, αi ∈ K, i = 1, ..., p, holds only for α1 = α2 = ... = αp = 0.

A set (finite or not) of vectors of a vector space is linearly independent if any finite system of its vectors is a system of linearly independent vectors.

2.11 Definition. The subset of vectors S = {x1, x2, ..., xp} ⊂ V is called linearly dependent (tied, or: the vectors x1, x2, ..., xp are linearly dependent) if there exist α1, α2, ..., αp ∈ K, not all zero, such that α1 x1 + α2 x2 + ... + αp xp = 0.

Remark: If the vanishing of a finite linear combination formed with the vectors x1, x2, ..., xp ∈ V allows us to express one vector in terms of the others (there exists at least one non-null coefficient), then the vectors x1, x2, ..., xp are linearly dependent; otherwise they are linearly independent.
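In the arithmetic space Kn, linear independence can be tested by computing the rank of the matrix whose columns are the given vectors: the vectors are independent exactly when the rank equals their number. A sketch with NumPy (the sample vectors are illustrative):

```python
import numpy as np

def linearly_independent(vectors):
    """Vectors are independent iff the rank of the column matrix equals their count."""
    M = np.column_stack(vectors)
    return int(np.linalg.matrix_rank(M)) == len(vectors)

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
assert linearly_independent([x1, x2])               # independent
assert not linearly_independent([x1, x2, x1 + x2])  # x1 + x2 is a combination
```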

2.12 Theorem. If S = {x1, x2, ..., xp} ⊂ V is a linearly independent set and L(S) is the linear span of S, then any set of p + 1 elements from L(S) is linearly dependent.

Proof. Let yi = Σ_{j=1}^{p} aij xj, i = 1, 2, ..., p + 1, be vectors of the linear span L(S).
The relation λ1 y1 + λ2 y2 + ... + λp+1 yp+1 = 0 is equivalent to Σ_{j=1}^{p} (Σ_{i=1}^{p+1} λi aij) xj = 0. Since the vectors x1, x2, ..., xp are linearly independent, we obtain for j = 1, ..., p the relations λ1 a1j + λ2 a2j + ... + λp+1 ap+1,j = 0, which represent a homogeneous system of p linear equations with p + 1 unknowns (λi); it admits solutions different from the trivial solution, which means that the vectors y1, y2, ..., yp+1 are linearly dependent, q.e.d.

§3. Base and dimension

Let V be a K-vector space.


3.1 Definition. A subset B (finite or not) of vectors of V is called a base of the vector space V if:
1) B is linearly independent;
2) B represents a system of generators for V.

The vector space V is called finitely generated, or finite dimensional, if there exists a finite system of generators of it.

3.2 Theorem. If V ≠ {0} is a finitely generated vector space and S is a system of generators for V, then there exists a base B ⊂ S of the vector space V. (From any finite system of generators of a vector space a base can be extracted.)

Proof: First we show that S also contains non-zero vectors. Suppose S = {0}; then any x ∈ V \ {0} could be written as x = λ·0 = 0 (S being a system of generators), a contradiction, so S ≠ {0}.
Let now x1 ∈ S be a non-zero vector. The set L = {x1} ⊂ S is a linearly independent system. We continue adding non-null vectors from S for which the subset L remains linearly independent. Suppose S contains n elements; then S has 2^n finite subsets, so after a finite number of steps we find L ⊂ S, a system of linearly independent vectors such that for ∀ L' ⊂ S with L ⊊ L', L' is a linearly dependent subset (L is maximal with respect to the order relation of inclusion).
L is a system of generators for V. Indeed, let L = {x1, x2, ..., xm}. For m = n we have L = S, a system of generators. If m < n, then L' = L ∪ {xm+1}, ∀ xm+1 ∈ S \ L, is a system of linearly dependent vectors (L is maximal), so xm+1 = Σ_{i=1}^{m} λi xi, xi ∈ L. It results that ∀ x ∈ V, x = Σ_{i=1}^{m} λi xi, λi ∈ K, xi ∈ L. The set L satisfies the conditions of Definition 3.1, so it forms a base of the vector space V, q.e.d.

3.3 Consequence. If V ≠ {0}, S ⊂ V is a finite system of generators and L1 ⊂ S is a linearly independent system, then there exists a base B of the vector space V such that L1 ⊂ B ⊂ S.

A vector space V is finite dimensional if it has a finite base or if V = {0}; otherwise it is called infinite dimensional.
Examples
1° In the arithmetic space Kn the subset of vectors B = {e1, e2, ..., en}, where e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 0, 1), represents a base of the vector space Kn, called the canonical base.
2° In the vector space R[X] of polynomials with real coefficients, the subset B = {1, x, x², ..., xⁿ, ...} constitutes a base. R[X] is an infinite dimensional space.

3.4 Proposition. In a finitely generated K-vector space V, any two bases have the same number of elements.

Proof. Consider in the finitely generated vector space V two bases B and B', with card B = n and card B' = n'. Using consequence 3.3 we obtain successively n ≤ n' and n' ≤ n, so n = n'.
The last proposition allows the introduction of the notion of dimension of a vector space.

3.5 Definition. The dimension of a finitely generated vector space is the number of vectors of a base of it, denoted dim V. The null space {0} has dimension 0.

Observation. If V is a vector space of dimension dim V = n, then:
a) a system of n vectors is a base ⇔ it is linearly independent;
b) a system of n vectors is a base ⇔ it is a system of generators;
c) any system of m > n vectors is linearly dependent.

We will denote an n-dimensional K-vector space by Vn, dim Vn = n.

3.6 Proposition. If B = {e1, e2, ..., en} is a base of the K-vector space Vn, then any vector x ∈ Vn admits a unique expression
x = Σ_{i=1}^{n} λi ei, λi ∈ K.

Proof. Suppose that x ∈ Vn had another expression x = Σ_{i=1}^{n} μi ei. Equating the two expressions we obtain Σ_{i=1}^{n} (λi - μi) ei = 0, a linear combination of the linearly independent vectors of the base, which forces λi = μi, ∀ i = 1, ..., n.
The scalars λ1, λ2, ..., λn are called the coordinates of the vector x in the base B, and the bijection f : Vn → Kn, x ↦ (λ1, λ2, ..., λn), is called a system of coordinates on Vn.
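Finding the coordinates of a vector in a base amounts to solving a linear system whose matrix has the base vectors as columns. A NumPy sketch in R2 (the base and the vector are illustrative):

```python
import numpy as np

e1 = np.array([1.0, 1.0])
e2 = np.array([1.0, -1.0])
B = np.column_stack([e1, e2])       # base matrix, det(B) != 0

x = np.array([3.0, 1.0])
lam = np.linalg.solve(B, x)         # unique coordinates, by Proposition 3.6
assert np.allclose(lam[0] * e1 + lam[1] * e2, x)
print(lam)                          # [2. 1.]
```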

3.7 Theorem (Steinitz, the exchange theorem). If B = {e1, e2, ..., en} is a base of the vector space Vn and S = {f1, f2, ..., fp} is a system of linearly independent vectors of Vn, then p ≤ n and, after a possible renumbering of the vectors of the base B, the system B' = {f1, f2, ..., fp, ep+1, ..., en} is also a base of Vn.

Proof: Applying consequence 3.3 and the fact that any two bases have the same cardinal, it results that p ≤ n.
For the second part of the theorem we use complete mathematical induction. For p = 1, the vector f1 ∈ Vn is written in the base B in the form f1 = Σ_{i=1}^{n} λi ei. Since f1 ≠ 0, there exists at least one λi ≠ 0. Assuming λ1 ≠ 0, we have
e1 = (1/λ1) f1 - (λ2/λ1) e2 - ... - (λn/λ1) en,
which means that {f1, e2, ..., en} is a system of n generators of the space Vn, hence a base. Assuming now that {f1, f2, ..., fp-1, ep, ..., en} is a base, the vector fp ∈ S can be expressed in the form fp = μ1 f1 + μ2 f2 + ... + μp-1 fp-1 + μp ep + ... + μn en. In this relation at least one coefficient among μp, μp+1, ..., μn is non-null, because otherwise the set S would be linearly dependent. After a possible renumbering of the vectors ep, ep+1, ..., en, we can assume μp ≠ 0 and obtain
ep = -(μ1/μp) f1 - (μ2/μp) f2 - ... - (μp-1/μp) fp-1 + (1/μp) fp - (μp+1/μp) ep+1 - ... - (μn/μp) en,
from which it results that {f1, f2, ..., fp, ep+1, ..., en} is a system of n generators of the n-dimensional space Vn, hence a base of Vn, q.e.d.
3.8 Consequence (completion theorem). Any system of linearly independent vectors of a vector space Vn can be completed to a base of Vn.

3.9 Consequence. Any subspace V' of a finitely generated vector space Vn admits at least one supplementary subspace.
3.10 Theorem (Grassmann, the dimension theorem). If V1 and V2 are two vector subspaces of the K-vector space Vn, then
dim(V1 + V2) = dim V1 + dim V2 - dim(V1 ∩ V2) (3.1)

Proof: Let {f1, f2, ..., fr} be a base of the subspace V1 ∩ V2 ⊂ V1. By consequence 3.8 we can complete this system of linearly independent vectors to a base of V1, say B1 = {f1, f2, ..., fr, er+1, ..., es}. Similarly, in the vector space V2 we consider the base B2 = {f1, f2, ..., fr, gr+1, ..., gp}. It is easy to show that the subset B = {f1, f2, ..., fr, er+1, ..., es, gr+1, ..., gp} is a system of generators for V1 + V2. The subset B is also linearly independent. Indeed,
Σ_{i=1}^{r} αi fi + Σ_{i=r+1}^{s} βi ei + Σ_{i=r+1}^{p} γi gi = 0 ⇒ Σ_{i=1}^{r} αi fi + Σ_{i=r+1}^{s} βi ei = - Σ_{i=r+1}^{p} γi gi,
which means that the vector v = Σ_{i=r+1}^{p} γi gi belongs to V1 ∩ V2, because the left member represents a vector of the subspace V1 and the right one a vector of V2. Since v ∈ V1 ∩ V2, we can write v = Σ_{i=r+1}^{p} γi gi = Σ_{i=1}^{r} δi fi, hence Σ_{i=r+1}^{p} γi gi - Σ_{i=1}^{r} δi fi = 0, and since B2 is a base of V2 it results that γr+1 = γr+2 = ... = γp = δ1 = δ2 = ... = δr = 0.
Using this result in the first relation, and taking into account that B1 is a base of V1, it results that α1 = α2 = ... = αr = βr+1 = βr+2 = ... = βs = 0, so B is linearly independent, hence a base of V1 + V2.
Under these conditions we can write dim(V1 + V2) = r + (s - r) + (p - r) = s + p - r = dim V1 + dim V2 - dim(V1 ∩ V2), q.e.d.
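Formula (3.1) can be checked numerically: if the columns of matrices U and W span V1 and V2, then dim(V1 + V2) is the rank of the concatenated matrix, and, when U and W have independent columns, dim(V1 ∩ V2) equals the nullity of [U, -W]. A sketch with NumPy on illustrative subspaces of R4:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])
c = np.array([0.0, 0.0, 1.0, 0.0])

U = np.column_stack([a, b])        # V1 = span{a, b}, dim V1 = 2
W = np.column_stack([b, c])        # V2 = span{b, c}, dim V2 = 2

dim_V1 = int(np.linalg.matrix_rank(U))
dim_V2 = int(np.linalg.matrix_rank(W))
dim_sum = int(np.linalg.matrix_rank(np.hstack([U, W])))   # dim(V1 + V2)

# Pairs (alpha, beta) with U@alpha = W@beta parametrize V1 ∩ V2,
# so dim(V1 ∩ V2) is the nullity of M = [U, -W] (columns independent).
M = np.hstack([U, -W])
dim_cap = M.shape[1] - int(np.linalg.matrix_rank(M))

assert dim_sum == dim_V1 + dim_V2 - dim_cap    # Grassmann's formula (3.1)
print(dim_sum, dim_cap)                        # 3 1
```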

3.11 Consequence. If the vector space Vn is represented in the form Vn = V1 ⊕ V2, then dim Vn = dim V1 + dim V2.

Consider a K-vector space Vn and let B = {e1, e2, ..., en} and B' = {e'1, e'2, ..., e'n} be two bases of Vn. Any vector of B' can be expressed in terms of the elements of the other base, so we have the relations:

e'1 = a11 e1 + a21 e2 + ... + an1 en
e'2 = a12 e1 + a22 e2 + ... + an2 en
..............................................
e'n = a1n e1 + a2n e2 + ... + ann en,

that is, e'j = Σ_{i=1}^{n} aij ei, ∀ j = 1, ..., n. (3.2)

Denoting B = t[e1, e2, ..., en], B' = t[e'1, e'2, ..., e'n] and by

A = ( a11 a12 ... a1n )
    ( a21 a22 ... a2n )
    ( .................. )
    ( an1 an2 ... ann )

the n × n matrix whose j-th column consists of the coordinates of the vector e'j, j = 1, ..., n, relation (3.2) can be written in the form

B' = tA B (3.2')

Let now x ∈ Vn be a vector expressed in the two bases of the vector space Vn through the relations:

x = Σ_{i=1}^{n} xi ei and respectively x = Σ_{j=1}^{n} x'j e'j (3.3)

Taking into account relations (3.2), we obtain

x = Σ_{j=1}^{n} x'j e'j = Σ_{j=1}^{n} x'j ( Σ_{i=1}^{n} aij ei ) = Σ_{i=1}^{n} ( Σ_{j=1}^{n} aij x'j ) ei.

Since B is a base, the equality Σ_{i=1}^{n} ( Σ_{j=1}^{n} aij x'j ) ei = Σ_{i=1}^{n} xi ei is equivalent to

xi = Σ_{j=1}^{n} aij x'j, ∀ i = 1, ..., n (3.4)

relations which characterize the transformation of the coordinates of a vector under a change of base of the vector space Vn.
If we denote by X = t[x1, x2, ..., xn] the column matrix of the coordinates of the vector x ∈ Vn in the base B and respectively by X' = t[x'1, x'2, ..., x'n] the coordinate matrix of the same vector x ∈ Vn in the base B', we can write

X = AX' (3.4')

The matrix A = (aij) is called the passage matrix from the base B to the base B'. In conclusion, in a finite dimensional vector space we have the following change of base theorem:

3.12 Theorem. If in the vector space Vn the change from the base B to the base B' is given by the relation B' = tA B, then the relation between the coordinates of a vector x ∈ Vn in the two bases is given by X = AX'.
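Theorem 3.12 is easy to verify numerically; a NumPy sketch in R2, with an illustrative passage matrix (e'1 = e1 + e2, e'2 = e1 - e2 relative to the canonical base):

```python
import numpy as np

# Columns of A are the coordinates of the new base vectors e'1, e'2.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])

X_new = np.array([2.0, 1.0])       # coordinates X' of x in the base B'
X_old = A @ X_new                  # relation X = A X'
print(X_old)                       # [3. 1.]

# Recovering the new coordinates amounts to solving A X' = X:
assert np.allclose(np.linalg.solve(A, X_old), X_new)
```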
Let Vn be a vector space and B = {e1, e2, ..., en} a base of it. If the vectors v1, v2, ..., vp ∈ Vn, p ≤ n, are expressed through the relations vj = Σ_{i=1}^{n} aij ei, then the matrix A = (aij), having as columns the coordinates of the vectors v1, v2, ..., vp, is called the passage matrix from the vectors e1, e2, ..., en to the vectors v1, v2, ..., vp.

3.13 Theorem. The rank of the matrix A is equal to the maximum number of linearly independent column vectors.

Proof. Suppose that rank A = r, which means that (after a possible renumbering of rows and columns) the minor of order r

Δ = | a11 a12 ... a1r |
    | a21 a22 ... a2r |
    | .................. |
    | ar1 ar2 ... arr |

is non-zero. Δ ≠ 0 implies the linear independence of the column vectors v1, v2, ..., vr.
Let vk, r < k ≤ p, be a column and consider the determinants

Δi = | a11 ... a1r a1k |
     | .................. |
     | ar1 ... arr ark |
     | ai1 ... air aik |, i = 1, ..., n,

obtained by bordering Δ with the corresponding elements of row i and of column k. Each of these determinants is null: for i ≤ r, Δi has two identical rows, and for i > r, the order of Δi is greater than the rank r. Expanding along the last row we have:
ai1 Γi1 + ai2 Γi2 + ... + air Γir + aik Δ = 0 ⇒ aik = Σ_{j=1}^{r} λj aij, λj = -Γij/Δ, i = 1, ..., n,
where the cofactors Γij do not depend on i. These scalar relations express the fact that any column vk, r < k ≤ p, is a linear combination of the first r columns of the matrix A, so any r + 1 columns are linearly dependent.
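Theorem 3.13 can be illustrated with NumPy's rank computation (the matrix below is an arbitrary example whose third column is the sum of the first two):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])    # column 3 = column 1 + column 2

r = int(np.linalg.matrix_rank(A))
print(r)                           # 2

# The first two columns are linearly independent ...
assert int(np.linalg.matrix_rank(A[:, :2])) == 2
# ... while all three columns together are linearly dependent:
assert r < 3
```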

3.14 Consequence. If B = {e1, e2, ..., en} is a base of Vn, then the set B' = {e'1, e'2, ..., e'n}, e'j = Σ_{i=1}^{n} aij ei, j = 1, ..., n, is a base of Vn if and only if the passage matrix A = (aij) is non-singular.

Let V and W be two vector spaces over the field K.

3.15 Definition. An application T : V → W with the properties:
T(x + y) = T(x) + T(y), ∀ x, y ∈ V
T(αx) = αT(x), ∀ x ∈ V, ∀ α ∈ K
is called a vector space morphism, or a linear transformation.
A bijective linear transformation between two vector spaces is called a vector space isomorphism.

3.16 Theorem. Two vector spaces V and W over the field K, of finite dimension, are isomorphic if and only if they have the same dimension.

A system of coordinates on a finite dimensional vector space Vn, f : Vn → Kn, x ∈ Vn ↦ (x1, x2, ..., xn) ∈ Kn, is an isomorphism of vector spaces.

§4. Euclidean vector spaces

Let V be a real vector space.
If, besides the vector space structure, we add the notion of scalar product, then in a vector space we can define the notions of length of a vector, angle of two vectors, orthogonality, etc.

4.1 Definition. An application g : V × V → R, g(x, y) = <x, y>, with the properties:
a) <x, y + z> = <x, y> + <x, z>, ∀ x, y, z ∈ V
b) <λx, y> = λ<x, y>, ∀ x, y ∈ V, ∀ λ ∈ R
c) <x, y> = <y, x>, ∀ x, y ∈ V
d) <x, x> ≥ 0 and <x, x> = 0 ⇔ x = 0, ∀ x ∈ V
is called a scalar product on the vector space V.

4.2 Corollary. If V is a Euclidean vector space, then we have the following relations:
1) <x + y, z> = <x, z> + <y, z>
2) <x, λy> = λ<x, y>, ∀ x, y, z ∈ V, ∀ λ ∈ R.

4.3 Definition. A vector space V on which a scalar product is defined is called a Euclidean vector space (or: V possesses a Euclidean structure).

4.4 Theorem. If the vector space V is a Euclidean vector space, then we have the Cauchy-Schwarz inequality:
<x, y>² ≤ <x, x> · <y, y> (4.1)
with equality if and only if the vectors x and y are linearly dependent.

Proof: If x = 0 or y = 0, then relation (4.1) holds with equality. Suppose x, y ∈ V are non-null and consider the vector z = λx + μy, λ, μ ∈ R. From the scalar product properties we obtain:
0 ≤ <z, z> = <λx + μy, λx + μy> = λ²<x, x> + 2λμ<x, y> + μ²<y, y>,
with equality for z = 0. If we take λ = <y, y> > 0, then after dividing by λ we obtain <x, x><y, y> + 2μ<x, y> + μ² ≥ 0, and for μ = -<x, y> the inequality becomes <x, x><y, y> - <x, y>² ≥ 0, q.e.d.
Examples
1° In the arithmetic space Rn, for any two elements x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn), the operation

<x, y> := x1y1 + x2y2 + ... + xnyn (4.2)

defines a scalar product. The scalar product defined in this way, called the usual scalar product, endows the arithmetic space Rn with a Euclidean structure.
2° The set C([a, b]) of continuous functions on the interval [a, b] is a Euclidean vector space with respect to the scalar product defined by

<f, g> = ∫_a^b f(x) g(x) dx (4.3)
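Both scalar products can be evaluated numerically; a sketch with NumPy, checking (4.2) on sample vectors together with the Cauchy-Schwarz inequality (4.1), and approximating (4.3) on a grid (the functions f(t) = t, g(t) = t² on [0, 1] are illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

# Usual scalar product (4.2) on R^n:
s = float(np.dot(x, y))            # 1*3 + 2*0 + 2*4 = 11
assert s == 11.0
# Cauchy-Schwarz inequality (4.1): <x, y>^2 <= <x, x> <y, y>
assert s**2 <= float(np.dot(x, x)) * float(np.dot(y, y))

# Scalar product (4.3) on C([0, 1]), approximated by the trapezoidal rule:
t = np.linspace(0.0, 1.0, 100001)
h = t * t**2                       # f(t) g(t) = t^3
fg = float(np.sum((h[1:] + h[:-1]) / 2 * np.diff(t)))
assert abs(fg - 0.25) < 1e-6       # exact value: integral of t^3 over [0,1] is 1/4
```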

4.5 Theorem. In a Euclidean vector space V, the function || || : V → R+, defined by
||x|| = √<x, x>, ∀ x ∈ V (4.4)
is a norm on V, that is, it satisfies the axioms:
a) ||x|| > 0, ∀ x ≠ 0, and ||x|| = 0 ⇔ x = 0
b) ||λx|| = |λ| · ||x||, ∀ x ∈ V, ∀ λ ∈ R
c) ||x + y|| ≤ ||x|| + ||y|| (triangle inequality).

Proof: Conditions a) and b) result immediately from the definition of the norm and the scalar product properties.
Axiom c) results using the Cauchy-Schwarz inequality:
||x + y||² = <x + y, x + y> = <x, x> + 2<x, y> + <y, y> ≤ <x, x> + 2√(<x, x><y, y>) + <y, y> = (||x|| + ||y||)²,
from which the triangle inequality follows.
A space on which a "norm" function is defined is called a normed space.
The norm defined by a scalar product is called the Euclidean norm.
Example: In the arithmetic space Rn the norm of a vector x = (x1, x2, ..., xn) is given by
||x|| = √(x1² + x2² + ... + xn²) (4.5)
A vector e ∈ V is called a versor if ||e|| = 1. The notion of versor allows any x ∈ V, x ≠ 0, to be written as x = ||x|| e, ||e|| = 1, where the direction of e is the same as the direction of x.
The Cauchy-Schwarz inequality, |<x, y>| ≤ ||x|| · ||y||, allows us to define the angle of two non-null vectors as the angle θ ∈ [0, π] given by
cos θ = <x, y> / (||x|| · ||y||) (4.6)

4.6 Theorem. In a normed vector space V, the real function d : V × V → R+, defined by d(x, y) = ||x - y||, is a metric on V, that is, it satisfies the axioms:
a) d(x, y) ≥ 0, and d(x, y) = 0 ⇔ x = y, ∀ x, y ∈ V
b) d(x, y) = d(y, x), ∀ x, y ∈ V
c) d(x, y) ≤ d(x, z) + d(z, y), ∀ x, y, z ∈ V.

Example: In the arithmetic vector space Rn the distance d is given by

d(x, y) = ||x - y|| = √((x1 - y1)² + (x2 - y2)² + ... + (xn - yn)²) (4.7)

An arbitrary set endowed with a distance is called a metric space.
If the norm defined on the vector space V is Euclidean, then the distance defined by it is called the Euclidean metric.
In conclusion, any Euclidean space is a metric space.
A Euclidean structure on V induces on any subspace V' ⊂ V a Euclidean structure.
An euclidean structure on V induces on any subspace V’  V an
euclidean structure.
The scalar product defined on a vector space V permits the
introduction of ortoghonality notion.
4.7 Definition. In vector space V vectors x, y  V are called
orthogonal if < x, y > = 0 .
A set S  V is said to be ortoghonal if its vectors are ortoghonal
two by two.
19
An ortoghonal set is called orthonormate if every element has its
norm equal with his unit.
4.8 Proposition. In a Euclidean vector space V, any orthogonal set formed of non-null elements is linearly independent.

Proof. Let S ⊂ V \ {0} and let λ1 x1 + λ2 x2 + ... + λn xn = 0 be an arbitrary finite linear combination of elements of S. Taking the scalar product with xj ∈ S, the relation Σ_{i=1}^{n} λi xi = 0 becomes λ1<x1, xj> + λ2<x2, xj> + ... + λn<xn, xj> = 0.
S being orthogonal, <xi, xj> = 0 for ∀ i ≠ j, so λj<xj, xj> = 0. Since xj ≠ 0 we have <xj, xj> > 0, ∀ j = 1, ..., n, from which it results that λj = 0, ∀ j = 1, ..., n, which means that S is linearly independent.
4.9 Consequence. In an n-dimensional Euclidean vector space Vn, any orthogonal set formed of n non-null vectors is a base of Vn.

If in the Euclidean vector space Vn we consider an orthogonal base B = {e1, e2, ..., en}, then any vector x ∈ Vn can be written in a unique way in the form
x = Σ_{i=1}^{n} λi ei, where λi = <x, ei> / <ei, ei> (4.8)
Indeed, multiplying the vector x = Σ_{i=1}^{n} λi ei scalarly by ek, we obtain <x, ek> = Σ_{i=1}^{n} λi <ei, ek> = λk <ek, ek>, from which
λk = <x, ek> / <ek, ek>, k = 1, ..., n.
If B is orthonormal, we have <ei, ej> = δij, where δij = 1 for i = j and δij = 0 for i ≠ j, so λi = <x, ei>; these scalars are called the Euclidean coordinates of the vector x.
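Formula (4.8) can be tested on a concrete orthogonal (not orthonormal) base of R2; a NumPy sketch with illustrative data:

```python
import numpy as np

e1 = np.array([1.0, 1.0])
e2 = np.array([1.0, -1.0])
assert float(np.dot(e1, e2)) == 0.0            # orthogonal base of R^2

x = np.array([3.0, 1.0])
# Coordinates from (4.8): lambda_i = <x, ei> / <ei, ei>
lam1 = float(np.dot(x, e1) / np.dot(e1, e1))   # 4/2 = 2
lam2 = float(np.dot(x, e2) / np.dot(e2, e2))   # 2/2 = 1
assert np.allclose(lam1 * e1 + lam2 * e2, x)
print(lam1, lam2)                              # 2.0 1.0
```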
4.10 Definition. Let x, y ∈ V be two arbitrary vectors. The vector
pr_y x = (<x, y> / <y, y>) y , with y ≠ 0,
is called the orthogonal projection of the vector x on the vector y, and the number
pr_y x = <x, y> / ||y||
is called the algebraic size of the orthogonal projection of x on y.
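A hedged sketch of definition 4.10 (function names and sample vectors are assumptions, not from the text):

```python
import numpy as np

def pr(x, y):
    """Orthogonal projection of x on y (y != 0): (<x, y> / <y, y>) y."""
    y = np.asarray(y, dtype=float)
    return (np.dot(x, y) / np.dot(y, y)) * y

def pr_size(x, y):
    """Algebraic size of the projection: <x, y> / ||y||."""
    y = np.asarray(y, dtype=float)
    return float(np.dot(x, y) / np.linalg.norm(y))

p = pr([2.0, 2.0], [1.0, 0.0])        # projection on the first axis
s = pr_size([2.0, 2.0], [1.0, 0.0])
# The residual x - pr(x, y) is orthogonal to y.
```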
4.11 Definition. Let S ⊂ V be an arbitrary subset of the euclidean space V. An element y ∈ V is called orthogonal to S if it is orthogonal to every element of S, that is <y, x> = 0, ∀ x ∈ S; we write y ⊥ S.

4.12 Proposition. The set of all vectors y ∈ V orthogonal to the set S forms a vector subspace, noted S⊥. Moreover, if S is a vector subspace, then the subspace S⊥ is called the orthogonal complement of S.
Demonstration: If y1, y2 ∈ S⊥ then <y1, x> = 0, <y2, x> = 0, ∀ x ∈ S. For ∀ α, β ∈ R we have <αy1 + βy2, x> = α<y1, x> + β<y2, x> = 0, q.e.d.

4.13 Proposition. If the subspace S ⊂ V is of finite dimension, then S admits a unique orthogonal complement S⊥.

4.14 Consequence. If V = S ⊕ S⊥ and x = y + y⊥, with y ∈ S, y⊥ ∈ S⊥, then Pythagoras' theorem takes place: ||x||² = ||y||² + ||y⊥||².
Let Vn be a finite-dimensional euclidean vector space.
4.15 Theorem. (Gram–Schmidt) If {v1, v2, ..., vn} is a base in the euclidean vector space Vn, then there exists an orthonormal base {e1, e2, ..., en} ⊂ Vn such that the systems of vectors {v1, v2, ..., vp} and {e1, e2, ..., ep} generate the same subspace Up ⊂ Vn, for every p = 1, n.
Demonstration: First we build an orthogonal set {w1, w2, ..., wn} and then we norm every element. We consider
w1 = v1 ,
w2 = v2 + kw1 ≠ 0, and determine k by imposing the condition <w1, w2> = 0.
We obtain k = − <v2, w1> / <w1, w1> , so
w2 = v2 − (<v2, w1> / <w1, w1>) w1 = v2 − pr_{w1} v2

w3 = v3 + k1w1 + k2w2 ≠ 0, and determine the scalars k1, k2 by imposing the condition that w3 be orthogonal to w1 and to w2, that is
<w3, w1> = <v3, w1> + k1 <w1, w1> = 0
<w3, w2> = <v3, w2> + k2 <w2, w2> = 0.
We obtain
w3 = v3 − (<v3, w1> / <w1, w1>) w1 − (<v3, w2> / <w2, w2>) w2 = v3 − pr_{w1} v3 − pr_{w2} v3
After n steps we obtain the vectors w1, w2, ..., wn, orthogonal two by two and linearly independent (prop. 4.8), given by

wj = vj − Σ_{i=1}^{j−1} (<vj, wi> / <wi, wi>) wi , ∀ j = 1, n        (4.9)

Define ei = wi / ||wi|| , ∀ i = 1, n; then the set B = {e1, e2, ..., en} represents an orthonormal base in Vn.
Since the elements e1, e2, ..., ep are expressed in function of v1, v2, ..., vp, and these form linearly independent subsystems, we have L({e1, e2, ..., ep}) = L({v1, v2, ..., vp}), q.e.d.
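The construction in the proof can be sketched as follows (a minimal implementation of (4.9), assuming linearly independent real input vectors; names are illustrative):

```python
import numpy as np

def gram_schmidt(vectors):
    """Gram-Schmidt per (4.9): w_j = v_j - sum_i (<v_j, w_i>/<w_i, w_i>) w_i,
    then normalize e_i = w_i / ||w_i||. Input assumed linearly independent."""
    ws = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        w = v.copy()
        for u in ws:
            w -= (np.dot(v, u) / np.dot(u, u)) * u  # subtract pr_{w_i} v_j
        ws.append(w)
    return [w / np.linalg.norm(w) for w in ws]

E = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
# E is an orthonormal base: <e_i, e_j> = delta_ij
```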

4.16 Consequence. Any euclidean vector subspace admits an orthonormal base.
Let B = {e1, e2, ..., en} and B′ = {f1, f2, ..., fn} be two orthonormal bases in the euclidean vector space Vn.
The relations between the elements of the two bases are given by

fj = Σ_{k=1}^{n} akj ek , ∀ j = 1, n.

Both bases being orthonormal, we have:

<fi, fj> = Σ_{k,h=1}^{n} aki ahj <ek, eh> = Σ_{k,h=1}^{n} aki ahj δkh = Σ_{k=1}^{n} aki akj = δij

If A = (aij) is the passing matrix from the base B to B′, then the previous relations take the form ᵗA A = In, so A is an orthogonal matrix.
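The relation ᵗA A = In can be checked numerically; the two orthonormal bases of R² below (rows as vectors) are illustrative assumptions:

```python
import numpy as np

# Hedged check that the passing matrix between two orthonormal bases is
# orthogonal; the bases are illustrative, not from the text.
r = np.sqrt(2.0) / 2.0
B  = np.array([[1.0, 0.0], [0.0, 1.0]])   # canonical orthonormal base
Bp = np.array([[r, r], [-r, r]])          # base rotated by 45 degrees
# Column j of A holds the coordinates of f_j in B: f_j = sum_k a_kj e_k.
A = np.linalg.solve(B.T, Bp.T)
# tA A = I_n, so A is orthogonal.
ok = np.allclose(A.T @ A, np.eye(2))
```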

4.17 Proposition. At a change of orthonormal base B′ = ᵗA B in the euclidean vector space Vn, the coordinates transform by the rule X = AX′, where A is an orthogonal matrix.

§5. Proposed problems

1. Let V and W be two K-vector spaces. Show that V × W = {(x, y) | x ∈ V, y ∈ W} is a K-vector space with respect to the operations:
(x1, y1) + (x2, y2) := (x1 + x2, y1 + y2)
α(x, y) := (αx, αy) , ∀ x1, x2, x ∈ V, ∀ y1, y2, y ∈ W, ∀ α ∈ K.

2. Say if the operations defined on the indicated sets determine a vector space structure:
a) x ⊕ y := (x1 + y1, x2 + y2)
   α ⊙ x := (0, αx2) , ∀ x, y ∈ R², x = (x1, x2), y = (y1, y2), ∀ α ∈ R
b) x ⊕ y := (x1 + y2, x2 + y1)
   α ⊙ x := (αx1, αx2) , ∀ x, y ∈ R², ∀ α ∈ R
c) x ⊕ y := ∛(x³ + y³)
   α ⊙ x := αx , ∀ x, y ∈ R, ∀ α ∈ R
d) x ⊕ y := (x1 + y1, x2 + y3, x3 + y2)
   α ⊙ x := (αx3, αx2, αx1) , ∀ x, y ∈ R³, ∀ α ∈ R
3. Let V be a real vector space. We define on V × V the operations:
(x, y) + (x′, y′) := (x + x′, y + y′)
(α + iβ)(x, y) := (αx − βy, αy + βx) , ∀ α + iβ ∈ C.
Show that V × V is a vector space over the complex number field C (this space will be called the complexification of V and will be noted ᶜV).

4. Establish which of the following subsets form vector subspaces in the indicated vector spaces:
a) S1 = {(x, y)  R2 | 2x - y = 0}
b) S2 = {(x, y)  R2 | 2x - y + 1 = 0}
c) S3 = {(x, y)  R2 | x2 - y2 - 1 = 0}
d) S4 = {(x1, x2, x3)  R3 | x1 - x2 + 2 x3 = 0}
e) S5 = {(x1, x2, x3)  R3 | x1 + x2 - x3 = 0, x1 - x2 = 0}

5. Let F[a,b] be the set of real functions defined on the interval [a, b] ⊂ R.
a) Show that the operations:
(f + g)(x) = f(x) + g(x)
(αf)(x) = α f(x) , ∀ α ∈ R, ∀ x ∈ [a, b]
define a structure of R-vector space on the set F[a,b].
b) If the interval [a, b] ⊂ R is symmetric with respect to the origin, show that the subsets
F+ = {f ∈ F[a,b] | f(−x) = f(x)} (even functions) and
F− = {f ∈ F[a,b] | f(−x) = −f(x)} (odd functions)
are vector subspaces and F[a,b] = F+ ⊕ F−.

6. Show that the subsets
S = {A ∈ Mn(K) | ᵗA = A} (symmetric matrices)
A = {A ∈ Mn(K) | ᵗA = −A} (antisymmetric matrices)
are vector subspaces and Mn(K) = S ⊕ A.
7. Let v1, v2, v3 ∈ V be three linearly independent vectors. Determine λ ∈ R such that the vectors
u1 = v1 + v2
u2 = v2 + v3
u3 = v3 + λv1
are linearly independent, respectively linearly dependent.


8. Show that the vectors x, y, z ∈ R³, x = (−1, 1, 1), y = (1, 1, 1), z = (1, 3, 3), are linearly dependent and find the linear dependence relation.

9. Establish the linear dependence or independence of the vector systems:
a) S1 = {eˣ, x eˣ, …, x^(n−1) eˣ}
b) S2 = {1, cos 2x, cos²x}
c) S3 = {1, cos 2x, cos 4x, cos⁴x}
10) Determine the sum and the intersection of the vector subspaces U, V ⊂ R³, where
U = {(x1, x2, x3) ∈ R³ | x1 − x2 = 0}
V = {(x1, x2, x3) ∈ R³ | 2x1 − x2 + x3 = 0}

11) Determine the sum and the intersection of the subspaces generated by the vector systems:
U = {u1 = (1, 1, 0), u2 = (1, 0, 2), u3 = (0, −1, 2)}
V = {v1 = (1, 1, 2), v2 = (0, 2, 4)}

12) Determine the subspaces U ∩ V ⊂ R³, where
U = {(x1, x2, x3) ∈ R³ | 2x1 − x2 = 0, 3x1 − x3 = 0}
V = L({(−1, 2, 1), (2, −4, −2)})

13) Determine a base in the subspaces U + W, U ∩ W and verify Grassmann's theorem for
a) U = {(x1, x2, x3) ∈ R³ | x1 + x2 − 2x3 = 0} ⊂ R³
   W = L({w1 = (1, 1, 1), w2 = (1, 0, 0), w3 = (3, 2, 2)}) ⊂ R³
b) U = L({(1, 0, 2, −1), (0, 1, 1, 0)}) ⊂ R⁴
   W = L({(2, −1, 1, 0), (1, 0, 1, 2), (0, 2, 1, 0)}) ⊂ R⁴

14) Let W1 ⊂ R³ be the subspace generated by the vectors w1 = (1, −1, 0) and w2 = (−1, 1, 2). Determine a supplementary subspace W2 and decompose the vector x = (2, 2, 2) on the two subspaces.

15) Show which of the following vector systems form bases in the given vector spaces:
a) S1 = {u1 = (1, 2), u2 = (2, −1)} ⊂ R²
b) S2 = {u1 = (1, 0, −1), u2 = (2, 1, −3), u3 = (1, −1, 0)} ⊂ R³
c) S3 = {u1 = (1, 0, 1), u2 = (0, −1, 1), u3 = (1, −1, 1)} ⊂ R³
d) S4 = {1, 1 − x, (1 − x)², (1 − x)³} ⊂ R3[x]
e) S5 = {(1 0; 0 0), (1 1; 0 0), (1 1; 0 1), (1 1; 1 1)} ⊂ M2(R), where (a b; c d) denotes the matrix with rows (a b) and (c d)

16) In R³ we consider the vector systems B′ = {e′1 = (1, 1, 0), e′2 = (1, 0, 1), e′3 = (1, 0, −1)} and B″ = {e″1 = (1, 0, 0), e″2 = (1, 1, 0), e″3 = (1, 1, 1)}. Show that B′ and B″ are bases, determine the passing matrix from the base B′ to the base B″, and find the coordinates of the vector v = (2, −1, 1) (expressed in the canonical base) with respect to the two bases.

17) Let M2(R) be the real vector space of 2 × 2 matrices, with the canonical base
B = {E1 = (1 0; 0 0), E2 = (0 1; 0 0), E3 = (0 0; 1 0), E4 = (0 0; 0 1)}.
a) Find a base B1, respectively B2, in the subspace S2 ⊂ M2(R) of symmetric matrices, respectively in the subspace A2 ⊂ M2(R) of antisymmetric matrices. Determine the passing matrix from the canonical base B to the base B′ = B1 ∪ B2.
b) Express the matrix E = (a b; c d) in the base B′.

18) Verify if the following operations define scalar products on the considered vector spaces:
a) <x, y> = 3x1y1 + x1y2 + x2y1 + 2x2y2 , x = (x1,x2), y = (y1, y2)  R2
b) <x, y> = x1y1 - 2x2y2 , x = (x1, x2), y = (y1, y2)  R2
c) <x, y> = x1y1 + x2y3 + x3y2 , x = (x1, x2, x3), y = (y1, y2, y3)  R3

19) Show that the operation defined on the set of polynomials Rn[x] by <f, g> = Σ_{i=0}^{n} ai bi , where f = a0 + a1x + … + anxⁿ and g = b0 + b1x + … + bnxⁿ, defines a scalar product, and write the Cauchy–Schwarz inequality. Calculate ||f|| and d(f, g) for the polynomials f(x) = 1 + x + 2x² − 6x³ and g(x) = 1 − x − 2x² + 6x³.
20) Verify that the following operations determine scalar products on the specified vector spaces, and orthonormalize with respect to these scalar products the function systems {1, t, t²} and respectively {1, eˣ, e⁻ˣ}:
a) <f, g> = ∫₀² f(x) g(x) dx
b) <f, g> = ∫₀² x f(x) g(x) dx , f, g ∈ C[0, 2].

21) Let the vectors x = (x1, x2, …, xn), y = (y1, y2, …, yn) ∈ Rⁿ. Demonstrate, using the usual scalar product defined on the arithmetic space Rⁿ, the following inequalities:
a) (Σ_{i=1}^{n} xi yi)² ≤ (Σ_{i=1}^{n} xi²) (Σ_{i=1}^{n} yi²)
b) √(Σ_{i=1}^{n} (xi + yi)²) ≤ √(Σ_{i=1}^{n} xi²) + √(Σ_{i=1}^{n} yi²)
and determine the conditions under which the equalities take place.
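A quick numeric sanity check of the two inequalities (with assumed random vectors; this is an illustration, not a proof):

```python
import numpy as np

# Illustrative check of a) (Cauchy-Schwarz) and b) (Minkowski) in R^n.
rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
lhs_a = np.dot(x, y) ** 2
rhs_a = np.dot(x, x) * np.dot(y, y)
lhs_b = np.sqrt(np.sum((x + y) ** 2))
rhs_b = np.sqrt(np.sum(x ** 2)) + np.sqrt(np.sum(y ** 2))
```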

22) Orthonormalize the vector systems with respect to the usual scalar product:
a) v1 = (1, −2, 2), v2 = (−1, 0, −1), v3 = (5, 3, −7)
b) v1 = (1, 1, 0), v2 = (1, 0, 1), v3 = (0, 0, 1).

23) Find the orthogonal projection of the vector v = (14, −3, −6) on the subspace generated by the vectors v1 = (−3, 0, 7), v2 = (1, 4, 3), and the size of this projection.

24) Determine, in the arithmetic space R³, the orthogonal complement of the vector subspace of the solutions of the system
3x1 + x2 + x3 = 0
x1 + 2x2 + x3 = 0
and find an orthonormal base in this complement.

25) Orthonormalize the following linearly independent vector systems:
a) v1 = (1, 1, 0), v2 = (1, 0, 1), v3 = (0, 0, −1) in R³
b) v1 = (1, 1, 0, 0), v2 = (1, 0, 1, 0), v3 = (1, 0, 0, 1), v4 = (0, 1, 1, 1) in R⁴.

26) Determine the orthogonal complement of the subspaces generated by the following vector systems:
a) v1 = (1, 2, 0), v2 = (2, 0, 1) in R³
b) v1 = (−1, 1, 2, 0), v2 = (3, 0, 2, 1), v3 = (4, −1, 0, 1) in R⁴
27) Find the vector’s projection v = (-1, 1, 2) on the solution
subspace of the equation x + y + z = 0.

28) Determine in R³ the orthogonal complement of the subspace generated by the vectors v1 = (1, 0, 2), v2 = (−2, 0, 1). Find the decomposition v = w + w1 of the vector v = (1, 1, 1) ∈ R³ on the two complementary subspaces and verify the relation ||v||² = ||w||² + ||w1||².

