
MATH1051, Calculus & Linear Algebra I

Poster A: Linear Algebra

© School of Mathematics and Physics, The University of Queensland
Vectors
Position vectors Let P = (xP, yP) be a point in the (x, y) plane. Then the vector OP, where O is the origin, is called the position vector of P.

Norm For the vector v = PQ, the magnitude (or length or norm) of v, written ‖v‖, is the distance PQ between P and Q. For v = (v1, v2),

‖v‖ = PQ = √(v1² + v2²).
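The norm formula above is easy to check numerically. This is a small sketch in Python (my choice of language, not part of the course; the function name norm is mine):

```python
import math

def norm(v):
    """Euclidean norm: sqrt(v1^2 + v2^2 + ...) for a vector given as a tuple."""
    return math.sqrt(sum(c * c for c in v))

# A 3-4-5 triangle: the vector (3, 4) has length 5.
print(norm((3, 4)))  # 5.0
```

The same function works for 3D vectors, since the formula extends component-wise.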

A vector space V over a field of scalars (usually R) must satisfy 10 axioms: for all u, v, w ∈ V and a, b ∈ R,

1. u + v ∈ V, (closure)
2. u + v = v + u, (commutative)
3. u + (v + w) = (u + v) + w, (associative)
4. there exists 0 ∈ V such that 0 + u = u, (additive identity)
5. there exists -u ∈ V such that u + (-u) = 0, (additive inverse)
6. au ∈ V, (closure of scalar multiplication)
7. a(u + v) = au + av, (distributive)
8. (a + b)u = au + bu, (distributive)
9. a(bu) = (ab)u, (associative)
10. 1u = u. (scalar identity)

Vector addition Consider the triangle PQR with v = PQ and w = QR; then v + w = PR. In terms of components, if v = (v1, v2) and w = (w1, w2), then v + w = (v1 + w1, v2 + w2). That is, algebraically we add vectors component-wise.
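The component-wise rules can be sketched directly in code; this is an illustration in Python (the helper names vadd and smul are mine, not course notation):

```python
def vadd(v, w):
    """Add vectors component-wise: (v + w)_i = v_i + w_i."""
    return tuple(vi + wi for vi, wi in zip(v, w))

def smul(a, v):
    """Scalar multiple: (a*v)_i = a * v_i."""
    return tuple(a * vi for vi in v)

u, v = (1, 2), (3, -1)
print(vadd(u, v))                 # (4, 1)
print(vadd(u, v) == vadd(v, u))   # True: axiom 2 (commutativity) holds here
```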
Scalar multiplication With λ ∈ R a number (a scalar), we define λv to be the vector of magnitude ‖λv‖ = |λ| ‖v‖, in the same direction as v if λ > 0 and in the opposite direction if λ < 0.

A unit vector is a vector of unit length (has magnitude 1). If v ≠ 0 is a vector, then v/‖v‖ determines a unit vector in the same direction as v. The vectors i = (1, 0) and j = (0, 1) determine unit vectors along the x- and y-axes respectively.

If v = (v1, v2) = v1 i + v2 j, then v1 and v2 are called the components of v with respect to i and j. The component form of (4, 3) is 4i + 3j.

3D vectors In 3-space (R³) a vector v = PQ is represented in component form by

v = (v1, v2, v3) = (xQ - xP, yQ - yP, zQ - zP).

For v = OP = (v1, v2, v3), the magnitude of v is

‖v‖ = √(ON² + NP²) = √(v1² + v2² + v3²).

We add vectors component-wise and we may define multiplication by a scalar λ ∈ R. So if v = (v1, v2, v3) and w = (w1, w2, w3) are vectors, the sum and scalar multiples are respectively given by

v + w = (v1 + w1, v2 + w2, v3 + w3),   λv = (λv1, λv2, λv3), λ ∈ R.

The zero vector in 3-space is 0 = (0, 0, 0).

Dot product For non-zero vectors v = OP and w = OQ, the angle between v and w is the angle θ with 0° ≤ θ ≤ 180° between OP and OQ at the origin point O.

The dot product of vectors v and w, denoted by v · w, is the number given by

v · w = 0 if v or w = 0, and v · w = ‖v‖ ‖w‖ cos θ otherwise,

where θ is the angle between v and w. It may also be computed component-wise:

(v1, v2) · (w1, w2) = v1 w1 + v2 w2,
(v1, v2, v3) · (w1, w2, w3) = v1 w1 + v2 w2 + v3 w3.

Projection formula Let v and w = w1 + w2 be vectors with w1 parallel to v and w2 perpendicular to v. The projection of w onto v is

w1 = ((v · w)/‖v‖²) v.

A linear combination of a set of vectors is a sum of scalar multiples of those vectors. For example, let u = (2, 5) and v = (7, 1). The following are examples of linear combinations of u and v:

1u + 0v = (2, 5),
2u + 3v = 2(2, 5) + 3(7, 1) = (4, 10) + (21, 3) = (25, 13),
1u - 1v = (2, 5) - (7, 1) = (-5, 4).

A set of vectors is linearly dependent (LD) if the zero vector is a linear combination of those vectors with scalars not all zero. For example, the vector u = (2, 4) is a scalar multiple of the vector v = (1, 2). The vectors u and v are linearly dependent since u - 2v = (2, 4) - 2(1, 2) = (0, 0). Since the scalars, 1 and -2, are not all zero, the vectors are linearly dependent. If one vector is a scalar multiple of another vector, then they must be linearly dependent. Now let u = (1, 3), v = (2, -5), and w = (1, 25). Since 5(1, 3) - 2(2, -5) = (1, 25), w is a linear combination of u and v. Rearranging, we have the zero vector on the right-hand side:

5(1, 3) - 2(2, -5) - (1, 25) = (0, 0).

Therefore {u, v, w} is a linearly dependent set of vectors.

If a set of vectors is not linearly dependent, we say it is linearly independent (LI). For example {i, j} is LI but {i, j, i + 2j} is LD.

The span of a set of vectors is the set of all linear combinations of those vectors, and is a vector space itself.

[Figure: get from G (green) to B (blue) via only steps in the directions of u = (3, 1) and v = (1, 2).]

When we calculate the span of a set of vectors we usually prefer to give the fewest vectors possible to represent it. For example, let

u = (5, 2, 0),   v = (2, 2, -7),   w = (6, 0, 14).

To calculate Span(u, v, w), we have

(6, 0, 14) = 2(5, 2, 0) - 2(2, 2, -7),

which tells us that u, v, and w are linearly dependent. Therefore we have a superfluous vector not needed to obtain all of the vectors in Span(u, v, w) (all linear combinations of these). Since u and v are linearly independent, we cannot reduce the number of vectors without losing some of the space. Hence we put Span(u, v, w) = Span(u, v).

A subspace S of a vector space V is a subset of vectors of that vector space, S ⊆ V, such that S itself is a vector space. Note that for this to occur, the zero vector must be in S.

A basis B of a vector space V is a set of vectors B such that the set B is linearly independent and the span of B is the entire vector space V. For example, let S = {u, v, w} = {(3, 2, 0), (1, 4, 4), (5, 0, 3)}. First show that S is LI: assume there are scalars α, β, γ ∈ R such that αu + βv + γw = 0. Then we have the system of linear equations

3α + β + 5γ = 0,
2α + 4β = 0,
4β + 3γ = 0.

It is easy to solve this system to find that α = β = γ = 0. Therefore S is a set of linearly independent vectors. To show that S is a basis for R³, we have already shown that S is linearly independent; we must show that Span(S) = R³. To do this, let X = (x, y, z) be an arbitrary vector of R³. We must show that we can always find the required scalars α, β, γ ∈ R such that

αu + βv + γw = X,

no matter which numbers the entries x, y, z ∈ R of X are. This means that when we try to solve the system of equations

3α + β + 5γ = x,
2α + 4β = y,
4β + 3γ = z

for α, β, γ, this is always possible. Solving this yields

α = (1/70)(12x + 17y - 20z),
β = (1/70)(-6x + 9y + 10z),
γ = (1/70)(8x - 12y + 10z),

so for any choice of x, y, z ∈ R, we can always find scalars α, β, γ ∈ R such that

αu + βv + γw = X.

We have shown that Span(S) = R³. Since both requirements for S to be a basis have been established (S is linearly independent and Span(S) = R³), S is a basis for R³.
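Both worked examples in this section can be checked numerically. This sketch uses Python with numpy (my choice, not part of the course): it confirms the dependence relation w = 2u - 2v, and that the system for the basis example is always solvable (its coefficient matrix has nonzero determinant):

```python
import numpy as np

# Span example: w = 2u - 2v, so w is redundant and Span(u, v, w) = Span(u, v).
u = np.array([5, 2, 0])
v = np.array([2, 2, -7])
w = np.array([6, 0, 14])
print(np.array_equal(2 * u - 2 * v, w))  # True

# Basis example: the columns of M are the basis vectors (3,2,0), (1,4,4), (5,0,3).
M = np.array([[3, 1, 5],
              [2, 4, 0],
              [0, 4, 3]], dtype=float)
x, y, z = 1.0, 2.0, 3.0                 # an arbitrary X = (x, y, z)
coeffs = np.linalg.solve(M, [x, y, z])  # (alpha, beta, gamma)
formulas = np.array([(12*x + 17*y - 20*z) / 70,
                     (-6*x + 9*y + 10*z) / 70,
                     (8*x - 12*y + 10*z) / 70])
print(np.allclose(coeffs, formulas))  # True: the closed-form solution checks out
```

Since det M = 70 ≠ 0, np.linalg.solve succeeds for every right-hand side, which is exactly the statement that S spans R³.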
An orthogonal basis is a basis B for a vector space in which, for every two vectors vi, vj ∈ B with i ≠ j, vi · vj = 0. For example, let B = {(1, 0), (0, 1)} be a basis for the vector space R². Since (1, 0) · (0, 1) = 0, B is an orthogonal basis. Let C = {(2, 4), (4, -3)} be a basis for the vector space R². Since (2, 4) · (4, -3) = -4 ≠ 0, C is not an orthogonal basis for R²; however, since C contains two vectors which are not multiples of one another, C is a basis for R², and consequently every vector in R² may be uniquely expressed as a sum of scalar multiples of the vectors in C. Let D = {(4, 7), (14, -8)}. Since (4, 7) · (14, -8) = 0, D is an orthogonal basis for R².
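The orthogonality checks are just pairwise dot products; here is a sketch in Python with numpy (my choice of tooling, and the function name is mine), run on the example pairs from this section:

```python
import numpy as np

def is_orthogonal_basis(vectors):
    """Check that all pairwise dot products vanish.
    Assumes the vectors already form a basis; only orthogonality is tested."""
    vs = [np.array(v) for v in vectors]
    return all(np.dot(vs[i], vs[j]) == 0
               for i in range(len(vs)) for j in range(i + 1, len(vs)))

print(is_orthogonal_basis([(1, 0), (0, 1)]))    # True
print(is_orthogonal_basis([(2, 4), (4, -3)]))   # False: dot product is -4
print(is_orthogonal_basis([(4, 7), (14, -8)]))  # True: 4*14 + 7*(-8) = 0
```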

The dimension of a vector space is the number of vectors in any basis for that vector space. This can be obvious by comparing the vector space with the field of scalars. For example, the field of scalars for the vector space R² is the real numbers R. Since for each pair of real numbers there is one vector in R², we require two copies of R to represent R², so the dimension of R² is 2. Calculating dimension is occasionally more involved. For example, consider a subspace S of R³ whose vectors all lie in a particular plane through the origin but are not all multiples of a single vector. In this case, the dimension of S as a vector space with scalars in R is 2, even though the vectors in S have three components.

Matrices

A system of scalar equations such as

3x + 9y - z = 4,
x - 2y + 2z = 3,
5x - y + 8z = 1

has a corresponding matrix equation Ax = v:

[ 3  9 -1 ] [ x ]   [ 4 ]
[ 1 -2  2 ] [ y ] = [ 3 ],        (1)
[ 5 -1  8 ] [ z ]   [ 1 ]

where the rectangular arrays are matrices. The matrix on the left is a 3 × 3 square matrix, but matrices may also be rectangular, such as the 3 × 2 matrix

[  5 1 ]
[ -3 3 ]
[  0 2 ]

Matrices are added by adding their components:

[  5 1 ]   [ 0 3 ]   [  5+0 1+3 ]   [  5  4 ]
[ -3 3 ] + [ 2 4 ] = [ -3+2 3+4 ] = [ -1  7 ]
[  0 2 ]   [ 1 9 ]   [  0+1 2+9 ]   [  1 11 ]
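The same component-wise addition is what numpy's + operator does for arrays; a quick check of the example above (numpy is my choice of tooling, not part of the course):

```python
import numpy as np

A = np.array([[5, 1], [-3, 3], [0, 2]])
B = np.array([[0, 3], [2, 4], [1, 9]])
print(A + B)
# [[ 5  4]
#  [-1  7]
#  [ 1 11]]
```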

For matrices A, B, C of the same size we have

A + B = B + A, (commutative law)
(A + B) + C = A + (B + C), (associative law)
A + O = O + A = A, (existence of zero matrix)
A + (-A) = O. (existence of additive inverse)

Multiplication To multiply matrices, we match rows with columns:

[ 3  9 -1 ] [  5  4 ]   [  5  64 ]
[ 1 -2  2 ] [ -1  7 ] = [  9  12 ]
[ 5 -1  8 ] [  1 11 ]   [ 34 101 ]
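The row-by-column rule is numpy's @ operator; this reproduces the product above (again a numpy sketch, not course material):

```python
import numpy as np

A = np.array([[3, 9, -1], [1, -2, 2], [5, -1, 8]])
C = np.array([[5, 4], [-1, 7], [1, 11]])
print(A @ C)
# [[  5  64]
#  [  9  12]
#  [ 34 101]]
```

Note the shapes: a 3 × 3 matrix times a 3 × 2 matrix gives a 3 × 2 matrix, and C @ A is not even defined here.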

Even if AB and BA are both defined, in general AB ≠ BA. However, we do have:

(AB)C = A(BC), (A + B)C = AC + BC, A(B + C) = AB + AC.
To solve equations such as (1), we can use Gaussian elimination. We construct the augmented matrix

          [ 3  9 -1 | 4 ]
(A | b) = [ 1 -2  2 | 3 ]
          [ 5 -1  8 | 1 ]

and perform row reduction:

[ 3  9 -1 | 4 ]    [ 3  9  -1 |  4 ]    [ 1  3  -1/3 |  4/3 ]
[ 1 -2  2 | 3 ] -> [ 0 15  -7 | -5 ] -> [ 0  1 -7/15 | -1/3 ]
[ 5 -1  8 | 1 ]    [ 0 48 -29 | 17 ]    [ 0 48   -29 |   17 ]

   [ 1 0  16/15 |  7/3 ]    [ 1 0 16/15 |  7/3 ]    [ 1 0 0 | 23/3 ]
-> [ 0 1 -7/15 | -1/3 ] -> [ 0 1 -7/15 | -1/3 ] -> [ 0 1 0 | -8/3 ]
   [ 0 0 -33/5 |   33 ]    [ 0 0     1 |   -5 ]    [ 0 0 1 |   -5 ]

so x = 23/3, y = -8/3, z = -5.
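A library routine can confirm the elimination by solving (1) directly; this sketch uses numpy (my choice, not part of the course):

```python
import numpy as np

A = np.array([[3, 9, -1],
              [1, -2, 2],
              [5, -1, 8]], dtype=float)
b = np.array([4, 3, 1], dtype=float)
sol = np.linalg.solve(A, b)
print(np.allclose(sol, [23/3, -8/3, -5]))  # True
```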

The inverse of a square matrix A is the unique (if it exists) square matrix B such that AB = I = BA. The inverse of a 2 × 2 matrix A = [ a b ; c d ] is given by

B = (1/(ad - bc)) [ d -b ; -c a ].

We require that ad - bc ≠ 0 in order for the matrix B to be defined. If ad - bc ≠ 0, then a 2 × 2 matrix A has an inverse, and we say that A is invertible. The number D = ad - bc is called the determinant. To find the inverse of a 3 × 3 matrix, we can use Gaussian elimination (or other methods):

[ 3  9 -1 | 1 0 0 ]    [ 1 0 0 |  14/33  71/33 -16/33 ]
[ 1 -2  2 | 0 1 0 ] -> [ 0 1 0 |  -2/33 -29/33   7/33 ]
[ 5 -1  8 | 0 0 1 ]    [ 0 0 1 |  -9/33 -48/33  15/33 ]
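The computed inverse can be verified in two ways: against a library routine, and by checking that A times its inverse is the identity. A numpy sketch (my tooling choice):

```python
import numpy as np

A = np.array([[3, 9, -1],
              [1, -2, 2],
              [5, -1, 8]], dtype=float)
A_inv = np.linalg.inv(A)
expected = np.array([[14, 71, -16],
                     [-2, -29, 7],
                     [-9, -48, 15]]) / 33
print(np.allclose(A_inv, expected))        # True
print(np.allclose(A @ A_inv, np.eye(3)))   # True: A A^(-1) = I
```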

To calculate the determinant |A| of a 3 × 3 matrix A with entries aij, expand along the first row:

|A| = a11 | a22 a23 ; a32 a33 | - a12 | a21 a23 ; a31 a33 | + a13 | a21 a22 ; a31 a32 |
    = a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31).

The matrix A is invertible if and only if |A| ≠ 0.
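The cofactor expansion translates directly to code; this Python sketch (function name det3 is mine) applies it to the 3 × 3 matrix from equation (1) and compares against numpy:

```python
import numpy as np

def det3(A):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a11, a12, a13 = A[0]
    return (a11 * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - a12 * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + a13 * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

A = [[3, 9, -1], [1, -2, 2], [5, -1, 8]]
print(det3(A))  # -33, which is nonzero, so A is invertible
print(np.allclose(det3(A), np.linalg.det(np.array(A, dtype=float))))  # True
```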


To calculate eigenvalues and eigenvectors of a square matrix A: the eigenvalues λ are determined by solving |A - λI| = 0. The eigenvectors vj are given by calculating the nullspace of A - λjI, where λj is an eigenvalue of A. The nullspace of a matrix B is the set (or space) of all vectors v satisfying Bv = 0. For example, if A = [ 1 4 ; 3 7 ], then

|A - λI| = | 1-λ 4 ; 3 7-λ | = (1 - λ)(7 - λ) - 12.

We must solve λ² - 8λ - 5 = 0: λ1 = 4 + √21 and λ2 = 4 - √21. The eigenvector corresponding to λ2 is determined by row reducing

A - λ2I = [ -3+√21  4 ; 3  3+√21 ].

Dividing the first row by -3+√21 (noting that 4/(-3+√21) = 4(3+√21)/12 = (3+√21)/3) and then subtracting 3 times the first row from the second gives

[ 1  (3+√21)/3 ; 0  0 ].

The nullspace of A - λ2I is the set of all α(4, 3-√21), α ∈ R. The eigenvector v2 = (4, 3-√21), or any real multiple of it, corresponds to λ2. Similarly, v1 = (4, 3+√21) corresponds to λ1.
