
Linear Algebra

03/26/12

Revised by D.H. Chen


Transpose: the Mirror Image w.r.t. the Principal Diagonal



Identity Matrix

Multiplication


Size Requirement: the 2nd index (number of columns) of the 1st matrix must be the same as the 1st index (number of rows) of the 2nd matrix.

Example: A (2 x 3) times B (3 x 3) gives a 2 x 3 product; a 3 x 3 matrix times a 3 x 1 vector gives a 3 x 1 result.
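The size requirement can be checked with a short NumPy sketch (NumPy and the values here are illustrative, not part of the original slides):

```python
import numpy as np

# A is 2x3 and B is 3x3: the 2nd index of A (3 columns)
# equals the 1st index of B (3 rows), so A*B is defined.
A = np.arange(6).reshape(2, 3)
B = np.arange(9).reshape(3, 3)
C = A @ B                # product is 2x3

x = np.ones((3, 1))      # a 3x1 column vector
y = B @ x                # (3x3)(3x1) -> 3x1
```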

Cofactor

Minor Mjk is the Determinant of the Sub-Matrix obtained by deleting the j-th row and k-th column from the full matrix. The cofactor is Cjk = (-1)^(j+k) * Mjk.
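A minimal sketch of computing a minor and a cofactor with NumPy (0-based indices; the matrix values are made up):

```python
import numpy as np

def minor(A, j, k):
    # Determinant of A with row j and column k deleted.
    sub = np.delete(np.delete(A, j, axis=0), k, axis=1)
    return np.linalg.det(sub)

def cofactor(A, j, k):
    # Cofactor C_jk = (-1)^(j+k) * M_jk
    return (-1) ** (j + k) * minor(A, j, k)

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
M00 = minor(A, 0, 0)     # det([[5, 6], [8, 10]]) = 50 - 48 = 2
C00 = cofactor(A, 0, 0)  # (+1) * 2 = 2
```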

Determinant
A characteristic scalar (value) of a matrix

Cross-Product


(AB)^T = B^T * A^T
Inverse is equivalent to division (or reciprocal) in scalar operations
Identity Matrix (I) is equivalent to 1 in scalar calculations
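A quick numerical check of the transpose rule (an illustrative NumPy sketch with random matrices):

```python
import numpy as np

# Numerical check of (AB)^T = B^T * A^T on random matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

transpose_rule_holds = np.allclose((A @ B).T, B.T @ A.T)
```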


A * A^-1 = I
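The same identity checked numerically (the matrix values are made up for illustration):

```python
import numpy as np

# A * A^-1 recovers the identity matrix I.
A = np.array([[4., 7.],
              [2., 6.]])
A_inv = np.linalg.inv(A)
product = A @ A_inv
recovers_identity = np.allclose(product, np.eye(2))
```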

(Figure: solution cases for A*x = b depending on whether det A ≠ 0 or det A = 0)

If A is non-singular,
i.e., det A ≠ 0
& b ≠ 0 (non-homogeneous); then there is a
unique set of nontrivial solution, i.e., x ≠ 0;
as shown in case (b)
& b = 0 (homogeneous); then there is only the
trivial solution, i.e., x = 0


Matrix A is singular if det A = 0

Homogeneous Linear System


A nontrivial solution set exists when
there is at least one Free Variable
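An illustrative NumPy sketch of a singular matrix whose homogeneous system has a nontrivial solution (the matrix is made up):

```python
import numpy as np

# A singular matrix (row 2 = 2 * row 1), so det A = 0 and the
# homogeneous system A*x = 0 has a nontrivial solution.
A = np.array([[1., 2.],
              [2., 4.]])
detA = np.linalg.det(A)

# The right singular vector for the zero singular value spans
# the null space, i.e., solves A*x = 0 with x != 0.
_, s, Vt = np.linalg.svd(A)
x = Vt[-1]
residual = np.linalg.norm(A @ x)
```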


Gaussian Elimination


Back-Substitution
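A minimal sketch of back-substitution for an upper-triangular system (the values are illustrative):

```python
import numpy as np

def back_substitute(U, b):
    # Solve U*x = b for upper-triangular U, working bottom-up.
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2., 1., -1.],
              [0., 3.,  2.],
              [0., 0.,  4.]])
b = np.array([3., 11., 8.])
x = back_substitute(U, b)
```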


Gaussian Elimination
A * x = b
A^-1 * A * x = A^-1 * b
I * x = A^-1 * b
x = A^-1 * b
Gaussian elimination amounts to pre-multiplying the
augmented form (with an identity matrix I) by A^-1:

[A | b | I] → [I | x | A^-1]

The procedure allows you to get the solution,
inverse, and determinant all at once.
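A sketch of this in NumPy: reduce [A | b | I] to [I | x | A^-1] and take the determinant as the product of the pivots (no row swaps are handled, and the values are illustrative):

```python
import numpy as np

def gauss_jordan(A, b):
    # Reduce [A | b | I] to [I | x | A^-1]; det A is the product
    # of the pivots (pivots assumed nonzero, no row swaps).
    n = A.shape[0]
    M = np.hstack([A.astype(float), b.reshape(n, 1), np.eye(n)])
    det = 1.0
    for i in range(n):
        det *= M[i, i]
        M[i] /= M[i, i]
        for r in range(n):
            if r != i:
                M[r] -= M[r, i] * M[i]
    return M[:, n], M[:, n + 1:], det

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])
x, A_inv, det = gauss_jordan(A, b)
```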

DMC Control Law


Worked example: reducing the original matrix A gives the inverse A^-1, and the determinant is the product of the pivots: det A = 4 * 3 * (-0.5) * (-1.8333) ≈ 11.

Newton's Method


Gaussian Elimination



Eigenvalue and eigenvector


A * x = λ*x, so A * x = λ*I*x
(A − λ*I) x = 0
For this homogeneous system,
(A − λ*I) must be singular,
det (A − λ*I) = 0,
to give rise to a nontrivial soln, i.e.,
x ≠ 0.
If (A − λ*I) is nonsingular, then x = 0, i.e., only a
trivial solution.
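An illustrative NumPy check of both conditions (the matrix is made up):

```python
import numpy as np

A = np.array([[2., 4.],
              [3., 1.]])
lams, vecs = np.linalg.eig(A)

# Each eigenpair satisfies A*x = lambda*x with x != 0 ...
lam, x = lams[0], vecs[:, 0]
eigen_eq_holds = np.allclose(A @ x, lam * x)

# ... and (A - lambda*I) is singular: det(A - lambda*I) = 0.
det_at_lam = np.linalg.det(A - lam * np.eye(2))
```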

X1 = (4x2, x2)^T ; X2 = (-x2, x2)^T



Three Kinds of Mathematical Problems


Exact System (# of eqns m = # of unknowns n)
Over-determined System (m > n)
Under-determined System (m < n)


3 Types (Cont.)
m = n: Algebraic Eqns solved with Newton's
method; Mixed Differential/Algebraic Eqns
solved by Gear's method
m > n: Regression Analysis solved with
Marquardt's method
m < n: Optimization solved with Linear
Programming (LP) or Successive Quadratic
Programming (SQP)

Is it a linear regression or
nonlinear regression?
Y = a + bx + cx^2
Ln P = A + B/ T
Ln P = A + B/ (T + C)


General Multi-Linear Model


Y = a1*V1 + a2*V2 + a3*V3 + ... + an*Vn + C
where ai = weighting factor
Vi = predictor variable
C = constant


Linear Regression (1)


Model Equation is an over-determined
linear system (mxn, m>>n)
A* X = b (1)
For example, we may have 100 data
points to solve for 5 coefficients in
regression.
We cannot satisfy all 100 equations,
but we can compromise by going along
with a least-squares objective.

Linear Regression (2)


By taking the partial derivative w.r.t. each
parameter and setting them to 0, we obtain:
A^T*A* X = A^T* b
This equation is called the Normal
equation, and is equivalent to pre-multiplying Eqn (1) with A^T.
Now we have an exactly-determined
(nxn) system for regression to solve for
the coefficients, the X vector.
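A sketch of the normal-equation solve on synthetic data (all values below are made up for illustration):

```python
import numpy as np

# Over-determined system: m = 100 data points, n = 5 coefficients.
rng = np.random.default_rng(1)
m, n = 100, 5
A = rng.standard_normal((m, n))
x_true = np.array([1., -2., 0.5, 3., -1.])
b = A @ x_true + 0.01 * rng.standard_normal(m)  # slightly noisy data

# Normal equation: (A^T A) x = A^T b -- an exact n x n system.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
```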

Linear Regression (3)


Take the Edmister vapor pressure equation as an example:
Ln P = a + b/T = a*1 + b*(1/T)
Let's say we have 100 data points; then m =
100, n = 2, x = (a, b)^T, A is (100x2), A^T is
(2x100), A^T*A is (2x2), b is (100x1), and
A^T*b is (2x1), so we solve 2 eqns in
2 unknowns in the normal equation.
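This Edmister fit can be sketched on synthetic data (the "true" values a = 10 and b = -2000 are made up for illustration):

```python
import numpy as np

# Synthetic data for Ln P = a + b*(1/T).
T = np.linspace(300., 400., 100)        # m = 100 temperatures
lnP = 10.0 - 2000.0 / T

# A is (100 x 2): a column of 1's (for a) and a column 1/T (for b).
A = np.column_stack([np.ones_like(T), 1.0 / T])
a, b_coef = np.linalg.solve(A.T @ A, A.T @ lnP)   # 2 eqns, 2 unknowns
```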

What are A, x, b, A^T?


Linear Regression (4)


Model Equation in DMC is a similar
over-determined linear system (mxn, m>>n):
A*ΔMV = E, where E = SP − PVopen (1)
Pre-multiplying by A^T gives an exact (nxn)
system for regression to solve for ΔMV (the
future MV moves):
A^T*A*ΔMV = A^T*E

Ln P = a * 1 + b * (1/T)


R-squared

R^2 = 1 − Σ(Yi − Ŷi)^2 / Σ(Yi − Ȳ)^2

where (Yi − Ȳ) = deviation of the ith
observation from the overall mean, and
(Yi − Ŷi) = difference between the predicted
and the actual data for the ith observation.
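The R-squared formula can be sketched directly (the observations and predictions below are hypothetical):

```python
import numpy as np

def r_squared(y, y_hat):
    # R^2 = 1 - SS(residual) / SS(about the mean)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])        # hypothetical observations
y_hat = np.array([1.1, 1.9, 3.2, 3.8])    # hypothetical predictions
r2 = r_squared(y, y_hat)                  # 1 - 0.10/5.0 = 0.98
```

A perfect fit (y_hat equal to y) gives R^2 = 1.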

R-squared for Linear Model

The Goodness of Fit or the Explained Variation.
Also known as the Coefficient of Determination.
R^2 = SS due to regression / Total SS corrected for the mean
    = Σ(Ŷi − Ȳ)^2 / Σ(Yi − Ȳ)^2
Σ(Yi − Ȳ)^2 = Σ(Ŷi − Ȳ)^2 + Σ(Yi − Ŷi)^2
SS about the mean = SS due to regression + SS about regression

Yi = data; Ȳ = mean; Ŷi = prediction



Regression Models (1)


Y=a+b*X
Y = a + b * X + c * X^2
Is this a linear model?

Log P = A + B/T
Is this a linear model?

Log P = A + B/(T+C)
Is this a linear model?

(-rA) = k*CA^α / (1 + K*CA)^β
Is this a linear model?

Regression Models (2)


Y=a+b*X
Y = a + b * X + c * X^2
Is this a linear model? Yes. The expression is linear in
parameters a, b, & c

Log P = A + B/T
Is this a linear model? Yes. The expression is linear in
parameters A & B
Log P and 1/T will be viewed as known data

Log P = A + B/(T+C)
Is this a linear model? No. The expression is non-linear
in parameters A, B, and C.

Regression Models (3)


(-rA) = k*CA^α / (1 + K*CA)^β
Is this a linear model? No. This is nonlinear in
the parameters k, K, α, and β.


Regression Example
Typical Tent

(Figure: SetPoint vs. std deviation with a polynomial fit, Poly. (SetPoint), marking the optimal setpoint; fitted curve y = 25.541x^2 − 1.7628x + 0.09, R^2 = 0.9282)
