Error Analysis
13.002
Lecture 1
Differential Equation
Differentiation
Integration
Difference Equation
w(x,t)
x
System of Equations
Discrete Model
Linear System of Equations
Solving linear equations
xn
m
n
w(x,t)
Eigenvalue Problems
Non-trivial Solutions
Root finding
Examples
m
b
e
Mantissa
Base
Exponent
Decimal
Binary
Convention
Decimal
Binary
Max mantissa
Min mantissa
Max exponent
General
Min exponent
Numerical Methods for Engineers
Arithmetic Operations
Number Representation
Absolute Error
Shift mantissa of largest number
Relative Error
Relative Error
Bounded
Recursion
Initial guess
Test
a=26;
n=10;
MATLAB script
g=1;
sqr.m
sq(1)=g;
for i=2:n
sq(i)= 0.5*(sq(i-1) + a/sq(i-1));
end
hold off
plot([0 n],[sqrt(a) sqrt(a)],'b')
hold on
plot(sq,'r')
plot(a./sq,'r-.')
plot((sq-sqrt(a))/sqrt(a),'g')
grid on
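The recursion in sqr.m can be sketched in Python (a minimal transcription of the MATLAB script, plotting omitted):

```python
# A minimal sketch of sqr.m:
# sq(i) = 0.5*(sq(i-1) + a/sq(i-1)) converges to sqrt(a).
a, n = 26, 10
sq = [1.0]                    # initial guess g = 1
for i in range(1, n):
    sq.append(0.5 * (sq[-1] + a / sq[-1]))
```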
Recursion Algorithm
Recursion
Horner's Scheme
Evaluate polynomial
horner.m
% Horner's scheme
% for evaluating polynomials
a=[ 1 2 3 4 5 6 7 8 9 10 ];
n=length(a) -1 ;
z=1;
b=a(1);
% Note index shift for a
for i=1:n
b=a(i+1)+ z*b;
end
p=b
Horner's Scheme
General order n
>> horner
p =
55
Recurrence relation
>>
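Horner's recurrence b = a(i+1) + z*b can be transcribed into Python; evaluating at z = 1 simply sums the coefficients, reproducing p = 55:

```python
def horner(a, z):
    # Horner's scheme: b = a(1); then b = a(i+1) + z*b for i = 1..n
    b = a[0]
    for coeff in a[1:]:
        b = coeff + z * b
    return b

p = horner([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 1)   # sum of coefficients: 55
```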
Recursion
Order of Operations Matters
recur.m
>> recur
b = 1; c = 1; x = 0.5;
dig=2
    i      delta     Sum    delta(approx)  Sum(approx)
  1.0000   0.4634   0.4634     0.5000        0.5000
  2.0000   0.2432   0.7065     0.2000        0.7000
  3.0000   0.1226   0.8291     0.1000        0.8000
  4.0000   0.0614   0.8905     0.1000        0.9000
  5.0000   0.0306   0.9212     0             0.9000
  6.0000   0.0153   0.9364     0             0.9000
  7.0000   0.0076   0.9440     0             0.9000
  8.0000   0.0037   0.9478     0             0.9000
  9.0000   0.0018   0.9496     0             0.9000
 10.0000   0.0009   0.9505     0             0.9000
 11.0000   0.0004   0.9509     0             0.9000
 12.0000   0.0002   0.9511     0             0.9000
 13.0000   0.0001   0.9512     0             0.9000
 14.0000   0.0000   0.9512     0             0.9000
 15.0000   0.0000   0.9512     0             0.9000
 16.0000  -0.0000   0.9512     0             0.9000
 17.0000  -0.0000   0.9512     0             0.9000
 18.0000  -0.0000   0.9512     0             0.9000
 19.0000  -0.0000   0.9512     0             0.9000
 20.0000  -0.0000   0.9512     0             0.9000
Error Analysis
Lecture 2
Examples
m
b
e
Mantissa
Base
Exponent
Decimal
Binary
Convention
Decimal
Binary
Max mantissa
Min mantissa
Max exponent
General
Min exponent
Error Analysis
Number Representation
Absolute Error
Shift mantissa of largest number
Relative Error
Relative Error
Bounded
Error Propagation
Spherical Bessel Functions
Differential Equation
Solutions
Error Propagation
Spherical Bessel Functions
Forward Recurrence
Forward Recurrence
Unstable
Backward Recurrence
Miller's algorithm
Stable
N ~ x+20
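A minimal Python sketch of Miller's backward-recurrence idea for the spherical Bessel functions j_n(x): start the downward recursion j_{k-1} = (2k+1)/x j_k - j_{k+1} at N ~ x + 20 with arbitrary seeds, then normalize with the known j_0(x) = sin(x)/x. The function name and seed values below are illustrative assumptions.

```python
import math

def sph_jn(n, x, n_extra=20):
    # Miller's algorithm sketch: recurse downward from N ~ x + 20.
    N = int(x) + n + n_extra
    jp, j = 0.0, 1e-30        # arbitrary small seeds at orders N+1, N
    out = 0.0
    for k in range(N, 0, -1):
        jp, j = j, (2 * k + 1) / x * j - jp   # j now holds j_{k-1}
        if k - 1 == n:
            out = j                            # unnormalized j_n
    return out * (math.sin(x) / x) / j         # j holds the j_0 estimate
```

The upward (forward) recurrence would amplify seed errors; running it downward makes j_n the dominant solution, which is why the backward form is stable.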
Error Propagation
Differential Equation
Euler's Method
Example
Discretization
Finite Difference (forward)
Recurrence
euler.m
Error Propagation
Absolute Errors
Delta_y ~ f'(x) Delta_x
Delta_x = x^ - x
Error Propagation
Example
Error Propagation Formula
Multiplication
=>
=>
=>
=>
Error Propagation
Expectation of Errors
Addition
Standard Error
Truncation
Error Expectation
Rounding
Error Propagation
Error Cancellation
Function of one variable
Max. error
Error cancellation
Stand. error
Error Propagation
Condition Number
y = f(x)
x = x(1 + D)
y = y(1 + E)
Well-conditioned problem
Error Propagation
Condition Number
Problem Condition Number
4 Significant Digits
Well-conditioned Algorithm
Lecture 2
Numerical implementation
3.3-3.4
Numerical stability
Partial Pivoting
Equilibration
Full Pivoting
Special Matrices
Iterative Methods
3.3-3.5
Jacobi's method
Gauss-Seidel iteration
Convergence
3.5
3.4
3.6
Lecture 3
Reduction
Step k
Back-Substitution
Row k
Pivotal Elements
Row i
New Row k
Pivotal Elements
New Row i
Gaussian Elimination
2-digit Arithmetic
100% error
n=3
a = [ [0.01 1.0]' [-1.0 0.01]']
tbt.m
b= [1 1]'
r=a^(-1) * b
x=[0 0];
m21=a(2,1)/a(1,1);
tbt.m
a(2,1)=0;
a(2,2) = radd(a(2,2),-m21*a(1,2),n);
b(2) = radd(b(2),-m21*b(1),n);
x(2) = b(2)/a(2,2);
x(1) = (radd(b(1), -a(1,2)*x(2),n))/a(1,1);
x'
1% error
2-digit Arithmetic
Cramer's Rule - Exact
1% error
1% error
100% error
1% error
Infinity-Norm Normalization
Two-Norm Normalization
Lecture 3
Pivoting by Rows
2-digit Arithmetic
1% error
Full Pivoting
Find largest numerical value in same row and column and interchange
Affects ordering of unknowns
Full Pivoting
2-digit Arithmetic
1% error
100% error
Consistent units
Dimensionless unknowns
Numerical implementation
3.3-3.4
Numerical stability
3.3-3.5
Partial Pivoting
Equilibration
Full Pivoting
Special Matrices
Iterative Methods
Mathews
Jacobi's method
Gauss-Seidel iteration
Convergence
3.5
3.4
3.6
Lecture 4
Reduction
Step k
Computation Count
Reduction Step k
Operations
k
Total Computation Count
Reduction
Back Substitution
n-k
n
Reduction for each right-hand side inefficient.
However, RHS may be result of iteration and unknown a priori
(e.g. Euler's method) -> LU Factorization
where
and
1.
Forward substitution
2.
Back substitution
How to determine L and U
Define
After reduction step i-1:
Above and on diagonal
=>
Matrix product
Upper triangular
Lower triangular
Below diagonal
i
Above diagonal
k
Upper triangular
=
Lower diagonal implied
Pivoting if
Interchange rows i and k
else
Pivot element vector
Condition number
Example
Properties
Condition Number
Properties
Condition Number
Ill-Conditioned System
n=4
a = [ [1.0 1.0]' [1.0 1.0001]']
b= [1 2]'
tbt6.m
ai=inv(a);
a_nrm=max( abs(a(1,1)) + abs(a(1,2)) , abs(a(2,1)) + abs(a(2,2)) )
ai_nrm=max( abs(ai(1,1)) + abs(ai(1,2)) , abs(ai(2,1)) + abs(ai(2,2)) )
k=a_nrm*ai_nrm
r=ai * b
x=[0 0];
m21=a(2,1)/a(1,1);
a(2,1)=0;
a(2,2) = radd(a(2,2),-m21*a(1,2),n);
b(2) = radd(b(2),-m21*b(1),n);
x(2) = b(2)/a(2,2);
x(1) = (radd(b(1), -a(1,2)*x(2),n))/a(1,1);
x'
Ill-conditioned system
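The infinity-norm condition number computed in tbt6.m can be sketched in Python (inv2 and norm_inf are illustrative helpers, not library calls):

```python
def inv2(a):
    # inverse of a 2x2 matrix (illustrative helper)
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

def norm_inf(a):
    # infinity norm: maximum absolute row sum
    return max(abs(row[0]) + abs(row[1]) for row in a)

a = [[1.0, 1.0], [1.0, 1.0001]]
k = norm_inf(a) * norm_inf(inv2(a))   # condition number, about 4e4
```

A condition number of order 10^4 means roughly four significant digits can be lost regardless of the algorithm used.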
Well-Conditioned System
4-digit Arithmetic
n=4
a = [ [0.0001 1.0]' [1.0 1.0]']
b= [1 2]'
ai=inv(a);
tbt7.m
a_nrm=max( abs(a(1,1)) + abs(a(1,2)) , abs(a(2,1)) + abs(a(2,2)) )
ai_nrm=max( abs(ai(1,1)) + abs(ai(1,2)) , abs(ai(2,1)) + abs(ai(2,2)) )
k=a_nrm*ai_nrm
r=ai * b
x=[0 0];
m21=a(2,1)/a(1,1);
a(2,1)=0;
a(2,2) = radd(a(2,2),-m21*a(1,2),n);
b(2) = radd(b(2),-m21*b(1),n);
x(2) = b(2)/a(2,2);
x(1) = (radd(b(1), -a(1,2)*x(2),n))/a(1,1);
x'
Algorithmically ill-conditioned
Well-conditioned system
Numerical implementation
3.3-3.4
Numerical stability
3.3-3.5
Partial Pivoting
Equilibration
Full Pivoting
Special Matrices
Iterative Methods
Mathews
Jacobi's method
Gauss-Seidel iteration
Convergence
3.5
3.4
3.6
Lecture 5
y(x,t)
Finite Difference
Harmonic excitation
f(x,t) = f(x) cos(ωt)
Matrix Form
Differential Equation
Boundary Conditions
Tridiagonal Matrix
Symmetric, positive definite: No pivoting needed
LU Factorization
Reduction
Forward Substitution
Back Substitution
LU Factorization:
Forward substitution:
Back substitution:
Total:
2*(n-1) operations
n-1 operations
n-1 operations
4(n-1) ~ O(n) operations
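The O(n) tridiagonal solve sketched above is the classic Thomas algorithm; a Python sketch follows (the argument layout is an assumption for illustration):

```python
def thomas(a, b, c, d):
    # Thomas algorithm: O(n) LU solve for a tridiagonal system.
    # a: sub-diagonal (len n-1), b: diagonal (len n),
    # c: super-diagonal (len n-1), d: right-hand side (len n)
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = (c[i] / m) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the symmetric, positive definite matrices arising from the string problem, no pivoting is needed, so this O(n) scheme is safe.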
0
p super-diagonals
q sub-diagonals
w = p+q+1 bandwidth
b is half-bandwidth
0
=
0
0
0
=
0
0
j
n2
j-i
n(p+2q+1)
..
..
Skyline
..
Storage
Pointers
1 4 9 11 16 20
..
Cholesky Factorization
where
Numerical implementation
3.3-3.4
Numerical stability
3.3-3.5
Partial Pivoting
Equilibration
Full Pivoting
Special Matrices
Iterative Methods
Mathews
Jacobi's method
Gauss-Seidel iteration
Convergence
3.5
3.4
3.6
Lecture 6
x
x
x
0
Rewrite Equations
x
Iterative, Recursive Methods
0
Jacobi's Method
Gauss-Seidel's Method
Jacobi's Method
Sufficient Convergence Condition
Jacobi's Method
Diagonal Dominance
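A Python sketch of Jacobi's method on a small strictly diagonally dominant system (the matrix values are chosen for illustration so the exact solution is x = (1, 1, 1)):

```python
def jacobi(A, b, n_iter=50):
    # Jacobi iteration: x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii.
    # Converges when A is strictly diagonally dominant.
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iter):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = jacobi(A, b)
```

Note that the whole new iterate is built from the old one; updating components in place as they are computed would instead give Gauss-Seidel's method.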
vib_string.m
n=99;
L=1.0;
h=L/(n+1);
k=2*pi;
kh=k*h
x=[h:h:L-h]';
a=zeros(n,n);
f=zeros(n,1);
o=1
Off-diagonal values
a(1,1) =kh^2 - 2;
a(1,2)=o;
for i=2:n-1
a(i,i)=a(1,1);
a(i,i-1) = o;
a(i,i+1) = o;
end
a(n,n)=a(1,1);
a(n,n-1)=o;
nf=round((n+1)/3);
nw=round((n+1)/6);
nw=min(min(nw,nf-1),n-nf);
figure(1)
hold off
nw1=nf-nw;
nw2=nf+nw;
f(nw1:nw2) = h^2*hanning(nw2-nw1+1);
subplot(2,1,1); plot(x,f,'r');
% exact solution
y=inv(a)*f;
subplot(2,1,2); plot(x,y,'b');
vib_string.m
o=1.0
Iterative Solutions
Exact Solution
vib_string.m
o = 0.5
Iterative Solutions
Exact Solution
2.1-2.3
Newton-Raphson Method
2.4
Secant Method
2.4
Multiple roots
Bisection
2.4
2.2
Convergence Speed
Examples
2.1-2.4
Lecture 7
a=2;
n=6;
heron.m
g=2;
% Number of Digits
dig=5;
sq(1)=g;
for i=2:n
sq(i)= 0.5*radd(sq(i-1),a/sq(i-1),dig);
end
'   i     value'
[ [1:n]' sq' ]
hold off
plot([0 n],[sqrt(a) sqrt(a)],'b')
hold on
plot(sq,'r')
plot(a./sq,'r-.')
plot((sq-sqrt(a))/sqrt(a),'g')
grid on
Guess root: x1 = g

 i       value
 1.0000  2.0000
 2.0000  1.5000
 3.0000  1.4167
 4.0000  1.4143
 5.0000  1.4143
 6.0000  1.4143

Iteration Formula: x(i+1) = (x(i) + a/x(i))/2
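heron.m relies on radd.m to round each sum to dig significant digits; a Python stand-in (the radd helper below is a hypothetical reimplementation, not the course's file) shows how 5-digit arithmetic freezes the iteration near 1.4142:

```python
import math

def radd(x, y, dig):
    # hypothetical stand-in for radd.m:
    # add, then round the result to `dig` significant digits
    s = x + y
    if s == 0:
        return 0.0
    e = math.floor(math.log10(abs(s)))
    return round(s, dig - 1 - e)

a, g, n, dig = 2.0, 2.0, 6, 5
sq = [g]
for i in range(1, n):
    sq.append(0.5 * radd(sq[-1], a / sq[-1], dig))
```

Once the rounded sum stops changing, the iterates repeat: the attainable accuracy is set by the arithmetic, not by the algorithm.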
Lecture 7
Realistic stop-criteria
Machine
Accuracy
f(x)
flat f(x)
steep f(x)
Cannot require
Cannot require
G
Non-linear Equation
Goal: Converging series
Rewrite Problem
Example
% f(x) = x^3 - a = 0
% g(x) = x + C*(x^3 - a)
cube.m
a=2;
n=10;
g=1.0;
C=-0.1;
sq(1)=g;
for i=2:n
sq(i)= sq(i-1) + C*(sq(i-1)^3 -a);
end
hold off
plot([0 n],[a^(1./3.) a^(1/3.)],'b')
hold on
plot(sq,'r')
plot( (sq-a^(1./3.))/(a^(1./3.)),'g')
grid on
Iteration
then
Convergence Criteria
Apply successively
Convergence
y=x
Convergent
y=g(x)
Mean-value Theorem
x1
x0
Convergence
y
>
y=x
y=g(x)
Divergent
x0 x1
Rewrite
Convergence
Absolute error
Newton-Raphson Iteration
Newton-Raphson
a=26;
n=10;
sqr.m
g=1;
sq(1)=g;
for i=2:n
sq(i)= 0.5*(sq(i-1) + a/sq(i-1));
end
hold off
plot([0 n],[sqrt(a) sqrt(a)],'b')
hold on
plot(sq,'r')
plot(a./sq,'r-.')
plot((sq-sqrt(a))/sqrt(a),'g')
grid on
Approximate Guess
a=10;
n=10;
div.m
g=0.19;
sq(1)=g;
for i=2:n
sq(i)=sq(i-1) - sq(i-1)*(a*sq(i-1) -1) ;
end
hold off
plot([0 n],[1/a 1/a],'b')
hold on
plot(sq,'r')
plot((sq-1/a)*a,'g')
grid on
legend('Exact','Iteration','Error');
title(['x = 1/' num2str(a)])
Newton-Raphson
Taylor Expansion
Relative Error
Quadratic Convergence
General Convergence Rate
Convergence Exponent
2.
f(x)
Error Exponent
Taylor Series 2nd order
1
Relative Error
Error improvement for each function call
Secant Method
Newton-Raphson
Exponents called Efficiency Index
=>
f(x)
Convergence
x
Slower convergence the higher the order of the root
x
yes
no
Algorithm
n = n+1
yes
no
Interpolation
4.1-4.4
Lagrange interpolation
Triangular families
Newton's iteration method
Equidistant Interpolation
Numerical Differentiation
Numerical Integration
4.3
4.4
4.4
4.4
6.1-6.2
7.1-7.3
Lecture 8
Numerical Interpolation
Given:
Find
for
Numerical Interpolation
Polynomial Interpolation
f(x)
Polynomial Interpolation
Interpolation function
Numerical Interpolation
Polynomial Interpolation
Examples
f(x)
f(x)
Linear Interpolation
Quadratic Interpolation
Numerical Interpolation
Polynomial Interpolation
Taylor Series
Remainder
f(x)
Requirement
p(x)
f(x)
x
Numerical Interpolation
Lagrange Polynomials
f(x)
1
k k+1 k+2
Difficult to program
Difficult to estimate errors
Divisions are expensive
Important for numerical integration
Numerical Interpolation
Triangular Families of Polynomials
Ordered Polynomials
where
Coefficients
found by recursion
Numerical Interpolation
Triangular Families of Polynomials
Polynomial Evaluation
Horner's Scheme
Numerical Interpolation
Newton's Iteration Formula
Standard triangular family of polynomials
Newton's Computational Scheme
Divided Differences
Numerical Interpolation
Newton's Iteration Formula
f(x)
f(x)
Numerical Interpolation
Equidistant Newton Interpolation
Equidistant Sampling
Divided Differences
Stepsize Implied
Numerical Interpolation
Newton's Iteration Formula
function[a] = interp_test(n)
%n=2
h=1/n
xi=[0:h:1]
f=sqrt(1-xi.*xi) .* (1 - 2*xi +5*(xi.*xi));
%f=1-2*xi+5*(xi.*xi)-4*(xi.*xi.*xi);
c=newton_coef(h,f)
m=101
x=[0:1/(m-1):1];
fx=sqrt(1-x.*x) .* (1 - 2*x +5*(x.*x));
%fx=1-2*x+5*(x.*x)-4*(x.*x.*x);
y=newton(x,xi,c);
hold off; b=plot(x,fx,'b'); set(b,'LineWidth',2);
hold on; b=plot(xi,f,'.r') ; set(b,'MarkerSize',30);
b=plot(x,y,'g'); set(b,'LineWidth',2);
yl=lagrange(x,xi,f);
b=plot(x,yl,'xm'); set(b,'Markersize',5);
b=legend('Exact','Samples','Newton','Lagrange')
b=title(['n = ' num2str(n)]); set(b,'FontSize',16);
function[y] = newton(x,xi,c)
% Computes Newton polynomial
% with coefficients c
n=length(c)-1
m=length(x)
y=c(n+1)*ones(1,m);
for i=n-1:-1:0
cc=c(i+1);
xx=xi(i+1);
y=cc+y.*(x-xx);
end
function[c] = newton_coef(h,f)
% Computes Newton Coefficients
% for equidistant sampling h
n=length(f)-1
c=f; c_old=f; fac=1;
for i=1:n
fac=i*h;
for j=i:n
c(j+1)=(c_old(j+1)-c_old(j))/fac;
end
c_old=c;
end
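The pair newton_coef/newton above can be transcribed into Python for equidistant samples (the function names follow the MATLAB files; the test data below are assumptions for illustration):

```python
def newton_coef(h, f):
    # divided differences for equidistant samples x_i = x0 + i*h,
    # dividing by fac = i*h at level i (as in newton_coef.m)
    n = len(f) - 1
    c = list(f)
    for i in range(1, n + 1):
        for j in range(n, i - 1, -1):
            c[j] = (c[j] - c[j - 1]) / (i * h)
    return c

def newton_eval(x, x0, h, c):
    # Horner-style evaluation of the Newton form (as in newton.m)
    y = c[-1]
    for i in range(len(c) - 2, -1, -1):
        y = c[i] + y * (x - (x0 + i * h))
    return y

# assumed test data: f(x) = x^2 sampled at x = 0, 1, 2, 3
c = newton_coef(1.0, [0.0, 1.0, 4.0, 9.0])
```

Since f here is itself a polynomial of degree 2, the interpolant reproduces it exactly between the samples.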
Interpolation
4.1-4.4
Lagrange interpolation
Triangular families
Newton's iteration method
Equidistant Interpolation
Numerical Differentiation
Numerical Integration
4.3
4.4
4.4
4.4
6.1-6.2
7.1-7.3
Lecture 9
Numerical Differentiation
Taylor Series
n=1
f(x)
First order
h
Numerical Differentiation
Second order
f(x)
n=2
Second Derivatives
n=2
Forward Difference
n=3
Central Difference
Numerical Integration
Lagrange Interpolation
f(x)
Equidistant Sampling
Properties
Numerical Integration
f(x)
n=1
Trapezoidal Rule
Simpson's Rule
f(x)
n=2
Numerical Integration
Simpson's Rule
Error Analysis
f(x)
Local Error
Global Error
Trapezoidal Rule
N Intervals
Local Error
Global Error
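A Python sketch of the two composite rules; f(x) = x^3 is chosen because Simpson's rule integrates cubics exactly, while the trapezoidal rule is left with its O(h^2) global error:

```python
def trapezoid(f, a, b, N):
    # composite trapezoidal rule on N intervals
    h = (b - a) / N
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, N))
    return h * s

def simpson(f, a, b, N):
    # composite Simpson's rule; N must be even
    h = (b - a) / N
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, N, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, N, 2))
    return h * s / 3

f = lambda x: x ** 3   # exact integral over [0, 1] is 1/4
```

Even with only N = 2 intervals, Simpson's rule returns 1/4 exactly; the trapezoidal rule needs small h to get close.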
9
9.1
9.2
9.4
Error analysis
Runge-Kutta Methods
9.5
9.7
9.8
Shooting method
Direct Finite Difference methods
9.8
9.9
Lecture 10
non-linear in y
Example
Discretization
Finite Difference (forward)
Recurrence
euler.m
+
Partial Derivatives
Discretization
Recursion Algorithm
with
Local Error
Example
Error Analysis?
Euler's Method
O(h)
Error Bound
Runge-Kutta Recursion
Match 2nd order Taylor series
4th Order Runge-Kutta
Predictor-corrector method
Recurrence
h=1.0;
x=[0:0.1*h:10];
rk.m
y0=0;
y=0.5*x.^2+y0;
figure(1); hold off
a=plot(x,y,'b'); set(a,'Linewidth',2);
% Euler's method, forward finite difference
xt=[0:h:10]; N=length(xt);
yt=zeros(N,1); yt(1)=y0;
for n=2:N
yt(n)=yt(n-1)+h*xt(n-1);
end
hold on; a=plot(xt,yt,'xr'); set(a,'MarkerSize',12);
% Runge Kutta
fxy='x'; f=inline(fxy,'x','y');
[xrk,yrk]=ode45(f,xt,y0);
a=plot(xrk,yrk,'.g'); set(a,'MarkerSize',30);
a=title(['dy/dx = ' fxy ', y_0 = ' num2str(y0)])
set(a,'FontSize',16);
b=legend('Exact',['Euler, h=' num2str(h)],
'Runge-Kutta (Matlab)'); set(b,'FontSize',14);
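The Euler recurrence in rk.m, yt(n) = yt(n-1) + h*xt(n-1) for dy/dx = x, can be sketched in Python; halving h shrinks the O(h) global error accordingly (the exact solution y = x^2/2 gives y(10) = 50):

```python
def euler(f, x0, y0, h, N):
    # forward Euler: y_{n+1} = y_n + h * f(x_n, y_n)
    x, y = x0, y0
    for _ in range(N):
        y += h * f(x, y)
        x += h
    return y

# dy/dx = x with y(0) = 0; exact answer at x = 10 is 50
y_h1 = euler(lambda x, y: x, 0.0, 0.0, 1.0, 10)     # h = 1.0 gives 45
y_h01 = euler(lambda x, y: x, 0.0, 0.0, 0.1, 100)   # h = 0.1 gives about 49.5
```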
9
9.1
9.2
9.4
Error analysis
Runge-Kutta Methods
9.5
9.7
9.8
Shooting method
Direct Finite Difference methods
9.8
9.9
Lecture 11
Initial
Conditions
Matrix form
Differential Equation
Boundary
Conditions
Shooting Method
Shooting Iteration
Discretization
Finite Differences
Matrix Equations
Finite Differences
Difference Equations
y(x,t)
Finite Difference
Harmonic excitation
f(x,t) = f(x) cos(ωt)
Matrix Form
Differential Equation
Boundary Conditions
Tridiagonal Matrix
Symmetric, positive definite: No pivoting needed
N+1
Central Difference
Difference Equations
Backward Difference
O(h^3)
General Boundary Conditions
O(h^4)
O(h^2)
Minimization Problems
Least Square Approximation
Normal Equation
Parameter Estimation
Curve fitting
Optimization Methods
Simulated Annealing
Traveling salesman problem
Genetic Algorithms
Lecture 12
Minimization Problems
Data Modeling Curve Fitting
Linear Model
Non-linear Model
Overdetermined System
m measurements
n unknowns
m>n
Least Square Solution
Minimize Residual Norm
n
Theorem
A
Proof
q.e.d
Normal Equation
)
C
D
Normal Equation
Residual Vector
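A Python sketch of solving the normal equations (A^T A)c = A^T b for a straight-line fit y ~ c0 + c1*x; the data values are assumptions chosen so the exact answer is c0 = 1, c1 = 2:

```python
# assumed data, lying exactly on y = 1 + 2x
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

# build A^T A (2x2) and A^T b (2) for rows A_i = [1, x_i]
m = len(xs)
S1, Sx, Sxx = m, sum(xs), sum(x * x for x in xs)
Sy, Sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))

# solve the 2x2 normal equations by Cramer's rule
det = S1 * Sxx - Sx * Sx
c0 = (Sxx * Sy - Sx * Sxy) / det
c1 = (S1 * Sxy - Sx * Sy) / det
```

With m > n the system is overdetermined; the normal equations pick the c that minimizes the residual norm ||b - Ac||.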
Optimization Problems
Non-linear Models
Non-linear models
E(c)
Local
Minimum
Measured values
Global
Minimum
Model Parameters
xi
Non-linear models often have multiple, local minima. A locally linear, least square
approximation may therefore find a local minimum instead of the global minimum.
Optimization Algorithms
Simulated Annealing
Boltzman Probability Distribution
High temperature T
Low temperature T
1.
2.
3.
4.
Simulated Annealing
Example: Traveling Salesman Problem
Objective:
Visit N cities across the US in arbitrary
order, in the shortest time possible.
Metropolis Algorithm
1.
2.
3.
4.
East: mu = +1
West: mu = -1
Simulated Annealing
Example: Traveling Salesman Problem
% Travelling salesman problem
salesman.m
% Create random city distribution
n=20; x=random('unif',-1,1,n,1); y=random('unif',-1,1,n,1);
gam=1; mu=sign(x);
% End up where you start. Add starting point to end
x=[x' x(1)]'; y=[y' y(1)]'; mu=[mu' mu(1)]';
figure(1); hold off; g=plot(x,y,'.r'); set(g,'MarkerSize',20);
c0=cost(x,y,mu,gam); k=1; % Boltzman constant
nt=50; nr=200; % nt: temp steps. nr: city switches each T
cp=zeros(nr,nt);
iran=inline('round(random(d,1.5001,n+0.4999))','d','n');
for i=1:nt
T=1.0 -(i-1)/nt
for j=1:nr
% switch two random cities
ic1=iran('unif',n); ic2=iran('unif',n);
xs=x(ic1); ys=y(ic1); ms=mu(ic1);
x(ic1)=x(ic2); y(ic1)=y(ic2); mu(ic1)=mu(ic2);
x(ic2)=xs; y(ic2)=ys; mu(ic2)=ms;
p=random('unif',0,1); c=cost(x,y,mu,gam);
if (c < c0 | p < exp(-(c-c0)/(k*T))) % accept
c0=c;
else
% reject and switch back
xs=x(ic1); ys=y(ic1); ms=mu(ic1);
x(ic1)=x(ic2); y(ic1)=y(ic2); mu(ic1)=mu(ic2);
x(ic2)=xs; y(ic2)=ys; mu(ic2)=ms;
end
cp(j,i)=c0;
end
figure(2); plot(reshape(cp,nt*nr,1)); drawnow;
figure(1); hold off; g=plot(x,y,'.r'); set(g,'MarkerSize',20);
hold on; plot(x,y,'b');
g=plot(x(1),y(1),'.g'); set(g,'MarkerSize',30);
p=plot([0 0],[-1 1],'r--'); set(g,'LineWidth',2); drawnow;
end
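The Metropolis acceptance rule used in salesman.m can be sketched on a one-dimensional toy cost (all parameter values here are illustrative assumptions, not the lecture's settings):

```python
import random, math

def anneal(cost, x0, n_temps=60, n_moves=200, seed=1):
    # Metropolis / simulated-annealing sketch: always accept downhill
    # moves, accept uphill moves with probability exp(-dc/T),
    # and cool the temperature T linearly.
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    for i in range(n_temps):
        T = 1.0 - i / n_temps
        for _ in range(n_moves):
            xn = x + rng.uniform(-0.5, 0.5)
            cn = cost(xn)
            if cn < c or rng.random() < math.exp(-(cn - c) / (1e-9 + T)):
                x, c = xn, cn
    return x, c

x, c = anneal(lambda x: (x - 3.0) ** 2, x0=-5.0)
```

At high T almost any move is accepted, which lets the search escape local minima; as T drops the walk settles into the best basin it has found.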
cost.m
Plot: y1 = x sin(x) (solid), y2 = sin(x) (dash-dot), for x from 0 to 0.9.
Plot: y(t) = sin(alpha*t) vs. time (sec), t from 0 to 0.9.
x(i+1) = (1/2) (x(i) + a/x(i)),   for i = 0, 1, 2, 3, . . .
The value x(i) will converge to sqrt(a) as i goes to infinity. Write a program that reads in the value of a
interactively and uses this algorithm to compute the square root of a.
Test your program as the maximum number of iterations of the algorithm is
increased from 1, 2, 3, . . . and determine how many significant digits of precision you
obtain for each. How many iterations are necessary to reach the machine precision of
matlab?
1/2! + 1/3! + 1/4! + 1/5! + . . .
Test your program as you increase the number of terms in the series. Determine how many
significant digits of precision you obtain in your answer as a function of the number
of terms in the series. How many terms are necessary to reach machine precision?
13.002
PS #1 Solution
1.2.4. (a)
(1.0110101)2 = (2^0)+(2^-2)+(2^-3)+(2^-5)+(2^-7)= 1.41406250000000
1.2.4. (b)
(11.0010010001)2 = (2^1)+(2^0)+(2^-3)+(2^-6)+(2^-10)= 3.14160156250000
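The two conversions can be checked with a small Python helper (bits_to_float is a hypothetical name, used only for this check):

```python
def bits_to_float(s):
    # convert a binary string like '1.0110101' to a float
    whole, _, frac = s.partition('.')
    v = float(int(whole, 2))
    for i, bit in enumerate(frac, start=1):
        v += int(bit) * 2.0 ** -i
    return v

a = bits_to_float('1.0110101')       # 1.4140625
b = bits_to_float('11.0010010001')   # 3.1416015625
```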
1.2.5. (a)
1.2.5. (b)
1.2.13. (b)
(1/10 + 1/3) + 1/5:
(0.1101)_2 x 2^-3 + (0.1011)_2 x 2^-1 = (0.1110)_2 x 2^-1
1.3.1.(b)
98350 - 98000 = 350 (absolute error)
350/98350 = 0.00355871886121 (relative error) (2 significant digits)
1.3.1.(c)
x2(new) = (-b - sqrt(b^2 - 4ac)) / (2a)
        = (-b - sqrt(b^2 - 4ac)) (-b + sqrt(b^2 - 4ac)) / [2a (-b + sqrt(b^2 - 4ac))]
        = 2c / (-b + sqrt(b^2 - 4ac))
1.3.13. (a)
x^2 - 1000.001x + 1 = 0
x1 = (-b + sqrt(b^2 - 4ac)) / (2a) = 1000
x2 = 2c / (-b + sqrt(b^2 - 4ac)) = 0.001
b b 2 4ac
Programming Exercise 1
% script M-file findroots.m
a=input('Enter the value of "a" from ax^2+bx+c=0 :');
b=input('Enter the value of "b" from ax^2+bx+c=0 :');
c=input('Enter the value of "c" from ax^2+bx+c=0 :');
if b>= 0;
    sign=1;
else sign=-1;
end;
q=-0.5*(b+sign*sqrt((b^2)-4*a*c));
x1=q/a;
x2=c/q;
xx1=num2str(x1);
xx2=num2str(x2);
disp(['X1 is equal to ', xx1])
disp(['X2 is equal to ', xx2])
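findroots.m avoids cancellation by computing q = -(b + sign(b)*sqrt(b^2 - 4ac))/2 and then x1 = q/a, x2 = c/q; a Python transcription, applied to problem 1.3.13(a):

```python
import math

def quadroots(a, b, c):
    # cancellation-free quadratic roots:
    # q = -(b + sign(b)*sqrt(b^2 - 4ac))/2, x1 = q/a, x2 = c/q
    sgn = 1.0 if b >= 0 else -1.0
    q = -0.5 * (b + sgn * math.sqrt(b * b - 4 * a * c))
    return q / a, c / q

x1, x2 = quadroots(1.0, -1000.001, 1.0)   # roots 1000 and 0.001
```

Choosing the sign so that b and the square root are added (never subtracted) keeps both roots accurate even when they differ by many orders of magnitude.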
Programming Exercise 2
x(1)=1/2;r(1)=0.994; p(1)=1; p(2)=0.497; q(1)=1; q(2)=0.497;
for n=2:11
x(n)=(1/2)*x(n-1);
end
for n=2:11
r(n)=(1/2)*(r(n-1));
end
for n=3:11
p(n)=(3/2)*p(n-1)-(1/2)*p(n-2);
end
for n=3:11
q(n)=(5/2)*q(n-1)-q(n-2);
end
h=1:11;
figure(1)
plot(h, x(h)-r(h),'bd', h, x(h)-p(h),'r+', h, x(h)-q(h),'g')
grid on
legend('r(n)','p(n)','q(n)')
% header order matches the printed columns: x, p, r, q
fprintf(' n     x(n)       p(n)       r(n)       q(n)\n')
for i = h
fprintf('%2d %+10.8f %+10.8f %+10.8f %+10.8f\n', i, x(i), p(i), r(i), q(i))
end
fprintf(' n   x(n)-r(n)  x(n)-p(n)  x(n)-q(n)\n')
for i = h
fprintf('%2d %+10.8f %+10.8f %+10.8f\n', i, x(i)-r(i), x(i)-p(i), x(i)-q(i))
end
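A Python transcription of the four sequences: the exact x(n) = (1/2)^n and the perturbed recurrences seeded with 0.994 and 0.497. The stable recurrences keep the initial error bounded, while q(n) amplifies it until it dominates:

```python
# exact sequence and three recurrences with slightly perturbed seeds
x = [0.5]
r = [0.994]
p = [1.0, 0.497]
q = [1.0, 0.497]
for n in range(1, 11):
    x.append(0.5 * x[-1])            # exact: x(n) = (1/2)^n
    r.append(0.5 * r[-1])            # stable one-term recurrence
for n in range(2, 11):
    p.append(1.5 * p[-1] - 0.5 * p[-2])   # mildly unstable
    q.append(2.5 * q[-1] - q[-2])         # unstable: error grows as 2^n
```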
 n     x(n)        p(n)        r(n)        q(n)
 1  +0.50000000 +1.00000000 +0.99400000 +1.00000000
 2  +0.25000000 +0.49700000 +0.49700000 +0.49700000
 3  +0.12500000 +0.24550000 +0.24850000 +0.24250000
 4  +0.06250000 +0.11975000 +0.12425000 +0.10925000
 5  +0.03125000 +0.05687500 +0.06212500 +0.03062500
 6  +0.01562500 +0.02543750 +0.03106250 -0.03268750
 7  +0.00781250 +0.00971875 +0.01553125 -0.11234375
 8  +0.00390625 +0.00185938 +0.00776563 -0.24817188
 9  +0.00195313 -0.00207031 +0.00388281 -0.50808594
10  +0.00097656 -0.00403516 +0.00194141 -1.02204297
11  +0.00048828 -0.00501758 +0.00097070 -2.04702148

 n   x(n)-r(n)   x(n)-p(n)   x(n)-q(n)
 1  -0.49400000 -0.50000000 -0.50000000
 2  -0.24700000 -0.24700000 -0.24700000
 3  -0.12350000 -0.12050000 -0.11750000
 4  -0.06175000 -0.05725000 -0.04675000
 5  -0.03087500 -0.02562500 +0.00062500
 6  -0.01543750 -0.00981250 +0.04831250
 7  -0.00771875 -0.00190625 +0.12015625
 8  -0.00385938 +0.00204687 +0.25207813
 9  -0.00192969 +0.00402344 +0.51003906
10  -0.00096484 +0.00501172 +1.02301953
11  -0.00048242 +0.00550586 +2.04750977
>>
1
0 x1
1
x
2
= e
1
e
1
2 e
x
3
1.
=
0
1
0
x1
1
1 1 1
x = 1
1
2 1
x3
1
x1 =
3
x2 = 2
x3 = 6
1
0
x1
1
0
1 0 1
x = 0
1
2 0 x3
0
x1
= 2
x2
=
1
x3 = 2
2.
clear
clc
alfa=100;
exp(-alfa)];
exp(-alfa)];
oA=A;
oB=B;
[mm,n]=size(A);
A=[A B];
L=zeros(mm,n);
for i=1:mm-1
for j=i+1:mm
m(j,i)=A(j,i)/A(i,i);
L(j,i)=m(j,i);
for k=i:n+1
A(j,k)=A(j,k)-m(j,i)*A(i,k);
end
end
end
U=A(:,1:n);
L=L+eye(mm,n);
B=A(:,n+1);
x=zeros(n,1);
for j=mm:-1:1
x(j)=(B(j)-A(j,j+1:n)*x(j+1:n))/A(j,j);
end
x
oB-oA*x
3.
alpha = 0:   x1 = 3,       x2 = 2,       x3 = 6
alpha = 5:   x1 = 1.9933,  x2 = 0.9866,  x3 = 1.9934
             B - Ax ~ 0: the solutions are a very good approximation.
alpha = 10:  x1 = 2,       x2 = 0.9999,  x3 = 2
alpha = 20:  x1 = 2,       x2 = 1,       x3 = 2
alpha = 40:  x1 = 0,       x2 = 1,       x3 = 0,   Ax = 0
4.
clear
clc
alfa=40;
exp(-alfa)];
oA=A;
oB=B;
[mm,n]=size(A);
A=[A B];
L=zeros(mm,n);
for i=1:mm-1
[Y I]=max(abs(A(i:mm,i)));
temp_store1=A(I,:);
temp_store2=B(I);
A(I,:)=A(i,:);
B(I)=B(i);
A(i,:)=temp_store1;
B(i)=temp_store2;
for j=i+1:mm
m(j,i)=A(j,i)/A(i,i);
L(j,i)=m(j,i);
for k=i:n+1
A(j,k)=A(j,k)-m(j,i)*A(i,k);
end
end
end
U=A(:,1:n);
L=L+eye(mm,n);
B=A(:,n+1);
x=zeros(n,1);
for j=mm:-1:1
x(j)=(B(j)-A(j,j+1:n)*x(j+1:n))/A(j,j);
end
x
oB-oA*x
5. Switch x1 and x2.
13.002
Introduction to Numerical Methods for Engineers
Take Home Exam
Issued: Thursday, Mar. 10, 2005
Due: Friday, Mar. 18, 2005
Problem 1.
Problem 2.
Name:
Course:
E-mail:
1.00
6.001
10.001
Self-Assessment Quiz (no grade)
A. Please circle the number representing your level of knowledge and understanding in each of the following areas (0: no knowledge, 10: Expert):
Linear algebra                    0 1 2 3 4 5 6 7 8 9 10
Differentiation                   0 1 2 3 4 5 6 7 8 9 10
Integration                       0 1 2 3 4 5 6 7 8 9 10
Ordinary Differential equations   0 1 2 3 4 5 6 7 8 9 10
Fortran                           0 1 2 3 4 5 6 7 8 9 10
C                                 0 1 2 3 4 5 6 7 8 9 10
C++                               0 1 2 3 4 5 6 7 8 9 10
Java                              0 1 2 3 4 5 6 7 8 9 10
Scheme                            0 1 2 3 4 5 6 7 8 9 10
Matlab                            0 1 2 3 4 5 6 7 8 9 10
Other prog. language (specify)    0 1 2 3 4 5 6 7 8 9 10
B. Create a Matlab script that computes and plots the following two representations of the sine function for small arguments:
f (x) = sin x
f (x) =
(1)
(2)
for x = 10^(n/100), n = -700, . . ., 300. The function round will round the
argument to the nearest integer. Describe what you think is going on.