These slides are a supplement to the book Numerical Methods with Matlab: Implementations and Applications, by Gerald W. Recktenwald, © 2000–2007, Prentice-Hall, Upper Saddle River, NJ. These slides are copyright © 2000–2007 Gerald W. Recktenwald. The PDF version of these slides may be downloaded or stored or printed only for noncommercial, educational use. The repackaging or sale of these slides in any form, without written consent of the author, is prohibited. The latest version of this PDF file, along with other supplemental material for the book, can be found at www.prenhall.com/recktenwald or web.cecs.pdx.edu/~gerry/nmm/.
Version 0.82
November 6, 2007
Overview
- Fitting a line to data
    - Geometric interpretation
    - Residuals of the overdetermined system
    - The normal equations
- Nonlinear fits via coordinate transformation
- Fitting arbitrary linear combinations of basis functions
    - Mathematical formulation
    - Solution via normal equations
    - Solution via QR factorization
- Polynomial curve fits with the built-in polyfit function
- Multivariate fitting
Fitting a Line to Data

Given m data pairs

(xi, yi),  i = 1, ..., m

find the coefficients α and β such that

F(x) = αx + β

is a good fit to the data.

Questions:
- How do we define "good fit"?
- How do we compute α and β after a definition of "good fit" is obtained?
Plausible Fits

Plausible fits are obtained by adjusting the slope (α) and intercept (β). Here is a graphical representation of potential fits to a particular set of data. Which of the lines provides the best fit?

[Figure: several candidate lines drawn through the same (x, y) data; x from 1 to 6, y from 1 to 5]

The Residual

The difference between the given yi value and the fit function evaluated at xi is

ri = yi - F(xi) = yi - (α xi + β)

ri is the residual for the data pair (xi, yi). It is the vertical distance between the known data and the fit function.

[Figure: vertical distances between the data points and a candidate fit line; x from 1 to 6, y from 1 to 5]
The least squares fit minimizes the sum of squares of the residuals:

minimize  Σ_{i=1}^{m} ri²,   or equivalently,   minimize  Σ_{i=1}^{m} [yi - (α xi + β)]²

An alternative to minimizing the residual is to minimize the orthogonal distance to the line. Minimizing Σ di² is known as the Orthogonal Distance Regression problem. See, e.g., Åke Björck, Numerical Methods for Least Squares Problems, 1996, SIAM, Philadelphia.

[Figure: orthogonal distances d1, d2, d3, d4 from the data points at x1, x2, x3, x4 to the line]
The best fit is the choice of α and β for which

ρ = Σ_{i=1}^{m} ri²

is a minimum. Setting ∂ρ/∂α = 0 (with β held constant) and ∂ρ/∂β = 0 (with α held constant) gives the pair of equations

Sxx α + Sx β = Sxy    (1)
Sx α + m β = Sy       (2)

where

Sxx = Σ_{i=1}^{m} xi xi,   Sxy = Σ_{i=1}^{m} xi yi,   Sx = Σ_{i=1}^{m} xi,   Sy = Σ_{i=1}^{m} yi
Note: Sxx, Sx, Sxy, and Sy can be computed directly from the given (xi, yi) data. Thus, Equations (1) and (2) are two equations for the two unknowns, α and β.
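In Matlab, the sums and the 2-by-2 system of Equations (1) and (2) can be formed and solved directly (a minimal sketch; x and y are assumed to hold the data as vectors):

Sx  = sum(x);     Sy  = sum(y);      % sums over the given data
Sxx = sum(x.^2);  Sxy = sum(x.*y);
ab  = [Sxx Sx; Sx length(x)] \ [Sxy; Sy];   % ab(1) = alpha, ab(2) = beta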
Solving Equations (1) and (2) for α and β gives

α = (1/d) (m Sxy - Sx Sy)
β = (1/d) (Sxx Sy - Sxy Sx)

with

d = m Sxx - (Sx)²

Now, let's rederive the equations for the fit. This will give us insight into the process of fitting arbitrary linear combinations of functions. For any two points we can write

α x1 + β = y1    (3)
α x2 + β = y2    (4)

or

[ x1  1 ] [ α ]   [ y1 ]
[ x2  1 ] [ β ] = [ y2 ]    (5)
Writing out the αx + β = y equation for all of the known points (xi, yi), i = 1, ..., m gives the overdetermined system

[ x1  1 ]            [ y1 ]
[ x2  1 ]  [ α ]     [ y2 ]
[  :  : ]  [ β ]  =  [  : ]
[ xm  1 ]            [ ym ]

or Ac = y, where c = (α, β)ᵀ.

Note: We cannot solve Ac = y with Gaussian elimination. Unless the system is consistent (i.e., unless y lies in the column space of A) it is impossible to find the c = (α, β)ᵀ that exactly satisfies all m equations. The system is consistent only if all the data points lie along a single line.

Instead, compute ρ = ‖r‖₂², where r = y - Ac. Minimizing ρ requires

∂ρ/∂c = -2Aᵀy + 2AᵀAc = 0

or

(AᵀA)c = Aᵀy

This is the matrix form of the normal equations.
linefit.m
The linefit function fits a line to a set of data by solving the normal equations.
function [c,R2] = linefit(x,y)
% linefit  Least-squares fit of data to y = c(1)*x + c(2)
%
% Synopsis:  c = linefit(x,y)
%            [c,R2] = linefit(x,y)
%
% Input:  x,y = vectors of independent and dependent variables
%
% Output: c  = vector of slope, c(1), and intercept, c(2) of least sq. line fit
%         R2 = (optional) coefficient of determination;  0 <= R2 <= 1
%              R2 close to 1 indicates a strong relationship between y and x
if length(y)~= length(x),  error('x and y are not compatible');  end

x = x(:);  y = y(:);       % Make sure that x and y are column vectors
A = [x ones(size(x))];     % m-by-n matrix of overdetermined system
c = (A'*A)\(A'*y);         % Solve normal equations
if nargout>1
  r = y - A*c;
  R2 = 1 - (norm(r)/norm(y-mean(y)))^2;
end
R2 Statistic

R² is a measure of how well the fit function follows the trend in the data; 0 ≤ R² ≤ 1.

Define:
- ŷ is the value of the fit function at the known data points. For a line fit, ŷi = c1 xi + c2.
- ȳ is the average of the y values: ȳ = (1/m) Σ yi

Then:

R² = 1 - ‖r‖₂² / Σ (yi - ȳ)²

When R² ≈ 1 the fit function follows the trend of the data. When R² ≈ 0 the fit is not significantly better than approximating the data by its mean.

[Figures: left, the line fit to the sample data with R² = 0.934; right, vertical lines showing the contributions to Σ (yi - ȳ)², the distances between the given y data and the average ȳ]
Example: Bulk Modulus of Silicon Carbide

Consider the variation of the bulk modulus of silicon carbide as a function of temperature (cf. Example 9.4):

T (°C)      20   500   1000   1200   1400   1500
G (GPa)    203   197    191    188    186    184

>> [t,D,labels] = loadColData('SiC.dat',6,5);
>> g = D(:,1);
>> [c,R2] = linefit(t,g)
c =
   -0.0126
  203.3319
R2 =
    0.9985

[Figure: bulk modulus data and the line fit; Temperature (°C) from 0 to 1500 on the x axis, Bulk Modulus (GPa) on the y axis]

Nonlinear Fits via Coordinate Transformation

Some nonlinear fit functions y = F(x) can be transformed to an equation of the form v = αu + β:
- The linear least squares fit to a line is performed on the transformed variables.
- Parameters of the nonlinear fit function are obtained by transforming back to the original variables.
- The linear least squares fit to the transformed equations does not yield the same fit coefficients as a direct solution to the nonlinear least squares problem involving the original fit function.

Examples:

ln y = αx + β
ln y = α ln x + β
ln(y/x) = αx + β
Consider

y = c1 e^(c2 x)    (6)

Taking the logarithm of both sides yields

ln y = ln c1 + c2 x

Introducing the variables

v = ln y,   b = ln c1,   a = c2

transforms equation (6) to

v = ax + b

The preceding steps are equivalent to graphically obtaining c1 and c2 by plotting the data on semilog paper.

[Figures: the data and the fit y = c1 e^(c2 x) on linear axes and on semilog axes; x from 0 to 1.5]
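These steps translate directly into Matlab. A minimal sketch, assuming x and y are vectors of the data and that linefit (shown earlier) is available:

v = log(y(:));         % v = ln(y); log is the natural logarithm in Matlab
a = linefit(x(:),v);   % fit v = a(1)*x + a(2)
c2 = a(1);             % slope gives c2
c1 = exp(a(2));        % intercept is ln(c1), so c1 = exp(intercept)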
Consider

y = c1 x^(c2)    (7)

Taking the logarithm of both sides yields

ln y = ln c1 + c2 ln x

Introduce the transformed variables

v = ln y,   u = ln x,   b = ln c1,   a = c2

and equation (7) can be written

v = au + b

The preceding steps are equivalent to graphically obtaining c1 and c2 by plotting the data on log-log paper.

[Figures: the data and the fit y = c1 x^(c2) on linear axes and on log-log axes]
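The Matlab version is nearly identical to the exponential case, except that x is transformed as well (a minimal sketch under the same assumptions):

u = log(x(:));  v = log(y(:));   % u = ln(x), v = ln(y)
a = linefit(u,v);                % fit v = a(1)*u + a(2)
c2 = a(1);  c1 = exp(a(2));      % a = c2, b = ln(c1)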
Consider

y = c1 x e^(c2 x)

Dividing both sides by x and taking the logarithm yields

ln(y/x) = c2 x + ln c1

Introducing the variables

v = ln(y/x),   b = ln c1,   a = c2

gives

v = ax + b

The preceding steps are equivalent to graphically obtaining c1 and c2 by plotting the data on semilog paper.

[Figures: the data and the fit y = c1 x e^(c2 x) on linear axes and on semilog axes; x from 0 to 2]
xexpfit.m

The xexpfit function fits y = c1 x e^(c2 x) to data:
- Natural log of element-by-element division
- Fit is performed by linefit
- Extract parameters from transformation

[Figure: data and the resulting fit; x from 0 to 1.5, y up to 0.7]
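A minimal sketch of such a function, following the three steps above (the book's actual listing may differ in its details):

function c = xexpfit(x,y)
% xexpfit  Least squares fit of data to y = c(1)*x*exp(c(2)*x)
%          A sketch based on the steps listed above.
x = x(:);  y = y(:);
v = log(y./x);           % natural log of element-by-element division
a = linefit(x,v);        % fit v = a(1)*x + a(2) with linefit
c = [exp(a(2)); a(1)];   % transform back: c1 = exp(b), c2 = a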
Summary of Transformations

- Transform (x, y) data as needed
- Use linefit
- Transform results of linefit back

>> x = ...            % original data
>> y = ...
>> u = ...            % transform the data
>> v = ...
>> a = linefit(u,v)
>> c = ...            % transform the results back

The line fit, revisited in terms of basis functions:

1. m data pairs are given: (xi, yi), i = 1, ..., m.
2. The fit function y = F(x) = c1 x + c2 has n = 2 basis functions, f1(x) = x and f2(x) = 1.
3. Evaluating the fit function for each of the m data points gives an overdetermined system of equations Ac = y, where c = [c1, c2]ᵀ, y = [y1, y2, ..., ym]ᵀ, and

A = [ f1(x1)  f2(x1) ]   [ x1  1 ]
    [ f1(x2)  f2(x2) ] = [ x2  1 ]
    [   :       :    ]   [  :  : ]
    [ f1(xm)  f2(xm) ]   [ xm  1 ]
4. The least-squares principle defines the best fit as the values of c1 and c2 that minimize

ρ(c1, c2) = ‖y - Ac‖₂²

5. Minimizing ρ leads to the normal equations

(AᵀA)c = Aᵀy

6. Solving the normal equations gives the slope c1 and intercept c2 of the best fit line.

The rest of these slides generalize this procedure:
- Definition of fit function and basis functions
- Formulation of the overdetermined system
- Solution via normal equations: fitnorm
- Solution via QR factorization: fitqr and \
The fit function is a linear combination of basis functions:

F(x) = Σ_{j=1}^{n} cj fj(x)

The basis functions f1(x), f2(x), ..., fn(x) are chosen by you, the person making the fit. The coefficients c1, c2, ..., cn are determined by the least squares method.

F(x) can be any combination of functions that are linear in the cj. Thus

1,   x,   x²,   x^(2/3),   x e^(4x)

are all valid basis functions. On the other hand,

sin(c1 x),   e^(c3 x),   x^(c2)

are not valid basis functions as long as the cj are the parameters of the fit. For example, the fit function for a cubic polynomial,

F(x) = c1 x³ + c2 x² + c3 x + c4,

has the basis functions x³, x², x, 1.
The objective is to find the cj such that F(xi) ≈ yi. Since F(xi) ≠ yi in general, the residual for each data point is

ri = yi - F(xi) = yi - Σ_{j=1}^{n} cj fj(xi)

For the case of n = 3 basis functions, the fit function would interpolate the data if the equations

c1 f1(x1) + c2 f2(x1) + c3 f3(x1) = y1,
c1 f1(x2) + c2 f2(x2) + c3 f3(x2) = y2,
  ...
c1 f1(xm) + c2 f2(xm) + c3 f3(xm) = ym

were all satisfied. For a least squares fit, the equations are not all satisfied, i.e., the fit function F(x) does not pass through the yi data.
The matrix form of the overdetermined system is

Ac = y,

where

A = [ f1(x1)  f2(x1)  ...  fn(x1) ]        [ c1 ]        [ y1 ]
    [ f1(x2)  f2(x2)  ...  fn(x2) ]    c = [ c2 ]    y = [ y2 ]
    [   :       :            :    ]        [  : ]        [  : ]
    [ f1(xm)  f2(xm)  ...  fn(xm) ]        [ cn ]        [ ym ]

If F(x) cannot interpolate the data, then the preceding matrix equation cannot be solved exactly: y does not lie in the column space of A. The least-squares method provides the compromise solution that minimizes ‖r‖₂ = ‖y - Ac‖₂. The c that minimizes ‖r‖₂ is the solution of the normal equations

(AᵀA)c = Aᵀy
Example: not knowing anything more about the data, we can start by fitting a polynomial to the data.

[Figure: scattered data points; x from 0 to 6, y up to about 12]
Try fitting the quadratic polynomial

y = c1 x² + c2 x + c3

where the ci are the coefficients to be determined by the fit and the basis functions are

f1(x) = x²,   f2(x) = x,   f3(x) = 1

The A matrix is

A = [ x1²  x1  1 ]
    [ x2²  x2  1 ]
    [  :   :   : ]
    [ xm²  xm  1 ]

In Matlab:

>> x = (1:6)';
>> y = [10 5.49 0.89 -0.14 -1.07 0.84]';

Notice the transposes: x and y must be column vectors. The coefficient matrix of the overdetermined system is

>> A = [ x.^2  x  ones(size(x)) ];
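The remaining steps follow the same pattern as the other examples in these slides (a sketch; the commands on the original slide may differ):

>> c = (A'*A)\(A'*y);                       % solve the normal equations
>> xfit = linspace(min(x),max(x))';
>> yfit = c(1)*xfit.^2 + c(2)*xfit + c(3);  % evaluate the quadratic fit
>> plot(x,y,'o',xfit,yfit,'--')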
[Figure: the data and the quadratic fit F(x) = c1 x² + c2 x + c3; x from 0 to 6, y up to about 12]
Example: fit

F(x) = c1/x + c2 x

to the data. The basis functions are 1/x and x. In Matlab:

>> x = (1:6)';
>> y = [10 5.49 0.89 -0.14 -1.07 0.84]';
>> A = [1./x  x];
>> c = (A'*A)\(A'*y)

[Figure: the data and the fit F(x) = c1/x + c2 x; x from 0 to 6]
Evaluating the fit function as a matrix-vector product can be performed for any x. Suppose then that we have created an m-file function that evaluates A for any x, for example

function A = xinvxfun(x)
A = [ 1./x(:)  x(:) ];

Advantages:
- The basis functions are defined in only one place: in the routine for evaluating the overdetermined matrix.
- Automation of fitting and plotting is easier because all that is needed is one routine for evaluating the basis functions.
- The end-user of the fit (not the person performing the fit) can still evaluate the fit function as y = c1 f1(x) + c2 f2(x) + ... + cn fn(x).

Disadvantages:
- Storage of the matrix A for large x vectors consumes memory. This should not be a problem for small n.
- Evaluation of the fit function may not be obvious to a reader unfamiliar with linear algebra.
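For example, once the coefficients c are known, the fit can be evaluated at any set of points by reusing the same routine (a sketch; xf is an illustrative name for the evaluation points):

>> xf = linspace(min(x),max(x))';   % points at which to evaluate the fit
>> yf = xinvxfun(xf)*c;             % A(xf)*c gives F(xf) as a matrix-vector product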
fitnorm.m
function [c,R2,rout] = fitnorm(x,y,basefun)
% fitnorm  Least-squares fit via solution to the normal equations
%
% Synopsis:  c = fitnorm(x,y,basefun)
%            [c,R2] = fitnorm(x,y,basefun)
%            [c,R2,r] = fitnorm(x,y,basefun)
%
% Input:  x,y = vectors of data to be fit
%         basefun = (string) m-file that computes matrix A with columns as
%                   values of basis functions evaluated at x data points.
%
% Output: c  = vector of coefficients obtained from the fit
%         R2 = (optional) adjusted coefficient of determination; 0 <= R2 <= 1
%         r  = (optional) residuals of the fit
if length(y)~= length(x);  error('x and y are not compatible');  end

A = feval(basefun,x(:));   % Coefficient matrix of overdetermined system
c = (A'*A)\(A'*y(:));      % Solve normal equations, y(:) is always a column
if nargout>1
  r = y - A*c;             % Residuals at data points used to obtain the fit
  [m,n] = size(A);
  R2 = 1 - (m-1)/(m-n-1)*(norm(r)/norm(y-mean(y)))^2;
  if nargout>2,  rout = r;  end
end
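For the F(x) = c1/x + c2 x example above, fitnorm is called with the name of the basis-function m-file (a usage sketch):

>> x = (1:6)';  y = [10 5.49 0.89 -0.14 -1.07 0.84]';
>> [c,R2] = fitnorm(x,y,'xinvxfun')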
The columns of A are the basis functions evaluated at each of the x data points:

A = [   :       :            :    ]
    [ f1(x)   f2(x)  ...   fn(x)  ]
    [   :       :            :    ]

As before, the normal equations are

AᵀAc = Aᵀy

The user supplies a (usually small) m-file that returns A. For example:
- To fit with f1(x) = x², f2(x) = x, f3(x) = 1, create the m-file poly2Basis.m (see the sketch below).
- To fit with f1(x) = 1/x, f2(x) = x, create the m-file xinvxfun.m, shown earlier.
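A minimal poly2Basis.m consistent with these basis functions (the book's listing may differ slightly):

function A = poly2Basis(x)
% poly2Basis  Basis functions for a quadratic fit: f1 = x^2, f2 = x, f3 = 1
A = [ x(:).^2  x(:)  ones(size(x(:))) ];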
R2 Statistic, continued

To account for the reduction in degrees of freedom in the data when the fit is performed, it is technically appropriate to consider the adjusted coefficient of determination

R²_adjusted = 1 - [(m - 1)/(m - n - 1)] · Σ (yi - ŷi)² / Σ (yi - ȳ)²

where

ŷi = Σ_{j=1}^{n} cj fj(xi)

fitnorm returns this adjusted R² value.
Polynomial Curve Fits with polyfit

Built-in commands for polynomial curve fits:
- polyfit: obtain coefficients of a least squares curve fit of a polynomial to a given data set
- polyval: evaluate a polynomial at a given set of x values

Syntax:

c = polyfit(x,y,n)

- x and y define the data
- n is the desired degree of the polynomial
- c is a vector of polynomial coefficients stored in order of descending powers of x:

p(x) = c1 x^n + c2 x^(n-1) + ... + cn x + c(n+1)
Evaluate the polynomial with polyval:

yf = polyval(c,xf)

- c contains the coefficients of the polynomial (returned by polyfit)
- xf is a scalar or vector of x values at which the polynomial is to be evaluated
- yf is a scalar or vector of values of the polynomial: yf = p(xf)

If S (an optional output of polyfit) is given as an optional input to polyval, then dy is a vector of estimates of the uncertainty in yf:

[yf,dy] = polyval(c,xf,S)

Example: fit a cubic polynomial and evaluate it on a smooth set of points. In Matlab:

>> x = (1:6);
>> y = [10 5.49 0.89 -0.14 -1.07 0.84];
>> c = polyfit(x,y,3);
>> xfit = linspace(min(x),max(x));
>> yfit = polyval(c,xfit);
>> plot(x,y,'o',xfit,yfit,'--')
[Figure: the data and the cubic polynomial fit; x from 0 to 6, y up to about 12]

Example: Conductivity of Copper at Low Temperature

[Figure: measured conductivity of copper; Temperature (K) from 0 to 60 on the x axis, Conductivity (W/m/C) from 0 to 30 on the y axis]
A model for the conductivity k(T) of copper at low temperature is

k(T) = 1 / (c1/T + c2 T²)

or, in a form that is linear in the fit coefficients,

1/k(T) = c1/T + c2 T²

which has the basis functions

1/T   and   T²

The m-file cuconBasis1 evaluates these basis functions:

function y = cuconBasis1(x)
% cuconBasis1  Basis fcns for conductivity model:  1/k = c1/T + c2*T^2
y = [1./x  x.^2];

An m-file that uses fitnorm to fit the conductivity data with the cuconBasis1 function is listed on the next page.
function conductFit(fname)
% conductFit  LS fit of conductivity data for Copper at low temperatures
%
% Synopsis:  conductFit(fname)
%
% Input:  fname = (optional, string) name of data file;
%                 Default:  fname = 'cucon1.dat'
%
% Output:  Print out of curve fit coefficients and a plot comparing data
%          with the curve fit for two sets of basis functions.

if nargin<1,  fname = 'cucon1.dat';  end    % Default data file

% --- define basis functions as inline function objects
fun1 = inline('[1./t  t.^2]');      % t must be a column vector
fun2 = inline('[1./t  t  t.^2]');

% --- read data and perform the fit
[t,k] = loadColData(fname,2,0,2);     % Read data into t and k
[c1,R21,r1] = fitnorm(t,1./k,fun1);   % Fit to first set of bases
[c2,R22,r2] = fitnorm(t,1./k,fun2);   % and second set of bases

% --- print results
fprintf('\nCurve fit to data in %s\n\n',fname);
fprintf('    Coefficients of    Basis Fcns 1      Basis Fcns 2\n');
fprintf('    T^(-1)   %16.9e  %16.9e\n',c1(1),c2(1));
fprintf('    T        %16.9e  %16.9e\n',0,c2(2));
fprintf('    T^2      %16.9e  %16.9e\n',c1(2),c2(3));
fprintf('\n   ||r||_2  %12.5f  %12.5f\n',norm(r1),norm(r2));
fprintf('    R2       %12.5f  %12.5f\n',R21,R22);

% --- evaluate and plot the fits
tf = linspace(0.1,max(t))';   % 100 T values:  0 < t <= max(t)
Af1 = feval(fun1,tf);         % A matrix evaluated at tf values
kf1 = 1./ (Af1*c1);           % Af*c is column vector of 1/kf values
Af2 = feval(fun2,tf);
kf2 = 1./ (Af2*c2);
plot(t,k,'o',tf,kf1,'--',tf,kf2,'-');
xlabel('Temperature (K)');  ylabel('Conductivity (W/m/C)');
legend('data','basis 1','basis 2');
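Running the function with no argument fits and plots the data in the default file:

>> conductFit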