
Chapter four

Interpolation and Approximation


Outline of the chapter
• Class of Common Approximation Functions
• Criteria for the Choice of the Approximate Function
• Finite Differences and Divided Differences
• Interpolation by Polynomials and Least Square Approximation by Polynomials
• Piecewise Polynomial Approximation and Cubic Spline Interpolation
• Many types of approximating functions exist.
• In fact, any analytical function can be used as an approximating function.
• Three of the more common approximating functions are:
1. Polynomials
2. Trigonometric functions
3. Exponential functions
Approximating functions should have the following properties:
• 1. The approximating function should be easy to determine.
• 2. It should be easy to evaluate.
• 3. It should be easy to differentiate.
• 4. It should be easy to integrate.
There are two fundamentally different ways to fit a polynomial to
a set of discrete data:
1. Exact fits
2. Approximate fits
• An exact fit yields a polynomial that passes exactly through all of the discrete
points. This type of fit is useful for small sets of smooth data.
• An approximate fit yields a polynomial that passes through the set of data in the
best manner possible, without being required to pass exactly through any of the
data points. Approximate fits are useful for large sets of smooth data.

• A polynomial of degree n passing exactly through n + 1 discrete points is unique.


• The polynomial through a specific set of points may take many different forms,
but all forms are equivalent.
• Any form can be manipulated into any other form by simple algebraic
rearrangement.
Direct Fit Polynomials
• First let’s consider a completely general procedure for fitting a polynomial to a set
of equally spaced or unequally spaced data.
• Given n + 1 data points [x0, f(x0)], [x1, f(x1)], . . . , [xn, f(xn)], which will be
written as (x0, f0), (x1, f1), . . . , (xn, fn), determine the unique nth-degree
polynomial Pn(x) that passes exactly through the n + 1 points:

Pn(x) = a0 + a1 x + a2 x^2 + . . . + an x^n

• Substituting each data point into this polynomial gives n + 1 linear equations in the
n + 1 coefficients a0 to an, which can be solved by Gauss elimination.
• The resulting polynomial is the unique nth-degree polynomial that passes exactly
through the n + 1 data points.
• Example: To illustrate interpolation by a direct fit polynomial, consider the simple
function y = f(x) = 1/x, and construct the following set of six-significant-figure data:
• Let's interpolate for y at x = 3.44 using linear, quadratic, and cubic interpolation.
• Solution: The exact value is y = 1/3.44 = 0.290698.

To centre the data around x = 3.44, the first three points are used. Applying P2(x)
at each of these data points gives the following three equations:
Solving for a, b, and c by Gauss elimination without scaling or pivoting yields the
quadratic interpolating polynomial.
• The advantages of higher-degree interpolation are obvious.

• The main advantage of direct fit polynomials is that the explicit form of the
approximating function is obtained, and interpolation at several values of x can
be accomplished simply by evaluating the polynomial at each value of x.
• The work required to obtain the polynomial does not have to be redone for
each value of x.
• A second advantage is that the data can be unequally spaced.
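As a concrete illustration of the direct-fit procedure, the following Python sketch sets up the n + 1 linear equations (a Vandermonde system) and solves them with a library routine in place of hand Gauss elimination. The three 1/x data points are assumed here purely for illustration, since the original data table is not reproduced above.

```python
import numpy as np

def direct_fit(x, f):
    """Return coefficients a0..an of the unique polynomial through (x_i, f_i)."""
    x = np.asarray(x, dtype=float)
    # Vandermonde matrix: row i is [1, x_i, x_i**2, ..., x_i**n]
    A = np.vander(x, increasing=True)
    # solving the linear system replaces hand Gauss elimination
    return np.linalg.solve(A, f)

# assumed data from y = 1/x, used only to illustrate the call
xs = [3.40, 3.45, 3.50]
fs = [1.0 / v for v in xs]
a = direct_fit(xs, fs)                        # a0, a1, a2 of P2(x)
p = sum(c * 3.44**k for k, c in enumerate(a))
print(p, 1.0 / 3.44)                          # interpolated vs exact value at x = 3.44
```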
Neville’s Algorithm
• Neville’s algorithm is equivalent to a Lagrange polynomial. It is based on a series
of linear interpolations.

For the first interpolation pass, adjacent pairs of data points are interpolated linearly
at the desired value of x; each subsequent pass then interpolates linearly between the
results of the previous pass.


Example
• Rearranging the data in order of closeness to x = 3.44 yields the following set of
data:
Solution
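Since the table of 1/x values is not reproduced above, the following Python sketch assumes four such points, reordered by closeness to x = 3.44, and applies the successive linear interpolations of Neville's algorithm; treat the data and names as illustrative only.

```python
def neville(x, f, xt):
    """Neville's algorithm: successive linear interpolations toward P(xt)."""
    p = list(f)                    # p[i] starts as the 0th-degree value f_i
    n = len(x)
    for k in range(1, n):          # k = degree of the current interpolants
        for i in range(n - k):
            # linear interpolation between two overlapping lower-degree results
            p[i] = ((xt - x[i + k]) * p[i] + (x[i] - xt) * p[i + 1]) / (x[i] - x[i + k])
    return p[0]

# assumed 1/x data, ordered by closeness to x = 3.44
xs = [3.45, 3.40, 3.50, 3.35]
fs = [1.0 / v for v in xs]
print(neville(xs, fs, 3.44))       # compare with the exact value 1/3.44 = 0.290698
```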
Divided Difference

• A divided difference is defined as the ratio of the difference in the function values at
two points to the difference in the values of the corresponding independent variable.
• Thus, the first divided difference at point i is defined as

f[xi, xi+1] = (fi+1 - fi) / (xi+1 - xi)
Example
Difference Tables
• There are three common difference notations: forward, backward, and centred differences.
A single difference table can be interpreted as a forward-difference table, a
backward-difference table, or a centred-difference table.
• The numbers in the tables are identical. Only the notation is different.
• Difference tables are useful for evaluating the quality of a set of
tabular data.
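A small Python sketch of how such a table can be built column by column; the four equally spaced function values are hypothetical and serve only to show the layout.

```python
def difference_table(f):
    """Build the columns of a forward-difference table: f, Δf, Δ²f, ..."""
    columns = [list(f)]
    while len(columns[-1]) > 1:
        prev = columns[-1]
        # each entry is the difference of two adjacent entries in the previous column
        columns.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return columns

# hypothetical equally spaced function values; smooth data gives smooth difference columns
for col in difference_table([0.2250, 0.2500, 0.2750, 0.3000]):
    print(col)
```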
Least Squares Approximation
• The least squares method is defined as follows: Given N data points, [xi, Y(xi)] =
(xi, Yi), choose the functional form of the approximating function to be fit, y =
y(x), and minimize the sum of the squares of the deviations, ei = (Yi- yi).
• For example, if the values of the independent variable xi are considered exact,
then all the deviation is assigned to the dependent variable Y(x), and the deviation
ei is the vertical distance between Yi and yi = f(xi). Thus, the quantity to be minimized is

S = Σ ei^2 = Σ (Yi - yi)^2
• The simplest polynomial is a linear polynomial, the straight line. Least squares
straight line approximations are an extremely useful and common approximate
fit.
For the straight line y(x) = a + bx, setting the partial derivatives of S with respect to
a and b to zero, dividing by 2, and rearranging yields

a N + b Σ xi = Σ Yi
a Σ xi + b Σ xi^2 = Σ xi Yi

These equations are called the normal equations of the least squares fit.
They can be solved for a and b by Gauss elimination.
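A minimal Python sketch of the straight-line fit, assembling the two normal equations above and solving them with a library routine instead of hand Gauss elimination; the scattered data points are hypothetical.

```python
import numpy as np

def least_squares_line(x, Y):
    """Solve the two normal equations for a straight line y = a + b*x."""
    x, Y = np.asarray(x, float), np.asarray(Y, float)
    N = len(x)
    A = np.array([[N,       x.sum()],
                  [x.sum(), (x**2).sum()]])
    rhs = np.array([Y.sum(), (x * Y).sum()])
    a, b = np.linalg.solve(A, rhs)        # Gauss elimination done by the library
    return a, b

# hypothetical data with scatter about a straight line
x = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [1.1, 1.9, 3.2, 3.9, 5.1]
a, b = least_squares_line(x, Y)
print(a, b)                               # intercept and slope of the least squares fit
```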
Example
• Consider the constant pressure specific heat for air at low temperatures presented in
the table below, where T is the temperature (K) and Cp is the specific heat (J/gm-K).
• The exact values, the approximate values from the least squares straight line
approximation, and the percent error are also presented in the table.
• Determine a least squares straight line approximation for this set of data:
Solution
Higher-degree Polynomial Approximation (Least Squares Approximation)
• The least squares procedure developed can be applied to higher-degree
polynomials.
• Given the N data points (xi, Yi), fit the best nth-degree polynomial through the set
of data. Consider the nth-degree polynomial

y(x) = a0 + a1 x + a2 x^2 + . . . + an x^n

Setting the partial derivative of S with respect to each coefficient to zero, dividing by 2,
and rearranging yields the normal equations:

a0 Σ xi^j + a1 Σ xi^(j+1) + . . . + an Σ xi^(j+n) = Σ Yi xi^j,   for j = 0, 1, . . . , n
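A sketch, assuming the general normal equations above, that assembles and solves them for an nth-degree fit; the (T, Cp) values are hypothetical stand-ins for the table, which is not reproduced here.

```python
import numpy as np

def least_squares_poly(x, Y, n):
    """Solve the (n+1) normal equations for a0..an of an nth-degree fit."""
    x, Y = np.asarray(x, float), np.asarray(Y, float)
    # C[j][k] = sum_i x_i**(j+k),  d[j] = sum_i Y_i * x_i**j
    C = np.array([[np.sum(x**(j + k)) for k in range(n + 1)] for j in range(n + 1)])
    d = np.array([np.sum(Y * x**j) for j in range(n + 1)])
    return np.linalg.solve(C, d)

# hypothetical (T, Cp)-style data; a quadratic (n = 2) fit as in the example
T  = [300.0, 500.0, 1000.0, 1500.0, 2000.0]
Cp = [1.003, 1.031, 1.142, 1.211, 1.250]
print(least_squares_poly(T, Cp, 2))       # a0, a1, a2 of the quadratic fit
```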
Example: Consider the constant pressure specific heat of air at high temperatures
presented in the previous example's table. Determine a least squares quadratic
polynomial approximation for this set of data:
Solution
Least squares method for multivariable and nonlinear systems?
Interpolation
• Frequently we may have occasion to estimate intermediate values between precise
data points.
• The most common method used for this purpose is polynomial interpolation.
• For example, there is only one straight line (that is, a first-order polynomial) that
connects two points (figure a).
• Similarly, only one parabola connects a set of three points (figure b).
• Polynomial interpolation consists of determining the unique nth-order polynomial
that fits n + 1 data points.
• This polynomial then provides a formula to compute intermediate values.
Figure: (a) first-order (linear) polynomial connecting two points; (b) second-order
(quadratic, or parabolic) polynomial connecting three points; (c) third-order (cubic)
polynomial connecting four points.
Interpolating Methods

• Newton’s method
• Lagrange method
• Least squares approximation methods, etc.
Newton’s Divided-difference Interpolating Polynomials
• As stated above, there are a variety of alternative forms for expressing an
interpolating polynomial.
• Newton’s divided-difference interpolating polynomial is among the most popular
and useful forms.
• Before presenting the general equation, we will introduce the first- and second-
order versions because of their simple visual interpretation.
Linear Interpolation
• The simplest form of interpolation is to connect two data points with a straight
line.
• This technique, called linear interpolation, is given by Eqn(1):

f1(x) = f(x0) + [f(x1) - f(x0)] / (x1 - x0) (x - x0)

which is a linear-interpolation formula.


 The notation f1(x) designates that this is a first order interpolating
polynomial.
 Notice that besides representing the slope of the line connecting the points,
the term [f(x1) - f(x0)] / (x1 - x0) is a finite-divided-difference
approximation of the first derivative.
• In general, the smaller the interval between the data points, the better the
approximation.
Example
Estimate the natural logarithm of 2 using linear interpolation.
• First, perform the computation by interpolating between ln 1 = 0 and ln 6 =
1.791759.
• Then, repeat the procedure, but use a smaller interval from ln 1 to ln 4
(1.386294). Note that the true value of ln 2 is 0.6931472.
• Solution: Using Eqn(1) with x0 = 1 and x1 = 6 gives

f1(2) = 0 + (1.791759 - 0)/(6 - 1) (2 - 1) = 0.3583519,   an error of εt = 48.3%.

Using the smaller interval from x0 = 1 to x1 = 4 gives

f1(2) = 0 + (1.386294 - 0)/(4 - 1) (2 - 1) = 0.4620981,   an error of εt = 33.3%.

Both interpolations are shown in the figure, along with the true function.
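A short Python check of the two linear interpolations of ln 2, written directly from Eqn(1); the function and variable names are illustrative.

```python
import math

def linear_interp(x, x0, f0, x1, f1):
    """Eqn(1): f1(x) = f(x0) + [f(x1) - f(x0)] / (x1 - x0) * (x - x0)."""
    return f0 + (f1 - f0) / (x1 - x0) * (x - x0)

# interpolate ln 2 over the wide and the narrow interval from the example
wide   = linear_interp(2.0, 1.0, 0.0, 6.0, math.log(6.0))   # ≈ 0.3583519
narrow = linear_interp(2.0, 1.0, 0.0, 4.0, math.log(4.0))   # ≈ 0.4620981
print(wide, narrow, math.log(2.0))        # compare with the true value 0.6931472
```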
Quadratic Interpolation
• If three data points are available, this can be accomplished with a
second-order polynomial (also called a quadratic polynomial or a
parabola).
• A particularly convenient form for this purpose is Eqn(2)
f2(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1)
• A simple procedure can be used to determine the values of the coefficients.
• For b0, Eqn above with x = x0 can be used to compute, b0= f(x0).
• For b1, Eqn(2) evaluated at x = x1, together with b0 = f(x0), gives

b1 = [f(x1) - f(x0)] / (x1 - x0)

• Finally, substituting b0 and b1 into Eqn(2), evaluating at x = x2, and solving
(after some algebraic manipulation) for b2 yields

b2 = {[f(x2) - f(x1)] / (x2 - x1) - [f(x1) - f(x0)] / (x1 - x0)} / (x2 - x0)
• Notice that, as was the case with linear interpolation, b1 still represents the slope
of the line connecting points x0 and x1.
• Thus, the first two terms of Eqn(2) are equivalent to linear interpolation from x0
to x1.
• The last term, b2(x - x0)(x - x1) in Eqn(2), introduces the second-order curvature
into the formula.
Example
• For the ln x data of the previous example (x0 = 1, f(x0) = 0; x1 = 4, f(x1) = 1.386294;
x2 = 6, f(x2) = 1.791759), the coefficients are b0 = 0, b1 = 0.4620981, and
b2 = -0.0518731. Substituting these values into Eqn(2) yields the quadratic formula
f2(x) = 0 + 0.4620981(x - 1) - 0.0518731(x - 1)(x - 4)
which can be evaluated at x = 2 for
f2(2) = 0.5658444
• which represents a relative error of εt = 18.4%.
• Thus, the curvature introduced by the quadratic formula improves the
interpolation compared with the result obtained using straight lines.
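A brief Python verification of the coefficients b0, b1, b2 and of f2(2) for this example, following the formulas given above.

```python
# coefficients of Eqn(2) for the ln x data of the example
x0, x1, x2 = 1.0, 4.0, 6.0
f0, f1, f2 = 0.0, 1.386294, 1.791759

b0 = f0
b1 = (f1 - f0) / (x1 - x0)                      # slope over [x0, x1]
b2 = ((f2 - f1) / (x2 - x1) - b1) / (x2 - x0)   # second divided difference

def f2_quad(x):
    return b0 + b1 * (x - x0) + b2 * (x - x0) * (x - x1)

print(b1, b2, f2_quad(2.0))    # ≈ 0.4620981, -0.0518731, 0.565844
```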
General Form of Newton’s Interpolating Polynomials
• The preceding analysis can be generalized to fit an nth-order polynomial to n + 1
data points.
• The nth-order polynomial is

fn(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1) + . . . + bn(x - x0)(x - x1) . . . (x - xn-1)     Eqn(3)

• The data points are used in the following equations to evaluate the coefficients:

b0 = f(x0)
b1 = f[x1, x0]
b2 = f[x2, x1, x0]
. . .
bn = f[xn, xn-1, . . . , x1, x0]
where the bracketed quantities are finite divided differences. The first finite divided
difference is represented generally as

f[xi, xj] = [f(xi) - f(xj)] / (xi - xj)

The second finite divided difference, which represents the difference of two first
divided differences, is expressed generally as

f[xi, xj, xk] = (f[xi, xj] - f[xj, xk]) / (xi - xk)

Substituting these divided differences into Eqn(3) gives

fn(x) = f(x0) + (x - x0) f[x1, x0] + (x - x0)(x - x1) f[x2, x1, x0] + . . . + (x - x0)(x - x1) . . . (x - xn-1) f[xn, xn-1, . . . , x0]
• which is called Newton’s divided-difference interpolating polynomial.
• It should be noted that it is not necessary that the data points used in be
equally spaced or that the abscissa values necessarily be in ascending order,
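A Python sketch of the general procedure: build the divided-difference coefficients b0..bn and evaluate Eqn(3) by nested multiplication; the data shown are the four ln x points used in the next example, and the function names are illustrative.

```python
def newton_coefficients(x, f):
    """Return b0..bn, the top diagonal of the divided-difference table."""
    b = list(f)
    n = len(x)
    for k in range(1, n):
        # update from the end so the entries read are still of order k-1
        for i in range(n - 1, k - 1, -1):
            b[i] = (b[i] - b[i - 1]) / (x[i] - x[i - k])
    return b

def newton_eval(x, b, xt):
    """Evaluate fn(xt) = b0 + b1(xt - x0) + ... by nested multiplication."""
    result = b[-1]
    for i in range(len(b) - 2, -1, -1):
        result = result * (xt - x[i]) + b[i]
    return result

xs = [1.0, 4.0, 6.0, 5.0]
fs = [0.0, 1.386294, 1.791759, 1.609438]
b = newton_coefficients(xs, fs)
print(b)                         # b0..b3; the last is ≈ 0.0078655
print(newton_eval(xs, b, 2.0))   # ≈ 0.62877, matching f3(2) in the text
```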
Example
• In the previous Example, data points at x0 = 1, x1 =4, and x2 = 6 were used to estimate ln 2 with a
parabola.
• Now, adding a fourth point [x3 = 5; f(x3) = 1.609438], estimate ln 2 with a third-order Newton’s
interpolating polynomial.
Solution.
The third-order polynomial, Eqn(3) with n = 3, is

f3(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1) + b3(x - x0)(x - x1)(x - x2)
The results for f[x1, x0], f[x2, x1, x0], and f[x3, x2, x1, x0] represent the coefficients
b1, b2, and b3, respectively. Along with b0 = f(x0) = 0.0, this gives

f3(x) = 0 + 0.4620981(x - 1) - 0.05187311(x - 1)(x - 4) + 0.007865529(x - 1)(x - 4)(x - 6)


which can be used to evaluate f3(2) = 0.6287686, which represents a relative error of 9.3%.
Errors of Newton’s Interpolating Polynomials
If an additional data point is available, the error of the nth-order interpolating
polynomial can be estimated as

Rn ≈ f[xn+1, xn, . . . , x1, x0](x - x0)(x - x1) . . . (x - xn)     Eqn(4)

Use Eqn(4) to estimate the error for the second-order polynomial interpolation of the
example above. Use the additional data point f(x3) = f(5) = 1.609438 to obtain your results.
Solution.
Recall that in Example above, the second-order interpolating polynomial provided
an estimate of f2(2) = 0.5658444, which represents an error of 0.6931472 -
0.5658444 = 0.1273028.
If we had not known the true value, as is usually the case, Eqn(4), along with the
additional value at x3, could have been used to estimate the error, as in
R2 = f [x3, x2, x1, x0](x -x0)(x - x1)(x - x2)
or
R2 = 0.007865529(x - 1)(x - 4)(x - 6)
• where the value for the third-order finite divided difference is as
computed previously in the above example
• This relationship can be evaluated at x = 2 for
R2 = 0.007865529(2 - 1)(2 - 4)(2 - 6) = 0.0629242
• which is of the same order of magnitude as the true error.
Lagrange Interpolating Polynomials
• The Lagrange interpolating polynomial is simply a reformulation of
the Newton polynomial that
avoids the computation of
divided differences.
• It can be represented concisely as

fn(x) = Σ (from i = 0 to n) Li(x) f(xi),   where   Li(x) = Π (from j = 0 to n, j ≠ i) (x - xj) / (xi - xj)     Eqn(5)
Example
• Use a Lagrange interpolating polynomial of the first and the second order to evaluate
ln 2 on the basis of the data given in the example above:
• x0 = 1, f (x0) = 0
• x1 = 4, f (x1) = 1.386294
• x2 = 6, f (x2) = 1.791760
• Solution: The first-order polynomial, Eqn(5) with n = 1, can be used to obtain the
estimate at x = 2:

f1(2) = [(2 - 4)/(1 - 4)](0) + [(2 - 1)/(4 - 1)](1.386294) = 0.4620981

In a similar fashion, the second-order polynomial gives

f2(2) = [(2 - 4)(2 - 6)/((1 - 4)(1 - 6))](0) + [(2 - 1)(2 - 6)/((4 - 1)(4 - 6))](1.386294) + [(2 - 1)(2 - 4)/((6 - 1)(6 - 4))](1.791760) = 0.565844
 As expected, both these results agree with those previously obtained using
Newton’s interpolating polynomial.
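A direct Python transcription of Eqn(5), evaluated for the first- and second-order cases of this example; function and variable names are illustrative.

```python
def lagrange(xs, fs, xt):
    """Eqn(5): fn(xt) = sum_i f(x_i) * prod_{j != i} (xt - x_j) / (x_i - x_j)."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (xt - xj) / (xi - xj)
        total += fi * L
    return total

xs = [1.0, 4.0, 6.0]
fs = [0.0, 1.386294, 1.791760]
print(lagrange(xs[:2], fs[:2], 2.0))   # first order:  ≈ 0.4620981
print(lagrange(xs, fs, 2.0))           # second order: ≈ 0.565844
```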
Inverse Interpolation
• As the nomenclature implies, the f(x) and x values in most interpolation contexts
are the dependent and independent variables, respectively.
• Inverse interpolation is the process of finding x for a given value of f(x).
• A simple example is a table of values derived for the function f(x) = 1/x:

 For instance, for the data above, suppose that you were asked to
determine the value of x that corresponded to f(x) = 0.3.
 For this case, because the function is available and easy to manipulate,
the correct answer can be determined directly as x = 1/0.3 = 3.3333.
 Such a problem is called inverse interpolation.
• For example, for the problem outlined above, a simple approach would be to fit a
quadratic polynomial to the three points (2, 0.5), (3, 0.3333), and (4, 0.25). The
result would be

f2(x) = 1.08333 - 0.375x + 0.041667x^2

The answer to the inverse interpolation problem of finding the x corresponding to
f(x) = 0.3 would therefore involve determining the root of

0.3 = 1.08333 - 0.375x + 0.041667x^2

The root lying within the range of the data, x ≈ 3.296, is the desired estimate; it
compares well with the exact answer of 3.3333.
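A sketch of the same inverse-interpolation steps in Python, using library routines for the quadratic fit and the root finding; nothing here assumes more than the three points quoted above.

```python
import numpy as np

# quadratic fit through (2, 0.5), (3, 0.3333), (4, 0.25) for f(x) = 1/x
x = np.array([2.0, 3.0, 4.0])
f = np.array([0.5, 0.333333, 0.25])
c2, c1, c0 = np.polyfit(x, f, 2)           # f2(x) = c2*x**2 + c1*x + c0

# inverse interpolation: solve f2(x) = 0.3, i.e. c2*x**2 + c1*x + (c0 - 0.3) = 0
roots = np.roots([c2, c1, c0 - 0.3])
print(roots)                               # the root within the data range is ≈ 3.296
```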
Spline Interpolation
• In the previous sections, nth-order polynomials were used to interpolate between
n + 1 data points.
• For example, for eight data points, we can derive a perfect seventh-order polynomial.
 An alternative approach is to apply lower-order polynomials to subsets of
data points. Such connecting polynomials are called spline functions.
 For example, third-order curves employed to connect each pair of data
points are called cubic splines.
Linear Splines
The simplest connection between two points is a straight line.
The first-order splines for a group of ordered data points can be defined as
a set of linear functions, each of the form

f(x) = f(xi) + mi(x - xi),   where   mi = [f(xi+1) - f(xi)] / (xi+1 - xi)

is the slope of the straight line connecting the two points of interval i.
• If the lower-degree polynomials are independent of each other, a piecewise
approximation is obtained.
• An alternate approach is to fit a lower-degree polynomial to connect each pair of data
points and to require the set of lower-degree polynomials to be consistent with each
other in some sense.
• This type of polynomial is called a spline function, or simply a spline.
• Splines can be of any degree.
• Linear splines are simply straight line segments ,connecting each pair of data points.
• Linear splines are independent of each other from interval to interval.
• Linear splines yield first-order approximating polynomials.
• The slopes (i.e., first derivatives) and curvature (i.e., second derivatives)
are discontinuous at every data point.
• Quadratic splines yield second-order approximating polynomials.
• The slopes of the quadratic splines can be forced to be continuous at each
data point, but the curvatures (i.e., the second derivatives) are still
discontinuous.
• A cubic spline yields a third-degree polynomial connecting each pair of
data points.
• The slopes and curvatures of the cubic splines can be forced to be
continuous at each data point.
• In fact, these requirements are necessary to obtain the additional
conditions required to fit a cubic polynomial to two data points.
Example ,Fit the data in Table below with first-order splines. Evaluate the function at x = 5.
• Solution: These data can be used to determine the slopes between points. For example,
for the interval x = 4.5 to x = 7 the slope is

m = (2.5 - 1.0) / (7.0 - 4.5) = 0.60

The value at x = 5 is therefore f(5) = 1.0 + 0.60(5 - 4.5) = 1.3.

The slopes for the other intervals can be computed in the same way, and the resulting
first-order splines are plotted in the figure above.
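A Python sketch of first-order spline evaluation. The four data points are inferred from the numbers appearing in this example and in the quadratic-spline example that follows, so treat them as an assumption.

```python
import bisect

def linear_spline(xs, fs, xt):
    """Evaluate the first-order spline: straight line segments between knots."""
    i = max(0, min(bisect.bisect_right(xs, xt) - 1, len(xs) - 2))
    m = (fs[i + 1] - fs[i]) / (xs[i + 1] - xs[i])   # slope of the i-th segment
    return fs[i] + m * (xt - xs[i])

# data set inferred from the worked examples: (3, 2.5), (4.5, 1), (7, 2.5), (9, 0.5)
xs = [3.0, 4.5, 7.0, 9.0]
fs = [2.5, 1.0, 2.5, 0.5]
print(linear_spline(xs, fs, 5.0))   # 1.0 + 0.60*(5 - 4.5) = 1.3
```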
Quadratic Splines
• The objective in quadratic splines is to derive a second-order polynomial for each
interval between data points.
• The polynomial for each interval i can be represented generally as

fi(x) = ai x^2 + bi x + ci
• Figure below has been included to help clarify the notation.

• For n + 1 data points (i = 0, 1,2, . . . , n), there are n intervals and, consequently, 3n
unknown constants (the a’s, b’s, and c’s) to evaluate.
• Therefore, 3n equations or conditions are required to evaluate the unknowns.
These are:
• 1. The function values of adjacent polynomials must be equal at the interior knots.
This condition can be represented as

a_(i-1) x_(i-1)^2 + b_(i-1) x_(i-1) + c_(i-1) = f(x_(i-1))
a_i x_(i-1)^2 + b_i x_(i-1) + c_i = f(x_(i-1))

for i = 2 to n. Each equation provides n - 1 conditions, for a total of 2n - 2 conditions.


2. The first and last functions must pass through the end points. This adds two additional
equations:

a1 x0^2 + b1 x0 + c1 = f(x0)
an xn^2 + bn xn + cn = f(xn)
3. The first derivatives of adjacent polynomials must be equal at the interior knots.
Since the first derivative of the quadratic is f'(x) = 2ax + b, this condition can be written as

2 a_(i-1) x_(i-1) + b_(i-1) = 2 a_i x_(i-1) + b_i

for i = 2 to n, which provides another n - 1 conditions.
4. Assume that the second derivative is zero at the first point. Because the second
derivative of fi(x) is 2ai, this condition can be expressed mathematically as a1 = 0.
Example: Fit quadratic splines to the same data used in the example above (the table of
the first-order spline example). Use the results to estimate the value at x = 5.
Solution: For the present problem, we have four data points and n = 3 intervals.
Therefore, there are 3(3) = 9 unknowns to evaluate. Condition 1 yields
2(3) - 2 = 4 conditions.
Passing the first and last functions through the initial and final values adds 2 more:
9a1 + 3b1 + c1 = 2.5
81a3 + 9b3 + c3 = 0.5
Continuity of the derivatives at the interior knots creates an additional 3 - 1 = 2
conditions. Finally, condition 4 specifies that a1 = 0.
Because this equation specifies a1 exactly, the problem reduces to
solving eight simultaneous equations.
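A Python sketch that assembles all 3n conditions (including a1 = 0 as its own row) and solves the resulting linear system; the data points are the same inferred set used above, and the layout of the unknowns is one possible choice, not the only one.

```python
import numpy as np

def quadratic_spline(xs, fs):
    """Assemble and solve the 3n conditions for quadratic splines (with a1 = 0)."""
    n = len(xs) - 1                           # number of intervals
    A = np.zeros((3 * n, 3 * n))
    r = np.zeros(3 * n)
    row = 0
    # interval i has unknowns a_i, b_i, c_i in columns 3i, 3i+1, 3i+2
    for i in range(n):                        # conditions 1 and 2: match f at both knots
        for x, f in ((xs[i], fs[i]), (xs[i + 1], fs[i + 1])):
            A[row, 3*i:3*i+3] = [x**2, x, 1.0]
            r[row] = f
            row += 1
    for i in range(n - 1):                    # condition 3: continuous first derivative
        x = xs[i + 1]
        A[row, 3*i:3*i+2] = [2*x, 1.0]
        A[row, 3*(i+1):3*(i+1)+2] = [-2*x, -1.0]
        row += 1
    A[row, 0] = 1.0                           # condition 4: a1 = 0
    coeffs = np.linalg.solve(A, r)
    return coeffs.reshape(n, 3)               # rows of (a_i, b_i, c_i)

xs, fs = [3.0, 4.5, 7.0, 9.0], [2.5, 1.0, 2.5, 0.5]
c = quadratic_spline(xs, fs)
a2, b2, c2 = c[1]                             # interval containing x = 5
print(a2 * 25 + b2 * 5 + c2)                  # quadratic spline value at x = 5 (≈ 0.66)
```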
• Reading assignment :
Cubic Splines
• The objective in cubic splines is to derive a third-order polynomial for each interval between knots,
as in
fi(x) = ai x^3 + bi x^2 + ci x + di
• Thus, for n + 1 data points (i = 0, 1, 2, . . . , n), there are n intervals and, consequently, 4n unknown
constants to evaluate.
• Just as for quadratic splines, 4n conditions are required to evaluate the unknowns. These are:
1. The function values must be equal at the interior knots (2n - 2 conditions).
2. The first and last functions must pass through the end points (2 conditions).
3. The first derivatives at the interior knots must be equal (n - 1 conditions).
4. The second derivatives at the interior knots must be equal (n - 1 conditions).
5. The second derivatives at the end knots are zero (2 conditions).
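For comparison, a minimal sketch using SciPy's CubicSpline with natural boundary conditions, which imposes condition 5 (zero second derivative at the end knots); the data are the same assumed spline points as above.

```python
from scipy.interpolate import CubicSpline

# same assumed data as the spline examples above
xs = [3.0, 4.5, 7.0, 9.0]
fs = [2.5, 1.0, 2.5, 0.5]

# bc_type='natural' imposes zero second derivative at the end knots
cs = CubicSpline(xs, fs, bc_type='natural')
print(cs(5.0))            # value of the cubic spline at x = 5
print(cs(5.0, 1))         # first derivative, continuous across the interior knots
```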
Multidimensional Interpolation

• The interpolation methods for one-dimensional problems can be extended to


multidimensional interpolation.
• In the present section, we will describe the simplest case of two dimensional
interpolation in Cartesian coordinates.
Bilinear Interpolation
• Two-dimensional interpolation deals with determining intermediate
values for functions of two variables, z = f(x, y).
• As depicted in Fig below we have values at four points: f(x1, y1), f(x2, y1), f(x1,
y2), and f(x2, y2).
• We want to interpolate between these points to estimate the value at an
intermediate point f(xi, yi).
• If we use a linear function, the result is a plane connecting the points as in Fig
below.
• Such functions are called bilinear.
• First, we can hold the ‘y’ value fixed and apply one-dimensional linear
interpolation in the ‘x’ direction.
• Using the Lagrange form, the result at (xi, y1) is

f(xi, y1) = [(xi - x2)/(x1 - x2)] f(x1, y1) + [(xi - x1)/(x2 - x1)] f(x2, y1)

and, similarly, at (xi, y2),

f(xi, y2) = [(xi - x2)/(x1 - x2)] f(x1, y2) + [(xi - x1)/(x2 - x1)] f(x2, y2)

These points can then be used to linearly interpolate along the y dimension to yield the
final result:

f(xi, yi) = [(yi - y2)/(y1 - y2)] f(xi, y1) + [(yi - y1)/(y2 - y1)] f(xi, y2)
• A single equation can be developed by substituting the first two equations into the
last equation.
Note that beyond the simple bilinear interpolation described in the foregoing example, higher-
order polynomials and splines can also be used to interpolate in two dimensions.
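A minimal Python sketch of the two-step bilinear procedure described above; the rectangle corners and corner values are hypothetical.

```python
def bilinear(x1, x2, y1, y2, f11, f21, f12, f22, xi, yi):
    """Two one-dimensional linear interpolations: first along x, then along y."""
    # interpolate in x at y = y1 and at y = y2 (Lagrange form of the straight line)
    fxy1 = (xi - x2) / (x1 - x2) * f11 + (xi - x1) / (x2 - x1) * f21
    fxy2 = (xi - x2) / (x1 - x2) * f12 + (xi - x1) / (x2 - x1) * f22
    # interpolate the two intermediate results in y
    return (yi - y2) / (y1 - y2) * fxy1 + (yi - y1) / (y2 - y1) * fxy2

# hypothetical corner values of z = f(x, y) on the rectangle [2, 9] x [1, 6]
print(bilinear(2.0, 9.0, 1.0, 6.0, 60.0, 57.5, 55.0, 70.0, 5.25, 4.8))
```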
