Greg Fasshauer
Fall 2010
whenever f ∈ N_K(Ω).
Example
For Gaussians

Φ(x) = e^{-ε²‖x‖²},  ε > 0 fixed,

we have φ(r) = e^{-ε²r}, so that

φ^{(ℓ)}(r) = (-1)^ℓ ε^{2ℓ} e^{-ε²r}  for ℓ ≥ ℓ₀ = 0.
Remark
We emphasize that this nice property holds only in the non-stationary
setting and for data functions f that are in the native space of the
Gaussians, such as band-limited functions.
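This derivative formula is easy to sanity-check numerically. The sketch below (pure Python; ε = 1.5 and the evaluation points are arbitrary test values) compares the closed form φ^{(ℓ)}(r) = (-1)^ℓ ε^{2ℓ} e^{-ε²r} against central finite differences.

```python
import math

EPS = 1.5  # shape parameter (arbitrary test value)

def phi(r):
    """phi(r) = exp(-eps^2 r), the Gaussian as a function of r = ||x||^2."""
    return math.exp(-EPS**2 * r)

def phi_deriv_closed(ell, r):
    """Stated closed form: phi^(l)(r) = (-1)^l eps^(2l) exp(-eps^2 r)."""
    return (-1)**ell * EPS**(2 * ell) * math.exp(-EPS**2 * r)

def phi_deriv_fd(ell, r, h=1e-3):
    """Central finite-difference approximation of the l-th derivative."""
    if ell == 0:
        return phi(r)
    # first-order central difference applied recursively
    return (phi_deriv_fd(ell - 1, r + h, h) - phi_deriv_fd(ell - 1, r - h, h)) / (2 * h)

for ell in (1, 2, 3):
    exact = phi_deriv_closed(ell, 0.7)
    approx = phi_deriv_fd(ell, 0.7)
    assert abs(approx - exact) / abs(exact) < 1e-3
```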
Example
For Laguerre-Gaussians

Φ(x) = L_n^{s/2}(ε²‖x‖²) e^{-ε²‖x‖²},  ε > 0 fixed,

we have

φ(r) = L_n^{s/2}(ε²r) e^{-ε²r}

and the derivatives φ^{(ℓ)} will be bounded by |φ^{(ℓ)}(0)| = p_n(ℓ) ε^{2ℓ}, where
p_n is a polynomial of degree n.
Thus, the approximation power of Laguerre-Gaussians falls between
(3) and (2) and Laguerre-Gaussians have at least spectral
approximation power.
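The growth of these derivative bounds can be made concrete with exact rational arithmetic. The sketch below uses the hypothetical choice n = 1, s = 2 (so that L_1^{s/2}(x) = L_1^1(x) = 2 - x); every derivative then keeps the form φ^{(ℓ)}(r) = (a_ℓ + b_ℓ r) e^{-ε²r}, and one finds φ^{(ℓ)}(0) = (-1)^ℓ (ℓ + 2) ε^{2ℓ}, i.e., p_1(ℓ) = ℓ + 2 is indeed a polynomial of degree n = 1.

```python
from fractions import Fraction

# Hypothetical illustration: n = 1, s = 2, so L_1^{s/2}(x) = 2 - x and
#   phi(r) = (2 - eps^2 r) e^{-eps^2 r}.
# Differentiating (a + b r) e^{-eps^2 r} gives the recurrence
#   a <- b - eps^2 a,   b <- -eps^2 b.
eps2 = Fraction(9, 4)          # eps^2 for eps = 3/2 (exact rational arithmetic)
a, b = Fraction(2), -eps2      # coefficients of phi itself (l = 0)

for ell in range(1, 9):
    a, b = b - eps2 * a, -eps2 * b
    # phi^(l)(0) = a_l; the claim is p_1(l) = l + 2, a degree-1 polynomial in l
    assert a == (-1)**ell * (ell + 2) * eps2**ell
```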
provided |α| ≤ ⌈β - (s+1)/2⌉, h_{X,Ω} is sufficiently small, and f ∈ N_Φ(Ω).

Example
we get

|D^α f(x) - D^α Pf(x)| ≤ C h_{X,Ω}^{β/2-|α|} |f|_{N_Φ(Ω)}  (6)

provided Ω is a bounded domain, |α| ≤ ⌈β/2⌉ - 1, h_{X,Ω} is sufficiently
small, and f ∈ N_Φ(Ω).
Example
we get

|D^α f(x) - D^α Pf(x)| ≤ C h_{X,Ω}^{k-|α|} |f|_{N_Φ(Ω)}  (7)

provided Ω is a bounded domain, |α| ≤ k - 1, h_{X,Ω} is sufficiently small,
and f ∈ N_Φ(Ω).
Φ(x) = |x|, x ∈ R
Example
Similarly, for thin plate splines
one would expect order O(h²) in the case of pure, i.e., |α| = 0, function
approximation.
However, the estimate (7) yields only O(h).
Remark
These two examples suggest that it should be possible to double the
approximation orders obtained thus far.
One can improve the estimates for functions with finite smoothness
(i.e., Matérn functions, Wendland functions, radial powers, and thin
plate splines) by either (or both) of the following two ideas:
by requiring the data function f to be even smoother than what the
native space prescribes, i.e., by building certain boundary
conditions into the native space;
by using weaker norms to measure the error.
The second idea, measuring the error in a weaker norm, is rather obvious
and can already be found in the early paper [Duchon (1978)].
Example
After applying both of these techniques the final approximation order
estimate for interpolation with the compactly supported functions φ_{s,k}
on a bounded domain Ω is (see [Wendland (1997)])

‖f - Pf‖_{L₂(Ω)} ≤ C h_{X,Ω}^{2k+s+1} ‖f‖_{W₂^{2k+s+1}(R^s)},  (8)

where we assume f ∈ W₂^{2k+s+1}(R^s) ⊂ W₂^{k+(s+1)/2}(R^s) = N_Φ(R^s).
Example (cont.)
The L∞ error for Wendland functions φ_{s,k} (without the L₂ relaxation) is

‖f - Pf‖_{L∞(Ω)} ≤ C h_{X,Ω}^{2k+1+s/2} ‖f‖_{W₂^{2k+s+1}(R^s)},

where f needs to be such that it not only lies in N_Φ(Ω) = W₂^{k+(s+1)/2}(Ω),
but also has an extension to the Sobolev space W₂^{2k+s+1}(R^s) that can
be written as a convolution of an appropriate L₂ function supported in Ω
with the basic function φ_{s,k}.
For the specific example of φ_{3,1}(r) = (1 - r)⁴₊(4r + 1) on R³ we get

|f(x) - Pf(x)| ≤ C h_{X,Ω}^{4.5} ‖f‖_{W₂^6(R³)},

which is more in line with the numerical evidence obtained earlier (see
[Wendland (1997)] for details).
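As a rough illustration of convergence under refinement (a sketch only: it uses the kernel φ_{3,1} from the text in one dimension with an arbitrary test function, support radius 0.6, and point counts, and it does not attempt to verify the exact exponents above):

```python
import numpy as np

def wendland31(r):
    """phi_{3,1}(r) = (1 - r)_+^4 (4r + 1), positive definite on R^3 (hence on R)."""
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def interp_error(n, delta=0.6):
    """Max error of RBF interpolation on [0, 1] with n uniform centers."""
    f = lambda x: np.sin(2 * np.pi * x)
    x = np.linspace(0.0, 1.0, n)                      # centers
    A = wendland31(np.abs(x[:, None] - x[None, :]) / delta)
    c = np.linalg.solve(A, f(x))
    xe = np.linspace(0.0, 1.0, 501)                   # evaluation grid
    Pf = wendland31(np.abs(xe[:, None] - x[None, :]) / delta) @ c
    return np.max(np.abs(Pf - f(xe)))

errs = [interp_error(n) for n in (9, 17, 33, 65)]
# non-stationary refinement (fixed support radius): the error drops markedly
assert errs[-1] < errs[0] / 10
```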
Example
For radial powers one obtains L₂-error estimates of order O(h^{β+s}).
For thin plate splines one obtains L₂-error estimates of order
O(h^{2k+s}).
Remark
More work on improved error bounds can be found in, e.g.,
[Johnson (2004)] or [Schaback (1999)].
Remark
Since for the examples of piecewise linear splines and thin plate
splines discussed above our kernels (for interpolation on bounded
domains) were given by full-space Green's functions, another
possibility to obtain the full approximation order on bounded domains
may be to instead use a Green's function for the bounded domain with
appropriate boundary conditions added.
The error bounds mentioned so far were all valid under the assumption
that the function f providing the data came from (a subspace of) the
native space of the RBF employed in the interpolation.
We now mention a few recent results that provide error bounds for
interpolation of functions f not in the native space of the basic function.
In particular, the case when f lies in some Sobolev space that is larger
than the native space is of great interest.
A rather general theorem (sometimes referred to as a sampling
inequality) was recently given in [Narcowich et al. (2005)].
In this theorem Narcowich, Ward and Wendland provide Sobolev
bounds for functions with many zeros. However, since the interpolation
error function is just such a function, these bounds have a direct
application to our situation.
We point out that this theorem again applies to the non-stationary
setting.
fasshauer@iit.edu MATH 590 Chapter 15 26
Error Bounds for Functions Outside the Native Space
Theorem
Let k be a positive integer, 0 < δ ≤ 1, 1 ≤ p < ∞, 1 ≤ q ≤ ∞ and let
α be a multi-index satisfying k > |α| + s/p or, for p = 1, k ≥ |α| + s. Let
X ⊂ Ω be a discrete set with fill distance h = h_{X,Ω}, where Ω is a
compact set with Lipschitz boundary which satisfies an interior cone
condition. If u ∈ W_p^{k+δ}(Ω) satisfies u|_X = 0, then

|u|_{W_q^{|α|}(Ω)} ≤ c h^{k+δ-|α|-s(1/p-1/q)₊} |u|_{W_p^{k+δ}(Ω)},
Applying the theorem to u = f - Pf, where the approximation operator

P : W_p^{k+δ}(Ω) → V

satisfies

|Pf|_{W_p^{k+δ}(Ω)} ≤ |f|_{W_p^{k+δ}(Ω)},

yields error bounds for interpolation.
This new approach has the advantage that the term C_α(x), which may
depend on both α and X, no longer needs to be dealt with.
c₁ (1 + ‖ω‖₂²)^{-(τ+s)/2} ≤ Φ̂(ω) ≤ c₂ (1 + ‖ω‖₂²)^{-(τ+s)/2},  ω ∈ R^s,  (9)
Example
Examples of basic functions with an appropriately decaying Fourier
transform are provided by the families of
Wendland, or
Matérn functions.
where ρ_X = h_{X,Ω}/q_X is the mesh ratio for X and q_X is the separation
distance q_X = ½ min_{i≠j} ‖x_i - x_j‖₂.
Recall that
the fill distance corresponds to the radius of the largest possible empty
ball that can be placed between the points in X, while
the separation distance (cf. Chapter 16), on the other hand, can be
interpreted as the radius of the largest ball that can be placed around
every point in X such that no two balls overlap.
Thus, the mesh ratio is a measure of the non-uniformity of the distribution
of the points in X.
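These three quantities are easy to compute for a concrete point set. The sketch below does so for a uniform 5×5 grid on [0, 1]² (the fill distance is approximated on a fine evaluation grid); for such a grid with spacing d one expects q_X = d/2 and h_{X,Ω} ≈ d√2/2, hence mesh ratio ρ_X ≈ √2.

```python
import itertools, math

# point set X: uniform 5 x 5 grid on [0,1]^2 with spacing d = 0.25
d = 0.25
X = [(i * d, j * d) for i in range(5) for j in range(5)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# separation distance q_X = (1/2) min_{i != j} ||x_i - x_j||_2
qX = 0.5 * min(dist(p, q) for p, q in itertools.combinations(X, 2))

# fill distance h_{X,Omega}: radius of the largest empty ball, approximated
# by maximizing the distance to X over a fine grid on Omega = [0,1]^2
m = 101
h = max(
    min(dist((u / (m - 1), v / (m - 1)), p) for p in X)
    for u in range(m) for v in range(m)
)

rho = h / qX  # mesh ratio
assert abs(qX - d / 2) < 1e-12
assert 1.3 < rho < 1.45   # close to sqrt(2) ~ 1.414 for a uniform grid
```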
Similar results were obtained earlier in [Brownlee and Light (2004)] (for
radial powers and thin plate splines only), and in [Yoon (2003)] (for
shifted surface splines, see below).
Example
If we consider polyharmonic splines, then the decay condition (9) for
the Fourier transform is satisfied with
= 2 for thin plate splines and with
= for radial powers.
If we take k = , n = 0, and q = in the error estimate
k ns( 12 q1 )+
|f Pf |Wqn () cXk h kf kC k () ,
then we arrive,
for thin plate splines (x) = kxk2 log(kxk), at the bound
s
|f Pf |L ch2 2 kf kC 2 ()
and, for radial powers (x) = kxk , at
s
|f Pf |L ch 2 kf kC () .
for some fixed base scale ε₀ and study the approximation error based
on the RBF interpolant

Pf(x) = Σ_{j=1}^{N} c_j φ_h(‖x - x_j‖),

where φ_h = φ(·/h).
Example
A rather disappointing fact is that Gaussians do not provide any
positive approximation order, i.e., the approximation process is
saturated.
This was studied by [Buhmann (1989)] on infinite lattices.
For quasi-interpolation the approximate approximation approach of
Maz'ya shows that it is possible to choose ε₀ in such a way that the
level at which the saturation occurs can be controlled (see, e.g.,
[Maz'ya and Schmidt (1996), Maz'ya and Schmidt (2007)]).
Therefore, Gaussians may very well be used for stationary
interpolation provided an appropriate initial shape parameter is
chosen.
Remark
We will illustrate this behavior in the next chapter.
The same kind of argument also applies to the Laguerre-Gaussians of
Chapter 4.
Error Bounds for Stationary Approximation
Example
Basis functions with compact support such as the Wendland functions
also do not provide any positive approximation order in the stationary
case.
This can be seen by looking at the power function for the scaled basic
function φ_h = φ(·/h): under this scaling the relevant fill distance is

h_{X_h,Ω_h} = (1/h) h_{X,Ω},

where X_h and Ω_h denote the correspondingly scaled data set and domain.
Example (cont.)
Therefore, the power function (which can be bounded in terms of the
fill distance) satisfies

P_{φ_h,X}(x) ≤ C h_{X,Ω}/h,

which does not tend to zero in the stationary setting, where h is
proportional to h_{X,Ω}.
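A small experiment illustrates this saturation (a sketch with arbitrary choices: test function sin(2πx), support radius 4h so that the Wendland basis scales with the fill distance):

```python
import numpy as np

def wendland31(r):
    """phi_{3,1}(r) = (1 - r)_+^4 (4r + 1)."""
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def stationary_error(n):
    """Interpolate on [0,1] with n uniform centers and support radius 4h."""
    f = lambda x: np.sin(2 * np.pi * x)
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    delta = 4.0 * h                                   # stationary scaling
    A = wendland31(np.abs(x[:, None] - x[None, :]) / delta)
    c = np.linalg.solve(A, f(x))
    xe = np.linspace(0.0, 1.0, 501)
    Pf = wendland31(np.abs(xe[:, None] - x[None, :]) / delta) @ c
    return np.max(np.abs(Pf - f(xe)))

errs = [stationary_error(n) for n in (9, 17, 33, 65)]
# no positive approximation order: between the last two levels the error stalls
assert errs[-1] > 0.3 * errs[-2]
```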
Remark
If, on the other hand, we work in the approximate approximation
regime, then we can obtain good convergence in many cases (see the
next chapter for some numerical experiments).
Example
Stationary interpolation with (inverse) multiquadrics, radial powers and
thin plate splines presents no difficulties.
In fact, [Schaback (1995b)] shows that the native space error bound for
thin plate splines and radial powers is invariant under a stationary
scaling.
Example (cont.)
Yoon provides error estimates for stationary approximation of rough
functions (i.e., functions that are not in the native space of the basic
function) by so-called shifted surface splines.
Remark
These functions include all of the (inverse) multiquadrics, radial powers
and thin plate splines.
Example (cont.)
Yoon has the following theorem (see [Yoon (2003)] for the Lp case, and
[Yoon (2001)] for L bounds only).
Theorem
Let Φ be a shifted surface spline with shape parameter inversely
proportional to the fill distance h_{X,Ω}. Then there exists a positive
constant C (independent of X) such that for every f in the Sobolev
space W₂^γ(Ω) ∩ W_∞^γ(Ω) we have

‖f - Pf‖_{L_p(Ω)} ≤ C h_{X,Ω}^{γ_p} ‖f‖_{W₂^γ(Ω)}

with

γ_p = min{γ, γ - s/2 + s/p}.

Furthermore, if f ∈ W₂^ν(Ω) ∩ W_∞^ν(Ω) with max{0, s/2 - s/p} < ν < γ, then

‖f - Pf‖_{L_p(Ω)} = o(h_{X,Ω}^{ν - s/2 + s/p}).
Example (cont.)
Yoon's estimates address the question:
How well do the infinitely smooth (inverse) multiquadrics
approximate functions that are less smooth than those in their
native space?
Remark
For thin plate splines and radial powers the approximation orders
in Yoon's theorem are equivalent to those of the theorem from
[Narcowich et al. (2005)] and the results of Brownlee and Light
mentioned above.
This is to be expected due to the invariance of these basic functions
with respect to scaling.
The second part of Yoons result is a step toward exact
approximation orders as is the work of [Maiorov (2005)] and
[Bejancu (1999)] mentioned above.
None of the error bounds discussed thus far have taken into
account the possibility of varying the shape parameter for a fixed
data set X .
However, in the literature the infinitely smooth basic functions
such as the Gaussians and (inverse) multiquadrics are usually
formulated including the shape parameter (or another parameter
equivalent to it) and one may wonder how a change in this shape
parameter affects the convergence properties of the RBF
interpolant.
In fact, quite a bit of work has been spent on the quest for the
optimal shape parameter (see, e.g.,
[Carlson and Foley (1991), Fasshauer and Zhang (2007),
Foley (1994), Hagan and Kansa (1994), Luh (2010a),
Kansa and Carlson (1992), Rippa (1999), Tarwater (1985),
Wertz et al. (2006)]).
Madych showed that for these basic functions there exists a positive
constant λ < 1 such that the interpolation error satisfies a bound of the
form

|f(x) - Pf(x)| ≤ C λ^{1/(ε h_{X,Ω})} ‖f‖_{N_Φ(Ω)}.
Remark
This estimate shows that taking either the shape parameter ε or
the fill distance h_{X,Ω} to zero results in exponential convergence.
However, numerical experiments as well as a more careful
theoretical analysis (see [Luh (2010b)]) show that the constant C
is not independent of ε, so that there may be a minimal error for a
positive value of ε.
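The resulting trade-off in ε is easy to observe numerically. The sketch below uses arbitrary choices (N = 15 uniform points on [-1, 1], a Runge-type test function, a coarse grid of ε values, plain direct solves in double precision); typically the smallest error occurs at an intermediate ε, since large ε loses accuracy while very small ε runs into the ill-conditioning that makes C grow.

```python
import numpy as np

def gauss_interp_error(eps, n=15):
    """Max error of Gaussian RBF interpolation of a Runge-type function."""
    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
    x = np.linspace(-1.0, 1.0, n)
    A = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
    c = np.linalg.solve(A, f(x))
    xe = np.linspace(-1.0, 1.0, 401)
    Pf = np.exp(-(eps * (xe[:, None] - x[None, :]))**2) @ c
    return np.max(np.abs(Pf - f(xe)))

eps_grid = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
errs = [gauss_interp_error(e) for e in eps_grid]
best = int(np.argmin(errs))
# the minimum is attained at an interior value of eps, not at either extreme
assert 0 < best < len(eps_grid) - 1
```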
fasshauer@iit.edu MATH 590 Chapter 15 51
Dimension-Independent Error Bounds
Remark
Both of these bounds show that the rate of convergence
deteriorates as s increases.
Moreover, the dependence of the constants on s is not clear.
Therefore, these kinds of error bounds (and in fact almost all
error bounds in the RBF literature) suffer from the curse of
dimensionality.
We will now present some results from [Fasshauer et al. (2010)]
on dimension-independent convergence rates for Gaussian kernel
approximation.
err^{wc}_{2,ρ} = sup_{‖f‖_{N_K(R^s)} ≤ 1} ‖f - Pf‖_{2,ρ},

so that

‖f - Pf‖_{2,ρ} ≤ err^{wc}_{2,ρ} ‖f‖_{N_K(R^s)}  for all f ∈ N_K(R^s).
For function approximation this means that the data sites have to
be chosen in an optimal way.
The results in [Fasshauer et al. (2010)] are non-constructive, i.e.,
no such optimal design is specified.
However, a Smolyak or sparse grid algorithm is a natural candidate
for such a design.
If we are allowed to choose arbitrary linear functionals, then the
optimal choice for weighted L2 approximation is known.
In either case we will need an eigenfunction expansion of the
Gaussian kernel.
and eigenfunctions

φ_k(x) = √( (1 + 4ε²)^{1/4} / (2^{k-1}(k-1)!) ) exp( -2ε²x² / (1 + √(1 + 4ε²)) ) H_{k-1}( (1 + 4ε²)^{1/4} x ),
Remark
The multivariate (and anisotropic) case can be handled using products
of univariate eigenvalues and eigenfunctions.
For details see [Fasshauer et al. (2010)] or [Fasshauer (2010)].
where

K(x, y) = Σ_{k=1}^{∞} λ_k φ_k(x) φ_k(y),   ∫ K(x, y) φ_k(y) ρ(y) dy = λ_k φ_k(x).
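The eigenpair relation can be checked numerically for the first eigenfunction (k = 1, so H₀ = 1). The sketch below assumes the Gaussian weight ρ(x) = e^{-x²}/√π, which is consistent with the eigenfunctions displayed above; with ε = 1, applying the integral operator to φ₁ by trapezoidal quadrature returns λ₁φ₁ with λ₁ = (√5 - 1)/2 (the quadrature interval and resolution are arbitrary).

```python
import math

EPS = 1.0
C = 2 * EPS**2 / (1 + math.sqrt(1 + 4 * EPS**2))   # = (sqrt(5)-1)/2 for eps = 1

def phi1(x):
    """First eigenfunction (H_0 = 1): (1 + 4 eps^2)^{1/8} exp(-C x^2)."""
    return (1 + 4 * EPS**2)**0.125 * math.exp(-C * x * x)

def rho(y):
    """Assumed Gaussian weight rho(y) = exp(-y^2)/sqrt(pi)."""
    return math.exp(-y * y) / math.sqrt(math.pi)

def K(x, y):
    return math.exp(-(EPS * (x - y))**2)

def apply_operator(x, a=-8.0, b=8.0, m=3201):
    """(T phi1)(x) = int K(x,y) phi1(y) rho(y) dy via the trapezoidal rule."""
    h = (b - a) / (m - 1)
    total = 0.0
    for i in range(m):
        y = a + i * h
        w = 0.5 if i in (0, m - 1) else 1.0
        total += w * K(x, y) * phi1(y) * rho(y)
    return total * h

lam1 = (math.sqrt(5) - 1) / 2          # eigenvalue for k = 1 at eps = 1
for x in (0.0, 0.5, 1.3):
    assert abs(apply_operator(x) / phi1(x) - lam1) < 1e-6
```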
Remark
Even if we do not have an eigenfunction expansion of a specific
kernel available, the work of [Fasshauer et al. (2010)] shows that
for any radial (isotropic) kernel one has a dimension-independent
Monte Carlo type convergence rate of O(N^{-1/2+δ}), δ > 0, provided
arbitrary linear functionals are allowed to generate the data.
For translation-invariant (stationary) kernels the situation is similar.
However, the constant in the O-notation depends in any case
on the sum of the eigenvalues of the kernel. For the radial case
this sum is simply φ(0) (independent of s), while for general
translation-invariant kernels K(x, y) = K̃(x - y) it is K̃(0), which may
depend on s.
Remark
These results show that even though RBF methods are often
advertised as being "dimension-blind", their rates of
convergence are only excellent (i.e., spectral for infinitely smooth
kernels) if the dimension s is small.
For large dimensions the constants in the O-notation take over.
If one, however, permits an anisotropic scaling of the kernel (i.e.,
elliptical symmetry instead of strict radial symmetry) and if those
scale parameters decay rapidly with increasing dimension, then
excellent convergence rates for approximation of smooth functions
can be maintained independent of s.
For example, if the data sites are located such that they guarantee a
unique polynomial interpolant, then the limiting RBF interpolant is
given by this polynomial.
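In one dimension the limiting interpolant is the Lagrange interpolation polynomial, and the flat limit can be observed directly. The sketch below uses arbitrary choices (5 data sites, f(x) = sin(x + 0.3), ε = 0.05, which keeps the linear system just solvable in double precision):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 5)            # 5 data sites
f = np.sin(x + 0.3)

# nearly flat Gaussian RBF interpolant (small shape parameter)
eps = 0.05
A = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
c = np.linalg.solve(A, f)

xe = np.linspace(-1.0, 1.0, 101)
rbf = np.exp(-(eps * (xe[:, None] - x[None, :]))**2) @ c

# degree-4 interpolating polynomial through the same data
poly = np.polyval(np.polyfit(x, f, 4), xe)

# for eps -> 0 the RBF interpolant approaches the polynomial interpolant
assert np.max(np.abs(rbf - poly)) < 0.02
```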
Remark
These statements require the RBFs to satisfy a condition on
certain coefficient matrices A_{p,J}. This condition was left unproven
in [Larsson and Fornberg (2005)] and verified in
[Lee et al. (2007)].
Remark
In [Fornberg and Wright (2004)] the authors describe a so-called
Contour-Padé algorithm that makes it possible (for data sets of
relatively modest size) to compute the RBF interpolant for all
values of the shape parameter ε including the limiting case ε → 0.
L = -d²/dx² + ε² I.

On the other hand, it is well-known that univariate C⁰ piecewise
linear splines may be expressed in terms of kernels of the form
K(x, z) ≐ |x - z|. The corresponding differential operator is

L = -d²/dx².

Note that the differential operator for the Matérn kernel
converges to that of the piecewise linear splines as ε → 0.
Example (cont.)
The univariate C² tension spline kernel [Renka (1987)]
K(x, z) ≐ e^{-ε|x-z|} + ε|x - z| is the Green's kernel of

L = d⁴/dx⁴ - ε² d²/dx²,

while the univariate C² cubic spline kernel K(x, z) ≐ |x - z|³
corresponds to

L = d⁴/dx⁴.

Again, the differential operator for the tension spline converges
to that of the cubic spline as ε → 0.
Example (cont.)
In [Berlinet and Thomas-Agnan (2004)] we find a so-called
univariate Sobolev kernel of the form
K(x, z) ≐ e^{-ε|x-z|} sin(ε|x - z| + π/4), which is associated with

L = d⁴/dx⁴ + 4ε⁴ I.
The operator for this kernel also converges to that of the cubic
spline kernel, but the effect of the scale parameter is different than
for the tension spline.
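Away from the diagonal x = z, a Green's kernel must solve the homogeneous equation L K(·, z) = 0. The sketch below checks this with finite differences (pure Python, ε = 1, arbitrary evaluation points), reading the operators as L = d⁴/dx⁴ - ε² d²/dx² for the tension spline kernel and L = d⁴/dx⁴ + 4ε⁴ I for the Sobolev kernel:

```python
import math

EPS = 1.0
H = 1e-2  # finite-difference step

def d2(f, u):
    return (f(u - H) - 2 * f(u) + f(u + H)) / H**2

def d4(f, u):
    return (f(u - 2*H) - 4*f(u - H) + 6*f(u) - 4*f(u + H) + f(u + 2*H)) / H**4

# tension spline kernel, written as a function of u = x - z > 0
tension = lambda u: math.exp(-EPS * u) + EPS * u
# Sobolev kernel of Berlinet & Thomas-Agnan, same convention
sobolev = lambda u: math.exp(-EPS * u) * math.sin(EPS * u + math.pi / 4)

for u in (0.5, 1.0, 2.0):
    # L = d^4/dx^4 - eps^2 d^2/dx^2 annihilates the tension spline kernel
    assert abs(d4(tension, u) - EPS**2 * d2(tension, u)) < 1e-3
    # L = d^4/dx^4 + 4 eps^4 I annihilates the Sobolev kernel
    assert abs(d4(sobolev, u) + 4 * EPS**4 * sobolev(u)) < 1e-3
```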
Remark
Note that this Sobolev kernel is different from the Sobolev splines
(Matérn functions) discussed earlier; the terminology in the literature
is unfortunately not consistent.
Example (cont.)
The general multivariate Matérn kernels are of the form

K(x, y) ≐ K_{m-s/2}(ε‖x - y‖) (ε‖x - y‖)^{m-s/2},  m > s/2,

and can be obtained as Green's kernels of (see [Ye (2010)])

L = (ε²I - Δ)^m,  m > s/2.

We contrast this with the polyharmonic spline kernels

K(x, y) ≐ ‖x - y‖^{2m-s}  (s odd),   K(x, y) ≐ ‖x - y‖^{2m-s} log ‖x - y‖  (s even),

and

L = (-1)^m Δ^m,  m > s/2.
Example
For univariate C⁰, C² and C⁴ Matérn kernels, respectively, we have

φ(r) ≐ e^{-εr} = 1 - εr + (εr)²/2 - (εr)³/6 + ⋯,

φ(r) ≐ (1 + εr) e^{-εr} = 1 - (εr)²/2 + (εr)³/3 - (εr)⁴/8 + ⋯,

φ(r) ≐ (3 + 3εr + (εr)²) e^{-εr} = 3 - (εr)²/2 + (εr)⁴/8 - (εr)⁵/15 + (εr)⁶/48 + ⋯.
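These Taylor coefficients can be verified exactly with rational arithmetic by multiplying the series of e^{-t} (with t = εr) by the stated polynomial prefactors:

```python
from fractions import Fraction
from math import factorial

def series_product(poly, n_terms=7):
    """Taylor coefficients of poly(t) * exp(-t), poly given by its coefficients."""
    exp_neg = [Fraction((-1)**k, factorial(k)) for k in range(n_terms)]
    out = [Fraction(0)] * n_terms
    for i, p in enumerate(poly):
        for k in range(n_terms - i):
            out[i + k] += Fraction(p) * exp_neg[k]
    return out

# C^0 Matern: e^{-t} = 1 - t + t^2/2 - t^3/6 + ...
assert series_product([1])[:4] == [1, -1, Fraction(1, 2), Fraction(-1, 6)]
# C^2 Matern: (1 + t) e^{-t} = 1 - t^2/2 + t^3/3 - t^4/8 + ...
assert series_product([1, 1])[:5] == [1, 0, Fraction(-1, 2), Fraction(1, 3), Fraction(-1, 8)]
# C^4 Matern: (3 + 3t + t^2) e^{-t} = 3 - t^2/2 + t^4/8 - t^5/15 + t^6/48 + ...
assert series_product([3, 3, 1]) == [3, 0, Fraction(-1, 2), 0,
                                     Fraction(1, 8), Fraction(-1, 15), Fraction(1, 48)]
```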
Remark
The previous theorem does not cover Matérn kernels with odd-order
smoothness. However, all other examples listed above are covered.
References
Wendland, H. (2005a).
Scattered Data Approximation.
Cambridge University Press (Cambridge).
Bejancu, A. (1999).
Local accuracy for radial basis function interpolation on finite uniform grids.
J. Approx. Theory 99, pp. 242–257.
de Boor, C. (1992).
On the error in multivariate polynomial interpolation.
Applied Numerical Mathematics 10, pp. 297–305.
de Boor, C. (2006).
On interpolation by radial polynomials.
Adv. in Comput. Math. 24, pp. 143–153.
de Boor, C. and Ron, A. (1990).
On multivariate polynomial interpolation.
Constr. Approx. 6, pp. 287–302.
de Boor, C. and Ron, A. (1992).
The least solution for the polynomial interpolation problem.
Math. Z. 210, pp. 347–378.
Brownlee, R. and Light, W. (2004).
Approximation orders for interpolation by surface splines to rough functions.
IMA J. Numer. Anal. 24, pp. 179–192.
Buhmann, M. D. (1989).
Multivariate interpolation using radial basis functions.
Ph.D. Dissertation, University of Cambridge.
Buhmann, M. D. and Dyn, N. (1991).
Error estimates for multiquadric interpolation.
in Curves and Surfaces, P.-J. Laurent, A. Le Méhauté, and L. L. Schumaker
(eds.), Academic Press (New York), pp. 51–58.
Carlson, R. E. and Foley, T. A. (1991).
The parameter R² in multiquadric interpolation.
Comput. Math. Appl. 21, pp. 29–42.
Driscoll, T. A. and Fornberg, B. (2002).
Interpolation in the limit of increasingly flat radial basis functions.
Comput. Math. Appl. 43, pp. 413–422.
Duchon, J. (1978).
Sur l'erreur d'interpolation des fonctions de plusieurs variables par les
Dᵐ-splines.
Rev. Française Automat. Informat. Rech. Opér., Anal. Numér. 12, pp. 325–334.
Fasshauer, G. E. (2010).
Green's functions: taking another look at kernel approximation, radial basis
functions and splines.
Submitted.
Fasshauer, G. E., Hickernell, F. J. and Woźniakowski, H. (2010).
Rate of convergence and tractability of the radial function approximation problem.
Submitted.
Fasshauer, G. E. and McCourt, M. J. (2010).
Stable evaluation of Gaussian RBF interpolants.
In preparation.
Johnson, M. J. (2004).
An error analysis for radial basis function interpolation.
Numer. Math. 98(4), pp. 675–694.
Kansa, E. J. and Carlson, R. E. (1992).
Improved accuracy of multiquadric interpolation using variable shape parameters.
Light, W. A. (1996).
Variational error bounds for radial basis functions.
in Numerical Analysis 1995, D. F. Griffiths and G. A. Watson (eds.), Longman
(Harlow), pp. 94–106.
Light, W. A. and Wayne, H. (1995).
Error estimates for approximation by radial basis functions.
in Approximation Theory, Wavelets and Applications, S. P. Singh (ed.), Kluwer
(Dordrecht), pp. 215–246.
Light, W. A. and Wayne, H. (1998).
On power functions and error estimates for radial basis function interpolation.
J. Approx. Theory 92, pp. 245–266.
Luh, L.-T. (2009).
An improved error bound for multiquadric and inverse multiquadric interpolations.
Int. J. Numer. Meth. Applic. 1/2, pp. 101–120.
Madych, W. R. (1991).
Error estimates for interpolation by generalized splines.
in Curves and Surfaces, P.-J. Laurent, A. Le Méhauté, and L. L. Schumaker
(eds.), Academic Press (New York), pp. 297–306.
Madych, W. R. (1992).
Miscellaneous error bounds for multiquadric and related interpolators.
Comput. Math. Appl. 24, pp. 121–138.
Madych, W. R. and Nelson, S. A. (1988).
Multivariate interpolation and conditionally positive definite functions.
Approx. Theory Appl. 4, pp. 77–89.
Madych, W. R. and Nelson, S. A. (1992).
Bounds on multivariate polynomials and exponential error estimates for
multiquadric interpolation.
J. Approx. Theory 70, pp. 94–114.
Maiorov, V. (2005).
On lower bounds in radial basis approximation.
Adv. in Comp. Math. 22, pp. 103–113.
Maz'ya, V. and Schmidt, G. (1996).
On approximate approximations using Gaussian kernels.
IMA J. Numer. Anal. 16, pp. 13–29.
Narcowich, F. J. and Ward, J. D. (2004).
Scattered-data interpolation on R^n: error estimates for radial basis and
band-limited functions.
SIAM J. Math. Anal. 36(1), pp. 284–300.
Narcowich, F. J., Ward, J. D. and Wendland, H. (2003).
Refined error estimates for radial basis function interpolation.
Constr. Approx. 19(4), pp. 541–564.
Schaback, R. (1995a).
Error estimates and condition numbers for radial basis function interpolation.
Adv. in Comput. Math. 3, pp. 251–264.
Schaback, R. (1995b).
Multivariate interpolation and approximation by translates of a basis function.
in Approximation Theory VIII, Vol. 1: Approximation and Interpolation, C. Chui
and L. Schumaker (eds.), World Scientific Publishing (Singapore), pp. 491–514.
Schaback, R. (1996).
Approximation by radial basis functions with finitely many centers.
Constr. Approx. 12, pp. 331–340.
Schaback, R. (1999).
Improved error bounds for scattered data interpolation by radial basis functions.
Math. Comp. 68(225), pp. 201–216.
Schaback, R. (2005).
Multivariate interpolation by polynomials and radial basis functions.
Constr. Approx. 21, pp. 293–317.
Schaback, R. (2008).
Limit problems for interpolation by analytic radial basis functions.
J. Comp. Appl. Math. 212(2), pp. 127–149.
Song, G., Riddle, J., Fasshauer, G. E. and Hickernell, F. J. (2009).
Multivariate interpolation with increasingly flat radial basis functions of finite
smoothness.
Submitted.
Tarwater, A. E. (1985).
A parameter study of Hardy's multiquadric method for scattered data
interpolation.
Lawrence Livermore National Laboratory, TR UCRL-563670.
Wendland, H. (1997).
Sobolev-type error estimates for interpolation by radial basis functions.
in Surface Fitting and Multiresolution Methods, A. Le Méhauté, C. Rabut, and
L. L. Schumaker (eds.), Vanderbilt University Press (Nashville, TN), pp. 337–344.
Wendland, H. (1998).
Error estimates for interpolation by compactly supported radial basis functions of
minimal degree.
J. Approx. Theory 93, pp. 258–272.
Wendland, H. (2001).
Gaussian interpolation revisited.
in Trends in Approximation Theory, K. Kopotun, T. Lyche, and M. Neamtu (eds.),
Vanderbilt University Press, pp. 417–426.
Wertz, J., Kansa, E. J. and Ling, L. (2006).
The role of the multiquadric shape parameters in solving elliptic partial differential
equations.
Comput. Math. Appl. 51(8), pp. 1335–1348.
Wu, Z. and Schaback, R. (1993).
Local error estimates for radial basis function interpolation of scattered data.
IMA J. Numer. Anal. 13, pp. 13–27.
Ye, Q. (2010).
Reproducing kernels of generalized Sobolev spaces via a Green function
approach with differential operators.
Submitted.
Yoon, J. (2001).
Interpolation by radial basis functions on Sobolev space.
J. Approx. Theory 112, pp. 1–15.
Yoon, J. (2003).
L_p-error estimates for shifted surface spline interpolation on Sobolev space.
Math. Comp. 72, pp. 1349–1367.