
A VARIATIONAL METHOD IN IMAGE RECOVERY

GILLES AUBERT AND LUMINITA VESE

SIAM J. NUMER. ANAL. (c) 1997 Society for Industrial and Applied Mathematics
Vol. 34, No. 5, pp. 1948-1979, October 1997

Abstract. This paper is concerned with a classical denoising and deblurring problem in image recovery. Our approach is based on a variational method. By using the Legendre-Fenchel transform, we show how the nonquadratic criterion to be minimized can be split into a sequence of half-quadratic problems easier to solve numerically. First we prove an existence and uniqueness result, and then we describe the algorithm for computing the solution and we give a proof of convergence. Finally, we present some experimental results for synthetic and real images.

Key words. image processing, Legendre-Fenchel transform, partial differential equations, calculus of variations

AMS subject classifications. 35J, 49J, 65N

PII. S003614299529230X
1. Introduction. An important problem in image analysis is the reconstruction of an original image $f$ describing a real scene from an observed image $p$. The transformation (or degradation) connecting $f$ to $p$ is in general the result of two phenomena. The first phenomenon is deterministic and is related to the mode of image acquisition (for example, the computation of integral projections in tomography) or to possible defects of the imaging system (blur created by a wrong lens adjustment, by a movement, ...). The second phenomenon is random: the noise, a degradation inherent in any signal transmission. We suppose that the noise, denoted by $\eta$, is white, Gaussian, and additive.

The simplest model accounting for both blur and noise is the linear degradation model: we suppose that $f$ is connected to $p$ by an equation of the form

(1.1)  $p = Rf + \eta$,

where $R$ is a linear operator. (We remain, for the moment, intentionally vague about the exact significance of (1.1), in particular about the space on which this equation is defined.)

The reconstruction of $f$ can be identified, in that way, with an inverse problem: find $f$ from (1.1). In general, this problem is ill-posed in the sense of Hadamard: the information provided by $p$ and the model (1.1) is not sufficient to ensure the existence, uniqueness, and stability of a solution $f$.

It is therefore necessary to regularize the problem by adding an a priori constraint on the solution. The most classical and frequent approach in image reconstruction is a stochastic approach based, in the framework of Bayesian estimation, on the use of maximum a posteriori (MAP) estimation. Supposing that $f$ is a Markov field (which constitutes an a priori constraint), the MAP criterion then identifies with a minimization problem in which the energy $J$ depends on the image $f$ and on its gradient. We are not going into the details of this approach; instead, we refer the reader to the original article of Geman and Geman [13] or, for a clear and synthetic exposition, to the work of Charbonnier [4].

*Received by the editors September 22, 1995; accepted for publication (in revised form) February 6, 1996.
http://www.siam.org/journals/sinum/34-5/29230.html

†Laboratoire de Mathématiques, Université de Nice, Parc Valrose, BP 71, F 06108 Nice Cedex 02, France (gaubert@math.unice.fr).

‡Laboratoire de Mathématiques, Université de Nice, Parc Valrose, BP 71, F 06108 Nice Cedex 02, France (luminita@math.unice.fr).
Our purpose being to study the problem of image reconstruction via the calculus of variations and partial differential equations, we do not develop a new model here. We will study, for a continuous image, the model described by Geman and Geman for a numerical image (or a slight modification of it).

In section 2, we present more precisely the minimization problem studied here, as well as the assumptions to impose on the model. These will be dictated by the properties that we wish to obtain for the results. In section 3 we show, by using the Fenchel-Legendre transform, how we can introduce in the energy $J$ (possibly nonconvex) a dual variable $b$ allowing us to reduce the minimization of $J$ to a sequence of quadratic minimization problems. In section 4 we study the problem of the existence and uniqueness of a solution $f$. The results obtained are based on a singular perturbation result of Temam [12]. Moreover, we note the analogy of our problem with the minimal surface problem studied by Temam in [12] and [26]. In section 5 we describe the algorithm in continuous variables for computing the solution, as well as a convergence proof. Finally, in sections 6 and 7, we develop the numerical analysis of the approximated problem, and we try to validate the model by presenting some examples with synthetic or real images.
2. Description of the model. Assumptions. In continuous variables the observed image $p$ and the reconstructed image $f$ can be represented by functions from $\mathbb{R}^2$ to $\mathbb{R}$ which associate with the pixel $(x,y) \in \mathbb{R}^2$ its gray level $p(x,y)$ or $f(x,y)$; $\Omega$ is the support of the image (a rectangle in general). The gray levels being in finite number, we can suppose that the observation $p(x,y)$ verifies $0 \le p(x,y) \le 1$ for all $(x,y) \in \Omega$.

The stochastic model proposed by Geman and Geman for image reconstruction leads us, as we pointed out in the introduction, to search for a solution among the minima of the energy

(2.1)  $J_\lambda(f) = \int_\Omega \left( p(x,y) - (Rf)(x,y) \right)^2 dx\,dy + \lambda \int_\Omega \varphi(|Df(x,y)|)\,dx\,dy.$

$R$ is a linear operator from $L^2(\Omega)$ to $L^2(\Omega)$, and the first integral represents an attachment term to the data. The function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ is to be defined and symbolizes the regularization term (hence, a constraint on the solution). In [13], Geman and Geman added a regularization term of the form

$\int_\Omega \left[ \varphi\!\left( \frac{\partial f}{\partial x} \right) + \varphi\!\left( \frac{\partial f}{\partial y} \right) \right] dx\,dy,$

corresponding (for numerical images) to a regularization on lines and columns. This term, unlike ours, is not invariant under rotation. The number $\lambda \in \mathbb{R}_+$ is a parameter which allows us to balance the influence of each integral in the energy $J_\lambda(f)$. If $\lambda = 0$, $J_\lambda(f)$ is

$J_0(f) = \int_\Omega \left( p(x,y) - (Rf)(x,y) \right)^2 dx\,dy,$

and so the energy is reduced to the attachment term to the data alone, and the problem

(2.2)  $\inf_f J_0(f)$
corresponds to the least-squares method associated with equation (1.1). Formally, every solution of (2.2) verifies the equation

(2.3)  $R^* p = R^* R f,$

where we have denoted by $R^*$ the adjoint operator of $R$. Generally, (2.3) is an ill-posed problem: $R^* R$ is not always invertible, or the problem (2.3) is often unstable.

To overcome this difficulty, we either look for a solution in a smaller set (where we have some compactness), or we add a regularization term to the attachment term to the data. This method, which is due to Tikhonov [27], is the one we use here, and the additional term is represented in (2.1) by $\lambda \int_\Omega \varphi(|Df(x,y)|)\,dx\,dy$. It now remains to find some appropriate conditions on the function $\varphi$ in order to satisfy the following principle of image analysis:

(2.4)  The reconstructed image must be formed by homogeneous regions, separated by sharp edges.

The model must, therefore, diffuse within the regions where the variations of gray levels are weak, and otherwise it must preserve the boundaries of these regions; that is, it must respect the strong variations of gray levels.

So, supposing that the integrals in $J_\lambda(f)$ make sense, any function realizing the minimum of $J_\lambda$ must formally verify (for instance, in the sense of distributions) the Euler equation $J_\lambda'(f) = 0$, or

(2.5)  $-\frac{\lambda}{2}\,\mathrm{div}\!\left( \frac{\varphi'(|Df|)}{|Df|}\,Df \right) + R^* R f = R^* p.$

Writing (2.5) in a nonconservative form, we will obtain some sufficient assumptions on $\varphi$ in order to respect, as much as possible, the principle (2.4). To do this, for each point $(x,y)$ where $|Df(x,y)| \ne 0$, let $T(x,y) = \frac{Df(x,y)}{|Df(x,y)|}$ be the unit vector in the gradient direction, and $\xi(x,y)$ the unit vector in the direction orthogonal to $T(x,y)$. With the usual notations $f_x, f_y, f_{xx}, \ldots$ for the first and second partial derivatives of $f$, and by formally developing the divergence operator, (2.5) can be written as

(2.6)  $-\frac{\lambda}{2}\left( \frac{\varphi'(|Df|)}{|Df|}\,f_{\xi\xi} + \varphi''(|Df|)\,f_{TT} \right) + R^* R f = R^* p,$

where we have denoted by $f_{\xi\xi}$ and $f_{TT}$ the second derivatives of $f$ in the directions $\xi(x,y)$ and $T(x,y)$, respectively:

$f_{\xi\xi} = \frac{1}{|Df|^2}\left( f_x^2 f_{yy} + f_y^2 f_{xx} - 2 f_x f_y f_{xy} \right), \qquad f_{TT} = \frac{1}{|Df|^2}\left( f_x^2 f_{xx} + f_y^2 f_{yy} + 2 f_x f_y f_{xy} \right).$

If $f$ is regular (at least continuous), we can interpret the principle (2.4) in the following manner (as shown in Figure 2.1): locally, we represent a contour $\mathcal{C}$ separating two homogeneous regions of the image by a level curve of $f$: $\mathcal{C} = \{(x,y);\ f(x,y) = c\}$. In this case, the vector $T(x,y)$ is normal to $\mathcal{C}$ at $(x,y) \in \mathcal{C}$, and the expression $\frac{f_{\xi\xi}(x,y)}{|Df(x,y)|} = \mathrm{div}\!\left( \frac{Df(x,y)}{|Df(x,y)|} \right)$ represents the curvature of $\mathcal{C}$ at this point.
FIG. 2.1. A contour $\mathcal{C}$ separating two homogeneous regions: the level curve $f(x,y) = c$, with $f(x,y) < c$ on one side and $f(x,y) > c$ on the other; at a point $(x,y) \in \mathcal{C}$, $T$ is the normal and $\xi$ the tangent direction.
In the interior of the homogeneous regions $\{(x,y),\ f(x,y) < c\} \cup \{(x,y),\ f(x,y) > c\}$, where the variations of $f$ are weak, we want to encourage smoothing. If we suppose that

(2.7)  $\varphi''$ exists, with $\varphi'(0) = 0$ and $\varphi''(0) > 0$,

we obtain that in a neighborhood of $t = 0$, (2.6) is formally

(2.8)  $-\frac{\lambda}{2}\,\varphi''(0)\,(f_{\xi\xi} + f_{TT}) + R^* R f = R^* p,$

and since for any orthogonal directions $T$ and $\xi$ we always have

$f_{\xi\xi} + f_{TT} = f_{xx} + f_{yy} = \Delta f,$

(2.8) is written as

(2.9)  $-\frac{\lambda}{2}\,\varphi''(0)\,\Delta f + R^* R f = R^* p.$

Therefore, at the points where the image has a weak gradient, $f$ is a solution of (2.9), which is a uniformly elliptic equation having (this is well known) strong regularizing diffusion properties on the solution.

On the contrary, in a neighborhood of a contour $\mathcal{C}$, the image presents a strong gradient. If we wish to better preserve this contour, it is preferable to diffuse only in the direction parallel to $\mathcal{C}$, i.e., in the $\xi$-direction. For this, it would be sufficient in (2.6) to annihilate, for strong gradients, the coefficient of $f_{TT}$ and to suppose that the coefficient of $f_{\xi\xi}$ does not vanish:

(2.10)  $\lim_{t\to+\infty} \varphi''(t) = 0,$

(2.11)  $\lim_{t\to+\infty} \frac{\varphi'(t)}{t} = m > 0.$

This allows us to reduce (2.6), in a neighborhood of $+\infty$, to an equation of the form

$-\frac{\lambda}{2}\,m\,f_{\xi\xi} + R^* R f = R^* p,$

which we can interpret as a regularizing equation in the $\xi$-direction.

But (2.10) and (2.11) are not compatible, and one must make a compromise between these two hypotheses; for example, by supposing that $\varphi''(t)$ and $\varphi'(t)/t$ both converge to zero as $t \to \infty$, but with different speeds. More precisely, we suppose that

(2.12a)  $\lim_{t\to+\infty} \varphi''(t) = \lim_{t\to+\infty} \frac{\varphi'(t)}{t} = 0,$

(2.12b)  $\lim_{t\to+\infty} \frac{\varphi''(t)}{\varphi'(t)/t} = 0,$

that is, $\varphi''$ converges to $0$ faster than $\varphi'(t)/t$, which makes the coefficient of $f_{\xi\xi}$ preponderant in (2.6).
The preceding assumptions, (2.7) and (2.12), are rather of a qualitative type and represent a priori the properties that we want to obtain for the solution. But they are not sufficient to prove that the model is well posed mathematically. To do this, in order to use the direct method of the calculus of variations, we suppose that

(2.13)  $\lim_{t\to+\infty} \varphi(t) = +\infty;$

this assumption ensures the boundedness of the minimizing sequences of

$J_\lambda(f) = \int_\Omega (p - Rf)^2\,dx\,dy + \lambda \int_\Omega \varphi(|Df|)\,dx\,dy.$

This growth to infinity must not be too strong, because it must not penalize strong gradients (or the formation of edges). Hence, we suppose a linear growth at infinity:

(2.14)  There exist constants $a_i > 0$ and $b_i \ge 0$, $i = 1, 2$, such that $a_1 t - b_1 \le \varphi(t) \le a_2 t + b_2$ for all $t \in \mathbb{R}_+$,

and then the natural space on which we seek the solution will be

$V = \left\{ f \in L^2(\Omega),\ Df \in L^1(\Omega)^2 \right\}.$

Finally, for passing to the limit on the minimizing sequences of (2.1) and obtaining the uniqueness of a solution, we suppose that

(2.15)  $t \mapsto \varphi(t)$ is strictly convex from $\mathbb{R}_+$ to $\mathbb{R}_+$.

Remark. A better growth condition than (2.14), which does not penalize the formation of edges, could be $\lim_{t\to\infty} \varphi(t) = c > 0$. In this case, if $M$ denotes a minimal threshold representing strong gradients, then the contribution of the integral $\int_{\{|Df|\ge M\}} \varphi(|Df|)\,dx\,dy$ to the energy is nearly a constant, and then the formation of an edge does not cost anything in the energy. But the hypothesis of a horizontal asymptote introduces in general a nonconvexity in $\varphi$ for $t \ge M$, and we know that in this case the problem is ill-posed and can have no solution. Nevertheless, we have done some numerical tests with the function $\varphi(t) = \frac{t^2}{1+t^2}$ (which is of this type of potential and verifies (2.7) and (2.12a)). The results obtained are very satisfactory.

To clarify the exposition, we now summarize our assumptions on the potential $\varphi$.
Hypotheses for $\varphi$.
(H1) The function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ is of class $C^2$, is nondecreasing, and satisfies $\varphi'(0) = 0$ and $\varphi''(0) > 0$.
(H2) The function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ has the properties
$\lim_{t\to+\infty} \varphi''(t) = \lim_{t\to+\infty} \frac{\varphi'(t)}{t} = 0$ and $\lim_{t\to+\infty} \frac{\varphi''(t)}{\varphi'(t)/t} = 0.$
(H3) There exist constants $a_i > 0$ and $b_i \ge 0$, $i = 1, 2$, such that $a_1 t - b_1 \le \varphi(t) \le a_2 t + b_2$ for all $t \in \mathbb{R}_+$.
(H4) The function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ is strictly convex.

If it is necessary to define the function $\varphi$ on the whole space, we extend it by parity from $\mathbb{R}_+$ to $\mathbb{R}$. Other hypotheses, due to the numerical approximation, will be added in the following sections.

Of course, there are many functions verifying (H1)-(H4), and no criterion permits the choice of one potential over another. Charbonnier, in [4], presents many choices used in image reconstruction as well as a comparative study. Our choice here for the tests is the function $\varphi(t) = \sqrt{1+t^2}$, which verifies (H1)-(H4) and, moreover, has a simple geometric interpretation (the problem of minimal surfaces).
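To make these requirements concrete, here is a small numerical sketch (in Python; it is not part of the original paper and is only illustrative) which evaluates, for the convex potential $\varphi_1(t) = \sqrt{1+t^2}$ used in our tests and for the nonconvex potential $\varphi_2(t) = t^2/(1+t^2)$ of the remark above, the quantities $\varphi''(t)$ and $\varphi'(t)/t$ whose behavior at $0$ and at $+\infty$ is prescribed by (H1) and (H2):

```python
import numpy as np

# phi_1(t) = sqrt(1 + t^2): verifies (H1)-(H4) (minimal-surface potential).
def dphi1(t):  return t / np.sqrt(1.0 + t**2)                 # phi_1'
def ddphi1(t): return (1.0 + t**2) ** (-1.5)                  # phi_1''

# phi_2(t) = t^2/(1 + t^2): bounded (horizontal asymptote), hence nonconvex
# for large t; it still verifies (2.7) and (2.12a).
def dphi2(t):  return 2.0 * t / (1.0 + t**2) ** 2             # phi_2'
def ddphi2(t): return (2.0 - 6.0 * t**2) / (1.0 + t**2) ** 3  # phi_2''

for t in [1e-6, 1.0, 10.0, 100.0]:
    # (H1): phi'(0) = 0 and phi''(0) > 0.  (H2): phi''(t) -> 0 and
    # phi'(t)/t -> 0 as t -> infinity, with phi'' vanishing faster, so the
    # ratio phi''(t) / (phi'(t)/t) also tends to 0 (it equals 1/(1+t^2)
    # for phi_1).
    ratio = ddphi1(t) / (dphi1(t) / t)
    print(f"t={t:8.1e}  phi1''={ddphi1(t):.3e}  phi1'/t={dphi1(t)/t:.3e}  ratio={ratio:.3e}")
```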
This paper is closely related to the works of Malik and Perona [20], Catté et al. [2], Rudin and Osher [22], and Chambolle and Lions [3]. Our approach is more oriented towards the techniques of the calculus of variations than those of PDEs. This paper completes, from a theoretical point of view, a preceding work concerning tomographic reconstruction [5], [6]. See also [1], [14], [29].
3. Auxiliary variable. Half-quadratic reduction. Before proving the existence of a solution, we show in this section how we can associate an auxiliary (or dual) variable with the image $f$, and how the regularization term in the energy (2.1) can be represented as an infimum of quadratic functions. We recall that the energy $J_\lambda(f)$ is

(3.1)  $J_\lambda(f) = \int_\Omega (p - Rf)^2\,dx\,dy + \lambda \int_\Omega \varphi(|Df|)\,dx\,dy,$

the regularization term being

(3.2)  $L_\lambda(f) = \lambda \int_\Omega \varphi(|Df|)\,dx\,dy.$

To develop this idea, we use the Fenchel-Legendre transform (see Rockafellar [21] or Ekeland and Temam [12]). We recall that if $l(\zeta)$ is a convex function from $\mathbb{R}^N$ into $\mathbb{R}$, then its Fenchel-Legendre transform (or polar) is the convex function $l^*(\zeta^*)$ defined by

$l^*(\zeta^*) = \sup_{\zeta\in\mathbb{R}^N} \left( \zeta\cdot\zeta^* - l(\zeta) \right)$

($\zeta\cdot\zeta^*$ is the usual scalar product). This definition can be extended, without difficulty, to infinite-dimensional spaces. Let $\Omega$ be an open set of $\mathbb{R}^N$ and $l$ a convex continuous function from $\mathbb{R}^N$ to $\mathbb{R}$, and for $u \in L^\alpha(\Omega)^N$, let the functional

$L(u) = \int_\Omega l(u(x))\,dx.$

Then the polar of $L$, denoted $L^*$, is defined on $L^{\alpha'}(\Omega)^N$, the dual space of $L^\alpha(\Omega)^N$, where $\frac{1}{\alpha} + \frac{1}{\alpha'} = 1$, by

$L^*(u^*) = \sup_{u\in L^\alpha(\Omega)^N} \left\{ \int_\Omega u(x)\cdot u^*(x)\,dx - \int_\Omega l(u(x))\,dx \right\}.$

If, in addition, $l$ is nonnegative, or if $l$ verifies an inequality of the type $l(\zeta) \ge a(x) - b|\zeta|^\alpha$ for all $\zeta \in \mathbb{R}^N$, with $a(x) \in L^1(\Omega)$, $b \ge 0$, and $\alpha \in [1,\infty)$, and if there exists $u_0 \in L^\alpha(\Omega)^N$ such that $L(u_0) < \infty$, then we can prove (see Ekeland and Temam [12, Chap. IX]) that $L^*(u^*)$ is written as

$L^*(u^*) = \int_\Omega l^*(u^*(x))\,dx.$

Of course, we can reiterate the process and define

$L^{**}(u) = \int_\Omega l^{**}(u(x))\,dx.$

Since $l$ is convex, we have $l(\zeta) = l^{**}(\zeta)$ and then $L^{**}(u) = L(u)$.

We use this notion of polarity in our problem with $N = 2$, $\alpha = \alpha' = 2$. Let, for $\zeta, \zeta^* \in \mathbb{R}^2$,

(3.3)  $l(\zeta) = \frac{|\zeta|^2}{2} - \varphi(|\zeta|),$

(3.4)  $\psi(\zeta^*) = l^*(\zeta^*) - \frac{|\zeta^*|^2}{2},$

as well as the functionals defined on $L^2(\Omega)^2$ by

(3.5)  $\Phi(u) = \int_\Omega \varphi(|u(x,y)|)\,dx\,dy \;\left( = \int_\Omega \left[ \frac{|u(x,y)|^2}{2} - l(u(x,y)) \right] dx\,dy \right),$

(3.6)  $\Psi(b) = \int_\Omega \psi(b(x,y))\,dx\,dy.$

The following theorem proves that $\Phi$ and $\Psi$ are dual in a certain sense.

THEOREM 3.1. If $\varphi$ (extended by parity to $\mathbb{R}$) verifies the following hypotheses:
(H3) there exist constants $a_i > 0$ and $b_i \ge 0$, $i = 1,2$, such that $a_1|t| - b_1 \le \varphi(t) \le a_2|t| + b_2$ for all $t \in \mathbb{R}$;
(H5) the function $t \mapsto \frac{t^2}{2} - \varphi(t)$ is convex on $\mathbb{R}$,
then

(3.7)  $\Phi(u) = \inf_{b\in L^2(\Omega)^2} \int_\Omega \left[ \frac{|u-b|^2}{2} + \psi(b) \right] dx\,dy,$

(3.8)  $\Psi(b) = \sup_{u\in L^2(\Omega)^2} \int_\Omega \left[ -\frac{|u-b|^2}{2} + \varphi(|u|) \right] dx\,dy.$
Proof. We prove (3.7). Let $\Lambda(u)$ be the value of the infimum in (3.7):

$\Lambda(u) = \int_\Omega \frac{|u|^2}{2}\,dx\,dy + \inf_b \int_\Omega \left[ \frac{|b|^2}{2} - b\cdot u + \psi(b) \right] dx\,dy.$

This can also be written, with (3.4), as

$\Lambda(u) = \int_\Omega \frac{|u|^2}{2}\,dx\,dy + \inf_b \int_\Omega \left[ \frac{|b|^2}{2} - b\cdot u + l^*(b) - \frac{|b|^2}{2} \right] dx\,dy = \int_\Omega \frac{|u|^2}{2}\,dx\,dy - \sup_b \int_\Omega \left[ b\cdot u - l^*(b) \right] dx\,dy.$

Then (by Ekeland and Temam [12]),

$\Lambda(u) = \int_\Omega \frac{|u|^2}{2}\,dx\,dy - \int_\Omega l^{**}(u)\,dx\,dy.$

From (H5) we have that $l^{**}(\zeta) = l(\zeta)$ for all $\zeta \in \mathbb{R}^2$; hence

$\Lambda(u) = \int_\Omega \frac{|u|^2}{2}\,dx\,dy - \int_\Omega l(u)\,dx\,dy = \Phi(u).$

Equation (3.8) can be proved in the same manner.
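As a concrete illustration of Theorem 3.1, the following sketch (Python; it is not part of the paper, and it uses brute-force grids rather than closed forms) checks the pointwise form of the identity (3.7), namely $\varphi(|u|) = \inf_b \{ |u-b|^2/2 + \psi(b) \}$, for $\varphi(t) = \sqrt{1+t^2}$ in one dimension:

```python
import numpy as np

phi = lambda t: np.sqrt(1.0 + t**2)            # potential, extended by parity
l   = lambda z: 0.5 * z**2 - phi(np.abs(z))    # l(zeta) = |zeta|^2/2 - phi(|zeta|), (3.3)

# psi(zs) = l*(zs) - |zs|^2/2, (3.4); the conjugate l* is approximated by a
# sup over a fine grid (accurate here: the sup is attained well inside the
# grid for the |zs| values tested below).
zgrid = np.linspace(-50.0, 50.0, 20001)
psi = lambda zs: np.max(zs * zgrid - l(zgrid)) - 0.5 * zs**2

# Half-quadratic identity (3.7), pointwise: phi(|u|) = inf_b [|u-b|^2/2 + psi(b)].
bgrid = np.linspace(-10.0, 10.0, 1001)
psi_b = np.array([psi(b) for b in bgrid])
for u in [0.0, 0.5, 2.0, 5.0]:
    hq = np.min(0.5 * (u - bgrid)**2 + psi_b)
    print(f"u={u:4.1f}   phi(u)={phi(u):.4f}   inf_b={hq:.4f}")  # columns agree
```

The infimum is attained, up to grid resolution, at $b = \big(1 - \varphi'(|u|)/|u|\big)u$, which is exactly the form of the optimal dual variable found in (4.26) below (with $\varepsilon = 0$).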
We have remarked that we must seek a solution in the space

$V = \left\{ f \in L^2(\Omega),\ Df \in L^1(\Omega)^2 \right\}.$

But in order to use the duality, we will look for a solution $f$ in the space $H^1(\Omega)$. By using the relation (3.7), $J_\lambda(f)$ is written, for $f \in H^1(\Omega)$, as

$J_\lambda(f) = \int_\Omega (p - Rf)^2\,dx\,dy + \lambda \inf_{b\in L^2(\Omega)^2} \int_\Omega \left[ \frac{|Df - b|^2}{2} + \psi(b) \right] dx\,dy,$

and then

$\inf_{f\in H^1(\Omega)} J_\lambda(f) = \inf_{f\in H^1(\Omega)}\ \inf_{b\in L^2(\Omega)^2} \left\{ \int_\Omega (p - Rf)^2\,dx\,dy + \lambda \int_\Omega \left[ \frac{|Df - b|^2}{2} + \psi(b) \right] dx\,dy \right\}.$

Because we can always invert the infima, we get

$\inf_{f\in H^1(\Omega)} J_\lambda(f) = \inf_{b\in L^2(\Omega)^2} \left\{ \lambda \int_\Omega \psi(b)\,dx\,dy + \inf_{f\in H^1(\Omega)} \int_\Omega \left[ (p - Rf)^2 + \lambda\frac{|Df - b|^2}{2} \right] dx\,dy \right\},$

and the method is now clear: we fix $b \in L^2(\Omega)^2$ and we solve the problem

$(\mathcal{P}_b)\quad \inf_{f\in H^1(\Omega)} \left\{ \int_\Omega \left[ (p - Rf)^2 + \lambda\frac{|Df - b|^2}{2} \right] dx\,dy \right\}.$

If $R$ satisfies some appropriate assumptions, then $(\mathcal{P}_b)$ has a unique solution $f_b$, which is, formally, a solution of the Euler equation

(3.9)  $-\frac{\lambda}{2}\,\Delta f_b + R^* R f_b = R^* p - \frac{\lambda}{2}\,\mathrm{div}\,b$ in $\mathcal{D}'(\Omega)$, with $\frac{\partial f_b}{\partial n} = 0$ on $\partial\Omega$.

We then have, for all $v \in H^1(\Omega)$ and all $b$,

(3.10)  $\int_\Omega \left[ (p - Rf_b)^2 + \frac{\lambda}{2}|Df_b - b|^2 \right] dx\,dy \le \int_\Omega \left[ (p - Rv)^2 + \frac{\lambda}{2}|Dv - b|^2 \right] dx\,dy.$

By adding $\lambda\int_\Omega \psi(b)\,dx\,dy$ to each side of (3.10), and by passing to the infimum in $b$, we get, for all $v \in H^1(\Omega)$,

(3.11)  $\inf_{b\in L^2(\Omega)^2} \int_\Omega \left[ (p - Rf_b)^2 + \frac{\lambda}{2}|Df_b - b|^2 + \lambda\psi(b) \right] dx\,dy \le \int_\Omega \left[ (p - Rv)^2 + \lambda\varphi(|Dv|) \right] dx\,dy.$

Denoting

$T(b) = \int_\Omega \left[ (p - Rf_b)^2 + \frac{\lambda}{2}|Df_b - b|^2 + \lambda\psi(b) \right] dx\,dy,$

it is then sufficient, in order to prove that our algorithm allows us to solve the initial reconstruction problem, to obtain the existence of $b_0 \in L^2(\Omega)^2$ such that

(3.12)  $T(b_0) = \inf_{b\in L^2(\Omega)^2} T(b)$ with $T(b_0) = \int_\Omega \left[ (p - Rf_{b_0})^2 + \lambda\varphi(|Df_{b_0}|) \right] dx\,dy = J_\lambda(f_{b_0}).$

We will then deduce, with (3.11), that

(3.13)  $J_\lambda(f_{b_0}) \le J_\lambda(v)$ for all $v \in H^1(\Omega)$.

At this stage we must formulate precisely the mathematical assumptions needed to ensure the existence and uniqueness of a solution. There is a difficulty due to the fact that we work with sets constructed from the nonreflexive Banach space $L^1(\Omega)$.
4. Existence and uniqueness of a solution. To simplify, we will suppose that $R = I$ on $L^2(\Omega)$ (which corresponds to a denoising problem), and we will indicate in Appendix B the minor modifications to add if $R \ne I$. We also suppose that the weighting parameter $\lambda$ is equal to $1$, which does not modify the theoretical study of the problem (its presence and adjustment are fundamental in the applications). The studied functional is therefore

(4.1)  $J(f) = \int_\Omega (p - f)^2\,dx\,dy + \int_\Omega \varphi(|Df|)\,dx\,dy.$

The basic assumptions that we suppose to be verified in this section are as follows:

(4.2)  $p \in L^\infty(\Omega)$ and $0 \le p(x,y) \le 1$ a.e. $(x,y) \in \Omega$;

(4.3)  $\varphi : \mathbb{R} \to \mathbb{R}$ is even, of class $C^2$, nondecreasing on $\mathbb{R}_+$, and there exist constants $a_i > 0$, $b_i \ge 0$, $i = 1, 2$, such that $a_1|t| - b_1 \le \varphi(t) \le a_2|t| + b_2$ for all $t \in \mathbb{R}$;

(4.4)  $0 < \varphi''(t) < 1$ for all $t \in \mathbb{R}$.

Remark. From (4.4), the functions $\varphi(t)$ and $\frac{t^2}{2} - \varphi(t)$ are strictly convex (i.e., the hypothesis (H5) is strengthened).

Thanks to (4.4), with the notations of the preceding section, $J(f)$ can be written, for $f \in H^1(\Omega)$, as

$J(f) = \inf_{b\in L^2(\Omega)^2} \left\{ \int_\Omega (p - f)^2\,dx\,dy + \int_\Omega \left[ \frac{|b - Df|^2}{2} + \psi(b) \right] dx\,dy \right\}.$

PROPOSITION 4.1. For fixed $b$ in $L^2(\Omega)^2$ and for $p$ satisfying (4.2), the problem

(4.5)  $\inf_{f\in H^1(\Omega)} \int_\Omega \left[ (p - f)^2 + \frac{|b - Df|^2}{2} \right] dx\,dy$

has a unique solution $f_b \in H^1(\Omega)$, verifying the Euler equation

(4.6)  $-\Delta f_b + 2 f_b = 2p - \mathrm{div}\,b$ in $\mathcal{D}'(\Omega)$.
Proof. The functional

$J_b(f) = \int_\Omega \left[ (p - f)^2 + \frac{|b - Df|^2}{2} \right] dx\,dy$

being continuous, strictly convex, and coercive on $H^1(\Omega)$, by the classical theory of the calculus of variations there exists a unique $f_b \in H^1(\Omega)$ such that

(4.7)  $J_b(f_b) \le J_b(f)$ for all $f \in H^1(\Omega)$,

which is equivalent to

$2\int_\Omega (f_b - p)\tilde f\,dx\,dy + \int_\Omega (Df_b - b)\cdot D\tilde f\,dx\,dy = 0$ for all $\tilde f \in H^1(\Omega)$.

Then we obtain (4.6), choosing $\tilde f \in \mathcal{D}(\Omega)$.

Remark. For the moment, we do not include in (4.6) the usual condition on the boundary of $\Omega$, $\frac{\partial f_b}{\partial n}(x) = 0$, the $H^1$-regularity of $f_b$ being insufficient to define the value of the normal derivative on the boundary.

Hence, we have for all $f \in H^1(\Omega)$ and fixed $b$

(4.8)  $\int_\Omega (f_b - p)^2\,dx\,dy + \int_\Omega \frac{|b - Df_b|^2}{2}\,dx\,dy \le \int_\Omega (f - p)^2\,dx\,dy + \int_\Omega \frac{|b - Df|^2}{2}\,dx\,dy,$

and by adding $\Psi(b)$ to each side of (4.8) and taking the infimum in $b$, we obtain, for all $f \in H^1(\Omega)$,

(4.9)  $\inf_{b\in L^2(\Omega)^2} \int_\Omega \left[ (f_b - p)^2 + \frac{|b - Df_b|^2}{2} + \psi(b) \right] dx\,dy \le \int_\Omega \left( (p - f)^2 + \varphi(|Df|) \right) dx\,dy = J(f).$

We recall that

$T(b) = \int_\Omega \left[ (f_b - p)^2 + \frac{1}{2}|b - Df_b|^2 + \psi(b) \right] dx\,dy.$

Now we must prove that the problem $\inf_{b\in L^2(\Omega)^2} T(b)$ has a solution $b_0$, which will imply the existence of a function $f_0$ solving the initial problem

$J(f_0) \le J(f)$ for all $f \in V$.

First, we state some properties of the dual function $\psi$.
LEMMA 4.2. If $\varphi$ verifies (4.3) and (4.4), then the function $\psi$ defined by (3.4) has the following properties:

(4.10)  $\psi(\zeta^*)$ is strictly convex;

(4.11)  there exist constants $a_i' > 0$ and $b_i' \ge 0$ such that $a_1'|\zeta^*| - b_1' \le \psi(\zeta^*) \le a_2'|\zeta^*| + b_2'$ for all $\zeta^* \in \mathbb{R}^2$.

Proof. We recall the definition of $\psi(\zeta^*)$. If $l(\zeta)$ denotes the strictly convex function (from (4.4))

$l(\zeta) = \frac{|\zeta|^2}{2} - \varphi(|\zeta|),$

then $\psi(\zeta^*)$ is defined by

$\psi(\zeta^*) = l^*(\zeta^*) - \frac{|\zeta^*|^2}{2}$

($l^*$ denotes the Fenchel-Legendre transform of $l$).

We prove (4.11):

$\psi(\zeta^*) = \sup_\zeta \left( \zeta\cdot\zeta^* - l(\zeta) \right) - \frac{|\zeta^*|^2}{2} = \sup_{t\ge 0}\ \sup_{|\zeta|=t} \left( \zeta\cdot\zeta^* - \frac{|\zeta|^2}{2} + \varphi(|\zeta|) \right) - \frac{|\zeta^*|^2}{2} = \sup_{t\ge 0} \left( t|\zeta^*| - \frac{t^2}{2} + \varphi(t) \right) - \frac{|\zeta^*|^2}{2},$

and since $\varphi$ is even,

$\psi(\zeta^*) = \sup_{t\in\mathbb{R}} \left( t|\zeta^*| - \frac{t^2}{2} + \varphi(t) \right) - \frac{|\zeta^*|^2}{2}.$

Hence, with (4.3), we have

(4.12)  $-b_1 - \frac{|\zeta^*|^2}{2} + \sup_{t\in\mathbb{R}} \left( a_1|t| + |\zeta^*|t - \frac{t^2}{2} \right) \le \psi(\zeta^*) \le b_2 - \frac{|\zeta^*|^2}{2} + \sup_{t\in\mathbb{R}} \left( a_2|t| + |\zeta^*|t - \frac{t^2}{2} \right).$

The supremum on the right-hand side of (4.12) is achieved for $t = a_2 + |\zeta^*|$, and its value is $\frac{1}{2}(a_2 + |\zeta^*|)^2$; hence, with (4.12),

$\psi(\zeta^*) \le b_2 - \frac{|\zeta^*|^2}{2} + \frac{1}{2}(a_2 + |\zeta^*|)^2 = b_2 + \frac{1}{2}a_2^2 + a_2|\zeta^*|,$

from which we obtain the second inequality of (4.11), with $a_2' = a_2$ and $b_2' = b_2 + \frac{1}{2}a_2^2$. The first inequality of (4.11) can be proved in the same way.

The proof of (4.10) follows from (4.4) and a classical argument of convex analysis. From (4.4), we have that the function $l(\zeta)$ is strictly convex; hence $l^*(\zeta^*)$ is of class $C^2$ (by Rockafellar [21], Dacorogna [10]) and

$\nabla\psi(\zeta^*) = \nabla l^*(\zeta^*) - \zeta^*, \qquad \nabla^2\psi(\zeta^*) = \nabla^2 l^*(\zeta^*) - I.$

Moreover, since $l(\zeta)$ is strictly convex, $\nabla l(\zeta)$ is strictly monotone, and then, for each $\zeta^* \in \mathbb{R}^2$, there is a unique $\zeta_0 \in \mathbb{R}^2$ such that $\zeta^* = \nabla l(\zeta_0)$, or equivalently, $\nabla l^*(\zeta^*) = \zeta_0$. Hence, we get

(4.13)  $\nabla\psi(\zeta^*) = \zeta_0 - \nabla l(\zeta_0) = \left( \nabla(\varphi(|\zeta|)) \right)_{\zeta=\zeta_0} = \frac{\varphi'(|\zeta_0|)}{|\zeta_0|}\,\zeta_0$

and (see Crouzeix [9] for computational details)

(4.14)  $\nabla^2\psi(\zeta^*) = \left( \nabla^2 l(\zeta_0) \right)^{-1} - I,$

which can also be written, from the definition of $l(\zeta)$, as

$\nabla^2\psi(\zeta^*) = \left( I - \nabla^2(\varphi(|\zeta|))_{\zeta=\zeta_0} \right)^{-1} - I.$

Thanks to (4.4), it is then clear that the matrix $\nabla^2\psi(\zeta^*)$ is symmetric positive definite; consequently, $\psi$ is strictly convex.

Remark. In Lemma 4.2, $\zeta_0$ is, in fact, the unique point realizing the supremum $\sup_\zeta \left( \zeta\cdot\zeta^* - l(\zeta) \right) = l^*(\zeta^*)$.
Provided with the properties of the function $\psi$, we can now return to the study of the problem (4.9):

$\inf_{b\in L^2(\Omega)^2} \left\{ T(b) = \int_\Omega \left[ (f_b - p)^2 + \frac{|b - Df_b|^2}{2} + \psi(b) \right] dx\,dy \right\}.$

If $b_n$ is a minimizing sequence, then it is simple to deduce from (4.8) and (4.11) that $b_n$ and $f_{b_n}$ verify the estimates

$\|f_{b_n}\|_{L^2(\Omega)} \le c, \qquad \|b_n\|_{L^1(\Omega)^2} \le c,$

where $c$ is a constant which depends only on the data. But we cannot obtain an $H^1(\Omega)$-estimate for $f_{b_n}$ or an $L^2(\Omega)^2$-estimate for $b_n$; hence we would have to work in the nonreflexive space $L^1(\Omega)$, or in $\mathcal{M}_b(\Omega)$, the space of bounded measures. To overcome this difficulty, we regularize the problem by making a slight modification of the potential $\varphi$. We introduce the function

$\varphi_\varepsilon(t) = \varphi(t) + \frac{\varepsilon}{2}t^2, \qquad \varepsilon > 0,$

with which we associate

$l_\varepsilon(\zeta) = \frac{|\zeta|^2}{2} - \varphi_\varepsilon(|\zeta|), \qquad \psi_\varepsilon(\zeta^*) = l_\varepsilon^*(\zeta^*) - \frac{|\zeta^*|^2}{2}.$

The function $\varphi_\varepsilon$ has the same properties as $\varphi$ if we modify $\varphi$ and replace (4.4) by the following:

(4.4)'  There is $\eta_0$, with $0 < \eta_0 < 1$, such that $0 < \varphi''(t) < 1 - \eta_0$ for all $t \in \mathbb{R}$.

The assumption (4.4)' is not restrictive, because we can always change the weighting parameter $\lambda$ in the energy $J_\lambda(f)$ to have (4.4)' verified.

By proceeding as in Lemma 4.2, it is easy to see, for $\varepsilon \le \eta_0$, that

(4.10)'  $\psi_\varepsilon(\zeta^*)$ is strictly convex,

(4.11)'  $-b_1 + \frac{a_1^2}{2(1-\varepsilon)} + \frac{a_1}{1-\varepsilon}|\zeta^*| + \frac{\varepsilon}{2(1-\varepsilon)}|\zeta^*|^2 \le \psi_\varepsilon(\zeta^*) \le b_2 + \frac{a_2^2}{2(1-\varepsilon)} + \frac{a_2}{1-\varepsilon}|\zeta^*| + \frac{\varepsilon}{2(1-\varepsilon)}|\zeta^*|^2,$

and the regularized problem associated with $T(b)$ is

(4.15)  $\inf_{b\in L^2(\Omega)^2} \left\{ T_\varepsilon(b) = \int_\Omega \left[ (f_b - p)^2 + \frac{|b - Df_b|^2}{2} + \psi_\varepsilon(b) \right] dx\,dy \right\},$

for which we state the following proposition.
PROPOSITION 4.3. Under the assumptions (4.2), (4.3), and (4.4)', the problem (4.15) has a unique solution $b_\varepsilon$, and there exists a constant $c$, independent of $\varepsilon$, such that

(4.16)  $\varepsilon\|b_\varepsilon\|^2_{L^2(\Omega)^2} \le c, \quad \varepsilon\|Df_{b_\varepsilon}\|^2_{L^2(\Omega)^2} \le c, \quad \|b_\varepsilon\|_{L^1(\Omega)^2} \le c, \quad \|f_{b_\varepsilon}\|_{L^2(\Omega)} \le c.$

Proof. The functional $T_\varepsilon(b)$ is strictly convex since, from (4.6), the map $b \mapsto f_b$ is affine from $L^2(\Omega)^2$ to $H^1(\Omega)$ and the function $\psi_\varepsilon$ is strictly convex (from (4.10)'). Moreover, from (4.11)', there are coefficients $a_1' > 0$ and $b_1'$ such that, for each $b \in L^2(\Omega)^2$,

$\frac{\varepsilon}{2}\|b\|^2_{L^2(\Omega)^2} + a_1'\|b\|_{L^1(\Omega)^2} - b_1' \le T_\varepsilon(b);$

hence, for fixed $\varepsilon$, the minimizing sequences $b_n^\varepsilon$ of (4.15) are bounded in $L^2(\Omega)^2$; therefore, there exist $b_\varepsilon \in L^2(\Omega)^2$ and a subsequence, still denoted $b_n^\varepsilon$, such that $b_n^\varepsilon \rightharpoonup b_\varepsilon$ in $L^2(\Omega)^2$ weakly. Owing to the strict convexity of $T_\varepsilon$, $b_\varepsilon$ is unique, the entire sequence $b_n^\varepsilon$ converges to $b_\varepsilon$, and

$T_\varepsilon(b_\varepsilon) \le \lim_n T_\varepsilon(b_n^\varepsilon) = \inf_b T_\varepsilon(b) \le T_\varepsilon(b)$ for all $b \in L^2(\Omega)^2$;

that is, $b_\varepsilon$ is the unique solution of (4.15):

(4.17)  $\int_\Omega \left[ (f_{b_\varepsilon} - p)^2 + \frac{|b_\varepsilon - Df_{b_\varepsilon}|^2}{2} + \psi_\varepsilon(b_\varepsilon) \right] dx\,dy \le \int_\Omega \left[ (f_b - p)^2 + \frac{|b - Df_b|^2}{2} + \psi_\varepsilon(b) \right] dx\,dy$ for all $b \in L^2(\Omega)^2$.

Choosing, for example, $b = 0$ in (4.17), it is clear from (4.11)' that there is a constant $c$, independent of $\varepsilon$, such that (4.16) is verified.
The following theorem gives the optimality condition satisfied by $b_\varepsilon$.

THEOREM 4.4. The solution $b_\varepsilon$ of the problem (4.15) verifies the optimality condition

(4.18)  $b_\varepsilon - Df_{b_\varepsilon} + \nabla\psi_\varepsilon(b_\varepsilon) = 0$ a.e. $(x,y) \in \Omega$.

Proof. To simplify the notations, we denote $f_\varepsilon = f_{b_\varepsilon}$; let us then consider a variation of $b_\varepsilon$ of the form $b_{\varepsilon\eta} = b_\varepsilon + \eta q$, where $\eta \in \mathbb{R}$ and $q \in L^2(\Omega)^2$. Denoting $f_{\varepsilon\eta} = f_{b_{\varepsilon\eta}}$, it is clear, thanks to the linearity of formula (4.6), that

(4.19)  $f_{\varepsilon\eta} = f_\varepsilon + \eta h,$

where $h$ verifies

(4.20)  $-\Delta h + 2h = -\mathrm{div}\,q$ in $H^1(\Omega)'$ (the dual of $H^1(\Omega)$).

With this remark,

(4.21)  $T_\varepsilon(b_{\varepsilon\eta}) - T_\varepsilon(b_\varepsilon) = \eta\int_\Omega (2f_\varepsilon - 2p + \eta h)\,h\,dx\,dy + \frac{\eta}{2}\int_\Omega \left( 2b_\varepsilon - 2Df_\varepsilon + \eta(q - Dh) \right)\cdot(q - Dh)\,dx\,dy + \int_\Omega \left( \psi_\varepsilon(b_\varepsilon + \eta q) - \psi_\varepsilon(b_\varepsilon) \right) dx\,dy.$

Now, as $\eta \to 0$, $\frac{1}{\eta}$ times the sum of the first two integrals converges to

$\int_\Omega \left[ 2(f_\varepsilon - p)h + (b_\varepsilon - Df_\varepsilon)\cdot(q - Dh) \right] dx\,dy.$

For the third integral, according to a result of convex analysis of Tahraoui [25], we have from (4.10)' and (4.11)' that there exist two constants $a(\varepsilon), b(\varepsilon) \ge 0$ such that

(4.22)  $|\nabla\psi_\varepsilon(b)| \le a(\varepsilon)|b| + b(\varepsilon)$ for all $b \in \mathbb{R}^2$.

Hence, thanks to (4.16) and the Lebesgue dominated convergence theorem, we can pass to the limit in the third integral and obtain

(4.23)  $\lim_{\eta\to 0} \frac{1}{\eta}\left( T_\varepsilon(b_{\varepsilon\eta}) - T_\varepsilon(b_\varepsilon) \right) = \int_\Omega \left[ 2(f_\varepsilon - p)h + (b_\varepsilon - Df_\varepsilon)\cdot(q - Dh) + \nabla\psi_\varepsilon(b_\varepsilon)\cdot q \right] dx\,dy.$

But with the Euler equation of (4.7),

(4.24)  $2\int_\Omega (f_\varepsilon - p)h\,dx\,dy = -\int_\Omega (Df_\varepsilon - b_\varepsilon)\cdot Dh\,dx\,dy.$

Therefore, since $b_\varepsilon$ is a critical point of $T_\varepsilon$, for all $q \in L^2(\Omega)^2$,

$\lim_{\eta\to 0} \frac{1}{\eta}\left( T_\varepsilon(b_{\varepsilon\eta}) - T_\varepsilon(b_\varepsilon) \right) = \int_\Omega \left[ (b_\varepsilon - Df_\varepsilon) + \nabla\psi_\varepsilon(b_\varepsilon) \right]\cdot q\,dx\,dy = 0,$

which implies the relation we wanted to prove:

(4.25)  $b_\varepsilon - Df_\varepsilon + \nabla\psi_\varepsilon(b_\varepsilon) = 0.$
In the following corollary, we express $\nabla\psi_\varepsilon(b_\varepsilon)$ in terms of $Df_\varepsilon$ and $\varphi'(|Df_\varepsilon|)$.

COROLLARY 4.5. The optimality condition (4.25) can be written as

(4.26)  $b_\varepsilon = \left( (1-\varepsilon) - \frac{\varphi'(|Df_\varepsilon|)}{|Df_\varepsilon|} \right) Df_\varepsilon.$

Proof. We recall that

$\psi_\varepsilon(\zeta^*) = l_\varepsilon^*(\zeta^*) - \frac{|\zeta^*|^2}{2},$ where $l_\varepsilon(\zeta) = \frac{|\zeta|^2}{2} - \varphi(|\zeta|) - \frac{\varepsilon}{2}|\zeta|^2.$

Thanks to (4.3) and (4.4)', the function $l_\varepsilon$ is strictly convex; hence, $l_\varepsilon^*$ and $\psi_\varepsilon$ are differentiable and

(4.27)  $\nabla\psi_\varepsilon(\zeta^*) = \nabla l_\varepsilon^*(\zeta^*) - \zeta^*.$

But

$l_\varepsilon^*(\zeta^*) = \sup_\zeta \left( \zeta\cdot\zeta^* - l_\varepsilon(\zeta) \right).$

Then $\nabla l_\varepsilon^*(\zeta^*) = \bar\zeta$, where $\bar\zeta$ is the unique point realizing $\sup_\zeta(\zeta\cdot\zeta^* - l_\varepsilon(\zeta))$; that is, $\zeta^* - \nabla l_\varepsilon(\bar\zeta) = 0$, or

(4.28)  $\zeta^* = \left( (1-\varepsilon) - \frac{\varphi'(|\bar\zeta|)}{|\bar\zeta|} \right)\bar\zeta.$

Denoting

$L_\varepsilon(\zeta) = \left( (1-\varepsilon) - \frac{\varphi'(|\zeta|)}{|\zeta|} \right)\zeta,$

thanks to (4.4)', $L_\varepsilon$ is invertible, and (4.28) is equivalent to

$\bar\zeta = L_\varepsilon^{-1}(\zeta^*)$ and $\nabla l_\varepsilon^*(\zeta^*) = L_\varepsilon^{-1}(\zeta^*).$

With the optimality condition, we have the following sequence of equalities:

$\nabla\psi_\varepsilon(b_\varepsilon) + b_\varepsilon - Df_\varepsilon = 0,$
$L_\varepsilon^{-1}(b_\varepsilon) - b_\varepsilon + b_\varepsilon - Df_\varepsilon = 0,$
$L_\varepsilon^{-1}(b_\varepsilon) = Df_\varepsilon,$
$b_\varepsilon = L_\varepsilon(Df_\varepsilon) = (1-\varepsilon)\,Df_\varepsilon - \frac{\varphi'(|Df_\varepsilon|)}{|Df_\varepsilon|}\,Df_\varepsilon,$

that is, (4.26).
Now, to prove the existence of a solution for the initial problem (3.13), it remains to study the behavior of $f_\varepsilon$ and $b_\varepsilon$ when $\varepsilon \to 0$. The system linking $f_\varepsilon$ and $b_\varepsilon$ consists of two equations, namely, (4.6) and (4.26).

From (4.26) we have

$\mathrm{div}\,b_\varepsilon = (1-\varepsilon)\,\Delta f_\varepsilon - \mathrm{div}\!\left( \frac{\varphi'(|Df_\varepsilon|)}{|Df_\varepsilon|}\,Df_\varepsilon \right).$

Putting this equality into (4.6), we get

(4.29)  $\varepsilon\,\Delta f_\varepsilon + \mathrm{div}\!\left( \frac{\varphi'(|Df_\varepsilon|)}{|Df_\varepsilon|}\,Df_\varepsilon \right) = 2(f_\varepsilon - p).$

Then (4.29) is exactly the Euler equation associated with the problem

$\inf\left\{ J_\varepsilon(f);\ f \in H^1(\Omega) \right\},$

where

(4.30)  $J_\varepsilon(f) = \int_\Omega (p - f)^2\,dx\,dy + \int_\Omega \varphi(|Df|)\,dx\,dy + \frac{\varepsilon}{2}\int_\Omega |Df|^2\,dx\,dy.$

Otherwise, we remark, thanks to (4.26), that

$J_\varepsilon(f_\varepsilon) = \int_\Omega (p - f_\varepsilon)^2\,dx\,dy + \int_\Omega \left[ \varphi(|Df_\varepsilon|) + \frac{\varepsilon}{2}|Df_\varepsilon|^2 \right] dx\,dy$
$= \int_\Omega (p - f_\varepsilon)^2\,dx\,dy + \inf_{b\in L^2(\Omega)^2} \int_\Omega \left[ \frac{|b - Df_\varepsilon|^2}{2} + \psi_\varepsilon(b) \right] dx\,dy$
$= \int_\Omega \left[ (p - f_\varepsilon)^2 + \frac{|b_\varepsilon - Df_\varepsilon|^2}{2} + \psi_\varepsilon(b_\varepsilon) \right] dx\,dy$
$= \inf_{b\in L^2(\Omega)^2} T_\varepsilon(b) \le J_\varepsilon(f)$ for all $f \in H^1(\Omega)$.
Thanks to classical regularity results, the solution $f_\varepsilon$ of (4.29) belongs to $C^2(\Omega)$ (see, for example, Ladyzenskaya and Uralceva [17] or Gilbarg and Trudinger [16]). Moreover, we can easily obtain an $L^\infty$-estimate for $f_\varepsilon$.

PROPOSITION 4.6. If $p$ verifies (4.2), then the solution $f_\varepsilon$ of (4.29) satisfies

(4.31)  $0 \le f_\varepsilon(x,y) \le 1$ a.e. $(x,y) \in \Omega.$

Proof. Let us show, for example, that $f_\varepsilon(x,y) \le 1$ for all $(x,y) \in \Omega$. The other inequality can be proved in the same way. $f_\varepsilon$ is a solution of the variational problem

(4.32)  $2\int_\Omega (f_\varepsilon - p)v\,dx\,dy + \int_\Omega \frac{\varphi'(|Df_\varepsilon|)}{|Df_\varepsilon|}\,Df_\varepsilon\cdot Dv\,dx\,dy + \varepsilon\int_\Omega Df_\varepsilon\cdot Dv\,dx\,dy = 0$ for all $v \in H^1(\Omega).$

In (4.32), we choose $v = (f_\varepsilon - 1)^+ \ge 0$; according to Stampacchia [24], $v \in H^1(\Omega)$, and (4.32) can be written as

(4.33)  $\varepsilon\int_{\{f_\varepsilon>1\}} |Df_\varepsilon|^2\,dx\,dy + \int_{\{f_\varepsilon>1\}} \varphi'(|Df_\varepsilon|)\,|Df_\varepsilon|\,dx\,dy = -2\int_{\{f_\varepsilon>1\}} (f_\varepsilon - p)(f_\varepsilon - 1)^+\,dx\,dy.$

But, by hypothesis, $\varphi'(t) \ge 0$ on $\mathbb{R}_+$ (see (4.3)) and $0 \le p(x,y) \le 1$ a.e. $(x,y) \in \Omega$. Then $(f_\varepsilon - p)(x,y) \ge 0$ a.e. on $\{(x,y);\ f_\varepsilon(x,y) > 1\}$, which implies, from (4.33), that

$\varepsilon\int_{\{f_\varepsilon>1\}} |Df_\varepsilon|^2\,dx\,dy \le 0;$

from this, we have that $Df_\varepsilon(x,y) = 0$ on $\{(x,y);\ f_\varepsilon(x,y) > 1\}$; i.e., $(f_\varepsilon - 1)^+ = 0$, which is equivalent to $f_\varepsilon(x,y) \le 1$ a.e. $(x,y) \in \Omega$.
The following estimates are more delicate and are based on a very fine perturbation lemma due to Temam [12], [26]. This lemma is rather technical, and we give a sketch of its proof in Appendix A.

PROPOSITION 4.7. If $p \in W^{1,\infty}(\Omega)$, then for every open set $O$ relatively compact in $\Omega$, there is a constant $K = K(O, \Omega, \|p\|_{W^{1,\infty}})$ such that

(4.34)  $\|f_\varepsilon\|_{W^{1,\infty}(O)} \le K,$

(4.35)  $\|f_\varepsilon\|_{H^2(O)} \le K.$

This proposition allows us to pass to the limit on $f_\varepsilon$ and $b_\varepsilon$ when $\varepsilon \to 0$. Besides the estimates (4.31), (4.34), and (4.35), we can add, thanks to (4.3),

(4.36)  $\|f_\varepsilon\|_{W^{1,1}(\Omega)} \le c$  ($c$ independent of $\varepsilon$).

With these estimates, we can state, using the classical results of compactness and the diagonal process, that there is a function $f_0$ and a sequence $\varepsilon_m \to 0$ such that

(4.37)  $f_{\varepsilon_m} \to f_0$ in $L^\infty(\Omega)$ weak-star,
(4.38)  $Df_{\varepsilon_m} \to Df_0$ in $L^\infty(O)$ weak-star, for all $O$, $\bar O \subset \Omega$,
(4.39)  $f_{\varepsilon_m} \rightharpoonup f_0$ in $H^2(O)$ weakly, for all $O$, $\bar O \subset \Omega$,
(4.40)  $f_{\varepsilon_m} \to f_0$ in $L^1(\Omega)$ strongly,
(4.41)  $f_{\varepsilon_m}|_O \to f_0|_O$ in $H^1(O)$ strongly, for all $O$, $\bar O \subset \Omega$,
(4.42)  $f_{\varepsilon_m}(x,y) \to f_0(x,y)$ a.e. $(x,y)$,
(4.43)  $Df_{\varepsilon_m}(x,y) \to Df_0(x,y)$ a.e. $(x,y)$,

and we have the following result.
THEOREM 4.8. Under the previous assumptions, (4.2), (4.3), and (4.4), and if $p \in W^{1,\infty}(\Omega)$, then the function $f_0$ defined above belongs to $W^{1,1}(\Omega)\cap L^\infty(\Omega)$ and is the unique solution of the initial optimization problem

(4.44)  $\inf\left\{ J(f) = \int_\Omega (p - f)^2\,dx\,dy + \int_\Omega \varphi(|Df|)\,dx\,dy;\ f \in L^2(\Omega),\ Df \in L^1(\Omega)^2 \right\}.$

Proof. By the Fatou lemma, (4.36), and (4.43), it is clear that $f_0$ belongs to $W^{1,1}(\Omega)\cap L^\infty(\Omega)$ (we have, moreover, that $f_0|_O \in H^2(O)\cap W^{1,\infty}(O)$ for all $O$ with $\bar O \subset \Omega$). $f_0$ is a solution of (4.44). In fact, $f_{\varepsilon_m}$ is the solution of the variational problem (4.32). Thanks to the Tahraoui result mentioned before, the assumptions (4.3) and (4.4) imply that there exists a constant $M > 0$ such that $|\varphi'(t)| \le M$ for all $t \in \mathbb{R}$. Therefore, with the convergences (4.37)-(4.43) and the Lebesgue dominated convergence theorem, we can pass to the limit in (4.32) and obtain

(4.45)  $2\int_\Omega (f_0 - p)v\,dx\,dy + \int_\Omega \frac{\varphi'(|Df_0|)}{|Df_0|}\,Df_0\cdot Dv\,dx\,dy = 0$ for all $v \in H^1(\Omega).$

By density, (4.45) is true for all $v \in L^2(\Omega)$ with $Dv \in L^1(\Omega)^2$, and since the problem is strictly convex, $f_0$ is the unique solution of (4.44); moreover, $0 \le f_0(x,y) \le 1$ a.e. $(x,y) \in \Omega$.
The previous results imply some convergence properties for the sequence of dual variables $b_\varepsilon$. In fact, we have proved that $b_\varepsilon$ verifies

(4.46)  $b_\varepsilon = \left( (1-\varepsilon) - \frac{\varphi'(|Df_\varepsilon|)}{|Df_\varepsilon|} \right) Df_\varepsilon$

and

(4.47)  $J_\varepsilon(f_\varepsilon) = J(f_\varepsilon) + \frac{\varepsilon}{2}\int_\Omega |Df_\varepsilon|^2\,dx\,dy = \int_\Omega (p - f_\varepsilon)^2\,dx\,dy + \inf_{b\in L^2(\Omega)^2} \int_\Omega \left[ \frac{|b - Df_\varepsilon|^2}{2} + \psi_\varepsilon(b) \right] dx\,dy = \inf_{b\in L^2(\Omega)^2} T_\varepsilon(b) \le J_\varepsilon(f)$ for all $f \in H^1(\Omega)$.

If $\varepsilon \to 0$, we deduce from (4.46) that $b_\varepsilon(x,y) \to b_0(x,y)$ a.e. $(x,y) \in \Omega$, where

$b_0(x,y) = \left( 1 - \frac{\varphi'(|Df_0(x,y)|)}{|Df_0(x,y)|} \right) Df_0(x,y).$

The sequence of equalities in (4.47) proves that $f_\varepsilon$ is a minimizing sequence for the problem $\inf_f J(f)$, and that

$\int_\Omega \varphi(|Df_0|)\,dx\,dy = \lim_{\varepsilon\to 0}\ \inf_{b\in L^2(\Omega)^2} \int_\Omega \left[ \frac{|b - Df_0|^2}{2} + \psi_\varepsilon(b) \right] dx\,dy = \lim_{\varepsilon\to 0} \int_\Omega \left[ \varphi(|Df_\varepsilon|) + \frac{\varepsilon}{2}|Df_\varepsilon|^2 \right] dx\,dy.$

We will present more precise convergence results for $b_\varepsilon$ in the next section.

Remark. In Theorem 4.8, we have obtained the existence under the condition $p \in W^{1,\infty}(\Omega)$. This is a restrictive condition, the most natural one being $p \in L^\infty(\Omega)$. This restriction is due to the method; we can relax it by working on $BV(\Omega)$, the space of functions of bounded variation, and by using the notion of convex function of a measure [11], [18]. Or, from another point of view, we can solve the problem in the context of viscosity solutions (see [8] for the general theory and [18] for applications to image analysis). Nevertheless, with the assumption $p \in W^{1,\infty}(\Omega)$, we have obtained the regularity result $f_0 \in H^2(O)\cap W^{1,\infty}(O)$ for all $O$, $\bar O \subset \Omega$; that is, the process is regularizing.
5. Description and convergence of the algorithm. In this section, we work in the context of Theorem 4.8, and we assume the existence and uniqueness of a function $f_0 \in W^{1,1}(\Omega)\cap L^\infty(\Omega)$ which is the solution of

(5.1)  $\inf\left\{ J(f) = \int_\Omega (p - f)^2\,dx\,dy + \int_\Omega \varphi(|Df|)\,dx\,dy;\ f \in L^2(\Omega),\ Df \in L^1(\Omega)^2 \right\}.$

Using the previous results, we describe the algorithm for computing $f_0$. We denote by $T(b,f)$ the functional defined on $L^2(\Omega)^2 \times H^1(\Omega)$ by

(5.2)  $T(b,f) = \int_\Omega (p - f)^2\,dx\,dy + \frac{1}{2}\int_\Omega |Df - b|^2\,dx\,dy + \int_\Omega \psi(b)\,dx\,dy$

(with $\psi$ defined as before).

The iterative algorithm is as follows (a discrete illustration is sketched after step (iii)).

(i) $f^0 \in H^1(\Omega)$ is arbitrarily given, with $0 \le f^0 \le 1$.

(ii) $f^n \in H^1(\Omega)$ being calculated, we compute $b^{n+1}$ by solving the minimization problem

(5.3)  $T(b^{n+1}, f^n) \le T(b, f^n)$ for all $b \in L^2(\Omega)^2$.

Equation (5.3), which is a strictly convex problem, has a unique solution $b^{n+1}$ satisfying the equation $b^{n+1} = Df^n - \nabla\psi(b^{n+1})$, or, by Corollary 4.5,

(5.4)  $b^{n+1} = \left( 1 - \frac{\varphi'(|Df^n|)}{|Df^n|} \right) Df^n.$

(iii) $f^{n+1}$ is then calculated as the solution of the problem

(5.5)  $T(b^{n+1}, f^{n+1}) \le T(b^{n+1}, f)$ for all $f \in H^1(\Omega)$,

which is equivalent to solving the variational problem

(5.6)  $\int_\Omega (Df^{n+1} - b^{n+1})\cdot D\tilde f\,dx\,dy + 2\int_\Omega (f^{n+1} - p)\tilde f\,dx\,dy = 0$ for all $\tilde f \in H^1(\Omega).$

Equation (5.6) has a unique solution $f^{n+1}$.
We denote by $U_n$ the sequence $U_n = T(b^{n+1}, f^n)$.

LEMMA 5.1. The sequence $U_n$ is convergent.

Proof. We prove that $U_n$ is decreasing and bounded below. We have

$U_{n-1} - U_n = T(b^n, f^{n-1}) - T(b^{n+1}, f^n) = \left( T(b^n, f^n) - T(b^{n+1}, f^n) \right) + \left( T(b^n, f^{n-1}) - T(b^n, f^n) \right);$

thanks to the definitions of $b^{n+1}$ and $f^n$, we have, for all $n > 0$,

$A_n = T(b^n, f^n) - T(b^{n+1}, f^n) \ge 0, \qquad B_n = T(b^n, f^{n-1}) - T(b^n, f^n) \ge 0.$

Therefore, $U_{n-1} - U_n = A_n + B_n \ge 0$; that is, $U_n$ is decreasing, and since

$\inf_b \int_\Omega \psi(b)\,dx\,dy > -\infty,$

the sequence $U_n$ is bounded below and hence is convergent.

LEMMA 5.2. The previous sequence $b^n$ verifies

(5.7)  $\lim_{n\to\infty} \|b^n - b^{n+1}\|_{L^2(\Omega)^2} = 0.$
Proof. We study the term $A_n$, which can be written as

$A_n = \int_\Omega (f^n - p)^2\,dx\,dy + \frac{1}{2}\int_\Omega |Df^n - b^n|^2\,dx\,dy + \int_\Omega \psi(b^n)\,dx\,dy - \int_\Omega (f^n - p)^2\,dx\,dy - \frac{1}{2}\int_\Omega |Df^n - b^{n+1}|^2\,dx\,dy - \int_\Omega \psi(b^{n+1})\,dx\,dy,$

i.e.,

$A_n = \int_\Omega \left[ \frac{1}{2}|Df^n - b^n|^2 + \psi(b^n) \right] dx\,dy - \int_\Omega \left[ \frac{1}{2}|Df^n - b^{n+1}|^2 + \psi(b^{n+1}) \right] dx\,dy.$

Denoting $h_n(b) = \frac{1}{2}|Df^n - b|^2 + \psi(b)$, we then have

$A_n = \int_\Omega \left( h_n(b^n) - h_n(b^{n+1}) \right) dx\,dy.$

Thanks to the Taylor formula, there exists $c_n$ between $b^n$ and $b^{n+1}$ such that

$A_n = \int_\Omega (b^n - b^{n+1})\cdot \nabla h_n(b^{n+1})\,dx\,dy + \frac{1}{2}\int_\Omega {}^t(b^n - b^{n+1})\,\nabla^2 h_n(c_n)\,(b^n - b^{n+1})\,dx\,dy.$

But $\nabla h_n(b^{n+1}) = b^{n+1} - Df^n + \nabla\psi(b^{n+1}) = 0$, by the definition of $b^{n+1}$. Moreover, $\nabla^2 h_n(b) = I + \nabla^2\psi(b) \ge I$, because $\psi$ is convex (by Lemma 4.2). Consequently,

$A_n \ge \frac{1}{2}\int_\Omega |b^n - b^{n+1}|^2\,dx\,dy.$

Otherwise, $U_{n-1} - U_n = A_n + B_n \ge A_n \ge 0$, and since the sequence $U_n$ is convergent, $\lim_n A_n = 0$, which implies that

$\lim_{n\to\infty} \int_\Omega |b^n - b^{n+1}|^2\,dx\,dy = 0.$

In general, we cannot obtain a more precise convergence theorem (for example, convergence in $H^1(\Omega)$) without supposing more regularity on the solution.

LEMMA 5.3. If $0 \le \varphi'(t)/t \le 1$ for all $t \ge 0$, then the sequence $f^n$ is bounded in $H^1(\Omega)$.
Proof. The proof is based on a recurrence argument. In (5.6) we choose $\tilde f = f^{n+1}$; we get

(5.8)  $\int_\Omega \left[ |Df^{n+1}|^2 + 2(f^{n+1})^2 \right] dx\,dy = \int_\Omega \left[ b^{n+1}\cdot Df^{n+1} + 2p\,f^{n+1} \right] dx\,dy.$

With (5.4), and since $0 \le \varphi'(t)/t \le 1$ (in fact, in the applications, $\varphi'(t)/t$ decreases from $1$ to $0$ on $]0,\infty[$), we have $|b^{n+1}| \le |Df^n|$, which implies, with (5.8), that

(5.9)  $\int_\Omega \left[ |Df^{n+1}|^2 + 2(f^{n+1})^2 \right] dx\,dy \le \int_\Omega \left[ |Df^n|\,|Df^{n+1}| + 2|p|\,|f^{n+1}| \right] dx\,dy.$

Denoting $M = \max(2\|p\|_{L^2}, \|f^0\|_{H^1})$, we have

(5.10)  $\|f^n\|_{H^1(\Omega)} \le M$ for all $n$.

In fact, (5.10) is true for $n = 0$; suppose that (5.10) is true for $n$; then, with (5.9),

$\|f^{n+1}\|^2_{H^1} \le \int_\Omega \left[ |Df^{n+1}|^2 + 2(f^{n+1})^2 \right] dx\,dy \le M\|Df^{n+1}\|_{L^2} + 2\|p\|_{L^2}\|f^{n+1}\|_{L^2} \le M\|f^{n+1}\|_{H^1},$

from which $\|f^{n+1}\|_{H^1} \le M$. Then (5.10) is true for all $n$.

As a corollary of Lemma 5.3, we easily deduce that the sequence $b^n$ is bounded in $L^2(\Omega)^2$. The following theorem examines the convergence of $f^n$ to $f_0$, the solution of the problem (5.1); for this it is necessary to add a slight regularity assumption on $f_0$.

THEOREM 5.4. If the solution $f_0$ of (5.1) belongs to $H^1(\Omega)$, then
i) $f^n \to f_0$ in $L^2(\Omega)$ strongly;
ii) $Df^n \rightharpoonup Df_0$ in $L^2(\Omega)^2$ weakly;
iii) $\lim_{n\to\infty} \int_\Omega \varphi(|Df^n|)\,dx\,dy = \int_\Omega \varphi(|Df_0|)\,dx\,dy;$
iv) $Df^n \to Df_0$ in $L^1(\Omega)^2$ strongly.

Proof. We know that $f_0$ is the unique solution, belonging to

$V = \left\{ f \in L^2(\Omega),\ Df \in L^1(\Omega)^2 \right\},$

of the variational problem
(5.11)  $\int_\Omega \frac{\varphi'(|Df_0|)}{|Df_0|}\,Df_0\cdot D\tilde f\,dx\,dy + 2\int_\Omega (f_0 - p)\tilde f\,dx\,dy = 0$ for all $\tilde f \in H^1(\Omega).$

With $b_0$ defined in the previous section, (5.11) is equivalent to (all the integrals make sense, because $f_0 \in H^1(\Omega)$)

(5.12)  $\int_\Omega \left( Df_0\cdot D\tilde f - b_0\cdot D\tilde f \right) dx\,dy + 2\int_\Omega (f_0 - p)\tilde f\,dx\,dy = 0$ for all $\tilde f \in H^1(\Omega).$

Otherwise, with (5.6), $f^n$ is defined by

(5.13)  $\int_\Omega \left( Df^n\cdot D\tilde f - b^n\cdot D\tilde f \right) dx\,dy + 2\int_\Omega (f^n - p)\tilde f\,dx\,dy = 0$ for all $\tilde f \in H^1(\Omega).$

By subtracting (5.13) from (5.12) and choosing $\tilde f = f_0 - f^n$, we get

(5.14)  $\int_\Omega |Df_0 - Df^n|^2\,dx\,dy + 2\int_\Omega |f_0 - f^n|^2\,dx\,dy - \int_\Omega (b^n - b_0)\cdot(Df^n - Df_0)\,dx\,dy = 0.$

With the definitions of $b_0$ and $b^n$, by adding and subtracting $b^{n+1}$, it is easy to see that

$\int_\Omega (b^n - b_0)\cdot(Df^n - Df_0)\,dx\,dy = \int_\Omega |Df^n - Df_0|^2\,dx\,dy + \int_\Omega (b^n - b^{n+1})\cdot(Df^n - Df_0)\,dx\,dy + \int_\Omega \left( \frac{\varphi'(|Df_0|)}{|Df_0|}\,Df_0 - \frac{\varphi'(|Df^n|)}{|Df^n|}\,Df^n \right)\cdot\left( Df^n - Df_0 \right) dx\,dy.$

If we denote

$j(f) = \int_\Omega \varphi(|Df|)\,dx\,dy,$

then (5.14) can be written as

(5.15)  $2\int_\Omega (f^n - f_0)^2\,dx\,dy + \int_\Omega (b^{n+1} - b^n)\cdot(Df^n - Df_0)\,dx\,dy + \langle j'(f^n) - j'(f_0),\ f^n - f_0 \rangle = 0.$
Since $j$ is convex, the third term in (5.15) is nonnegative, and then

(5.16)  $2\int_\Omega (f^n - f_0)^2\,dx\,dy + \int_\Omega (b^{n+1} - b^n)\cdot(Df^n - Df_0)\,dx\,dy \le 0.$

With Lemma 5.2, $(b^{n+1} - b^n) \to 0$ in $L^2(\Omega)^2$ strongly, and with Lemma 5.3 and the assumption $f_0 \in H^1(\Omega)$, we have that $Df^n - Df_0$ is bounded in $L^2(\Omega)^2$; hence, by passing to the limit in (5.16), we get

(5.17)  $\lim_{n\to\infty} \int_\Omega (f^n - f_0)^2\,dx\,dy = 0.$

To prove ii), we remark, thanks to Lemma 5.3, that there is an $\tilde f \in H^1(\Omega)$ such that $f^n \rightharpoonup \tilde f$ (for a subsequence) in $H^1(\Omega)$ weakly and, with (5.17), that necessarily $\tilde f = f_0$ and the entire sequence converges.

To prove iii), we deduce from (5.15) and (5.17) that

$\lim_{n\to\infty} \langle j'(f^n) - j'(f_0),\ f^n - f_0 \rangle = 0;$

that is,

(5.18)  $\lim_{n\to\infty} \int_\Omega \left( \frac{\varphi'(|Df^n|)}{|Df^n|}\,Df^n - \frac{\varphi'(|Df_0|)}{|Df_0|}\,Df_0 \right)\cdot\left( Df^n - Df_0 \right) dx\,dy = 0,$

and since $Df^n \rightharpoonup Df_0$ in $L^2(\Omega)^2$ weakly, (5.18) implies that

(5.19)  $\lim_{n\to\infty} \int_\Omega \frac{\varphi'(|Df^n|)}{|Df^n|}\,Df^n\cdot\left( Df^n - Df_0 \right) dx\,dy = 0.$

But, since $\varphi$ is convex, we have

$\int_\Omega \left( \varphi(|Df_0|) - \varphi(|Df^n|) \right) dx\,dy \ge \int_\Omega \frac{\varphi'(|Df^n|)}{|Df^n|}\,Df^n\cdot\left( Df_0 - Df^n \right) dx\,dy,$

from which, with (5.19),

$\int_\Omega \varphi(|Df_0|)\,dx\,dy \ge \limsup_{n\to\infty} \int_\Omega \varphi(|Df^n|)\,dx\,dy.$

And, since we always have (thanks to the convexity of $\varphi$)

$\liminf_{n\to\infty} \int_\Omega \varphi(|Df^n|)\,dx\,dy \ge \int_\Omega \varphi(|Df_0|)\,dx\,dy,$

we get

(5.20)  $\lim_{n\to\infty} \int_\Omega \varphi(|Df^n|)\,dx\,dy = \int_\Omega \varphi(|Df_0|)\,dx\,dy.$
The proof of iv) is a consequence of the following result due to Visintin.

THEOREM 5.5 (Visintin [28, Thm. 3]). Let $\Phi$ be a strictly convex function from $\mathbb{R}^2$ to $\mathbb{R}$ and let $u_n$ be a sequence of $L^1(\Omega)^2$ such that

$u_n \rightharpoonup u$ in $L^1(\Omega)^2$ weakly, $\qquad \int_\Omega \Phi(u_n)\,dx\,dy \to \int_\Omega \Phi(u)\,dx\,dy.$

Then $u_n \to u$ in $L^1(\Omega)^2$ strongly.

To prove part iv), we apply the Visintin result with $u_n = Df^n$ and $\Phi(u) = \varphi(|u|)$.
Remarks.
(1) $f_0$, the solution of the initial reconstruction problem (5.1), necessarily verifies, in the sense of distributions,

(5.21)  $2(f_0 - p) - \mathrm{div}\!\left( \frac{\varphi'(|Df_0|)}{|Df_0|}\,Df_0 \right) = 0$ in $\mathcal{D}'(\Omega).$

Since $p, f_0 \in L^\infty(\Omega)$, we deduce from (5.21) that

$\mathrm{div}\!\left( \frac{\varphi'(|Df_0|)}{|Df_0|}\,Df_0 \right) \in L^\infty(\Omega),$

and then, if $f_0 \in H^1(\Omega)$, by a result of Lions and Magenes [19] we can give sense, on the boundary of $\Omega$, to the conormal derivative $\frac{\varphi'(|Df_0|)}{|Df_0|}\,Df_0\cdot n$ ($n$ is the exterior normal to $\partial\Omega$). Multiplying (5.21) by $\tilde f \in H^1(\Omega)$ and integrating by parts, we get, with (5.12),

(5.22)  $\frac{\varphi'(|Df_0|)}{|Df_0|}\,\frac{\partial f_0}{\partial n} = 0$ on $\partial\Omega,$

and if $\lim_{t\to\infty} \varphi'(t)/t = 0$ (by (2.12)), then with (5.22) we have either

(5.23)  $\frac{\partial f_0}{\partial n} = 0$ on $\partial\Omega$

or

(5.24)  $|Df_0| = +\infty$ on $\partial\Omega.$

Supposing that the image does not present an edge in a neighborhood of its boundary, we can incorporate (5.23) as a boundary condition in the algorithm.

(2) If $\lim_{t\to 0} \varphi'(t)/t = 1$ (cf. (2.7)) and $\lim_{t\to\infty} \varphi'(t)/t = 0$ (by (2.12)), then the function $\frac{|b_0|}{|Df_0|}(x,y)$ is an edge indicator which takes, roughly speaking, only the values $1$ or $0$. In fact,

$\frac{|b_0|}{|Df_0|}(x,y) = \left( 1 - \frac{\varphi'(|Df_0|)}{|Df_0|} \right)(x,y).$

In a neighborhood of a pixel $(x,y)$ belonging to an edge, $|Df_0|$ is big and $\frac{|b_0|}{|Df_0|} \approx 1$, whereas in the interior of a homogeneous region, $|Df_0|(x,y)$ is small and $\frac{|b_0|}{|Df_0|}(x,y) \approx 0$ (a small numerical illustration of this indicator is given below).
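The indicator can be computed directly from a discrete gradient field; the following sketch (Python, illustrative only: the potential $\varphi(t)=\sqrt{1+t^2}$ and the simple forward differences with replicated borders are our own choices) is one possible implementation:

```python
import numpy as np

def edge_indicator(f):
    """|b0|/|Df|(x, y) = 1 - phi'(|Df|)/|Df| for phi(t) = sqrt(1 + t^2):
    it tends to 1 where |Df| is large (edges) and to 0 where |Df| is small
    (homogeneous regions)."""
    fx = np.diff(f, axis=1, append=f[:, -1:])   # forward differences,
    fy = np.diff(f, axis=0, append=f[-1:, :])   # replicated at the border
    grad2 = fx**2 + fy**2
    return 1.0 - 1.0 / np.sqrt(1.0 + grad2)     # phi'(t)/t = 1/sqrt(1+t^2)

# Example: a vertical step edge at column 8.
img = np.zeros((16, 16)); img[:, 8:] = 1.0
ind = edge_indicator(img)
print(ind[8, 7], ind[8, 2])   # ~0.29 at the unit-height edge, 0.0 in the flat region
```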
(3) There are other dualities for introducing an auxiliary variable. For example, if $t \mapsto \varphi(\sqrt{t})$ is strictly concave, one can prove that there is a function $\tilde\psi$, strictly convex and decreasing, such that

$\varphi(t) = \inf_b \left( b\,t^2 + \tilde\psi(b) \right).$

This duality was exploited by Geman and Reynolds [15] and Charbonnier et al. [6].
6. The numerical approximation of the model. In this section we present the numerical approximation, using the finite difference method, of the Euler equation associated with the minimization reconstruction problem; that is,

(E)  $\lambda\left( R^* R f - R^* p \right) - \mathrm{div}\!\left( \frac{\varphi'(|Df|)}{|Df|}\,Df \right) = 0,$

where the function $\varphi$ is of the type of potential introduced in the previous sections (this equation is equivalent to (2.5), the present $\lambda$ corresponding to $2/\lambda$ there). For the tests, we have used $\varphi_1(t) = \sqrt{1+t^2}$ in the convex case and $\varphi_2(t) = \frac{t^2}{1+t^2}$, which is not convex.

Before starting with the algorithm, we recall some standard notation. Let

$1^0)$ $x_i = ih$, $y_j = jh$, $i,j = 1,2,\ldots,N$, with $h > 0$;
$2^0)$ $f_{ij} \approx f(x_i, y_j)$, $f^n_{ij} \approx f^n(x_i, y_j)$;
$3^0)$ $p_{ij} \approx p(x_i, y_j)$;
$4^0)$ $m(a,b) = \mathrm{minmod}(a,b) = \dfrac{\mathrm{sgn}\,a + \mathrm{sgn}\,b}{2}\,\min(|a|,|b|)$;
$5^0)$ $\Delta^x_\pm f_{ij} = \pm(f_{i\pm1,j} - f_{ij})$ and $\Delta^y_\pm f_{ij} = \pm(f_{i,j\pm1} - f_{ij})$.

For the moment, we begin with the case $R = I$, and we let $\beta$ be the function defined by

$\beta : \mathbb{R}_+ \to \mathbb{R}, \qquad \beta(t) = \begin{cases} \dfrac{\varphi'(t)}{t} & \text{if } t \ne 0,\\[1mm] \lim_{t\to 0} \dfrac{\varphi'(t)}{t} & \text{if } t = 0. \end{cases}$

Then, for each type of potential, $\varphi_1$ and $\varphi_2$, the function $\beta$ is positive and bounded on $\mathbb{R}_+$.
The numerical method is as follows. (We essentially adopt the method of Rudin, Osher, and Fatemi [23] to approximate the divergence term, and we use an iterative algorithm.)

We suppose that $\Omega$ is a rectangle. So, $(p_{ij})_{i,j=1,N}$ is the initial discrete image, such that $m_1 \le p_{ij} \le m_2$, where $m_2 \ge m_1 \ge 0$. We approach the numerical solution $(f_{ij})_{i,j=1,N}$ by a sequence $(f^n_{ij})_{i,j=1,N}$ for $n \to \infty$, which is obtained as follows.

1) $f^0$ is arbitrarily given, such that $m_1 \le f^0_{ij} \le m_2$.

2) If $f^n$ is calculated, then we compute $f^{n+1}$ as the solution of the linear discrete problem

(6.1)  $f^{n+1}_{ij} - \frac{1}{h}\,\Delta^x_-\!\left[ \beta\!\left( \left( \left( \frac{\Delta^x_+ f^n_{ij}}{h} \right)^{\!2} + \left( m\!\left( \frac{\Delta^y_+ f^n_{ij}}{h}, \frac{\Delta^y_- f^n_{ij}}{h} \right) \right)^{\!2} \right)^{\!1/2} \right) \frac{\Delta^x_+ f^{n+1}_{ij}}{h} \right] - \frac{1}{h}\,\Delta^y_-\!\left[ \beta\!\left( \left( \left( \frac{\Delta^y_+ f^n_{ij}}{h} \right)^{\!2} + \left( m\!\left( \frac{\Delta^x_+ f^n_{ij}}{h}, \frac{\Delta^x_- f^n_{ij}}{h} \right) \right)^{\!2} \right)^{\!1/2} \right) \frac{\Delta^y_+ f^{n+1}_{ij}}{h} \right] = p_{ij},$

for $i,j = 1,\ldots,N$, and with the boundary conditions obtained by reflection:

$f^n_{0j} = f^n_{2j}, \quad f^n_{N+1,j} = f^n_{N-1,j}, \quad f^n_{i0} = f^n_{i2}, \quad f^n_{i,N+1} = f^n_{i,N-1}.$
Remark. The algorithm described in the previous section would lead us to compute $f^{n+1}$ by the formula (instead of (6.1))

(6.2)  $f^{n+1}_{ij} - \Delta f^{n+1}_{ij} = p_{ij} - \mathrm{div}\!\left( \left( 1 - \frac{\varphi'(|Df^n_{ij}|)}{|Df^n_{ij}|} \right) Df^n_{ij} \right).$

But, in this way, we unfortunately obtain an unstable algorithm; that is, $f^{n+1}_{ij}$ is not bounded by the same bounds as $f^n$. So, to overcome this difficulty, we must replace (6.2) by

$f^{n+1}_{ij} - \mathrm{div}\!\left( Df^{n+1}_{ij} \right) = p_{ij} - \mathrm{div}\!\left( \left( 1 - \frac{\varphi'(|Df^n_{ij}|)}{|Df^n_{ij}|} \right) Df^{n+1}_{ij} \right),$

which is equivalent to

$f^{n+1}_{ij} - \mathrm{div}\!\left( \frac{\varphi'(|Df^n_{ij}|)}{|Df^n_{ij}|}\,Df^{n+1}_{ij} \right) = p_{ij},$

i.e., (6.1) after discretization.
We multiply (6.1) by $h^2$, and we denote by $c_1(f^n_{ij})$, $c_2(f^n_{ij})$, $c_3(f^n_{ij})$, and $c_4(f^n_{ij})$ the coefficients in (6.1) of $f^{n+1}_{i+1,j}$, $f^{n+1}_{i-1,j}$, $f^{n+1}_{i,j+1}$, and $f^{n+1}_{i,j-1}$, respectively. With these notations, (6.1) can be written as

(6.3)  $\left( h^2 + c_1(f^n_{ij}) + c_2(f^n_{ij}) + c_3(f^n_{ij}) + c_4(f^n_{ij}) \right) f^{n+1}_{ij} = c_1(f^n_{ij})f^{n+1}_{i+1,j} + c_2(f^n_{ij})f^{n+1}_{i-1,j} + c_3(f^n_{ij})f^{n+1}_{i,j+1} + c_4(f^n_{ij})f^{n+1}_{i,j-1} + h^2 p_{ij}.$

We remark that $c_i \ge 0$ for $i = 1,\ldots,4$. Now, for $f^n_{ij}$, let $C_i(f^n_{ij})$ and $C(f^n_{ij})$ be defined by

$C_i = \frac{c_i}{h^2 + c_1 + c_2 + c_3 + c_4}, \qquad C = \frac{h^2}{h^2 + c_1 + c_2 + c_3 + c_4}.$

Then we have $C_i, C \ge 0$ and $C_1 + C_2 + C_3 + C_4 + C = 1$ (we recall that these coefficients depend on $f^n_{ij}$).

Hence, we write (6.3) as

(6.4)  $f^{n+1}_{ij} = C_1(f^n_{ij})f^{n+1}_{i+1,j} + C_2(f^n_{ij})f^{n+1}_{i-1,j} + C_3(f^n_{ij})f^{n+1}_{i,j+1} + C_4(f^n_{ij})f^{n+1}_{i,j-1} + C(f^n_{ij})p_{ij}.$

Now let $(E, \|\cdot\|)$ be the Banach space

$E = \left\{ f = (f_{ij})_{i,j=1,N},\ f_{ij} \in \mathbb{R} \right\}$ with $\|f\| = \sup_{ij} |f_{ij}|,$

and the subset $M \subset E$: $M = \{ f \in E;\ m_1 \le f_{ij} \le m_2 \}$.

PROPOSITION 6.1.
i) If $f^n \in M$, then there exists a unique $f^{n+1} \in E$ such that (6.3) is satisfied. Moreover, $f^{n+1} \in M$.
ii) The nonlinear discrete problem

(6.5)  $f_{ij} = C_1(f_{ij})f_{i+1,j} + C_2(f_{ij})f_{i-1,j} + C_3(f_{ij})f_{i,j+1} + C_4(f_{ij})f_{i,j-1} + C(f_{ij})p_{ij}$

has a solution $f \in M$.
Proof.
i) For $u \in M$, we define the affine application $Q_u : E \to E$ by

$(Q_u(z))_{ij} = C_1(u_{ij})z_{i+1,j} + C_2(u_{ij})z_{i-1,j} + C_3(u_{ij})z_{i,j+1} + C_4(u_{ij})z_{i,j-1} + C(u_{ij})p_{ij}.$

We easily prove that $Q_u(M) \subset M$ and, moreover, that $Q_u$ is a contractive function on $E$. We have, for $z \in M$,

$(Q_u(z))_{ij} = C_1(u_{ij})z_{i+1,j} + C_2(u_{ij})z_{i-1,j} + C_3(u_{ij})z_{i,j+1} + C_4(u_{ij})z_{i,j-1} + C(u_{ij})p_{ij} \le \left( C_1(u_{ij}) + C_2(u_{ij}) + C_3(u_{ij}) + C_4(u_{ij}) + C(u_{ij}) \right) m_2 = m_2.$

We obtain in the same way that $m_1 \le (Q_u(z))_{ij}$. Hence, $Q_u(z) \in M$. For $v, w \in E$, we have

$|(Q_u(v) - Q_u(w))_{ij}| \le C_1(u_{ij})|v_{i+1,j} - w_{i+1,j}| + C_2(u_{ij})|v_{i-1,j} - w_{i-1,j}| + C_3(u_{ij})|v_{i,j+1} - w_{i,j+1}| + C_4(u_{ij})|v_{i,j-1} - w_{i,j-1}| \le \left( C_1(u_{ij}) + \cdots + C_4(u_{ij}) \right)\|v - w\| \le c\,\|v - w\|,$

where the positive constant $c$ is

$c = \frac{4\sup_{[0,\infty[}\beta}{h^2 + 4\sup_{[0,\infty[}\beta} < 1,$

since the function $\beta$ is bounded. So, by the classical Banach fixed point theorem, we deduce that there is a unique $f^{n+1} \in E$ such that $f^{n+1} = Q_{f^n}(f^{n+1})$, which is the fixed point of $Q_{f^n}$, or the solution of (6.3). Moreover, $f^{n+1} \in M$.

ii) To prove ii), we define the application $F : M \to M$ by $F(u) = u^*$, where $u^*$ is the unique fixed point of $Q_u$. We will prove that this application is continuous from the compact and convex set $M$ to $M$, and then we will have the existence of a fixed point of $F$, which will be a solution of (6.5).

So, let $u_n, u \in M$ be such that $\lim_n \|u_n - u\| = 0$, and let $u^*_n = F(u_n)$, $u^* = F(u)$. We have the following equalities and inequalities:

$\|u^*_n - u^*\| = \|Q_{u_n}(u^*_n) - Q_u(u^*)\| = \|Q_u(u^*_n) - Q_u(u^*) + Q_{u_n}(u^*_n) - Q_u(u^*_n)\| \le \|Q_u(u^*_n) - Q_u(u^*)\| + \|Q_{u_n}(u^*_n) - Q_u(u^*_n)\| \le c\,\|u^*_n - u^*\| + \|Q_{u_n}(u^*_n) - Q_u(u^*_n)\|.$

Then we get

$(1 - c)\|u^*_n - u^*\| \le \|Q_{u_n}(u^*_n) - Q_u(u^*_n)\| \le \|u^*_n\| \sup_{ij}\left\{ |C_1(u_{n,ij}) - C_1(u_{ij})| + \cdots + |C_4(u_{n,ij}) - C_4(u_{ij})| + |C(u_{n,ij}) - C(u_{ij})| \right\}.$

Now, since $\|u^*_n\| \le m_2$ for all $n > 0$, and since the functions $C_i$, $C$ are continuous (because the functions $\beta$ and minmod are continuous), we obtain that the right-hand side of the last inequality converges to $0$ as $n \to \infty$. Hence $\|u^*_n - u^*\| \to 0$; that is, the application $F$ is continuous.
Remarks.
(1) The conclusion i) of Proposition 6.1 says that the algorithm is unconditionally stable. Moreover, to compute $f^{n+1}$ as the solution of the linear system (6.4), since $Q_{f^n}$ is contractive, we can use the iterative method

$f^0 \in M, \qquad f^{k+1} = Q_{f^n}(f^k), \qquad \lim_{k\to\infty} f^k = f^{n+1}.$

Finally, in practice, to accelerate the convergence to the solution $f$ of (6.4), by a combination of these two iterative methods, we use a scheme based on the Gauss-Seidel algorithm: for $i,j = 1,2,\ldots,N$, in this order, we let

$f^{n+1}_{ij} = C_1 f^n_{i+1,j} + C_2 f^{n+1}_{i-1,j} + C_3 f^n_{i,j+1} + C_4 f^{n+1}_{i,j-1} + C\,p_{ij},$

where, for the computation of $C_1, \ldots, C_4$ and $C$, we replace $f^n_{i-1,j}$, $f^n_{i,j-1}$ by $f^{n+1}_{i-1,j}$, $f^{n+1}_{i,j-1}$, respectively.

Hence, in practice, we observe that the algorithm is quite stable and convergent; one such sweep is sketched below.
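A compact implementation of one Gauss-Seidel sweep might look as follows (Python; illustrative only: the potential $\varphi_1$, the grid step $h$, and the handling of the minmod and reflection details are our own simplifications of (6.1)-(6.4), not the authors' code):

```python
import numpy as np

def minmod(a, b):
    return 0.5 * (np.sign(a) + np.sign(b)) * min(abs(a), abs(b))

def beta(t):                      # beta(t) = phi'(t)/t for phi(t) = sqrt(1+t^2)
    return 1.0 / np.sqrt(1.0 + t * t)

def gauss_seidel_sweep(f, p, h=1.0):
    """One in-place sweep of the scheme (6.4) (case R = I, reflected borders)."""
    N = f.shape[0]
    fp = np.pad(f, 1, mode="reflect")          # reflection boundary conditions
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            dxp = fp[i+1, j] - fp[i, j]        # Delta_+^x f_ij, etc.
            dxm = fp[i, j] - fp[i-1, j]
            dyp = fp[i, j+1] - fp[i, j]
            dym = fp[i, j] - fp[i, j-1]
            # c_1..c_4: coefficients of the four neighbors in (6.3)
            c1 = beta(np.hypot(dxp / h, minmod(dyp / h, dym / h)))
            c2 = beta(np.hypot(dxm / h, minmod(dyp / h, dym / h)))
            c3 = beta(np.hypot(dyp / h, minmod(dxp / h, dxm / h)))
            c4 = beta(np.hypot(dym / h, minmod(dxp / h, dxm / h)))
            s = h * h + c1 + c2 + c3 + c4
            fp[i, j] = (c1 * fp[i+1, j] + c2 * fp[i-1, j] + c3 * fp[i, j+1]
                        + c4 * fp[i, j-1] + h * h * p[i-1, j-1]) / s
    return fp[1:-1, 1:-1]
```

The update is a convex combination of the four neighbors and $p_{ij}$ (the weights $C_1,\ldots,C_4,C$ sum to $1$), which is exactly why the iterates remain in $M = \{m_1 \le f_{ij} \le m_2\}$.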
(2) The conclusion ii) of Proposition 6.1 says that the problem (6.4), which is a nonlinear discrete problem associated with (E), has a solution $f$. In the convex case, we also have the uniqueness of this solution. But we have not proved the convergence of $f^n$ to $f$ (in fact, if $f^n$ converges, which is true in practice, then it converges to the solution $f$ of the nonlinear discrete problem).
Now we briefly treat the case $R \ne I$. In many cases, the degradation operator $R$, the blur, is a convolution-type integral operator. In the numerical approximations, $(R_{mn})_{m,n=0,\ldots,d}$ is a symmetric matrix with

$\sum_{m,n=0}^{d} R_{mn} = 1,$

and the approximation of $Rf$ can be

$Rf_{ij} = \sum_{m,n=0}^{d} R_{mn}\, f_{i+\frac{d}{2}-m,\ j+\frac{d}{2}-n}.$

Since $R$ is symmetric, $R^* = R$, and $R^*Rf = RRf$ is approximated by

$R^*Rf_{ij} = \sum_{m,n=0}^{d}\ \sum_{r,t=0}^{d} R_{mn} R_{rt}\, f_{i+d-r-m,\ j+d-t-n}.$

Then we use the same approximation of the divergence term and the same iterative algorithm, with a slight modification: we let

$h^2\, R^*Rf^{n+1}_{ij} + \left( c_1(f^n_{ij}) + c_2(f^n_{ij}) + c_3(f^n_{ij}) + c_4(f^n_{ij}) \right) f^{n+1}_{ij} = c_1(f^n_{ij})f^{n+1}_{i+1,j} + c_2(f^n_{ij})f^{n+1}_{i-1,j} + c_3(f^n_{ij})f^{n+1}_{i,j+1} + c_4(f^n_{ij})f^{n+1}_{i,j-1} + h^2\, Rp_{ij}.$

Now, to compute $f^{n+1}$ as the solution of this linear system, we can use, for example, the relaxation method (see [7]).
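Concretely, with a symmetric normalized kernel the operators $R$ and $R^*R$ reduce to discrete convolutions; a minimal sketch (Python; illustrative only: the Gaussian kernel size and the reflected border handling are our own choices, not prescribed by the paper) is:

```python
import numpy as np
from scipy.ndimage import convolve

# Symmetric, normalized blur kernel (R_mn): here a small Gaussian, d = 4.
x = np.arange(-2, 3)
g = np.exp(-0.5 * x.astype(float) ** 2)
K = np.outer(g, g)
K /= K.sum()                               # sum of the R_mn equals 1

def R(f):                                  # Rf: convolution with K
    return convolve(f, K, mode="reflect")

def RstarR(f):                             # R*Rf = RRf, since K is symmetric
    return R(R(f))

# Right-hand side h^2 * Rp of the modified linear system above:
# rhs = h**2 * R(p)
```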
Remark. In these algorithms, there are two parameters, $\lambda$ and $h$. We denote $\bar\lambda = \lambda h^2$. For the moment, there is no rigorous choice for the values of $\bar\lambda$ and $h$. But, in practice, we have observed that (as is natural), by decreasing $h$, the edges are better preserved, and also that, by decreasing $\bar\lambda$, we diffuse the image.
7. Experimental results. Finally, we present some numerical results on two images of varying difficulty. To generate the images, we have used the software Megawave from CEREMADE, at the University of Paris-Dauphine. The first image is a synthetic picture (71x71 pixels) with geometric features (like circles, lines, squares). The second is a real image (256x256 pixels) representing a photograph of an office. We have introduced in these pictures the types of degradation considered here: standard noise, Gaussian blur (the atmospheric-turbulence blur type), or both, and we have made the choice of the parameters $\bar\lambda$ and $h$ in order to increase the signal-to-noise ratio. We remark that in the denoising case we obtain the results very fast (in just three iterations), and we obtain good results in the deblurring case. If the degradation involves both noise and blur, the choice of the parameters is more difficult, because we must take a small $\bar\lambda$ in order to denoise the image but, at the same time, $\bar\lambda$ must be large to deblur the image. The results for the synthetic image are all represented in Figure 7.1.

FIG. 7.1. The first sequence of images represents, in the denoising case, from left to right: the degraded image, the synthetic image before degradation, and the reconstructed images with $\varphi_1$ and $\varphi_2$. The second represents, from left to right: the degraded image and the reconstructed image; from top to bottom, the deblurring case and both denoising and deblurring, with $\varphi_1$.
FIG. 7.2. Profiles along a line of the synthetic image (solid line), the noisy initial image (dashed line), and the reconstructed image (dotted line), with the function $\varphi_1$. We represent in (a) the denoising case and in (b) the deblurring case; (c) involves both noise and blur. (Axis ranges in each of the three panels: $x$ from 0 to 255, $y$ from 0 to 70.)
To better illustrate the reconstruction, we have represented in Figure 7.2 the profiles along lines of the first image, corresponding to Figure 7.1 (like one-dimensional signals), for each experiment. So, we have superposed the noisy signal, the result, and the signal before degradation.

Finally, in Figure 7.3, we present the results for the real image, which is a picture of an office, in the denoising case and the deblurring case.

FIG. 7.3. The office picture. From left to right: the degraded image and the reconstructed image. From top to bottom: the denoising case and the deblurring case, with $\varphi_1$.
Appendix A. In this appendix, we present the technical lemma of singular perturbations due to Temam [12], which we have used to obtain the a priori estimates in Proposition 4.7. We state this lemma in a version adjusted to our problem. The goal is to find some estimates, independent of $\varepsilon$, on the solution of the problem

$(\mathcal{P}_\varepsilon)\quad \inf_{v\in H^1(\Omega)} \left\{ \int_\Omega \left( g(Dv) + h(v) + p(x)v \right) dx + \frac{\varepsilon}{2}\int_\Omega |Dv|^2\,dx \right\}.$
We assume the following:

(A.1) the function $g(\zeta)$ is convex and of class $C^3$ from $\mathbb{R}^2$ to $\mathbb{R}$;

(A.2) the function $x \mapsto g(q(x))$ is measurable on $\Omega$, for all $q \in L^1(\Omega)^2$;

(A.3) there exist constants $c_i \ge 0$, $i = 0,\ldots,8$, such that, for all $\zeta \in \mathbb{R}^2$,

(A.3$_1$)  $g(\zeta) \ge c_0|\zeta| - c_1$, $c_0 > 0$,

(A.3$_2$)  $\left| \dfrac{\partial g}{\partial \zeta_i}(\zeta) \right| \le c_2$, $i = 1,2$,

(A.3$_3$)  $\displaystyle\sum_{i=1}^{2} \frac{\partial g}{\partial \zeta_i}(\zeta)\,\zeta_i \ge c_3\left( 1 + |\zeta|^2 \right)^{1/2} - c_4$, $c_3 > 0$,

(A.3$_4$)  $c_6\,\dfrac{|\xi^*|^2}{(1+|\zeta|^2)^{1/2}} \le \displaystyle\sum_{i,j} \frac{\partial^2 g}{\partial\zeta_i\partial\zeta_j}(\zeta)\,\xi_i\xi_j \le c_7\,\dfrac{|\xi^*|^2}{(1+|\zeta|^2)^{1/2}}$ for all $\xi \in \mathbb{R}^2$, $c_6, c_7 > 0$, where $|\xi^*|^2 = |\xi|^2 - \dfrac{(\zeta\cdot\xi)^2}{1+|\zeta|^2}$,

(A.3$_5$)  $\|p\|_{W^{1,\infty}(\Omega)} \le c_8$;

(A.4)  $\displaystyle\sum_{i=1}^{2} \frac{\partial g}{\partial\zeta_i}(\zeta)\,\zeta_i \ge 0$ for all $\zeta \in \mathbb{R}^2$;

(A.5)  the function $t \mapsto h(t)$ is convex and $h'(0) = 0$.
LEMMA A.1 (Temam [12]). The problem $(\mathcal{P}_\varepsilon)$ has a unique regular solution $u_\varepsilon$, bounded independently of $\varepsilon$ in $L^\infty(\Omega)\cap W^{1,1}(\Omega)$. Moreover, for any relatively compact open set $O$ in $\Omega$, there is a constant $K(O,\Omega)$, independent of $\varepsilon$, such that

$\|u_\varepsilon\|_{W^{1,\infty}(O)} \le K, \qquad \|u_\varepsilon\|_{H^2(O)} \le K.$

We can apply this lemma to our problem by taking, in Proposition 4.7,

$g(\zeta) = \varphi(|\zeta|)$ and $h(t) = t^2.$
We will not give the proof of this lemma; we refer the reader to the paper and the very technical proofs of Temam. We simply recall that the idea (due to Bernstein) is to obtain some fine estimates on the function $v_\varepsilon = |Du_\varepsilon|^2$. To do this, we use the Euler equation associated with $(\mathcal{P}_\varepsilon)$, which can be written as

$(E_\varepsilon)\quad -\varepsilon\,\Delta u_\varepsilon - \sum_{i=1}^{2} \frac{\partial}{\partial x_i}\!\left( \frac{\partial g}{\partial\zeta_i}(Du_\varepsilon) \right) = -h'(u_\varepsilon) - p(x).$

We differentiate $(E_\varepsilon)$ with respect to $x_l$; then we multiply the result by $\frac{\partial u_\varepsilon}{\partial x_l}$ and sum over $l$, $1 \le l \le 2$, to get

$(B_\varepsilon)\quad -\frac{\varepsilon}{2}\sum_{j=1}^{2} \frac{\partial}{\partial x_j}\!\left( \frac{\partial v_\varepsilon}{\partial x_j} \right) + \varepsilon\sum_{j=1}^{2}\sum_{l=1}^{2}\left( \frac{\partial^2 u_\varepsilon}{\partial x_l\partial x_j} \right)^{\!2} - \frac{1}{2}\sum_{i=1}^{2} \frac{\partial}{\partial x_i}\!\left( \sum_{j=1}^{2} \frac{\partial^2 g}{\partial\zeta_i\partial\zeta_j}\,\frac{\partial v_\varepsilon}{\partial x_j} \right) + \sum_{l=1}^{2}\sum_{i=1}^{2}\sum_{j=1}^{2} \frac{\partial^2 g}{\partial\zeta_i\partial\zeta_j}\,\frac{\partial^2 u_\varepsilon}{\partial x_l\partial x_j}\,\frac{\partial^2 u_\varepsilon}{\partial x_l\partial x_i} + h''(u_\varepsilon)\,v_\varepsilon = -\sum_{l=1}^{2} \frac{\partial p}{\partial x_l}\,\frac{\partial u_\varepsilon}{\partial x_l}.$
The equation $(B_\varepsilon)$ allows us to obtain the estimates on $v_\varepsilon$ by using judiciously selected test functions and the complicated but classical techniques of Ladyzenskaya and Uralceva [17]. To conclude, we remark that the assumption (A.5) was not given by Temam, whose assumptions on the dependence of the integrand on $v$ do not allow us to apply his result directly. To overcome this difficulty, we have assumed (A.5); then the term $h''(u_\varepsilon)v_\varepsilon$ in $(B_\varepsilon)$ is nonnegative, which allows us to obtain all the a priori estimates proved by Temam.
Appendix B. In the previous sections we studied the problem of image reconstruction when the operator $R = I$ (corresponding to a denoising problem). If $R \ne I$ (generally a convolution operator), the existence results of section 4 remain true if $R$ satisfies the following hypotheses:
(1) $R$ is a continuous and linear operator on $L^2(\Omega)$;
(2) $R$ does not annihilate constant functions.
For the uniqueness results of section 4, we must suppose in addition that
(3) $R$ is injective.
We do not reproduce the proofs; instead, we leave it to the readers to convince themselves.
Acknowledgments. We would like to thank the referees for their useful remarks on the first version of the manuscript.
REFERENCES

[1] L. ALVAREZ AND L. MAZORRA, Signal and image restoration using shock filters and anisotropic diffusion, SIAM J. Numer. Anal., 31 (1994), pp. 590-605.
[2] F. CATTÉ, P. L. LIONS, J. M. MOREL, AND T. COLL, Image selective smoothing and edge detection by nonlinear diffusion, SIAM J. Numer. Anal., 29 (1992), pp. 182-193.
[3] A. CHAMBOLLE AND P. L. LIONS, Image recovery via total variation minimization and related problems, Numer. Math., 76 (1997), pp. 167-188.
[4] P. CHARBONNIER, Reconstruction d'image: Régularisation avec prise en compte des discontinuités, Ph.D. thesis, University of Nice-Sophia Antipolis, 1994.
[5] P. CHARBONNIER, G. AUBERT, L. BLANC-FÉRAUD, AND M. BARLAUD, Two deterministic half-quadratic regularization algorithms for computed imaging, in Proc. First IEEE Internat. Conf. on Image Processing, Vol. II, Austin, TX, IEEE, Piscataway, NJ, 1994, pp. 168-172.
[6] P. CHARBONNIER, L. BLANC-FÉRAUD, G. AUBERT, AND M. BARLAUD, Deterministic edge-preserving regularization in computed imaging, IEEE Trans. Image Processing, 6 (1997), pp. 298-311.
[7] P. G. CIARLET, Introduction à l'analyse numérique matricielle et à l'optimisation, Masson, Paris, 1990.
[8] M. CRANDALL, H. ISHII, AND P. L. LIONS, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc., 27 (1992), pp. 1-67.
[9] J. P. CROUZEIX, A relationship between the second derivatives of a convex function and of its conjugate, Math. Programming, 13 (1977), pp. 364-365.
[10] B. DACOROGNA, Direct Methods in the Calculus of Variations, Springer-Verlag, Berlin, 1989.
[11] F. DEMENGEL AND R. TEMAM, Convex functions of a measure and applications, Indiana Univ. Math. J., 33 (1984), pp. 673-709.
[12] I. EKELAND AND R. TEMAM, Analyse convexe et problèmes variationnels, Dunod-Gauthier-Villars, Paris, 1974.
[13] S. GEMAN AND D. GEMAN, Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images, IEEE Trans. Pattern Anal. Machine Intell., 6 (1984), pp. 721-741.
[14] D. GEMAN AND C. YANG, Nonlinear Image Recovery with Half-Quadratic Regularization and FFTs, preprint, 1993.
[15] D. GEMAN AND G. REYNOLDS, Constrained restoration and the recovery of discontinuities, IEEE Trans. Pattern Anal. Machine Intell., 14 (1992), pp. 367-383.
[16] D. GILBARG AND N. S. TRUDINGER, Elliptic Partial Differential Equations of Second Order, Springer-Verlag, Berlin, 1983.
[17] O. A. LADYZENSKAYA AND N. N. URALCEVA, Équations aux dérivées partielles de type elliptique, Dunod, Paris, 1968.
[18] L. LAZAROAIA-VESE, Variational Problems and P.D.E.'s in Image Analysis and Curve Evolution, Ph.D. thesis, University of Nice-Sophia Antipolis, 1996.
[19] J. L. LIONS AND E. MAGENES, Problèmes aux limites non homogènes, Dunod, Paris, 1968.
[20] P. PERONA AND J. MALIK, Scale-space and edge detection using anisotropic diffusion, IEEE Trans. Pattern Anal. Machine Intell., 12 (1990), pp. 629-639.
[21] R. T. ROCKAFELLAR, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[22] S. OSHER AND L. RUDIN, Total variation based image restoration with free local constraints, in Proc. IEEE Internat. Conf. on Image Processing, Vol. I, Austin, TX, IEEE, Piscataway, NJ, 1994, pp. 31-35.
[23] L. RUDIN, S. OSHER, AND E. FATEMI, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), pp. 259-268.
[24] G. STAMPACCHIA, Équations elliptiques du second ordre à coefficients discontinus, Presses de l'Université de Montréal, Montréal, 1966.
[25] R. TAHRAOUI, Régularité de la solution d'un problème variationnel, Ann. Inst. H. Poincaré Anal. Non Linéaire, 9 (1992), pp. 51-99.
[26] R. TEMAM, Solutions généralisées de certaines équations du type hypersurfaces minima, Arch. Rational Mech. Anal., 44 (1971), pp. 121-156.
[27] A. N. TIKHONOV AND V. Y. ARSENIN, Solutions of Ill-Posed Problems, Wiley, New York, 1977.
[28] A. VISINTIN, Strong convergence results related to strict convexity, Comm. Partial Differential Equations, 9 (1984), pp. 439-466.
[29] C. R. VOGEL AND M. E. OMAN, Iterative Methods for Total Variation Denoising, preprint, 1994; submitted to IEEE Trans. Image Processing.
