
Math in Kunisch and Zou

The Morozov Discrepancy Principle


Consider the inverse problem:
Tf = z
where T is a bounded linear operator mapping the parameter space X to the observation space Y, and z is the data, which may be noisy. When the data contains noise, it is denoted z^\delta.
Since T has no continuous inverse, we convert the inverse problem to its weak form and then apply finite element methods to obtain a well-posed formulation. This means posing it as an output least squares problem:

\min_{f \in X} J(f) = \frac{1}{2}\|Tf - z^\delta\|_Y^2 + \frac{\alpha}{2}\|f\|_X^2
Where:
1. \alpha > 0 is the regularization parameter we will be seeking to optimize in this paper.
2. X and Y are Hilbert spaces.
3. \|\cdot\|_Y, \|\cdot\|_X are the norms and (\cdot,\cdot)_X, (\cdot,\cdot)_Y the inner products on the aforementioned Hilbert spaces.
Finally, T^* is the adjoint operator mapping the observation space Y to the parameter space X, i.e.:

T^* : Y \to X

Using this adjoint operator together with the regularization parameter \alpha, we can find a unique solution f(\alpha) of the system:

T^*Tf + \alpha f = T^*z^\delta

If we put this into the variational form: (T^*Tf, g)_X + \alpha(f, g)_X = (T^*z^\delta, g)_X for all g \in X.


An identity for the adjoint operator is that:

(A^*x, y)_X = (x, Ay)_Y

Then in variational form: (Tf, Tg)_Y + \alpha(f, g)_X = (z^\delta, Tg)_Y.
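As a concrete illustration of the regularized normal equation above, here is a minimal sketch in a finite-dimensional setting, where T is a matrix and the X and Y inner products are the Euclidean ones. The names tikhonov_solve, T, z_delta, and alpha are our own illustrative choices, not taken from the paper.

import numpy as np

def tikhonov_solve(T, z_delta, alpha):
    # Solve the regularized normal equation (T^T T + alpha I) f = T^T z_delta,
    # the discrete analogue of T*T f + alpha f = T* z^delta.
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ z_delta)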

Morozov's Principle:
Morozov's principle states that the regularization parameter should be chosen so that the residual of the regularized solution equals the error in the observation data. In notation this is:

\|Tf(\alpha) - z^\delta\|_Y^2 = \delta^2, \quad \text{where } \delta^2 = \|z - z^\delta\|_Y^2

Additionally, there is also the damped Morozov principle:

\|Tf(\alpha) - z^\delta\|_Y^2 + \alpha^\gamma\|f(\alpha)\|_X^2 = \delta^2

In this case, \gamma \in [1, \infty]. In the case \gamma = \infty, \alpha^\gamma = 0 (for \alpha \in (0, 1)), and the damped Morozov principle becomes undamped.
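A hedged sketch of how one might evaluate the left-hand side of the (damped) discrepancy equation numerically, reusing the tikhonov_solve helper above; morozov_lhs and its arguments are illustrative names of our own.

def morozov_lhs(T, z_delta, alpha, gamma=np.inf):
    # Left-hand side of the (damped) Morozov equation for a given alpha;
    # gamma = inf recovers the undamped principle (alpha**gamma treated as 0).
    f = tikhonov_solve(T, z_delta, alpha)
    residual = np.linalg.norm(T @ f - z_delta) ** 2
    damping = 0.0 if np.isinf(gamma) else alpha ** gamma * np.linalg.norm(f) ** 2
    return residual + damping  # compare against delta**2 = ||z - z_delta||**2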
Let F(\alpha) denote the output least squares functional evaluated at the regularized solution f(\alpha):

F(\alpha) = J(f(\alpha), \alpha) = \frac{1}{2}\|Tf(\alpha) - z^\delta\|_Y^2 + \frac{\alpha}{2}\|f(\alpha)\|_X^2

We can find the derivative of F(\alpha) with respect to \alpha.

First, rewrite F(\alpha) in terms of inner products:

F(\alpha) = \frac{1}{2}(Tf(\alpha) - z^\delta, Tf(\alpha) - z^\delta)_Y + \frac{\alpha}{2}(f(\alpha), f(\alpha))_X

We know that \frac{d}{dx}(g(x), h(x)) = \left(\frac{dg(x)}{dx}, h(x)\right) + \left(g(x), \frac{dh(x)}{dx}\right), and that (g(x), h(x)) = (h(x), g(x)).

Then:

F'(\alpha) = \frac{d}{d\alpha}\left[\frac{1}{2}(Tf(\alpha) - z^\delta, Tf(\alpha) - z^\delta)_Y + \frac{\alpha}{2}(f(\alpha), f(\alpha))_X\right]
= \frac{1}{2}(Tf(\alpha) - z^\delta, Tf'(\alpha))_Y + \frac{1}{2}(Tf'(\alpha), Tf(\alpha) - z^\delta)_Y + \frac{\alpha}{2}(f'(\alpha), f(\alpha))_X + \frac{\alpha}{2}(f(\alpha), f'(\alpha))_X + \frac{1}{2}(f(\alpha), f(\alpha))_X
= (Tf(\alpha) - z^\delta, Tf'(\alpha))_Y + \alpha(f(\alpha), f'(\alpha))_X + \frac{1}{2}(f(\alpha), f(\alpha))_X

Taking g = f'(\alpha) in the variational form (Tf(\alpha), Tg)_Y + \alpha(f(\alpha), g)_X = (z^\delta, Tg)_Y shows that (Tf(\alpha) - z^\delta, Tf'(\alpha))_Y + \alpha(f(\alpha), f'(\alpha))_X = 0, so the first two terms vanish and

F'(\alpha) = \frac{1}{2}(f(\alpha), f(\alpha))_X


We can find the second derivative F''(\alpha) through the same set of operations:

F''(\alpha) = \frac{1}{2}(f(\alpha), f'(\alpha))_X + \frac{1}{2}(f'(\alpha), f(\alpha))_X = (f(\alpha), f'(\alpha))_X
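The identity F'(\alpha) = \frac{1}{2}\|f(\alpha)\|_X^2 is easy to sanity-check numerically. A small sketch using the discrete helper above; the function name and step size are our own choices.

def check_F_prime(T, z_delta, alpha, h=1e-6):
    # Compare F'(alpha) = 0.5*||f(alpha)||^2 against a central difference of F.
    def F(a):
        f = tikhonov_solve(T, z_delta, a)
        return 0.5 * np.linalg.norm(T @ f - z_delta) ** 2 + 0.5 * a * np.linalg.norm(f) ** 2
    finite_diff = (F(alpha + h) - F(alpha - h)) / (2 * h)
    analytic = 0.5 * np.linalg.norm(tikhonov_solve(T, z_delta, alpha)) ** 2
    return finite_diff, analytic  # the two values should agree up to O(h^2)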
The above results mean that the damped Morozov equation

\|Tf(\alpha) - z^\delta\|_Y^2 + \alpha^\gamma\|f(\alpha)\|_X^2 = \delta^2

can be written, after dividing by 2 and using F'(\alpha) = \frac{1}{2}\|f(\alpha)\|_X^2, as

F(\alpha) + \alpha^\gamma F'(\alpha) - \alpha F'(\alpha) = F(\alpha) + (\alpha^\gamma - \alpha)F'(\alpha) = \frac{1}{2}\delta^2

Showing that the Observation Error is bounded by F(\alpha)

Before moving on, we need to show that this equation has a unique solution \alpha on the interval 0 < \alpha \leq 1.
First, let G(\alpha) = F(\alpha) + (\alpha^\gamma - \alpha)F'(\alpha) - \frac{1}{2}\delta^2. Taking the derivative of this with respect to \alpha, we get:

G'(\alpha) = F'(\alpha) + \gamma\alpha^{\gamma-1}F'(\alpha) - F'(\alpha) + (\alpha^\gamma - \alpha)F''(\alpha)
G'(\alpha) = \gamma\alpha^{\gamma-1}F'(\alpha) + (\alpha^\gamma - \alpha)F''(\alpha)
= \frac{\gamma\alpha^{\gamma-1}}{2}(f(\alpha), f(\alpha))_X + (\alpha^\gamma - \alpha)(f(\alpha), f'(\alpha))_X

If F''(\alpha) < 0 then G'(\alpha) > 0 on (0, 1), since \alpha^\gamma - \alpha \leq 0 and F'(\alpha) \geq 0 there, so G is strictly increasing. Therefore a unique root of G exists in (0, 1] provided F(0) - \frac{1}{2}\delta^2 < 0 \leq F(1) - \frac{1}{2}\delta^2, i.e. G(0) < 0 \leq G(1).
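In the discrete setting this bracketing condition can be checked before starting any iteration. A sketch under the same assumptions as the earlier snippets; bracketing_holds and eps are illustrative, with \alpha = 0 approximated by a tiny eps since the unregularized solve may be ill-conditioned.

def bracketing_holds(T, z_delta, delta, eps=1e-12):
    # Check F(0) - 0.5*delta**2 < 0 <= F(1) - 0.5*delta**2.
    def F(a):
        f = tikhonov_solve(T, z_delta, a)
        return 0.5 * np.linalg.norm(T @ f - z_delta) ** 2 + 0.5 * a * np.linalg.norm(f) ** 2
    return F(eps) - 0.5 * delta ** 2 < 0 <= F(1.0) - 0.5 * delta ** 2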

Optimizing the Regularization parameter


Now that we have the function G(\alpha) and its derivative, we have rephrased the Morozov principle

\|Tf(\alpha) - z^\delta\|_Y^2 = \delta^2

as a root finding problem, G(\alpha) = 0. Solving approximately for the optimal regularization parameter can then be done by employing Newton's method:

\alpha_{k+1} = \alpha_k - \frac{G(\alpha_k)}{G'(\alpha_k)}

Solving for f'(\alpha), which is needed to evaluate G'(\alpha), requires an extra equation to be solved. Instead we use a quasi-Newton method:

\alpha_{k+1} = \alpha_k - \frac{2G(\alpha_k)}{\gamma\alpha_k^{\gamma-1}(f(\alpha_k), f(\alpha_k))_X + 2(\alpha_k^\gamma - \alpha_k)(f(\alpha_k), \tilde{f}_k(\alpha_k, \alpha_{k-1}))_X}

Where the difference quotient

\tilde{f}_k(\alpha_k, \alpha_{k-1}) = \frac{f(\alpha_k) - f(\alpha_{k-1})}{\alpha_k - \alpha_{k-1}}

replaces f'(\alpha_k) in G'(\alpha_k).
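A minimal sketch of this quasi-Newton iteration in the same discrete setting as the earlier snippets. The function name quasi_newton_alpha, the two starting values alpha0 and alpha1, the tolerance, and the iteration cap are our own assumptions, not prescribed by Kunisch and Zou.

def quasi_newton_alpha(T, z_delta, delta, gamma, alpha0, alpha1, tol=1e-10, max_iter=50):
    # Approximate the root of G(alpha) = F(alpha) + (alpha**gamma - alpha)*F'(alpha) - 0.5*delta**2,
    # using a secant estimate of f'(alpha) in place of the exact derivative.
    f_prev, a_prev, a = tikhonov_solve(T, z_delta, alpha0), alpha0, alpha1
    for _ in range(max_iter):
        f = tikhonov_solve(T, z_delta, a)
        Fp = 0.5 * np.dot(f, f)                                  # F'(alpha) = 0.5*||f(alpha)||^2
        F = 0.5 * np.linalg.norm(T @ f - z_delta) ** 2 + a * Fp  # F(alpha)
        G = F + (a ** gamma - a) * Fp - 0.5 * delta ** 2
        df = (f - f_prev) / (a - a_prev)                         # secant approximation of f'(alpha)
        Gp = 0.5 * gamma * a ** (gamma - 1) * np.dot(f, f) + (a ** gamma - a) * np.dot(f, df)
        a_prev, f_prev = a, f
        a = a - G / Gp
        if abs(a - a_prev) < tol:
            break
    return a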

Calculating the Parameters:
