
The Annals of Applied Probability
2005, Vol. 15, No. 1A, 254–278
DOI 10.1214/105051604000000602
© Institute of Mathematical Statistics, 2005
ON THE DISTRIBUTION OF THE MAXIMUM OF A GAUSSIAN FIELD WITH d PARAMETERS¹

BY JEAN-MARC AZAÏS AND MARIO WSCHEBOR
Université Paul Sabatier and Universidad de la República
Let $I$ be a compact $d$-dimensional manifold, let $X : I \to \mathbb{R}$ be a Gaussian process with regular paths and let $F_I(u)$, $u \in \mathbb{R}$, be the probability distribution function of $\sup_{t \in I} X(t)$.
We prove that under certain regularity and nondegeneracy conditions, $F_I$ is a $C^1$-function and satisfies a certain implicit equation that permits one to give bounds for its values and to compute its asymptotic behavior as $u \to +\infty$. This is a partial extension of previous results by the authors in the case $d = 1$.
Our methods rely strongly on the so-called Rice formulae for the moments of the number of roots of an equation of the form $Z(t) = x$, where $Z : I \to \mathbb{R}^d$ is a random field and $x$ is a fixed point in $\mathbb{R}^d$. We also give proofs for this kind of formulae, which have their own interest beyond the present application.
1. Introduction and notation. Let $I$ be a $d$-dimensional compact manifold and let $X : I \to \mathbb{R}$ be a Gaussian process with regular paths defined on some probability space $(\Omega, \mathcal{A}, P)$. Define $M_I = \sup_{t \in I} X(t)$ and $F_I(u) = P\{M_I \le u\}$, $u \in \mathbb{R}$, the probability distribution function of the random variable $M_I$. Our aim is to study the regularity of the function $F_I$ when $d > 1$.
A certain number of general results on this subject already exist, starting from the papers by Ylvisaker (1968) and Tsirelson (1975) [see also Weber (1985), Lifshits (1995), Diebolt and Posse (1996) and references therein]. The main purpose of this paper is to extend to $d > 1$ some of the results on the regularity of the function $u \mapsto F_I(u)$ in Azaïs and Wschebor (2001), which concern the case $d = 1$.
Our main tool here is the Rice formula for the moments of the number of roots $N_u^Z(I)$ of the equation $Z(t) = u$ on the set $I$, where $\{Z(t) : t \in I\}$ is an $\mathbb{R}^d$-valued Gaussian field, $I$ is a subset of $\mathbb{R}^d$ and $u$ is a given point in $\mathbb{R}^d$. For $d > 1$, even though it has been used in various contexts, as far as the authors know, a full proof of the Rice formula for the moments of $N_u^Z(I)$ seems to have only been published by Adler (1981) for the first moment of the number of critical points of a real-valued stationary Gaussian process with a $d$-dimensional parameter, and extended by Azaïs and Delmas (2002) to the case of processes with constant variance.
Received January 2003; revised November 2003.
¹Supported by ECOS program U97E02.
AMS 2000 subject classifications. 60G15, 60G70.
Key words and phrases. Gaussian fields, Rice formula, regularity of the distribution of the maximum.
Cabaña (1985) contains related formulae for random fields; see also the Ph.D. thesis of Konakov cited by Piterbarg (1996b). In the next section we give a more general result whose interest goes beyond the application in the present paper. At the same time, the proof appears to be simpler than previous ones. We have also included the proof of the formula for higher moments, which in fact follows easily from the first moment. Both extend with no difficulty to certain classes of non-Gaussian processes.
It should be pointed out that the validity of the Rice formula for Lebesgue-almost every $u \in \mathbb{R}^d$ is easy to prove [Brillinger (1972)], but this is insufficient for a certain number of standard applications. For example, assume $X : I \to \mathbb{R}$ is a real-valued random process and one wants to compute the moments of the number of critical points of $X$. Then one must take for $Z$ the random field $Z(t) = X'(t)$, and the formula one needs is for the precise value $u = 0$, so that a formula valid only for almost every $u$ does not solve the problem.
We have added the Rice formula for processes defined on smooth manifolds. Even though the Rice formula is local, this is convenient for various applications. We will need a formula of this sort to state and prove the implicit formulae for the derivatives of the distribution of the maximum (see Section 3).
The results on the differentiation of $F_I$ are partial extensions of Azaïs and Wschebor (2001). Here we have only considered the first derivative $F_I'(u)$. In fact, one can push our procedure one step further and prove the existence of $F_I''(u)$ as well as some implicit formula for it. But we have not included this in the present paper, since the formulae become very complicated and it is unclear at present whether the actual computations can be performed, even in simple examples. The technical reason for this is that, following the present method, to compute $F_I''(u)$ one needs to differentiate expressions that contain the "helix process" that we introduce in Section 4, which contains singularities with unpleasant behavior [see Azaïs and Wschebor (2002)].
For Gaussian fields defined on a $d$-dimensional regular manifold ($d > 1$) and possessing regular paths, we obtain some improvements with respect to classical and general results due to Tsirelson (1975) for Gaussian sequences. An example is Corollary 5.1, which provides an asymptotic formula for $F_I'(u)$ as $u \to +\infty$ that is explicit in terms of the covariance of the process; it can be compared with Theorem 4 in Tsirelson (1975), where an implicit expression depending on the function $F$ itself is given.
We use the following notation:
If $Z$ is a smooth function $U \to \mathbb{R}^{d'}$, $U$ a subset of $\mathbb{R}^d$, its successive derivatives are denoted $Z', Z'', \ldots, Z^{(k)}$ and considered, respectively, as linear, bilinear, \ldots, $k$-linear forms on $\mathbb{R}^d$. For example,
$$X^{(3)}(t)[v_1, v_2, v_3] = \sum_{i,j,k=1}^{d} \frac{\partial^3 X(t)}{\partial t_i\,\partial t_j\,\partial t_k}\, v_1^i v_2^j v_3^k.$$
The same notation is used for a derivative on a $C^\infty$ manifold.
$\mathring{I}$, $\partial I$ and $\bar{I}$ are, respectively, the interior, the boundary and the closure of the set $I$. If $\xi$ is a random vector with values in $\mathbb{R}^d$, whenever they exist, we denote by $p_\xi(x)$ the value of the density of $\xi$ at the point $x$, by $E(\xi)$ its expectation and by $\operatorname{Var}(\xi)$ its variance–covariance matrix. $\lambda$ is Lebesgue measure. If $u, v$ are points in $\mathbb{R}^d$, $\langle u, v \rangle$ denotes their usual scalar product and $\|u\|$ the Euclidean norm of $u$. For $M$ a $d \times d$ real matrix, we denote $\|M\| = \sup_{\|x\|=1} \|Mx\|$.
Also, for symmetric $M$, $M \succ 0$ (resp. $M \prec 0$) denotes that $M$ is positive definite (resp. negative definite). $A^c$ denotes the complement of the set $A$. For real $x$, $x^+ = \sup(x, 0)$, $x^- = \sup(-x, 0)$.
2. Rice formulae. Our main results in this section are the following:

THEOREM 2.1. Let $Z : I \to \mathbb{R}^d$, $I$ a compact subset of $\mathbb{R}^d$, be a random field and $u \in \mathbb{R}^d$. Assume that:
(A0) $Z$ is Gaussian.
(A1) $t \mapsto Z(t)$ is a.s. of class $C^1$.
(A2) For each $t \in I$, $Z(t)$ has a nondegenerate distribution [i.e., $\operatorname{Var}(Z(t)) \succ 0$].
(A3) $P\{\exists t \in I,\ Z(t) = u,\ \det(Z'(t)) = 0\} = 0$.
(A4) $\lambda(\partial I) = 0$.
Then
$$E\big(N_u^Z(I)\big) = \int_I E\big(\big|\det\big(Z'(t)\big)\big| \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt, \tag{1}$$
and both members are finite.
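As a sanity check (ours, not part of the paper), for $d = 1$ formula (1) can be verified by simulation on the toy stationary process $X(t) = \xi_1 \cos t + \xi_2 \sin t$ with $\xi_1, \xi_2$ independent standard normals: $X(t) = R\cos(t - \theta)$ with $R$ Rayleigh, so the number of solutions of $X(t) = u$ on $[0, 2\pi)$ is $2 \cdot \mathbf{1}_{R > |u|}$ and the right-hand side of (1) evaluates to $2e^{-u^2/2}$. A plain-Python sketch comparing this with a Monte Carlo estimate:

```python
import math, random

def crossings(u, xi1, xi2, n=400):
    # number of solutions of X(t) = u on [0, 2*pi) for X(t) = xi1*cos t + xi2*sin t,
    # counted via sign changes on a grid; the path is a shifted cosine, so the
    # exact count is 2 whenever sqrt(xi1^2 + xi2^2) > |u|
    count = 0
    prev = xi1 - u  # X(0) - u
    for k in range(1, n + 1):
        t = 2 * math.pi * k / n
        cur = xi1 * math.cos(t) + xi2 * math.sin(t) - u
        if (prev < 0) != (cur < 0):
            count += 1
        prev = cur
    return count

u = 1.0
rng = random.Random(0)
reps = 4000
mc = sum(crossings(u, rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(reps)) / reps
rice = 2 * math.exp(-u * u / 2)  # value of the right-hand side of (1) for this process
print(mc, rice)
```

The two numbers agree up to Monte Carlo error, as (1) predicts.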
THEOREM 2.2. Let $k$, $k \ge 2$, be an integer. Assume the same hypotheses as in Theorem 2.1, except for (A2), which is replaced by:
(A'2) for $t_1, \ldots, t_k \in I$ pairwise different values of the parameter, the distribution of $(Z(t_1), \ldots, Z(t_k))$ does not degenerate in $(\mathbb{R}^d)^k$.
Then
$$E\Big(N_u^Z(I)\big(N_u^Z(I) - 1\big) \cdots \big(N_u^Z(I) - k + 1\big)\Big) = \int_{I^k} E\Bigg(\prod_{j=1}^{k} \big|\det\big(Z'(t_j)\big)\big| \,/\, Z(t_1) = \cdots = Z(t_k) = u\Bigg)\, p_{Z(t_1),\ldots,Z(t_k)}(u, \ldots, u)\, dt_1 \cdots dt_k, \tag{2}$$
where both members may be infinite.
REMARK. Note that Theorem 2.1 (resp. Theorem 2.2) remains valid if one replaces $I$ by $\mathring{I}$ in (1) or (2), provided hypotheses (A0)–(A2) [resp. (A'2)] and (A3) are verified. This follows immediately from the above statements. A standard extension argument shows that (1) holds true if one replaces $I$ by any Borel subset of $\mathring{I}$.
Sufficient conditions for hypothesis (A3) to hold are given by the next proposition. Under condition (a) the result is proved in Lemma 5 of Cucker and Wschebor (2003). Under condition (b) the proof is straightforward.

PROPOSITION 2.1. Let $Z : I \to \mathbb{R}^d$, $I$ a compact subset of $\mathbb{R}^d$, be a random field with paths of class $C^1$ and $u \in \mathbb{R}^d$. Assume that:
(i) $p_{Z(t)}(x) \le C$ for all $t \in I$ and $x$ in some neighborhood of $u$.
(ii) At least one of the two following hypotheses is satisfied:
(a) a.s. $t \mapsto Z(t)$ is of class $C^2$;
(b) $\alpha(\varepsilon) = \sup_{t \in I,\, x \in V(u)} P\{|\det(Z'(t))| < \varepsilon \,/\, Z(t) = x\} \to 0$ as $\varepsilon \to 0$, where $V(u)$ is some neighborhood of $u$.
Then (A3) holds true.
The following lemma is easy to prove.

LEMMA 2.1. With the notation of Theorem 2.1, suppose that (A1) and (A4) hold true and that $p_{Z(t)}(x) \le C$ for all $t \in I$ and $x$ in some neighborhood of $u$. Then $P\{N_u^Z(\partial I) \ne 0\} = 0$.
LEMMA 2.2. Let $Z : I \to \mathbb{R}^d$, $I$ a compact subset of $\mathbb{R}^d$, be a $C^1$ function and $u$ a point in $\mathbb{R}^d$. Assume that:
(a) $\inf_{t \in Z^{-1}(\{u\})} \big(\lambda_{\min}(Z'(t))\big) \ge \Delta > 0$,
(b) $\omega_{Z'}(\eta) < \Delta/d$,
where $\omega_{Z'}$ is the continuity modulus of $Z'$, defined as the maximum of the continuity moduli of its entries, $\lambda_{\min}(M)$ is the square root of the smallest eigenvalue of $M^T M$, and $\eta$ is a positive number.
Then, if $t_1, t_2$ are two distinct roots of the equation $Z(t) = u$ such that the segment $[t_1, t_2]$ is contained in $I$, the Euclidean distance between $t_1$ and $t_2$ is greater than $\eta$.
PROOF. Set $\tau = \|t_1 - t_2\|$, $v = (t_1 - t_2)/\|t_1 - t_2\|$. Using the mean value theorem, for $i = 1, \ldots, d$ there exists $\xi_i \in [t_1, t_2]$ such that $(Z'(\xi_i)v)_i = 0$. Thus
$$\big|\big(Z'(t_1)v\big)_i\big| \le \big|\big(Z'(t_1)v\big)_i - \big(Z'(\xi_i)v\big)_i\big| \le \sum_{k=1}^{d} \big|Z'(t_1)_{ik} - Z'(\xi_i)_{ik}\big|\, |v_k| \le \omega_{Z'}(\tau) \sum_{k=1}^{d} |v_k| \le \omega_{Z'}(\tau)\,\sqrt{d}.$$
In conclusion, $\Delta \le \lambda_{\min}(Z'(t_1)) \le \|Z'(t_1)v\| \le \omega_{Z'}(\tau)\, d$, which implies $\tau > \eta$. □
PROOF OF THEOREM 2.1. Consider a continuous nondecreasing function $F$ such that $F(x) = 0$ for $x \le 1/2$ and $F(x) = 1$ for $x \ge 1$. Let $\delta$ and $\beta$ be positive real numbers. Define the random function
$$\alpha_{\delta,\beta}(u) = F\bigg(\frac{1}{2\delta} \inf_{s \in I} \big(\lambda_{\min}(Z'(s)) + \|Z(s) - u\|\big)\bigg)\bigg[1 - F\bigg(\frac{d}{\delta}\,\omega_{Z'}(\beta)\bigg)\bigg] \tag{3}$$
and the set $I_{-\beta} = \{t \in I : \|t - s\| \ge \beta \text{ for every } s \notin I\}$. If $\alpha_{\delta,\beta}(u) > 0$ and $N_u^Z(I_{-\beta})$ does not vanish, conditions (a) and (b) in Lemma 2.2 are satisfied. Hence, in each ball with diameter $\beta/2$ centered at a point in $I_{-\beta}$, there is at most one root of the equation $Z(t) = u$, and a compactness argument shows that $N_u^Z(I_{-\beta})$ is bounded by a constant $C(\beta, I)$, depending only on $\beta$ and on the set $I$.
Take now any real-valued nonrandom continuous function $f : \mathbb{R}^d \to \mathbb{R}$ with compact support. By the coarea formula [Federer (1969), Theorem 3.2.3], since a.s. $Z$ is Lipschitz and $\alpha_{\delta,\beta}(u)\, f(u)$ is integrable,
$$\int_{\mathbb{R}^d} f(u)\, N_u^Z(I_{-\beta})\, \alpha_{\delta,\beta}(u)\, du = \int_{I_{-\beta}} \big|\det\big(Z'(t)\big)\big|\, f(Z(t))\, \alpha_{\delta,\beta}(Z(t))\, dt.$$
Taking expectations on both sides,
$$\int_{\mathbb{R}^d} f(u)\, E\big(N_u^Z(I_{-\beta})\, \alpha_{\delta,\beta}(u)\big)\, du = \int_{\mathbb{R}^d} f(u)\, du \int_{I_{-\beta}} E\big(\big|\det\big(Z'(t)\big)\big|\, \alpha_{\delta,\beta}(u) \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt.$$
It follows that the two functions
(i) $E\big(N_u^Z(I_{-\beta})\, \alpha_{\delta,\beta}(u)\big)$,
(ii) $\int_{I_{-\beta}} E\big(|\det(Z'(t))|\, \alpha_{\delta,\beta}(u) \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt$,
coincide Lebesgue-almost everywhere as functions of $u$.
Let us prove that both functions are continuous; hence they are equal for every $u \in \mathbb{R}^d$.
Fix $u = u_0$ and let us show that the function in (i) is continuous at $u = u_0$. Consider the random variable inside the expectation sign in (i). Almost surely, there is no point $t$ in $Z^{-1}(\{u_0\})$ such that $\det(Z'(t)) = 0$. By the local inversion theorem, $Z(\cdot)$ is invertible in some neighborhood of each point belonging to $Z^{-1}(\{u_0\})$, and the distance from $Z(t)$ to $u_0$ is bounded below by a positive number for $t \in I_{-\beta}$ outside of the union of these neighborhoods. This implies that, a.s., as a function of $u$, $N_u^Z(I_{-\beta})$ is constant in some (random) neighborhood of $u_0$. On the other hand, it is clear from its definition that the function $u \mapsto \alpha_{\delta,\beta}(u)$ is continuous and bounded. We may now apply dominated convergence as $u \to u_0$, since $N_u^Z(I_{-\beta})\, \alpha_{\delta,\beta}(u)$ is bounded by a constant that does not depend on $u$.
For the continuity of (ii), it is enough to prove that, for each $t \in I$, the conditional expectation in the integrand is a continuous function of $u$. Note that the random variable $|\det(Z'(t))|\, \alpha_{\delta,\beta}(u)$ is a functional defined on $\{(Z(s), Z'(s)) : s \in I\}$. Perform a Gaussian regression of $(Z(s), Z'(s)) : s \in I$ with respect to the random variable $Z(t)$, that is, write
$$Z(s) = Y^t(s) + \alpha^t(s) Z(t),$$
$$Z_j'(s) = Y_j^t(s) + \alpha_j^t(s) Z(t), \qquad j = 1, \ldots, d,$$
where $Z_j'(s)$, $j = 1, \ldots, d$, denote the columns of $Z'(s)$, $Y^t(s)$ and $Y_j^t(s)$ are Gaussian vectors, independent of $Z(t)$ for each $s \in I$, and the regression matrices $\alpha^t(s)$, $\alpha_j^t(s)$, $j = 1, \ldots, d$, are continuous functions of $s, t$ [take into account (A2)]. Replacing in the conditional expectation, we are now able to get rid of the conditioning, and using the fact that the moments of the supremum of an a.s. bounded Gaussian process are finite, the continuity in $u$ follows by dominated convergence.
So, now we fix $u \in \mathbb{R}^d$ and let $\beta \downarrow 0$, $\delta \downarrow 0$ in that order, both in (i) and (ii). For (i) one can use Beppo Levi's theorem. Note that almost surely $N_u^Z(I_{-\beta}) \uparrow N_u^Z(\mathring{I}) = N_u^Z(I)$, where the last equality follows from Lemma 2.1. On the other hand, the same Lemma 2.1 plus (A3) imply together that, almost surely,
$$\inf_{s \in I} \big(\lambda_{\min}(Z'(s)) + \|Z(s) - u\|\big) > 0,$$
so that the first factor in the right-hand side of (3) increases to 1 as $\delta$ decreases to zero. Hence, by Beppo Levi's theorem,
$$\lim_{\delta \downarrow 0}\, \lim_{\beta \downarrow 0}\, E\big(N_u^Z(I_{-\beta})\, \alpha_{\delta,\beta}(u)\big) = E\big(N_u^Z(I)\big).$$
For (ii), one can proceed in a similar way after deconditioning, obtaining (1). To finish the proof, remark that standard Gaussian calculations show the finiteness of the right-hand side of (1). □
PROOF OF THEOREM 2.2. For each $\beta > 0$, define the domain
$$D_{k,\beta}(I) = \{(t_1, \ldots, t_k) \in I^k : \|t_i - t_j\| \ge \beta \text{ if } i \ne j,\ i, j = 1, \ldots, k\}$$
and the process $\widetilde{Z}$,
$$(t_1, \ldots, t_k) \in D_{k,\beta}(I), \qquad \widetilde{Z}(t_1, \ldots, t_k) = \big(Z(t_1), \ldots, Z(t_k)\big).$$
It is clear that $\widetilde{Z}$ satisfies the hypotheses of Theorem 2.1 for every value $(u, \ldots, u) \in (\mathbb{R}^d)^k$. So,
$$E\Big(N_{(u,\ldots,u)}^{\widetilde{Z}}\big(D_{k,\beta}(I)\big)\Big) = \int_{D_{k,\beta}(I)} E\Bigg(\prod_{j=1}^{k} \big|\det\big(Z'(t_j)\big)\big| \,/\, Z(t_1) = \cdots = Z(t_k) = u\Bigg)\, p_{Z(t_1),\ldots,Z(t_k)}(u, \ldots, u)\, dt_1 \cdots dt_k. \tag{4}$$
To finish, let $\beta \downarrow 0$, note that $N_u^Z(I)\big(N_u^Z(I) - 1\big) \cdots \big(N_u^Z(I) - k + 1\big)$ is the monotone limit of $N_{(u,\ldots,u)}^{\widetilde{Z}}\big(D_{k,\beta}(I)\big)$, and that the diagonal $D_k(I) = \{(t_1, \ldots, t_k) \in I^k : t_i = t_j \text{ for some pair } i, j,\ i \ne j\}$ has zero Lebesgue measure in $(\mathbb{R}^d)^k$. □
REMARK. Even though we will not use this in the present paper, we point out that it is easy to adapt the proofs of Theorems 2.1 and 2.2 to certain classes of non-Gaussian processes. For example, the statement of Theorem 2.1 remains valid if one replaces hypotheses (A0) and (A2), respectively, by the following (B0) and (B2):
(B0) $Z(t) = H(Y(t))$ for $t \in I$, where $Y : I \to \mathbb{R}^n$ is a Gaussian process with $C^1$ paths such that for each $t \in I$, $Y(t)$ has a nondegenerate distribution, and $H : \mathbb{R}^n \to \mathbb{R}^d$ is a $C^1$ function.
(B2) For each $t \in I$, $Z(t)$ has a density $p_{Z(t)}$ which is continuous as a function of $(t, u)$.
Note that (B0) and (B2) together imply that $n \ge d$. The only change to be introduced in the proof of the theorem is in the continuity of (ii), where the regression is performed on $Y(t)$ instead of $Z(t)$. Similarly, the statement of Theorem 2.2 remains valid if we replace (A0) by (B0) and add the requirement that the joint density of $Z(t_1), \ldots, Z(t_k)$ be a continuous function of $t_1, \ldots, t_k, u$ for pairwise different $t_1, \ldots, t_k$.
Now consider a process $X$ from $I$ to $\mathbb{R}$ and define
$$M_{u,1}^X(I) = \#\{t \in I : X(\cdot) \text{ has a local maximum at the point } t,\ X(t) > u\},$$
$$M_{u,2}^X(I) = \#\{t \in I : X'(t) = 0,\ X(t) > u\}.$$
The problem of writing Rice formulae for the factorial moments of these random variables can be considered as a particular case of the previous one, and the proofs are the same, mutatis mutandis. For further use, we state as a theorem the Rice formula for the expectation. For brevity we do not state the equivalent of Theorem 2.2, which holds true similarly.
THEOREM 2.3. Let $X : I \to \mathbb{R}$, $I$ a compact subset of $\mathbb{R}^d$, be a random field. Let $u \in \mathbb{R}$ and define $M_{u,i}^X(I)$, $i = 1, 2$, as above. For each $d \times d$ real symmetric matrix $M$, we put
$$\delta_1(M) := |\det(M)|\, \mathbf{1}_{M \prec 0}, \qquad \delta_2(M) := |\det(M)|.$$
Assume:
(A0) $X$ is Gaussian;
(A'1) a.s. $t \mapsto X(t)$ is of class $C^2$;
(A'2) for each $t \in I$, $(X(t), X'(t))$ has a nondegenerate distribution in $\mathbb{R}^1 \times \mathbb{R}^d$;
(A'3) either a.s. $t \mapsto X(t)$ is of class $C^3$, or $\alpha(\varepsilon) = \sup_{t \in I,\, x' \in V(0)} P\big(|\det(X''(t))| < \varepsilon \,/\, X'(t) = x'\big) \to 0$ as $\varepsilon \to 0$, where $V(0)$ denotes some neighborhood of $0$;
(A4) $\partial I$ has zero Lebesgue measure.
Then, for $i = 1, 2$,
$$E\big(M_{u,i}^X(I)\big) = \int_u^{\infty} dx \int_I E\big(\delta_i\big(X''(t)\big) \,/\, X(t) = x,\ X'(t) = 0\big)\, p_{X(t),X'(t)}(x, 0)\, dt,$$
and both members are finite.
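To illustrate Theorem 2.3 numerically (a toy example of our own, not from the paper), take again $X(t) = \xi_1 \cos t + \xi_2 \sin t$ on the circle: $X = R\cos(t - \theta)$ has exactly one local maximum per period, of height $R$, so $E\big(M_{u,1}^X\big) = P(R > u) = e^{-u^2/2}$ for $u \ge 0$, which is what the Rice-type formula must return for this process.

```python
import math, random

def local_maxima_above(u, xi1, xi2, n=300):
    # count local maxima of X(t) = xi1*cos t + xi2*sin t on the circle with X > u;
    # X = R*cos(t - theta) has exactly one local maximum per period, of height R
    vals = [xi1 * math.cos(2 * math.pi * k / n) + xi2 * math.sin(2 * math.pi * k / n)
            for k in range(n)]
    return sum(1 for k in range(n)
               if vals[k] > u and vals[k] > vals[k - 1] and vals[k] > vals[(k + 1) % n])

u = 0.5
rng = random.Random(1)
reps = 3000
mc = sum(local_maxima_above(u, rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(reps)) / reps
exact = math.exp(-u * u / 2)  # P(R > u) for R Rayleigh
print(mc, exact)
```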
2.1. Processes defined on a smooth manifold. Let $U$ be a differentiable manifold (by differentiable we mean infinitely differentiable) of dimension $d$. We suppose that $U$ is orientable, in the sense that there exists a nonvanishing differentiable $d$-form on $U$. This is equivalent to assuming that there exists an atlas $\big((U_i, \phi_i);\ i \in I\big)$ such that for any pair of intersecting charts $(U_i, \phi_i)$, $(U_j, \phi_j)$, the Jacobian of the map $\phi_i \circ \phi_j^{-1}$ is positive.
We consider a Gaussian stochastic process with real values and $C^2$ paths, $X = \{X(t) : t \in U\}$, defined on the manifold $U$. In this section we first write Rice formulae for this kind of process without further hypotheses on $U$. When $U$ is equipped with a Riemannian metric, we give, without details or proof, a nicer form. Other forms also exist when $U$ is naturally embedded in a Euclidean space, but we do not need this in the sequel [see Azaïs and Wschebor (2002)].
We will assume that in every chart $X(t)$ and $DX(t)$ have a nondegenerate joint distribution and that hypothesis (A'3) is verified. For $S$ a Borel subset of $U$, the following quantities are well defined and measurable: $M_{u,1}^X(S)$, the number of local maxima, and $M_{u,2}^X(S)$, the number of critical points.
PROPOSITION 2.2. For $k = 1, 2$, the quantity which is expressed in every chart with coordinates $s_1, \ldots, s_d$ as
$$\int_u^{+\infty} dx\, E\big(\delta_k\big(Y''(s)\big) \,/\, Y(s) = x,\ Y'(s) = 0\big)\, p_{Y(s),Y'(s)}(x, 0) \prod_{i=1}^{d} ds_i, \tag{5}$$
where $Y(s)$ is the process $X$ written in the chart, $Y = X \circ \phi^{-1}$, defines a $d$-form $\Omega_k$ on $U$ and, for every Borel set $S \subset U$,
$$\int_S \Omega_k = E\big(M_{u,k}^X(S)\big).$$
PROOF. Note that a $d$-form is a measure on $U$ whose image in each chart is absolutely continuous with respect to Lebesgue measure $\prod_{i=1}^d ds_i$. To prove that (5) defines a $d$-form, it is sufficient to prove that its density with respect to $\prod_{i=1}^d ds_i$ satisfies locally the change-of-variable formula. Let $(U_1, \phi_1)$, $(U_2, \phi_2)$ be two intersecting charts and set
$$U_3 := U_1 \cap U_2; \qquad Y_1 := X \circ \phi_1^{-1}; \qquad Y_2 := X \circ \phi_2^{-1}; \qquad H := \phi_2 \circ \phi_1^{-1}.$$
Denote by $s_i^1$ and $s_i^2$, $i = 1, \ldots, d$, the coordinates in each chart. We have
$$\frac{\partial Y_1}{\partial s_i^1} = \sum_{i'} \frac{\partial Y_2}{\partial s_{i'}^2}\, \frac{\partial H_{i'}}{\partial s_i^1},$$
$$\frac{\partial^2 Y_1}{\partial s_i^1\, \partial s_j^1} = \sum_{i',j'} \frac{\partial^2 Y_2}{\partial s_{i'}^2\, \partial s_{j'}^2}\, \frac{\partial H_{i'}}{\partial s_i^1}\, \frac{\partial H_{j'}}{\partial s_j^1} + \sum_{i'} \frac{\partial Y_2}{\partial s_{i'}^2}\, \frac{\partial^2 H_{i'}}{\partial s_i^1\, \partial s_j^1}.$$
Thus at every point
$$Y_1'(s^1) = \big(H'(s^1)\big)^T Y_2'(s^2),$$
$$p_{Y_1(s^1),Y_1'(s^1)}(x, 0) = p_{Y_2(s^2),Y_2'(s^2)}(x, 0)\, \big|\det\big(H'(s^1)\big)\big|^{-1},$$
and at a singular point,
$$Y_1''(s^1) = \big(H'(s^1)\big)^T Y_2''(s^2)\, H'(s^1).$$
On the other hand, by the change-of-variable formula,
$$\prod_{i=1}^{d} ds_i^1 = \big|\det\big(H'(s^1)\big)\big|^{-1} \prod_{i=1}^{d} ds_i^2.$$
Replacing in the integrand in (5), one checks the desired result.
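The first chain-rule identity, $Y_1'(s^1) = (H'(s^1))^T Y_2'(s^2)$, can be checked by finite differences on an arbitrary smooth chart change (the maps $H$ and $Y_2$ below are our own throwaway choices, not from the paper):

```python
import math

def H(s):
    # a smooth invertible-near-s1 chart change R^2 -> R^2 (arbitrary choice)
    return (s[0] + s[1] ** 2, math.sin(s[1]) + 2 * s[0])

def Y2(x):
    # a smooth scalar field in the second chart (arbitrary choice)
    return math.sin(x[0]) * math.cos(x[1]) + x[0] * x[1]

def Y1(s):
    return Y2(H(s))  # the same field written in the first chart

def grad(f, p, eps=1e-6):
    # gradient by central finite differences
    g = []
    for i in range(2):
        q1, q2 = list(p), list(p)
        q1[i] += eps
        q2[i] -= eps
        g.append((f(tuple(q1)) - f(tuple(q2))) / (2 * eps))
    return g

s1 = (0.3, -0.7)
s2 = H(s1)
# Jacobian H'(s1): J[i][j] = dH_i / ds_j
J = [grad(lambda s, i=i: H(s)[i], s1) for i in range(2)]
g1 = grad(Y1, s1)
g2 = grad(Y2, s2)
# chain rule: Y1'(s1) = (H'(s1))^T Y2'(s2)
pred = [J[0][j] * g2[0] + J[1][j] * g2[1] for j in range(2)]
print(g1, pred)
```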
For the second part, again it suffices to prove it locally for an open subset $S$ included in a unique chart. Let $(S, \phi)$ be a chart and let again $Y(s)$ be the process written in this chart. It suffices to check that
$$E\big(M_{u,k}^X(S)\big) = \int_{\phi(S)} d\lambda(s) \int_u^{+\infty} dx\, E\big(\delta_k\big(Y''(s)\big) \,/\, Y(s) = x,\ Y'(s) = 0\big)\, p_{Y(s),Y'(s)}(x, 0). \tag{6}$$
Since $M_{u,k}^X(S)$ is equal to $M_{u,k}^Y\{\phi(S)\}$, we see that the result is a direct consequence of Theorem 2.3. □
Even though in the integrand in (5) the product does not depend on the parameterization, each factor does. When the manifold $U$ is equipped with a Riemannian metric, it is possible to rewrite (5) as
$$\int_u^{+\infty} dx\, E\big(\delta_k\big(\nabla^2 X(s)\big) \,/\, X(s) = x,\ \nabla X(s) = 0\big)\, p_{X(s),\nabla X(s)}(x, 0)\, \operatorname{Vol}, \tag{7}$$
where $\nabla^2 X(s)$ and $\nabla X(s)$ are, respectively, the Hessian and the gradient read in an orthonormal basis. This formula is close to a formula by Taylor and Adler (2002) for the expected Euler characteristic.
REMARK. One can consider a number of variants of Rice formulae, in which we may be interested in computing the moments of the number of roots of the equation $Z(t) = u$ under some additional conditions. This has been the case in the statement of Theorem 2.3, in which we have given formulae for the first moment of the number of zeros of $X'$ at which $X$ is bigger than $u$ ($i = 2$), and at which, in addition, the real-valued process $X$ has a local maximum ($i = 1$). We consider below just two additional variants, which we state here for further reference. We limit the statements to random fields defined on subsets of $\mathbb{R}^d$; similar statements hold true when the parameter set is a general smooth manifold. Proofs are essentially the same as the previous ones.
VARIANT 1. Assume that $Z_1, Z_2$ are $\mathbb{R}^d$-valued random fields defined on compact subsets $I_1, I_2$ of $\mathbb{R}^d$, and suppose that $(Z_i, I_i)$, $i = 1, 2$, satisfy the hypotheses of Theorem 2.1 and that for every $s \in I_1$ and $t \in I_2$, the distribution of $(Z_1(s), Z_2(t))$ does not degenerate. Then, for each pair $u_1, u_2 \in \mathbb{R}^d$,
$$E\big(N_{u_1}^{Z_1}(I_1)\, N_{u_2}^{Z_2}(I_2)\big) = \int_{I_1 \times I_2} dt_1\, dt_2\, E\big(\big|\det\big(Z_1'(t_1)\big)\big|\, \big|\det\big(Z_2'(t_2)\big)\big| \,/\, Z_1(t_1) = u_1,\ Z_2(t_2) = u_2\big)\, p_{Z_1(t_1),Z_2(t_2)}(u_1, u_2). \tag{8}$$
VARIANT 2. Let $Z, I$ be as in Theorem 2.1 and let $\xi$ be a real-valued bounded random variable which is measurable with respect to the $\sigma$-algebra generated by the process $Z$. Assume that for each $t \in I$ there exist a continuous Gaussian process $\{Y^t(s) : s \in I\}$, for each $s, t \in I$ a nonrandom function $\alpha^t(s) : \mathbb{R}^d \to \mathbb{R}^d$, and a Borel-measurable function $g : \mathcal{C} \to \mathbb{R}$, where $\mathcal{C}$ is the space of real-valued continuous functions on $I$ equipped with the supremum norm, such that:
1. $\xi = g\big(Y^t(\cdot) + \alpha^t(\cdot) Z(t)\big)$;
2. $Y^t(\cdot)$ and $Z(t)$ are independent;
3. for each $u_0 \in \mathbb{R}^d$, almost surely the function $u \mapsto g\big(Y^t(\cdot) + \alpha^t(\cdot) u\big)$ is continuous at $u = u_0$.
Then the formula
$$E\big(\xi\, N_u^Z(I)\big) = \int_I E\big(\xi\, \big|\det\big(Z'(t)\big)\big| \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt$$
holds true.
We will be particularly interested in the function $\xi = \mathbf{1}_{M_I < v}$ for some $v \in \mathbb{R}$. We will see later on that it satisfies the above conditions under certain hypotheses on the process $Z$.
3. First derivative, first form. Our main goals in this and the next section are to prove existence and regularity of the derivative of the function $u \mapsto F_I(u)$ and, at the same time, to show that it satisfies some implicit formulae that can be used to provide bounds on it. In what follows we assume that $I$ is a $d$-dimensional $C^\infty$ manifold embedded in $\mathbb{R}^N$, $N \ge d$. $\lambda$ and $\sigma$ are, respectively, the geometric measures on $I$ and $\partial I$. Unless explicitly stated otherwise, the topology on $I$ will be the relative topology.
In this section we prove formula (10) for $F_I'(u)$, which we call the "first form" and which is valid for $\lambda$-almost every $u$, under strong regularity conditions on the paths of the process $X$. In fact, the hypothesis that $X$ is Gaussian is only used in the Rice formula itself and in Lemma 3.1, which gives a bound for the joint density $p_{X(s),X(t),X'(s),X'(t)}$. In both places, one can substitute Gaussianity by appropriate conditions that permit one to obtain similar results.
More generally, it is easy to see that inequality (9) is valid under quite general non-Gaussian conditions and implies the following upper bound for the density of the distribution of the random variable $M_I$:
$$F_I'(u) \le \int_I E\big(\delta_1\big(X''(t)\big) \,/\, X(t) = u,\ X'(t) = 0\big)\, p_{X(t),X'(t)}(u, 0)\, \lambda(dt) + \int_{\partial I} E\big(\delta_1\big(\widetilde{X}''(t)\big) \,/\, X(t) = u,\ \widetilde{X}'(t) = 0\big)\, p_{X(t),\widetilde{X}'(t)}(u, 0)\, \sigma(dt), \tag{9}$$
where the function $\delta_1$ has been defined in the statement of Theorem 2.3 and $\widetilde{X}$ denotes the restriction of $X$ to the boundary $\partial I$.
Even for $d = 1$ (one-parameter processes) and $X$ Gaussian and stationary, inequality (9) provides reasonably good upper bounds for $F_I'(u)$ [see Diebolt and Posse (1996) and Azaïs and Wschebor (2001)]. We will see an example for $d = 2$ at the end of this section.
In the next section, we are able to prove that $F_I(u)$ is a $C^1$ function and that formula (10) can be essentially simplified by getting rid of the conditional expectation, thus obtaining the "second form" for the derivative. This is done under weaker regularity conditions, but the assumption that $X$ is Gaussian becomes essential.
DEFINITION 3.1. Let $X : I \to \mathbb{R}$ be a real-valued stochastic process defined on a subset of $\mathbb{R}^d$. We will say that $X$ satisfies condition $(H_k)$, $k$ a positive integer, if the following three conditions hold true:
(i) $X$ is Gaussian;
(ii) a.s. the paths of $X$ are of class $C^k$;
(iii) for any choice of pairwise different values of the parameter $t_1, \ldots, t_n$, the joint distribution of the random variables $X(t_1), \ldots, X(t_n)$, $X'(t_1), \ldots, X'(t_n)$, \ldots, $X^{(k)}(t_1), \ldots, X^{(k)}(t_n)$ has maximum rank.
The next proposition shows that there exist processes that satisfy $(H_k)$.

PROPOSITION 3.1. Let $X = \{X(t) : t \in \mathbb{R}^d\}$ be a centered stationary Gaussian process having continuous spectral density $f_X$. Assume that $f_X(x) > 0$ for every $x \in \mathbb{R}^d$ and that, for any $\alpha > 0$, $f_X(x) \le C_\alpha \|x\|^{-\alpha}$ holds true for some constant $C_\alpha$ and all $x \in \mathbb{R}^d$. Then $X$ satisfies $(H_k)$ for every $k = 1, 2, \ldots$.

PROOF. The proof is an adaptation of the proof of a related result for $d = 1$ [Cramér and Leadbetter (1967), page 203]; see Azaïs and Wschebor (2002). □
THEOREM 3.1 (First derivative, first form). Let $X : I \to \mathbb{R}$ be a Gaussian process, $I$ a $C^\infty$ compact $d$-dimensional manifold. Assume that $X$ verifies $(H_k)$ for every $k = 1, 2, \ldots$.
Then the function $u \mapsto F_I(u)$ is absolutely continuous and its Radon–Nikodym derivative is given for almost every $u$ by
$$F_I'(u) = (-1)^d \int_I E\big(\det\big(X''(t)\big)\, \mathbf{1}_{M_I \le u} \,/\, X(t) = u,\ X'(t) = 0\big)\, p_{X(t),X'(t)}(u, 0)\, \lambda(dt) + (-1)^{d-1} \int_{\partial I} E\big(\det\big(\widetilde{X}''(t)\big)\, \mathbf{1}_{M_I \le u} \,/\, X(t) = u,\ \widetilde{X}'(t) = 0\big)\, p_{X(t),\widetilde{X}'(t)}(u, 0)\, \sigma(dt). \tag{10}$$
PROOF. For $u < v$ and $S$ (resp. $\widetilde{S}$) a subset of $\mathring{I}$ (resp. $\partial I$), let us denote
$$M_{u,v}(S) = \#\{t \in S : u < X(t) \le v,\ X'(t) = 0,\ X''(t) \preceq 0\},$$
$$\widetilde{M}_{u,v}(\widetilde{S}) = \#\{t \in \widetilde{S} : u < X(t) \le v,\ \widetilde{X}'(t) = 0,\ \widetilde{X}''(t) \preceq 0\}.$$
Step 1. Let $h > 0$ and consider the increment
$$F_I(u) - F_I(u - h) = P\big(\{M_I \le u\} \cap \big[\{M_{u-h,u}(\mathring{I}) \ge 1\} \cup \{\widetilde{M}_{u-h,u}(\partial I) \ge 1\}\big]\big).$$
Let us prove that
$$P\big(M_{u-h,u}(\mathring{I}) \ge 1,\ \widetilde{M}_{u-h,u}(\partial I) \ge 1\big) = o(h) \qquad \text{as } h \downarrow 0. \tag{11}$$
In fact, for $\beta > 0$,
$$P\big(M_{u-h,u}(\mathring{I}) \ge 1,\ \widetilde{M}_{u-h,u}(\partial I) \ge 1\big) \le E\big(M_{u-h,u}(I_{-\beta})\, \widetilde{M}_{u-h,u}(\partial I)\big) + E\big(M_{u-h,u}(\mathring{I} \setminus I_{-\beta})\big). \tag{12}$$
The first term on the right-hand side of (12) can be computed by means of a Rice-type formula, and it can be expressed as
$$\int_{I_{-\beta} \times \partial I} \lambda(dt)\, \sigma(d\tilde{t}) \iint_{u-h}^{u} dx\, d\tilde{x}\, E\big(\delta_1\big(X''(t)\big)\, \delta_1\big(\widetilde{X}''(\tilde{t})\big) \,/\, X(t) = x,\ \widetilde{X}(\tilde{t}) = \tilde{x},\ X'(t) = 0,\ \widetilde{X}'(\tilde{t}) = 0\big)\, p_{X(t),\widetilde{X}(\tilde{t}),X'(t),\widetilde{X}'(\tilde{t})}(x, \tilde{x}, 0, 0),$$
where the function $\delta_1$ has been defined in Theorem 2.3. Since in this integral $\|t - \tilde{t}\| \ge \beta$, the integrand is bounded and the integral is $O(h^2)$.
For the second term in (12) we apply the Rice formula again. Taking into account that the boundary of $I$ is smooth and compact, we get
$$E\big(M_{u-h,u}(\mathring{I} \setminus I_{-\beta})\big) = \int_{\mathring{I} \setminus I_{-\beta}} \lambda(dt) \int_{u-h}^{u} E\big(\delta_1\big(X''(t)\big) \,/\, X(t) = x,\ X'(t) = 0\big)\, p_{X(t),X'(t)}(x, 0)\, dx \le (\mathrm{const})\, h\, \lambda(\mathring{I} \setminus I_{-\beta}),$$
where the constant does not depend on $h$ and $\beta$. Since $\lambda(\mathring{I} \setminus I_{-\beta})$ can be made arbitrarily small by choosing $\beta > 0$ small enough, (11) follows, and we may write, as $h \downarrow 0$,
$$F_I(u) - F_I(u - h) = P\big(M_I \le u,\ M_{u-h,u}(\mathring{I}) \ge 1\big) + P\big(M_I \le u,\ \widetilde{M}_{u-h,u}(\partial I) \ge 1\big) + o(h).$$
Note that the foregoing argument also implies that $F_I$ is absolutely continuous with respect to Lebesgue measure and that the density is bounded above by the right-hand side of (10). In fact,
$$F_I(u) - F_I(u - h) \le P\big(M_{u-h,u}(\mathring{I}) \ge 1\big) + P\big(\widetilde{M}_{u-h,u}(\partial I) \ge 1\big) \le E\big(M_{u-h,u}(\mathring{I})\big) + E\big(\widetilde{M}_{u-h,u}(\partial I)\big),$$
and it is enough to apply the Rice formula to each one of the expectations on the right-hand side.
The delicate part of the proof consists in showing that we have equality in (10).
Step 2. For $g : I \to \mathbb{R}$ we put $\|g\|_\infty = \sup_{t \in I} |g(t)|$ and, if $k$ is a nonnegative integer, $\|g\|_{\infty,k} = \sup_{k_1 + k_2 + \cdots + k_d \le k} \big\|\partial^{k}_{k_1,k_2,\ldots,k_d} g\big\|_\infty$. For fixed $\gamma > 0$ (to be chosen later on) and $h > 0$, we denote by $E_h$ the event $E_h = \{\|X\|_{\infty,4} \le h^{-\gamma}\}$. Because of the Landau–Shepp–Fernique inequality [see Landau and Shepp (1970) or Fernique (1975)] there exist positive constants $C_1, C_2$ such that
$$P(E_h^c) \le C_1 \exp\big[-C_2 h^{-2\gamma}\big] = o(h) \qquad \text{as } h \downarrow 0,$$
so that to have (10) it suffices to show that, as $h \downarrow 0$,
$$E\big(\big(M_{u-h,u}(\mathring{I}) - \mathbf{1}_{M_{u-h,u}(\mathring{I}) \ge 1}\big)\, \mathbf{1}_{M_I \le u}\, \mathbf{1}_{E_h}\big) = o(h), \tag{13}$$
$$E\big(\big(\widetilde{M}_{u-h,u}(\partial I) - \mathbf{1}_{\widetilde{M}_{u-h,u}(\partial I) \ge 1}\big)\, \mathbf{1}_{M_I \le u}\, \mathbf{1}_{E_h}\big) = o(h). \tag{14}$$
We prove (13); (14) can be proved in a similar way.
Put $M_{u-h,u} = M_{u-h,u}(\mathring{I})$. On applying the Rice formula for the second factorial moment, we have
$$E\big(\big(M_{u-h,u} - \mathbf{1}_{M_{u-h,u} \ge 1}\big)\, \mathbf{1}_{M_I \le u}\, \mathbf{1}_{E_h}\big) \le E\big(M_{u-h,u}(M_{u-h,u} - 1)\, \mathbf{1}_{E_h}\big) = \iint_{\mathring{I} \times \mathring{I}} A_{s,t}\, \lambda(ds)\, \lambda(dt), \tag{15}$$
where
$$A_{s,t} = \iint_{u-h}^{u} dx_1\, dx_2\, E\big(\big|\det\big(X''(s)\big) \det\big(X''(t)\big)\big|\, \mathbf{1}_{X''(s) \preceq 0,\, X''(t) \preceq 0}\, \mathbf{1}_{E_h} \,/\, X(s) = x_1,\ X(t) = x_2,\ X'(s) = 0,\ X'(t) = 0\big)\, p_{X(s),X(t),X'(s),X'(t)}(x_1, x_2, 0, 0). \tag{16}$$
Our goal is to prove that $A_{s,t}$ is $o(h)$ as $h \downarrow 0$, uniformly in $s, t$. Note that when $s, t$ vary in a domain of the form $D_\beta := \{(s, t) \in I \times I : \|t - s\| > \beta\}$ for some $\beta > 0$, the Gaussian distribution in (16) is nondegenerate and $A_{s,t}$ is bounded by $(\mathrm{const})\, h^2$, the constant depending on the minimum of the determinant $\det \operatorname{Var}\big(X(s), X(t), X'(s), X'(t)\big)$ for $(s, t) \in D_\beta$.
So it is enough to prove that $A_{s,t} = o(h)$ for $\|t - s\|$ small, and we may assume that $s$ and $t$ are in the same chart $(U, \phi)$. Writing the process in this chart, we may assume that $I$ is a ball or a half ball in $\mathbb{R}^d$.
Let $s, t$ be two such points, and define the process $Y = Y^{s,t}$ by $Y(\lambda) = X(s + \lambda(t - s))$, $\lambda \in [0, 1]$. Under the conditioning one has
$$Y(0) = x_1, \qquad Y(1) = x_2, \qquad Y'(0) = Y'(1) = 0,$$
$$Y''(0) = X''(s)[(t - s), (t - s)], \qquad Y''(1) = X''(t)[(t - s), (t - s)].$$
Consider the interpolation polynomial $Q$ of degree 3 such that
$$Q(0) = x_1, \qquad Q(1) = x_2, \qquad Q'(0) = Q'(1) = 0.$$
One checks that
$$Q(y) = x_1 + (x_2 - x_1)\, y^2 (3 - 2y), \qquad Q''(0) = -Q''(1) = 6(x_2 - x_1).$$
Denote $Z(\lambda) = Y(\lambda) - Q(\lambda)$, $0 \le \lambda \le 1$. Under the conditioning one has $Z(0) = Z(1) = Z'(0) = Z'(1) = 0$, and if also the event $E_h$ occurs, an elementary calculation shows that, for $0 \le \lambda \le 1$,
$$|Z''(\lambda)| \le \sup_{\lambda \in [0,1]} \frac{|Z^{(4)}(\lambda)|}{2!} = \sup_{\lambda \in [0,1]} \frac{|Y^{(4)}(\lambda)|}{2!} \le (\mathrm{const})\, \|t - s\|^4\, h^{-\gamma}. \tag{17}$$
On the other hand, check that if $A$ is a positive semidefinite symmetric $d \times d$ real matrix and $v_1$ is a vector of Euclidean norm equal to 1, then the inequality
$$\det(A) \le \langle A v_1, v_1 \rangle\, \det(B) \tag{18}$$
holds true, where $B$ is the $(d-1) \times (d-1)$ matrix $B = \big(\langle A v_j, v_k \rangle\big)_{j,k=2,\ldots,d}$ and $\{v_1, v_2, \ldots, v_d\}$ is an orthonormal basis of $\mathbb{R}^d$ containing $v_1$.
Assume $X''(s)$ is negative definite and that the event $E_h$ occurs. We can apply (18) to the matrix $A = -X''(s)$ and the unit vector $v_1 = (t - s)/\|t - s\|$. Note that in that case the elements of the matrix $B$ are of the form $\langle -X''(s) v_j, v_k \rangle$, hence bounded by $(\mathrm{const})\, h^{-\gamma}$. So,
$$\big|\det\big(X''(s)\big)\big| \le \langle -X''(s) v_1, v_1 \rangle\, C_d\, h^{-\gamma(d-1)} = C_d\, [Y''(0)]^{-}\, \|t - s\|^{-2}\, h^{-\gamma(d-1)},$$
the constant $C_d$ depending only on the dimension $d$.
Similarly, if $X''(t)$ is negative definite and the event $E_h$ occurs, then
$$\big|\det\big(X''(t)\big)\big| \le C_d\, [Y''(1)]^{-}\, \|t - s\|^{-2}\, h^{-\gamma(d-1)}.$$
Hence, if $\mathcal{C}$ denotes the condition $\{X(s) = x_1,\ X(t) = x_2,\ X'(s) = 0,\ X'(t) = 0\}$,
$$E\big(\big|\det\big(X''(s)\big) \det\big(X''(t)\big)\big|\, \mathbf{1}_{X''(s) \preceq 0,\, X''(t) \preceq 0}\, \mathbf{1}_{E_h} \,/\, \mathcal{C}\big) \le C_d^2\, h^{-2\gamma(d-1)}\, \|t - s\|^{-4}\, E\big([Y''(0)]^{-} [Y''(1)]^{-}\, \mathbf{1}_{E_h} \,/\, \mathcal{C}\big)$$
$$\le C_d^2\, h^{-2\gamma(d-1)}\, \|t - s\|^{-4}\, E\bigg(\bigg(\frac{Y''(0) + Y''(1)}{2}\bigg)^2 \mathbf{1}_{E_h} \,/\, \mathcal{C}\bigg) = C_d^2\, h^{-2\gamma(d-1)}\, \|t - s\|^{-4}\, E\bigg(\bigg(\frac{Z''(0) + Z''(1)}{2}\bigg)^2 \mathbf{1}_{E_h} \,/\, \mathcal{C}\bigg) \le (\mathrm{const})\, C_d^2\, h^{-2\gamma d}\, \|t - s\|^4,$$
where the middle equality holds because $Q''(0) + Q''(1) = 0$, so that $Y''(0) + Y''(1) = Z''(0) + Z''(1)$.
We now turn to the density in (15), using the following lemma, which is similar to Lemma 4.3, page 76, in Piterbarg (1996a). The proof is omitted.

LEMMA 3.1. For all $s, t \in I$ and $x_1, x_2 \in \mathbb{R}$,
$$\|t - s\|^{d+3}\, p_{X(s),X(t),X'(s),X'(t)}(x_1, x_2, 0, 0) \le D, \tag{19}$$
where $D$ is a constant.
Back to the proof of the theorem: to bound the expression in (15) we use Lemma 3.1 and the bound on the conditional expectation, thus obtaining
$$E\big(M_{u-h,u}(M_{u-h,u} - 1)\, \mathbf{1}_{E_h}\big) \le (\mathrm{const})\, C_d^2\, h^{-2\gamma d}\, D \iint_{I \times I} \|t - s\|^{-d+1}\, ds\, dt \iint_{u-h}^{u} dx_1\, dx_2 \le (\mathrm{const})\, h^{2 - 2\gamma d}, \tag{20}$$
since the function $(s, t) \mapsto \|t - s\|^{-d+1}$ is Lebesgue-integrable on $I \times I$. The last constant depends only on the dimension $d$ and the set $I$. Taking $\gamma$ small enough, (13) follows. □
EXAMPLE. Let {X(s, t)} be a real-valued two-parameter Gaussian, centered
stationary isotropic process with covariance . Assume that (0) =1 and that the
spectral measure is absolutely continuous with density (ds, dt) =f () ds dt ,
=(s
2
+t
2
)
1/2
. Assume further that J
k
=
_
+
0

k
f () d < , for 1 k 5.
Our aim is to give an explicit upper bound for the density of the probability distribution of $M_I$, where $I$ is the unit disc. Using (9), which is a consequence of Theorem 3.1, and the invariance of the law of the process, we have
$$\begin{aligned}
F'_I(u)&\le\pi\,\mathrm{E}\bigl[\bigl|\det X''(0,0)\bigr|\,\mathbf{1}_{X''(0,0)\prec0}\;/\;X(0,0)=u,\;X'(0,0)=(0,0)\bigr]\,p_{X(0,0),X'(0,0)}\bigl(u,(0,0)\bigr)\\
&\quad+2\pi\,\mathrm{E}\bigl[\bigl(\bar X''(1,0)\bigr)^-\;/\;X(1,0)=u,\;\bar X'(1,0)=0\bigr]\,p_{X(1,0),\bar X'(1,0)}(u,0)\tag{21}\\
&=I_1+I_2,
\end{aligned}$$
where $\bar X$ denotes the restriction of $X$ to the boundary circle.
We denote by $X$, $X'$, $X''$ the values of the different processes at some point $(s,t)$; by $X''_{ss}$, $X''_{st}$, $X''_{tt}$ the entries of the matrix $X''$; and by $\varphi$ and $\Phi$ the standard normal density and distribution.
One can easily check that: $X'$ is independent of $X$ and $X''$, and has variance matrix $\pi J_3\,\mathrm{Id}$; $X''_{st}$ is independent of $X$, $X''_{ss}$ and $X''_{tt}$, and has variance $\frac{\pi}{4}J_5$.
Conditionally on $X=u$, the random variables $X''_{ss}$ and $X''_{tt}$ have
expectation: $-\pi J_3u$;
variance: $\frac{3\pi}{4}J_5-(\pi J_3)^2$;
covariance: $\frac{\pi}{4}J_5-(\pi J_3)^2$.
We obtain
$$I_2=\sqrt{\frac{2}{J_3}}\,\varphi(u)\Bigl[\Bigl(\tfrac{3\pi}{4}J_5-(\pi J_3)^2\Bigr)^{1/2}\varphi(bu)+\pi J_3\,u\,\Phi(bu)\Bigr],$$
with $b=\pi J_3\bigl(\tfrac{3\pi}{4}J_5-(\pi J_3)^2\bigr)^{-1/2}$.
As for $I_1$, we remark that, conditionally on $X=u$, the random variables $X''_{ss}+X''_{tt}$ and $X''_{ss}-X''_{tt}$ are independent, so that a direct computation gives
$$I_1=\frac{1}{8\pi J_3}\,\varphi(u)\,\mathrm{E}\Bigl[\bigl((\sigma\xi_1-2\pi J_3u)^2-\pi J_5(\xi_2^2+\xi_3^2)\bigr)\,\mathbf{1}_{\{\sigma\xi_1<2\pi J_3u\}}\,\mathbf{1}_{\{(\sigma\xi_1-2\pi J_3u)^2-\pi J_5(\xi_2^2+\xi_3^2)>0\}}\Bigr],\tag{22}$$
where $\xi_1,\xi_2,\xi_3$ are standard independent normal random variables and $\sigma^2=2\pi J_5-4\pi^2J_3^2$. Finally we get
$$I_1=\frac{\sqrt{2\pi}}{8\pi J_3}\,\varphi(u)\int_0^{+\infty}\biggl[(\sigma^2+a^2-c^2x^2)\,\Phi\Bigl(\frac{a-cx}{\sigma}\Bigr)+\sigma(a+cx)\,\varphi\Bigl(\frac{a-cx}{\sigma}\Bigr)\biggr]x\,\varphi(x)\,dx,$$
with $a=2\pi J_3u$, $c=(\pi J_5)^{1/2}$.
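As a numerical sketch (not part of the original text), the bound $I_1+I_2$ can be evaluated for a concrete spectral density. The choice $f(\rho)=e^{-\rho^2/2}/(2\pi)$ is an arbitrary illustration, normalized so that $\Gamma(0)=2\pi J_1=1$; it gives $J_3=1/\pi$ and $J_5=4/\pi$ exactly. The constants below follow the formulas displayed above.

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Illustrative spectral density f(rho) = exp(-rho^2/2)/(2*pi): then
# Gamma(0) = 2*pi*J_1 = 1 as required, and J_3 = 1/pi, J_5 = 4/pi exactly.
J3, J5 = 1 / math.pi, 4 / math.pi

sigma = math.sqrt(2 * math.pi * J5 - 4 * (math.pi * J3) ** 2)  # sigma^2 = 4 here
c = math.sqrt(math.pi * J5)                                     # = 2 here
beta = math.sqrt(0.75 * math.pi * J5 - (math.pi * J3) ** 2)     # conditional sd in I_2
b = math.pi * J3 / beta

def I2(u):
    # boundary contribution, as in the closed form above
    return math.sqrt(2 / J3) * phi(u) * (beta * phi(b * u) + math.pi * J3 * u * Phi(b * u))

def I1(u, n=4000, X=12.0):
    # interior contribution: midpoint quadrature of the last displayed integral
    a, h, s = 2 * math.pi * J3 * u, X / n, 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += ((sigma * sigma + a * a - (c * x) ** 2) * Phi((a - c * x) / sigma)
              + sigma * (a + c * x) * phi((a - c * x) / sigma)) * x * phi(x)
    return math.sqrt(2 * math.pi) / (8 * math.pi * J3) * phi(u) * s * h
```

For this process $(\det\Lambda(t))^{1/2}=\pi J_3=1$ and the disc has area $\pi$, so the equivalent (32) of Corollary 5.1 reduces to $u^2\varphi(u)/2$; the interior term $I_1$ approaches it as $u$ grows, while $I_2$ is of the smaller order $u\varphi(u)$.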
4. First derivative, second form. We choose, once and for all in this section, a finite atlas $\mathcal{A}$ for $I$. Then, to every $t\in I$ it is possible to associate a fixed chart that will be denoted $(U_t,\phi_t)$. When $t\in\partial I$, $\phi_t(U_t)$ can be chosen to be a half ball with $\phi_t(t)$ belonging to the hyperplane limiting this half ball. For $t\in I$, let $V_t$ be an open neighborhood of $t$ whose closure is included in $U_t$ and let $\psi_t$ be a $C^\infty$ function such that $\psi_t\equiv1$ on $V_t$ and $\psi_t\equiv0$ on $U_t^c$.
1. For every $t\in I\setminus\partial I$ and $s\in I$ we define the normalization $n(t,s)$ in the following way:
(a) For $s\in V_t$, we set, in the chart $(U_t,\phi_t)$, $n_1(t,s)=\frac12\|s-t\|^2$. By "in the chart" we mean that $s-t$ is in fact $\phi_t(t)-\phi_t(s)$.
(b) For general $s$, we set $n(t,s)=\psi_t(s)\,n_1(t,s)+(1-\psi_t(s))$.
Note that in the flat case, when the dimension $d$ of the manifold is equal to the dimension $N$ of the ambient space, the simpler definition $n(t,s)=\frac12\|s-t\|^2$ works.
2. For every $t\in\partial I$ and $s\in I$, we set $n_1(t,s)=|(s-t)_N|+\frac12\|s-t\|^2$, where $(s-t)_N$ is the normal component of $(s-t)$ with respect to the hyperplane delimiting the half ball $\phi_t(U_t)$. The rest of the definition is the same.
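To make the normalization concrete, here is a small one-dimensional sketch of the flat case (the cutoff radii and the particular $C^\infty$ transition are illustrative choices, not taken from the text): $n(t,\cdot)$ behaves like $\frac12\|s-t\|^2$ near the pole and is bounded away from zero elsewhere.

```python
import math

def psi(dist, r0=0.5, r1=1.0):
    # C-infinity cutoff: equals 1 on [0, r0], 0 on [r1, inf) (illustrative radii).
    if dist <= r0:
        return 1.0
    if dist >= r1:
        return 0.0
    f = lambda x: math.exp(-1.0 / x) if x > 0 else 0.0
    return f(r1 - dist) / (f(r1 - dist) + f(dist - r0))

def n(t, s):
    # flat one-dimensional case: blend n1 = |s-t|^2 / 2 near t with the constant 1 far from t
    dist = abs(s - t)
    n1 = 0.5 * dist * dist
    return psi(dist) * n1 + (1.0 - psi(dist))
```

In the transition zone both terms are nonnegative and $n_1>0$, so $n(t,s)>0$ for every $s\ne t$, which is what the construction requires of a normalization.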
DEFINITION 4.1. We will say that $f$ is a helix-function, or an h-function, on $I$ with pole $t\in I$ satisfying hypothesis $(H_{t,k})$, $k$ integer, $k>1$, if:
(i) $f$ is a bounded $C^k$ function on $I\setminus\{t\}$.
(ii) $\bar f(s):=n(t,s)f(s)$ can be prolonged as a function of class $C^k$ on $I$.
DEFINITION 4.2. In the same way, $Z$ is called an h-process with pole $t\in I$ satisfying hypothesis $(H_{t,k})$, $k$ integer, $k>1$, if:
(i) $Z$ is a Gaussian process with $C^k$ paths on $I\setminus\{t\}$.
(ii) For $t\in I\setminus\partial I$, $\bar Z(s):=n(t,s)Z(s)$ can be prolonged as a process of class $C^k$ on $I$, with $\bar Z(t)=0$, $\bar Z'(t)=0$. If $s_1,\dots,s_m$ are pairwise different points of $I\setminus\{t\}$, then the distribution of $\bar Z^{(2)}(t),\dots,\bar Z^{(k)}(t),Z(s_1),\dots,Z^{(k)}(s_1),\dots,Z^{(k)}(s_m)$ does not degenerate.
(iii) For $t\in\partial I$, $\bar Z(s):=n(t,s)Z(s)$ can be prolonged as a process of class $C^k$ on $I$ with $\bar Z(t)=0$, $\bar Z'_T(t)=0$, and if $s_1,\dots,s_m$ are pairwise different points of $I\setminus\{t\}$, then the distribution of $\bar Z'_N(t),\bar Z^{(2)}(t),\dots,\bar Z^{(k)}(t),Z(s_1),\dots,Z^{(k)}(s_1),\dots,Z^{(k)}(s_m)$ does not degenerate. Here $\bar Z'_N(t)$ denotes the derivative normal to the boundary of $I$ at $t$, and $\bar Z'_T(t)$ the tangential one.
We use the terms "h-function" and "h-process" since the function and the paths of the process need not extend to a continuous function at the point $t$. However, the definition implies the existence of radial limits at $t$, so the process may take the form of a helix around $t$.
LEMMA 4.1. Let $X$ be a process satisfying $(H_k)$, $k\ge2$, and let $f$ be a $C^k$ function $I\to\mathbb{R}$.
(a) For $t\in I\setminus\partial I$, set, for $s\in I$, $s\ne t$,
$$X(s)=a^t_sX(t)+\langle b^t_s,X'(t)\rangle+n(t,s)\,X^t(s),$$
where $a^t_s$ and $b^t_s$ are the regression coefficients.
In the same way, set
$$f(s)=a^t_sf(t)+\langle b^t_s,f'(t)\rangle+n(t,s)\,f^t(s),$$
using the regression coefficients associated to $X$.
(b) For $t\in\partial I$, $s\in I$, $s\ne t$, set
$$X(s)=\tilde a^t_sX(t)+\langle\tilde b^t_s,\tilde X'(t)\rangle+n(t,s)\,X^t(s)$$
and
$$f(s)=\tilde a^t_sf(t)+\langle\tilde b^t_s,\tilde f'(t)\rangle+n(t,s)\,f^t(s).$$
Then $s\mapsto X^t(s)$ and $s\mapsto f^t(s)$ are, respectively, an h-process and an h-function with pole $t$ satisfying $(H_{t,k})$.
PROOF. We give the proof in the case $t\in I\setminus\partial I$, the other one being similar. In fact, the quantity $n(t,s)X^t(s)$ is just $X(s)-a^t_sX(t)-\langle b^t_s,X'(t)\rangle$. On $L^2(\Omega,P)$, let $\pi$ be the projector on the orthogonal complement of the subspace generated by $X(t),X'(t)$. Using a Taylor expansion,
$$X(s)=X(t)+\langle(s-t),X'(t)\rangle+\|t-s\|^2\int_0^1X''\bigl((1-\alpha)t+\alpha s\bigr)[v,v]\,(1-\alpha)\,d\alpha,$$
with $v=\frac{s-t}{\|s-t\|}$. This implies that
$$n(t,s)\,X^t(s)=\pi\biggl(\|t-s\|^2\int_0^1X''\bigl((1-\alpha)t+\alpha s\bigr)[v,v]\,(1-\alpha)\,d\alpha\biggr),\tag{23}$$
which gives the result due to the nondegeneracy condition.
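The content of Lemma 4.1 can be checked numerically in a simple stationary case. The covariance $C(h)=e^{-h^2/2}$ is an illustrative choice (one parameter, flat case, so $n(t,s)=\frac12|s-t|^2$); since $X(t)$ and $X'(t)$ are then independent with unit variances, the regression coefficients are $a^t_s=C(h)$ and $b^t_s=-C'(h)$, $h=s-t$, and one can verify that the normalized residual $X^t(s)$ keeps a finite, nondegenerate variance as $s\to t$:

```python
import math

# Stationary covariance C(h) = exp(-h^2/2): Var X = 1, Var X' = -C''(0) = 1,
# and X(t), X'(t) are independent, so the regression of X(s) on (X(t), X'(t))
# has coefficients a = C(h), b = -C'(h), with h = s - t.
def C(h):
    return math.exp(-h * h / 2)

def dC(h):
    return -h * math.exp(-h * h / 2)

def var_residual(h):
    # Var( X(s) - a X(t) - b X'(t) ) for the regression above
    a, b = C(h), -dC(h)
    return 1.0 - a * a - b * b

def var_ht(h):
    # variance of the normalized process X^t(s) = residual / n(t,s), n = h^2/2
    return var_residual(h) / (h * h / 2) ** 2
```

For this covariance the variance of $X^t(s)$ tends to $2$ as $h\to0$ (a computation specific to this example), illustrating that dividing the regression residual by $n(t,s)$ produces a process that prolongs smoothly to the pole, as the lemma asserts.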
We state now an extension of Ylvisaker's (1968) theorem on the regularity of the distribution of the maximum of a Gaussian process, which we will use in the proof of Theorem 4.2 and which might have some interest in itself.

THEOREM 4.1. Let $Z\colon T\to\mathbb{R}$ be a Gaussian separable process on some parameter set $T$ and denote by $M_Z=\sup_{t\in T}Z(t)$, which is a random variable taking values in $\mathbb{R}\cup\{+\infty\}$. Assume that there exist $\sigma_0>0$ and $m^->-\infty$ such that
$$m(t)=\mathrm{E}(Z_t)\ge m^-,\qquad\sigma^2(t)=\mathrm{Var}(Z_t)\ge\sigma_0^2$$
for every $t\in T$. Then the distribution of the random variable $M_Z$ is the sum of an atom at $+\infty$ and a (possibly defective) probability measure on $\mathbb{R}$ which has a locally bounded density.
PROOF. Suppose first that $X\colon T\to\mathbb{R}$ is a Gaussian separable process satisfying $\mathrm{Var}(X_t)=1$, $\mathrm{E}(X_t)\ge0$ for every $t\in T$. A close look at Ylvisaker's (1968) proof shows that the distribution of the supremum $M_X$ has a density $p_{M_X}$ that satisfies
$$p_{M_X}(u)\le\psi(u):=\frac{\exp(-u^2/2)}{\int_u^{+\infty}\exp(-v^2/2)\,dv}\qquad\text{for every }u\in\mathbb{R}.\tag{24}$$
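For numerical intuition (a remark, not part of the proof), $\psi$ in (24) is the hazard function of the standard normal law, and the classical Mills-ratio bounds give $u<\psi(u)<u+1/u$ for $u>0$, so the density bound is locally bounded but grows linearly:

```python
import math

def psi(u):
    # psi(u) = exp(-u^2/2) / integral_u^infinity exp(-v^2/2) dv;
    # the denominator equals sqrt(pi/2) * erfc(u / sqrt(2)).
    return math.exp(-u * u / 2) / (math.sqrt(math.pi / 2) * math.erfc(u / math.sqrt(2)))
```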
Let now $Z$ satisfy the hypotheses of the theorem. For given $a,b\in\mathbb{R}$, $a<b$, choose $A\in\mathbb{R}^+$ so that $|a|<A$ and consider the process
$$X(t)=\frac{Z(t)-a}{\sigma(t)}+\frac{|m^-|+A}{\sigma_0}.$$
Clearly, for every $t\in T$,
$$\mathrm{E}(X(t))=\frac{m(t)-a}{\sigma(t)}+\frac{|m^-|+A}{\sigma_0}\ge-\frac{|m^-|+|a|}{\sigma_0}+\frac{|m^-|+A}{\sigma_0}\ge0,$$
and $\mathrm{Var}(X(t))=1$, so that (24) holds for the process $X$.
On the other hand, the statement follows from the inclusion
$$\{a<M_Z\le b\}\subset\biggl\{\frac{|m^-|+A}{\sigma_0}<M_X\le\frac{|m^-|+A}{\sigma_0}+\frac{b-a}{\sigma_0}\biggr\},$$
which implies
$$P\{a<M_Z\le b\}\le\int_{(|m^-|+A)/\sigma_0}^{(|m^-|+A)/\sigma_0+(b-a)/\sigma_0}\psi(u)\,du=\int_a^b\frac{1}{\sigma_0}\,\psi\biggl(\frac{v-a+|m^-|+A}{\sigma_0}\biggr)dv.\qquad\square$$
Set now $f\equiv1$. The key point is that, due to the regression formulae, under the condition $\{X(t)=u,\,X'(t)=0\}$ the event
$$A_u(X,1):=\{X(s)\le u,\ \forall s\in I\}$$
coincides with the event
$$A_u(X^t,\beta^t):=\bigl\{X^t(s)\le\beta^t(s)u,\ \forall s\in I\setminus\{t\}\bigr\},$$
where $X^t$ is the h-process defined in Lemma 4.1 and $\beta^t$ is the h-function associated there with the function $f\equiv1$.
THEOREM 4.2 (First derivative, second form). Let $X\colon I\to\mathbb{R}$ be a Gaussian process, $I$ a $C^\infty$ compact manifold contained in $\mathbb{R}^d$. Assume that $X$ has paths of class $C^2$ and that, for $s\ne t$, the triplet $(X(s),X(t),X'(t))$ in $\mathbb{R}\times\mathbb{R}\times\mathbb{R}^d$ has a nondegenerate distribution. Then the result of Theorem 3.1 is valid, and the derivative $F'_I(u)$ given by relation (10) can be written as
$$\begin{aligned}
F'_I(u)&=(-1)^d\int_I\mathrm{E}\bigl[\det\bigl(\bar X^{t\prime\prime}(t)-\bar\beta^{t\prime\prime}(t)u\bigr)\mathbf{1}_{A_u(X^t,\beta^t)}\bigr]\,p_{X(t),X'(t)}(u,0)\,\lambda(dt)\\
&\quad+(-1)^{d-1}\int_{\partial I}\mathrm{E}\bigl[\det\bigl(\tilde X^{t\prime\prime}(t)-\tilde\beta^{t\prime\prime}(t)u\bigr)\mathbf{1}_{A_u(X^t,\beta^t)}\bigr]\,p_{X(t),\tilde X'(t)}(u,0)\,\sigma(dt)
\end{aligned}\tag{25}$$
and this expression is continuous as a function of $u$.
The notation $\bar X^{t\prime\prime}(t)$ should be understood in the sense that we first define $\bar X^t$ and then calculate its second derivative along $I$.
PROOF OF THEOREM 4.2. As a first step, assume that the process $X$ satisfies the hypotheses of Theorem 3.1, which are stronger than those in the present theorem.
We prove that the first term in (10) can be rewritten as the first term in (25). One can proceed in a similar way with the second term, mutatis mutandis. For that purpose, use the remark just before the statement of Theorem 4.2 and the fact that, under the condition $\{X(t)=u,\,X'(t)=0\}$, $X''(t)$ is equal to $\bar X^{t\prime\prime}(t)-\bar\beta^{t\prime\prime}(t)u$.
Replacing in the conditional expectation in (10) and on account of the Gaussianity of the process, we get rid of the conditioning and obtain the first term in (25). We now study the continuity of $u\mapsto F'_I(u)$. The variable $u$ appears at three locations:
(i) in the density $p_{X(t),X'(t)}(u,0)$, which is clearly continuous;
(ii) in
$$\mathrm{E}\bigl[\det\bigl(\bar X^{t\prime\prime}(t)-\bar\beta^{t\prime\prime}(t)u\bigr)\mathbf{1}_{A_u(X^t,\beta^t)}\bigr],$$
where it occurs twice: in the first factor and in the indicator function.
Due to the integrability of the supremum of bounded Gaussian processes, it is easy to prove that this expression is continuous as a function of the first $u$.
As for the $u$ in the indicator function, set
$$\Delta_v:=\det\bigl(\bar X^{t\prime\prime}(t)-\bar\beta^{t\prime\prime}(t)v\bigr)\tag{26}$$
and, for $h>0$, consider the quantity $\mathrm{E}[\Delta_v\mathbf{1}_{A_u(X^t,\beta^t)}]-\mathrm{E}[\Delta_v\mathbf{1}_{A_{u-h}(X^t,\beta^t)}]$, which is equal to
$$\mathrm{E}\bigl[\Delta_v\mathbf{1}_{A_u(X^t,\beta^t)\setminus A_{u-h}(X^t,\beta^t)}\bigr]-\mathrm{E}\bigl[\Delta_v\mathbf{1}_{A_{u-h}(X^t,\beta^t)\setminus A_u(X^t,\beta^t)}\bigr].\tag{27}$$
Apply Schwarz's inequality to the first term in (27):
$$\mathrm{E}\bigl[\Delta_v\mathbf{1}_{A_u(X^t,\beta^t)\setminus A_{u-h}(X^t,\beta^t)}\bigr]\le\bigl[\mathrm{E}(\Delta_v^2)\,P\{A_u(X^t,\beta^t)\setminus A_{u-h}(X^t,\beta^t)\}\bigr]^{1/2}.$$
The event $A_u(X^t,\beta^t)\setminus A_{u-h}(X^t,\beta^t)$ can be described as
$$\forall s\in I\setminus\{t\}:X^t(s)-\beta^t(s)u\le0;\qquad\exists s_0\in I\setminus\{t\}:X^t(s_0)-\beta^t(s_0)(u-h)>0.$$
This implies that $\beta^t(s_0)>0$ and that
$$-h\sup_{s\in I\setminus\{t\}}\beta^t(s)\le\sup_{s\in I\setminus\{t\}}\bigl(X^t(s)-\beta^t(s)u\bigr)\le0.$$
Now observe that our improved version of Ylvisaker's theorem (Theorem 4.1) applies to the process $s\mapsto X^t(s)-\beta^t(s)u$ defined on $I\setminus\{t\}$. This implies that the first term in (27) tends to zero as $h\to0$. An analogous argument applies to the second term. Finally, the continuity of $F'_I(u)$ follows from the fact that one can pass to the limit under the integral sign in (25).
To complete the proof, we still have to show that the added hypotheses are in fact unnecessary for the validity of the conclusion. Suppose now that the process $X$ satisfies only the hypotheses of the theorem and define
$$X_\varepsilon(t)=Z_\varepsilon(t)+\varepsilon Y(t),\tag{28}$$
where, for each $\varepsilon>0$, $Z_\varepsilon$ is a real-valued Gaussian process defined on $I$, measurable with respect to the $\sigma$-algebra generated by $\{X(t):t\in I\}$, possessing $C^\infty$ paths and such that, almost surely, $Z_\varepsilon(t),Z'_\varepsilon(t),Z''_\varepsilon(t)$ converge uniformly on $I$ to $X(t),X'(t),X''(t)$, respectively, as $\varepsilon\to0$. One standard way to construct such an approximation process $Z_\varepsilon$ is to use a $C^\infty$ partition of unity on $I$ and to approximate locally the composition of a chart with the function $X$ by means of a convolution with a $C^\infty$ kernel.
In (28), $Y$ denotes the restriction to $I$ of a Gaussian centered stationary process satisfying the hypotheses of Proposition 3.1, defined on $\mathbb{R}^N$ and independent of $X$. Clearly, $X_\varepsilon$ satisfies condition $(H_k)$ for every $k$, since it has $C^\infty$ paths, and the independence of both terms in (28) ensures that $X_\varepsilon$
inherits from $Y$ the nondegeneracy condition in Definition 3.1. So, if $M^\varepsilon_I=\max_{t\in I}X_\varepsilon(t)$ and $F^\varepsilon_I(u)=P\{M^\varepsilon_I\le u\}$, one has
$$\begin{aligned}
{F^\varepsilon_I}'(u)&=(-1)^d\int_I\mathrm{E}\bigl[\det\bigl(\bar X^{t\prime\prime}_\varepsilon(t)-\bar\beta^{t\prime\prime}_\varepsilon(t)u\bigr)\mathbf{1}_{A_u(X^t_\varepsilon,\beta^t_\varepsilon)}\bigr]\,p_{X_\varepsilon(t),X'_\varepsilon(t)}(u,0)\,\lambda(dt)\\
&\quad+(-1)^{d-1}\int_{\partial I}\mathrm{E}\bigl[\det\bigl(\tilde X^{t\prime\prime}_\varepsilon(t)-\tilde\beta^{t\prime\prime}_\varepsilon(t)u\bigr)\mathbf{1}_{A_u(X^t_\varepsilon,\beta^t_\varepsilon)}\bigr]\,p_{X_\varepsilon(t),\tilde X'_\varepsilon(t)}(u,0)\,\sigma(dt).
\end{aligned}\tag{29}$$
We want to pass to the limit as $\varepsilon\to0$ in (29). We prove that the right-hand side is bounded if $\varepsilon$ is small enough and converges to a continuous function of $u$ as $\varepsilon\to0$. Since $M^\varepsilon_I\to M_I$, this implies that the limit is continuous and coincides with $F'_I(u)$ by a standard argument on convergence of densities. We consider only the first term in (29); the second is similar.
The convergence of $X_\varepsilon$ and of its first and second derivatives, together with the nondegeneracy hypothesis, implies that, uniformly on $t\in I$ as $\varepsilon\to0$, $p_{X_\varepsilon(t),X'_\varepsilon(t)}(u,0)\to p_{X(t),X'(t)}(u,0)$. The same kind of argument can be used for $\det(\bar X^{t\prime\prime}_\varepsilon(t)-\bar\beta^{t\prime\prime}_\varepsilon(t)u)$, on account of the form of the regression coefficients and the definitions of $X^t$ and $\beta^t$. The only difficulty is to prove that, for fixed $u$,
$$P\{C_\varepsilon\,\triangle\,C\}\to0\qquad\text{as }\varepsilon\to0,\tag{30}$$
where $C_\varepsilon=A_u(X^t_\varepsilon,\beta^t_\varepsilon)$ and $C=A_u(X^t,\beta^t)$.
We prove that
$$\text{a.s.}\quad\mathbf{1}_{C_\varepsilon}\to\mathbf{1}_C\qquad\text{as }\varepsilon\to0,\tag{31}$$
which implies (30). First of all, note that the event
$$L=\Bigl\{\sup_{s\in I\setminus\{t\}}\bigl(X^t(s)-\beta^t(s)u\bigr)=0\Bigr\}$$
has zero probability, as already mentioned. Second, from the definition of $X^t_\varepsilon(s)$ and the hypothesis, it follows that, as $\varepsilon\to0$, $X^t_\varepsilon(s)$ and $\beta^t_\varepsilon(s)$ converge to $X^t(s)$ and $\beta^t(s)$ uniformly on $I\setminus\{t\}$. Now, if $\omega\notin C$, there exists $\bar s=\bar s(\omega)\in I\setminus\{t\}$ such that $X^t(\bar s)-\beta^t(\bar s)u>0$, and for $\varepsilon>0$ small enough one has $X^t_\varepsilon(\bar s)-\beta^t_\varepsilon(\bar s)u>0$, which implies that $\omega\notin C_\varepsilon$.
On the other hand, let $\omega\in C\setminus L$. This implies that
$$\sup_{s\in I\setminus\{t\}}\bigl(X^t(s)-\beta^t(s)u\bigr)<0.$$
From the above-mentioned uniform convergence, it follows that if $\varepsilon>0$ is small enough, then $\sup_{s\in I\setminus\{t\}}(X^t_\varepsilon(s)-\beta^t_\varepsilon(s)u)<0$, hence $\omega\in C_\varepsilon$. Equation (31) follows.
So we have proved that the limit as $\varepsilon\to0$ of the first term in (29) is equal to the first term in (25).
It remains only to prove that the first term in (25) is a continuous function of $u$. For this purpose, it suffices to show that the function $u\mapsto P\{A_u(X^t,\beta^t)\}$ is continuous. This is a consequence of the inequality
$$\bigl|P\{A_{u+h}(X^t,\beta^t)\}-P\{A_u(X^t,\beta^t)\}\bigr|\le P\Bigl\{\Bigl|\sup_{s\in I\setminus\{t\}}\bigl(X^t(s)-\beta^t(s)u\bigr)\Bigr|\le|h|\sup_{s\in I\setminus\{t\}}|\beta^t(s)|\Bigr\}$$
and of Theorem 4.1, applied once again to the process $s\mapsto X^t(s)-\beta^t(s)u$ defined on $I\setminus\{t\}$. $\square$
5. Asymptotic expansion of $F'(u)$ for large $u$.

COROLLARY 5.1. Suppose that the process $X$ satisfies the conditions of Theorem 4.2 and that, in addition, $\mathrm{E}(X_t)=0$ and $\mathrm{Var}(X_t)=1$. Then, as $u\to+\infty$, $F'(u)$ is equivalent to
$$\frac{u^d}{(2\pi)^{(d+1)/2}}\,e^{-u^2/2}\int_I\bigl(\det\Lambda(t)\bigr)^{1/2}\,dt,\tag{32}$$
where $\Lambda(t)$ is the variance-covariance matrix of $X'(t)$.
Note that (32) is in fact the derivative of the bound for the distribution function that can be obtained by Rice's method [Azaïs and Delmas (2002)] or by the expected Euler characteristic method [Taylor, Takemura and Adler (2004)].
PROOF OF COROLLARY 5.1. Set $r(s,t):=\mathrm{E}\bigl(X(s)X(t)\bigr)$, and for $i,j=1,\dots,d$,
$$r_{i;}(s,t):=\frac{\partial}{\partial s_i}r(s,t),\qquad r_{ij;}(s,t):=\frac{\partial^2}{\partial s_i\,\partial s_j}r(s,t),\qquad r_{i;j}(s,t):=\frac{\partial^2}{\partial s_i\,\partial t_j}r(s,t).$$
For every $t$, $i$ and $j$, $r_{i;}(t,t)=0$ and $\Lambda_{ij}(t)=r_{i;j}(t,t)=-r_{ij;}(t,t)$; thus $X(t)$ and $X'(t)$ are independent. The regression formulae imply that $a^t_s=r(s,t)$ and
$$\beta^t(s)=\frac{1-r(t,s)}{n(s,t)}.$$
This implies that $\bar\beta^{t\prime\prime}(t)=\Lambda(t)$ and that the possible limit values of $\beta^t(s)$ as $s\to t$ are in the set $\{v^T\Lambda(t)v:v\in S^{d-1}\}$. Due to the nondegeneracy condition, these quantities are bounded below by a positive constant. On the other hand, for $s\ne t$, $\beta^t(s)>0$. This shows that for every $t\in I$ one has $\inf_{s\in I\setminus\{t\}}\beta^t(s)>0$. Since for every $t\in I$ the process $X^t$ is bounded, it follows that a.s. $\mathbf{1}_{A_u(X^t,\beta^t)}\to1$ as $u\to+\infty$.
Also,
$$\det\bigl(\bar X^{t\prime\prime}(t)-\bar\beta^{t\prime\prime}(t)u\bigr)\sim(-1)^d\det\bigl(\Lambda(t)\bigr)u^d\qquad\text{as }u\to+\infty.$$
Dominated convergence shows that the first term in (25) is equivalent to
$$\int_Iu^d\det(\Lambda_t)\,(2\pi)^{-1/2}e^{-u^2/2}\,(2\pi)^{-d/2}\bigl(\det(\Lambda_t)\bigr)^{-1/2}\,dt=\frac{u^d}{(2\pi)^{(d+1)/2}}\,e^{-u^2/2}\int_I\bigl(\det(\Lambda_t)\bigr)^{1/2}\,dt.$$
The same kind of argument shows that the second term is $O(u^{d-1}e^{-u^2/2})$, which completes the proof. $\square$
Acknowledgment. We thank an anonymous referee for very carefully reading the first version of this work and for very valuable suggestions.
REFERENCES
ADLER, R. J. (1981). The Geometry of Random Fields. Wiley, London.
ADLER, R. J. (1990). An Introduction to Continuity, Extrema and Related Topics for General Gaussian Processes. IMS, Hayward, CA.
AZAÏS, J.-M. and DELMAS, C. (2002). Asymptotic expansions for the distribution of the maximum of a Gaussian random field. Extremes 5 181–212.
AZAÏS, J.-M. and WSCHEBOR, M. (2001). On the regularity of the distribution of the maximum of one-parameter Gaussian processes. Probab. Theory Related Fields 119 70–98.
AZAÏS, J.-M. and WSCHEBOR, M. (2002). On the distribution of the maximum of a Gaussian field with d parameters. Preprint. Available at http://www.lsp.ups-tlse.fr/Azais/publi/ds1.pdf.
BRILLINGER, D. R. (1972). On the number of solutions of systems of random equations. Ann. Math. Statist. 43 534–540.
CABAÑA, E. M. (1985). Esperanzas de integrales sobre conjuntos de nivel aleatorios. In Actas del Segundo Congreso Latinoamericano de Probabilidades y Estadística Matemática 65–81.
CRAMÉR, H. and LEADBETTER, M. R. (1967). Stationary and Related Stochastic Processes. Wiley, New York.
CUCKER, F. and WSCHEBOR, M. (2003). On the expected condition number of linear programming problems. Numer. Math. 94 419–478.
DIEBOLT, J. and POSSE, C. (1996). On the density of the maximum of smooth Gaussian processes. Ann. Probab. 24 1104–1129.
FEDERER, H. (1969). Geometric Measure Theory. Springer, New York.
FERNIQUE, X. (1975). Régularité des trajectoires des fonctions aléatoires gaussiennes. École d'Été de Probabilités de Saint-Flour IV. Lecture Notes in Math. 480 1–96. Springer, New York.
LANDAU, H. J. and SHEPP, L. A. (1970). On the supremum of a Gaussian process. Sankhyā Ser. A 32 369–378.
LIFSHITS, M. A. (1995). Gaussian Random Functions. Kluwer, Dordrecht.
MILNOR, J. W. (1965). Topology from the Differentiable Viewpoint. Univ. Press of Virginia.
PITERBARG, V. I. (1996a). Asymptotic Methods in the Theory of Gaussian Processes and Fields. Amer. Math. Soc., Providence, RI.
PITERBARG, V. I. (1996b). Rice's method for large excursions of Gaussian random fields. Technical Report 478, Univ. North Carolina.
TAYLOR, J. E. and ADLER, R. J. (2002). Euler characteristics for Gaussian fields on manifolds. Ann. Probab. 30 533–563.
TAYLOR, J. E., TAKEMURA, A. and ADLER, R. (2004). Validity of the expected Euler characteristic heuristic. Ann. Probab. To appear.
TSIRELSON, V. S. (1975). The density of the maximum of a Gaussian process. Theory Probab. Appl. 20 847–856.
WEBER, M. (1985). Sur la densité du maximum d'un processus gaussien. J. Math. Kyoto Univ. 25 515–521.
YLVISAKER, D. (1968). A note on the absence of tangencies in Gaussian sample paths. Ann. Math. Statist. 39 261–262.
LABORATOIRE DE STATISTIQUE
ET PROBABILITÉS
UMR-CNRS C5583
UNIVERSITÉ PAUL SABATIER
118 ROUTE DE NARBONNE
31062 TOULOUSE CEDEX 4
FRANCE
E-MAIL: azais@cict.fr

CENTRO DE MATEMÁTICA
FACULTAD DE CIENCIAS
UNIVERSIDAD DE LA REPÚBLICA
CALLE IGUA 4225
11400 MONTEVIDEO
URUGUAY
E-MAIL: wscheb@fcien.edu.uy