
Multiple attenuation: an overview of recent advances and the road ahead (1999)


ARTHUR B. WEGLEIN, ARCO Exploration and Production Technology, Plano, Texas, U.S.

This paper is an overview of the current state of multiple attenuation and developments that we might anticipate in the near future.
The basic model in seismic processing assumes that
reflection data consist of primaries only. If multiples are
not removed, they can be misinterpreted as, or interfere
with, primaries. This is a longstanding and only partially
solved problem in exploration seismology. Many methods
exist to remove multiples, and they are useful when their
assumptions and prerequisites are satisfied. However,
there are also many instances when these assumptions are
violated or where the prerequisites are difficult or impossible to attain; hence, multiples remain a problem. This
motivates the search for new demultiple concepts, algorithms, and acquisition techniques to add to, and enhance,
our toolbox of methods.
Furthermore, interest in multiple attenuation has been
rejuvenated due to the industry trend toward more complex, costly, and challenging exploration plays. These
include deep water with a dipping ocean bottom and targets that are subsalt and sub-basalt. These circumstances
can cause traditional methods to bump up hard against
their assumptions. The heightened economic risk and complexity of these E&P objectives raise both the technology
bar and the associated stakes for methods that can accommodate less a priori information and fewer restrictions
and unrealistic assumptions.
Methods that can reach that level of effectiveness often
place extra demands on processing costs, and on a more
complete sampling and rigorous definition of the seismic
experiment (e.g., the need for the source signature in water).
However, that trade-off and added expense can be a real
bargain if they enable the identification and removal of
heretofore inaccessible multiples while preserving primaries. Indeed, being able to distinguish, for example, a gas sand from a multiple under a broader set of complex circumstances makes the extra processing cost pale in comparison with the value of reduced risk and more reliable drill or no-drill decisions.
Two basic approaches to multiple attenuation. Methods
that attenuate multiples can be classified as belonging to
two broad categories: (1) those that seek to exploit a feature or property that differentiates primary from multiple
and (2) those that predict and then subtract multiples from
seismic data. The former are typically filtering methods,
and the latter are generally based on the prediction from
either modeling or inversion of the recorded seismic wavefield. This classification is not rigid; methods will often have
aspects associated with each category.
There are some who have proposed an alternate point
of view: Primaries and multiples are considered as signal
to be imaged or otherwise utilized. We anticipate further
developments of this more inclusive approach. The current dominant viewpoint is the exclusive one, where primaries are signal and multiples are noise.
There is tremendous value in the latter approach, since the concepts of depth and time, and the separation of reflection from propagation, are relatively simple for the model of signal as primaries. The focus of this review will be on methods of multiple attenuation.


Filtering methods. These methods exploit some difference between multiple and primary. This difference may only become apparent in a particular domain, which is why these techniques employ so many transformations.
Table 1 shows, for filter methods, the various domains
used today, the type of algorithm, and the feature being
exploited.
Note that the feature being exploited can be roughly
categorized into periodicity and separability. The first
group assumes that the key difference between multiples
and primaries is that the former are periodic while the primaries are not. The second group assumes that by applying some transform to the data, the separation between
primaries and multiples can be realized by muting a portion of the new domain. The transform is based on a feature that differentiates signal from noise, usually the
difference in moveout between primary and multiple
events. But the spatial behavior of a targeted multiple
event can also define such a transform.
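To make the transform-and-mute idea concrete, here is a minimal sketch (Python with NumPy; the function names, the toy gather, and the slowness cutoff are all hypothetical choices, and a production Radon filter would use a proper least-squares inverse rather than the crude adjoint shown here). A simple linear tau-p (slant-stack) panel is formed, the slow region where a multiple-like event maps is muted, and the data are mapped back to t-x:

```python
import numpy as np

def slant_stack(gather, dt, offsets, slownesses):
    """Forward linear tau-p (slant-stack) transform: shift each trace by p*x and sum."""
    nt = gather.shape[0]
    t = np.arange(nt) * dt
    panel = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            panel[:, ip] += np.interp(t + p * x, t, gather[:, ix], left=0.0, right=0.0)
    return panel

def adjoint_slant_stack(panel, dt, offsets, slownesses):
    """Crude adjoint (not a true inverse) mapping the tau-p panel back to t-x."""
    nt = panel.shape[0]
    t = np.arange(nt) * dt
    gather = np.zeros((nt, len(offsets)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            gather[:, ix] += np.interp(t - p * x, t, panel[:, ip], left=0.0, right=0.0)
    return gather / len(slownesses)

if __name__ == "__main__":
    dt, nt = 0.004, 500
    offsets = np.arange(24) * 50.0                         # m
    gather = np.zeros((nt, len(offsets)))
    for ix, x in enumerate(offsets):                       # two toy linear events
        gather[int((0.4 + 0.20e-3 * x) / dt), ix] += 1.0   # fast "primary" (p = 0.20 ms/m)
        gather[int((0.6 + 0.45e-3 * x) / dt), ix] += 1.0   # slow "multiple" (p = 0.45 ms/m)
    slownesses = np.linspace(0.0, 0.6e-3, 61)              # s/m
    panel = slant_stack(gather, dt, offsets, slownesses)
    panel[:, slownesses > 0.30e-3] = 0.0                   # mute the slow (multiple) region
    filtered = adjoint_slant_stack(panel, dt, offsets, slownesses)
```

The same separability logic underlies the f-k and Radon reject filters in Table 1; only the transform and the muted region change.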
The principal-component filter corresponds to a mute of the principal components of a particular estimate of the covariance matrix. In this case, we attenuate the targeted multiple rather than all multiples in the data.
The stacking technique is slightly different in that we
do not mute, but we still rely on the moveout difference
between the NMO-corrected primaries and the uncorrected multiples.
Tau-p deconvolution has aspects that belong to both the
filtering and the prediction and subtraction categories; it has
recently been extended to accommodate dipping layers.
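For the periodicity-based entries (predictive deconvolution in t or tau-p), the following single-trace sketch shows the core operation: design a gapped Wiener prediction filter from the trace autocorrelation and subtract the predictable (periodic, multiple-like) part. The gap, filter length, prewhitening, and synthetic reverberation train are illustrative choices only:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, gap, nfilt, prewhiten=1e-3):
    """Gapped predictive deconvolution of one trace.

    A Wiener filter predicting trace[t] from trace[t-gap], ..., trace[t-gap-nfilt+1]
    is designed from the autocorrelation; the prediction (the periodic part,
    e.g. a reverberation train) is then subtracted from the trace.
    """
    ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
    r = ac[:nfilt].copy()
    r[0] *= 1.0 + prewhiten                    # stabilize the normal equations
    g = ac[gap:gap + nfilt]                    # right-hand side for a gapped filter
    f = solve_toeplitz(r, g)                   # Toeplitz normal equations
    pred = np.convolve(trace, f)               # predicted (periodic) component
    out = trace.copy()
    out[gap:] -= pred[:len(trace) - gap]       # prediction-error output
    return out

if __name__ == "__main__":
    dt, nt = 0.004, 750
    trace = np.zeros(nt)
    trace[75] = 1.0                            # a "primary" at 0.3 s
    period = int(0.2 / dt)                     # 0.2-s water-layer reverberation
    for n in range(1, 10):
        trace[75 + n * period] += (-0.6) ** n  # decaying periodic multiple train
    clean = predictive_decon(trace, gap=period - 5, nfilt=20)
```

As the text notes, this relies entirely on the multiples being periodic and the primaries not; once periodicity breaks down with offset or with dipping reflectors, the same filter starts to damage primaries.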
Of course, when these assumptions do not hold, the
methods can fail. For example, multiples become less periodic with offset; primaries may be periodic; multiples and
primaries may overlap; multiples of diffractions are not
accommodated; and primaries and multiples from either
curved and dipping reflectors or beneath a laterally varying overburden violate assumptions of periodicity and
hyperbolic moveout. Nevertheless, mild violations of these
assumptions can lead to less effective but still useful results.
Table 1. Filter methods: domains, algorithms, and the feature exploited.

Domain            Algorithm                               Feature
t                 predictive decon                        periodicity
tau-p             Radon transform + predictive decon      periodicity
t-x               stacking                                separability
principal comp.   eigenimages + reject filter             separability
f-k               2-D FT + reject filter                  separability
tau-p             Radon transform + reject filter         separability
f-k               3-D FT + reject filter                  separability

Two general caveats:
1) when the assumptions behind a multiple-attenuation method are violated, the results can be not only the incomplete removal of multiples but also the concomitant (and perhaps even more serious) damage to primaries;


2) beware of the fallacy of expressing a 1-D method in
terms of 2-D or 3-D data and believing that a complete
multidimensional method has been derived. There are
many 2-D and 3-D phenomena (e.g., diffractions) that
have no 1-D analog. Multidimensional methods derive
from multidimensional theory.
Filtering is typically less costly than prediction and subtraction; hence, when effective, it is often the method of
choice. Moreover, among new developments in filter methods, we anticipate advances in interpretation-driven
schemes, such as 3-D prestack versions of targeted multiple techniques.
Wavefield prediction and subtraction. In these procedures, a wave-theoretic concept of how a given multiple
type is generated is used to predict and subtract the multiple. At present there are three different wavefield prediction and subtraction methods: wavefield extrapolation;
feedback loop; and inverse-scattering series. Each has a
unique and distinct concept concerning the generation
and removal of multiples; and each has a different required
level of a priori and/or a posteriori information. The latter can be in the form of an algorithmic requirement for
subsurface information or the need for the intervention of
an interpreter with an interjection of judgment, analysis,
or discrimination.
Wavefield extrapolation is a modeling and subtraction
method, whereas the feedback and inverse-scattering methods are based on the prediction mechanisms within two
different inversion procedures.
Wavefield extrapolation. The wavefield extrapolation
method models wave propagation in the water column. It
takes the data one round trip through the water column,
and then adaptively subtracts this up- and downward-continued data from the original.
This method requires (1) an a priori estimate of the water depth and (2) an a posteriori estimate of a set of
parameters for an adaptive matching and subtraction
process. These matching coefficients are derived within
the context of a phenomenological/statistical model and
therefore have an implicit dependence on the water-bottom reflection coefficient and the wavelet. This implicit
dependence makes the parameter difficult to interpret,
exploit, or estimate in terms of physical processes. However, the very important upside of this implicit or indirect dependence is that the method doesn't require explicit knowledge of the ocean-bottom reflection coefficient or the wavelet. The method has a demonstrated effectiveness and an important niche in seismic processing.
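A minimal sketch of the round-trip idea follows (vertical-incidence approximation only: a flat water bottom of known depth and constant water velocity, so the up- and downward continuation collapses to a simple time delay with a polarity flip at the free surface; the names and the single matching scalar are illustrative stand-ins for the adaptive matching and subtraction described above):

```python
import numpy as np

def water_roundtrip(data, dt, water_depth, v_water=1500.0):
    """Predict water-layer multiples by delaying the data one two-way pass
    through the water column, with a polarity flip for the free-surface
    reflection (a 1-D stand-in for full up/downward continuation)."""
    delay = int(round(2.0 * water_depth / v_water / dt))
    pred = np.zeros_like(data)
    if delay < data.shape[0]:
        pred[delay:] = -data[:data.shape[0] - delay]
    return pred

def adaptive_scalar_subtract(data, pred):
    """Least-squares scalar that best matches the prediction to the data,
    then subtract; real implementations solve for short filters in windows."""
    scale = np.sum(pred * data) / (np.sum(pred * pred) + 1e-12)
    return data - scale * pred, scale

# usage sketch: data is an (nt, ntraces) gather sampled at dt seconds
# pred = water_roundtrip(data, dt=0.004, water_depth=300.0)
# demultipled, scale = adaptive_scalar_subtract(data, pred)
```

The scalar found by the least-squares match plays the role of the phenomenological matching coefficients mentioned above: it silently absorbs the water-bottom reflection coefficient and wavelet effects, which is precisely why the method needs neither of them explicitly.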


Free-surface multiple elimination: the feedback and
inverse-scattering methods. The surface multiple elimination methods derive from the physics of waves reflecting at a free surface. A relationship is established between
the recorded data, containing free-surface multiples, and
the desired data with those multiples absent. These derivations make no assumption about the medium below the
receivers. There are series and full operator solutions that
can be realized in different transformed data domains.
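One widely published way to write such a series solution, in the feedback (surface-related multiple elimination) form and suppressing details of ghosts, obliquity, and sign conventions, is, per temporal frequency $\omega$,

P_0(\omega) \;=\; P - A\,P\,P + A^2\,P\,P\,P - \cdots \;=\; \sum_{n=1}^{\infty} (-A)^{n-1} P^{\,n},

where P is the recorded data arranged as a matrix over source and receiver surface positions, P_0 is the same data with free-surface multiples removed, matrix products stand for multidimensional convolution over the acquisition surface, and the surface operator A contains the inverse of the source wavelet (its sign and normalization depend on the conventions chosen for the wavelet and the free-surface reflection coefficient). The n-th term predicts free-surface multiples of order n-1 directly from the data; no subsurface information enters.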
The feedback and inverse-scattering techniques provide
different methods for removing all multiples. The former
is based on a free-surface and interface model and the latter on a free-surface and point-scatterer model.
The free-surface and interface removal and free-surface
and point-scatterer formulations both model the free-surface
reflector as the generator of free-surface multiples. They differ in their modeling of the source: the former method, in its simplest form, models the source as a vertical dipole in the
water, whereas the latter models the source as a monopole.
To compensate for the actual monopole nature of the source,
dipole data are approximated by removing the receiver ghost
and leaving the source ghost intact. In the inverse-scattering formulation, the presence of the obliquity factor reflects
its modeling of the monopole source. While the two formulations for free-surface multiples are conceptually and
algorithmically distinct, in practice the differences between
the two methods are often overshadowed by other factors
(e.g., cable feathering, source-and-receiver array effects, and
errors in deghosting). However, there are circumstances
where the differences matter; e.g., when seeking to interpret
the results of source-signature estimates (especially with
shallow targets and long offsets); and when interest is in
increasing the number of deterministic phenomena accommodated, thereby reducing the burden on
statistical/adaptive techniques.
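To make the prediction step itself concrete, here is a minimal numerical sketch (hypothetical function name; a severely idealized geometry with coincident, regularly sampled source and receiver lines, a known wavelet, and no obliquity or ghost corrections) of predicting free-surface multiples by convolving the data with themselves over the surface, one frequency at a time:

```python
import numpy as np

def predict_fs_multiples(data, wavelet):
    """First-order free-surface multiple prediction by data-with-data convolution.

    data    : (nt, ns, nr) array, data[:, s, r] is the trace for source s, receiver r
              (assumes coincident source and receiver grids, ns == nr)
    wavelet : (nt,) estimate of the source signature, deconvolved because the
              doubly convolved prediction carries the wavelet twice
    """
    nt = data.shape[0]
    D = np.fft.rfft(data, axis=0)                  # to the frequency domain
    W = np.fft.rfft(wavelet, n=nt)
    eps = 1e-3 * np.max(np.abs(W))                 # simple stabilization of 1/W
    M = np.zeros_like(D)
    for i in range(D.shape[0]):
        # the surface integral over the intermediate reflection point
        # becomes a matrix product of the data with themselves
        M[i] = D[i] @ D[i] / (W[i] + eps)
    return np.fft.irfft(M, n=nt, axis=0)
```

The prediction is then matched and subtracted from the recorded data (an energy-minimization sketch appears later in this article); the missing near offsets and the wavelet that this little function simply assumes are exactly the practical prerequisites discussed next.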
Practical prerequisites include (1) estimating the source signature and (2) compensating for missing near traces. This signature is actually a combination of source signature, free-surface reflection coefficient, instrument response, and
algorithmic and numerical factors. A measured source signature could complement and enhance the efficiency of current processing approaches. Missing near traces can often
be reasonably estimated, using trace-extrapolation methods,
when the nearest phone is within the precritical region of
the ocean-bottom reflection. There are important cases, for
example in shallower water, when current extrapolation
methods fail; new acquisition or processing methods will be
needed. Measurement of the near traces seems to be a direct

JANUARY 1999

THE LEADING EDGE

Downloaded 10 Sep 2011 to 99.10.237.97. Redistribution subject to SEG license or copyright; see Terms of Use at http://segdl.org/

41

and anticipated solution. In the past, these prerequisites


seemed insurmountable practical hurdles. But today, certain
consistently reliable methods for satisfying them have been
demonstrated; others will follow, and this new technology
will reach its full promise.
Specifically, the physics behind these free-surface multiple elimination methods requires absolutely no a priori or a
posteriori information; however, the methods that are in
current use for finding the prerequisite wavelet (typically
based on minimization of the energy) sometimes require an
interpretive intervention.
In addition, one of the underlying strengths of the free-surface multiple elimination techniques (the ability to separate primary from multiple with arbitrarily close moveout)
can be compromised by the energy-minimization condition
for the wavelet. To advance this technology, it is important
that we clearly distinguish between assumptions behind the
physics of the method itself and assumptions of the procedures designed to satisfy the prerequisites of the method.
The energy-minimization criterion has an impressive
track record, in its various adaptive and global-search formulations, for providing a useful wavelet for these multiple-attenuation methods.
However, for the feedback and inverse-scattering methods to reach their potential, wavelet estimation methods will
need to be developed that can avoid the current pitfalls.
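A minimal sketch of the energy-minimization criterion itself (illustrative only: real implementations estimate short filters in local time-space windows, often with global-search or simulated-annealing strategies for the wavelet, and with more careful regularization) is to find the matching filter that, convolved with the predicted multiples, minimizes the energy of the subtraction output, trace by trace:

```python
import numpy as np

def matching_filter_subtract(data, pred, nfilt=21):
    """Energy-minimizing (least-squares) adaptive subtraction, trace by trace.

    For each trace, find the filter f minimizing ||data - f * pred||^2
    (* = convolution) and subtract the matched prediction.
    """
    nt, nx = data.shape
    out = np.empty_like(data)
    half = nfilt // 2
    for ix in range(nx):
        d, m = data[:, ix], pred[:, ix]
        # convolution matrix of the predicted-multiple trace
        # (np.roll wraps at the ends; a windowed version would zero-pad instead)
        A = np.stack([np.roll(m, lag) for lag in range(-half, half + 1)], axis=1)
        f, *_ = np.linalg.lstsq(A, d, rcond=None)   # minimize ||d - A f||^2
        out[:, ix] = d - A @ f                      # subtract the matched prediction
    return out
```

The pitfall flagged in the text is visible here: wherever a primary overlaps the predicted multiple, minimizing total output energy will happily remove primary energy as well, which is exactly how the matching step can compromise the close-moveout separation that the underlying physics allows.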
In addition to understanding how well different multiple-attenuation techniques perform under different subsurface conditions (e.g., 1-D, 2-D, 2.5-D, and different
degrees of cross-line 3-D complexity), there are several key
and related issues that are high in technical and economic
priority and play an important role in both appropriate current applications and in charting a path for the future.
Among them are: (1) how current 3-D acquisition impacts
(and influences the development of) different multipleattenuation techniques under different subsurface conditions; and (2) charting a course that identifies the potential
added value that derives from enhancements, in acquisition and processing, needed to satisfy the more data-demanding techniques and, thereby, specifically identify
geologic circumstances where these enhancements could
have a differential and significant cost/benefit.
Internal multiples: the feedback and inverse-scattering
methods. Free-surface multiples are multiples that have
experienced at least one downward reflection at the air-water free surface; internal multiples are multiples that have all of their downward reflections below the free surface. Internal multiples have experienced reflectors that are in general more remote and harder to define precisely than those of free-surface multiples; hence, internal multiples are more difficult to predict and attenuate. Furthermore, even in 1-D circumstances, internal multiples are in general more difficult than free-surface multiples to remove using methods that depend on moveout differences, since internal multiples have often experienced velocities similar to (or even higher than) those of nearby primaries.
The feedback method models primaries and internal
multiples in terms of the actual medium and interfaces
(reflectors) that are the sources of those events. The inverse-scattering method models primaries and internal multiples in terms of reference-medium propagation
(propagation in water) and scattering at every point where
the properties of the earth differ from water.
The two fundamentally different models for the generation and associated inverse removal of internal multiples (the interface and point-scatterer models) lead to completely different (1) cataloging of multiples, (2) algorithms for their attenuation, and (3) requirements for a priori or a posteriori information.
The feedback method, with its associated interface
model for internal multiples, proceeds from one reflector
down to the next and removes all internal multiples that
have their shallowest downward reflection at that reflector. Realizing that program within the feedback method requires at least an implicit estimate of the velocity model for the downward-continuation operators and updating of those operators. The latter updating typically depends on the flatness criterion for image gathers. Although we recognize that this is a commonly used criterion, it is nevertheless a significant assumption to consider it a necessary and
sufficient condition for a correct downward continuation,
especially under complex circumstances.
An alternate realization of the feedback program for
internal multiples appears to avoid certain aspects of the a
priori information by substituting the infusion of a posteriori information, through the judgment of an interpreter
who decides at each interface what is primary and what is
multiple. The feedback method of internal multiple removal
would seem to be particularly effective and most appropriate
when the internal-multiple-generating reflectors are shallow and smooth and when the macromodel needed to reach
them from the measurement surface is not very complex.
The inverse-scattering method for attenuating internal
multiples derives from the multiple prediction and subtraction subseries that reside within the only multidimensional direct inversion methodology: the inverse scattering
series.
The removal of multiples is viewed as one of the steps,
stages, or tasks that a direct inversion method would have
to perform prior to imaging and inverting primaries for relative changes in earth mechanical properties. The inverse-scattering series performs direct inversion; hence, the
inverse-scattering series must contain a part of itself, i.e., a
subseries, that is devoted to the task of removing multiples.
If the overall series starts with no a priori information, then
each task is carried out without a priori information. The
distinct subseries that attenuate free-surface and internal
multiples have been identified. Each term in the internal
multiple-attenuating series provides a mechanism for predicting and attenuating all multiples that have experienced
a certain number of reflections, independent of the location
of those reflectors. Absolutely no a priori or a posteriori information is required about the subsurface velocity or structure below the hydrophones. No iteration or interpretive
intervention is ever needed. The method doesn't depend
on periodicity or moveout differences or stripping. The
inverse-scattering method is the only multidimensional
method for attenuating all multiples that doesn't require any
form of a priori or a posteriori information. We would expect
that the inverse-scattering series for attenuating internal
multiples would be particularly well suited and appropriate when the reflectors that generate the internal multiples
are either not shallow, or not simple, or not smooth, or when
a complex macromodel would be needed to carry out the
downward continuation to a reflector.
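For orientation, the leading term of that internal-multiple subseries is often quoted in a schematic 1-D, normal-incidence form (the full multidimensional expression carries additional wavenumber integrals and obliquity factors; see the Araujo et al. and Weglein et al. references below):

b_3(k) \;=\; \int_{-\infty}^{\infty} dz_1\, e^{ikz_1}\, b_1(z_1) \int_{-\infty}^{z_1-\varepsilon} dz_2\, e^{-ikz_2}\, b_1(z_2) \int_{z_2+\varepsilon}^{\infty} dz_3\, e^{ikz_3}\, b_1(z_3),

where b_1(z) is the input data (after free-surface demultiple) mapped to pseudo-depth with the constant water speed, \varepsilon is a small positive parameter, and the integration limits enforce the lower-higher-lower relationship z_1 > z_2 < z_3 that characterizes the subevents of an internal multiple. Nothing but the data and the water speed enters, which is the algebraic expression of the claim above that no a priori or a posteriori subsurface information is required.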
A strength of the feedback method is that, when it can
carry out its program to a given reflector, the cost per reflector is roughly twice the cost of the free-surface algorithm.
The cost of the inverse-scattering series approach to internal multiple attenuation is considerably greater.
Early tests indicate that the incremental cost of performing inverse-scattering internal multiple attenuation is about an order of magnitude (10 times) greater than that of performing free-surface multiple attenuation alone. This figure factors in the overhead costs of data preprocessing and quality control that are required for free-surface multiple attenuation; the ratio is significantly larger if you compare only the compute cycle time of the inverse-scattering internal multiple algorithm with that of the free-surface case. However, it is important to note that the inverse-scattering procedure accommodates all reflectors at once.
It certainly appears, from both a cost/benefit and a domain-of-applicability point of view, that the feedback (interface) model and the inverse-scattering (point-scatterer) model could evolve into complementary approaches to the important problem of internal multiple attenuation. In any given case, you might choose one or the other or a combination of the two. Early field tests of the inverse-scattering and interface methods for internal multiples are encouraging. Table 2 summarizes the prediction and subtraction methods.

Table 2. Prediction and subtraction methods.

                           Wavefield extrapolation       Feedback                           Inverse-scattering series
                           (modeling and subtraction)    (inversion)                        (inversion)

Types of multiples         Water-bottom, peg-leg, and    Free-surface multiples;            Free-surface multiples;
                           first-layer reverberations    internal multiples (all orders,    internal multiples (all interfaces,
                                                         one interface at a time)           one order at a time)

Fundamental physical       Water layer and               Free surface +                     Free surface +
unit                       ocean bottom                  interface (reflector)              point scatterer

Additional information     Water depth (a priori);       None for free-surface;             None for free-surface
needed                     adaptive subtraction          internal: a priori velocity        or internal multiples
                           (a posteriori)                model, implicit in the CFP
                                                         operators and their updating,
                                                         or an a posteriori interpretive
                                                         decision at each reflector
Conclusions. Multiple attenuation methods continue to
develop, evolve and mature, driven by the confluence of
heightened technical challenge and increased economic risk.
Whereas filter methods are continuously moving toward
greater effectiveness, the wavefield prediction and subtraction techniques are the current point men in the assault on
the most resistant and troublesome multiples. The latter
allow for the most complex subsurfaces but require clarity
and completeness in the seismic experiment. Although they
are in general more demanding, their demands are in a realm
where we are able, in principle, to satisfy them. They are a
reasonable trade for earlier demands or assumptions about
the subsurface that are intrinsically beyond our reach or knowledge. All new methods (including multidimensional wave-theoretic procedures) attempt to capture ever more
complete and realistic descriptions of the phenomena that
seismic signals experience. Although progress is measured
by ever more complete descriptions and models, they are
never complete; thus the constant and ubiquitous need for statistical, interpretive, and adaptive procedures to accommodate the components of reality beyond the current best deterministic model. As the latter tools (those that address out-of-deterministic-model phenomena) become more general and
flexible, they increase their practical contribution. However,
the clearest measure of overall advancement is determined
by the movement of deterministic methods into the domain
of statistics and adaptive-interpretive techniques.
The dominant practical issue today is the 3-D application of these techniques with current 3-D acquisition. For
example, the lack of well-sampled cross-line data provides
the impetus for developing new extrapolation, interpolation, and acquisition techniques.
It is difficult to overstate the significance
of the landmark event that developed as a byproduct of
the absolute need for the seismic wavelet, within the bandwidth, for the inverse-scattering and feedback methods.
Current standard practice uses the output of the multiple
attenuation itself to find its wavelet. The result is that
under a large set of circumstances we have made progress
toward processing the absolute pressure field, within the
bandwidth, due to an impulsive source.
That it is possible to estimate the source signature from
typical towed streamer data, within the bandwidth, is a singular event in seismic processing history. The eventual impact of this true pressure-field band-limited impulse response determination on other yet-to-be-developed seismic methods and applications could even overshadow the
multiple-attenuation methods from which it emanated.
Further work is needed to determine the source wavelet
under a broader set of circumstances.
Basically, there are two current approaches to multiple
attenuation: (1) Distinguish and separate multiple from primary and (2) predict and subtract multiples from the data.
These broad categories contain several subgroups of methods, each, in turn, with specific strengths and limitations.
Filter methods are typically less expensive than prediction
and subtraction; and, when effective, are the methods of
choice.
The attitude we advocate is a tool-box approach, where
these strengths and limitations are understood and where
the appropriate method is chosen based on effectiveness,
cost, and processing objectives.
Suggestions for further reading. Inverse scattering series
for multiple attenuation: An example with surface and
internal multiples by Araujo et al. (SEG 1994 Expanded
Abstracts). Water reverberations: their nature and elimination by Backus (GEOPHYSICS, 1959). Wavefield extrapolation techniques for prestack attenuation of water
reverberations by Benth and Sonneland (presented at SEG's 1983 Annual Meeting). Deepwater peg-legs and
multiples: emulation and suppression by Berryhill and
Kim (GEOPHYSICS, 1986). Nonlinear inverse scattering for
multiple attenuation: Application to real data, Part 1 by
Carvalho et al. (SEG 1992 Expanded Abstracts). Surface multiple attenuation: theory, practical issues, and examples
by Dragoset (1992 EAGE Abstracts). Multichannel attenuation of high-amplitude peg-leg multiples: Examples
from the North Sea by Doicin and Spitz (EAEG 1991
Annual Meeting). Multichannel attenuation of water-bottom peg-legs pertaining to a high-amplitude reflection by
Doicin and Spitz (SEG 1991 Expanded Abstracts). Seismic
applications of acoustic reciprocity by Fokkema and van den
Berg (Elsevier, 1993). Suppression of multiple reflections
using the Radon transform by Foster and Mosher
(GEOPHYSICS, 1992). Removal of surface-related diffracted
multiples by Hadidi et al. (1995 EAGE Abstracts). Inverse
velocity stacking for multiple elimination by Hampson
(CSEG Journal, 1986). A strategy for multiple suppression
by Hardy and Hobbs (First Break, 1991). Source signature
estimation based on the removal of first-order multiples
by Ikelle et al. (SEG 1995 Expanded Abstracts). Radon multiple elimination, a practical method for land data by
Kelamis et al. (SEG 1990 Expanded Abstracts). The suppression of surface multiples on seismic records by
Kennett (Geophysical Prospecting, 1979). Targeted multiple
attenuation by Kneib and Bardan (EAGE 1994 Annual
Meeting). Predictive deconvolution in shot-receiver
space by Morley and Claerbout (GEOPHYSICS, 1983). 2-D
multiple reflections by Riley and Claerbout (GEOPHYSICS,
1976). Principles of digital Wiener filtering by Robinson
and Treitel (Geophysical Prospecting, 1967). Decomposition
(DECOM) approach to wavefield analysis with seismic
reflection records by Ryu (GEOPHYSICS, 1982). Long-period multiple suppression by predictive deconvolution
in the x-t domain by Taner et al. (Geophysical Prospecting,
1995). Application of homomorphic deconvolution to
seismology by Ulrych (GEOPHYSICS, 1971). Surface-related
multiple elimination: an inversion approach by Verschuur
(Ph.D. dissertation, ISBN 90-9004520-1). Adaptive surface-related multiple elimination by Verschuur et al.
(GEOPHYSICS, 1992). Attenuation of complex water-bottom
multiples by wave-equation based prediction and subtraction by Wiggins (GEOPHYSICS, 1988). Velocity-stack
processing by Yilmaz (Geophysical Prospecting, 1989). Why
don't we measure seismic signature by Ziolkowski
(GEOPHYSICS, 1991). Multiple suppression by single channel and multichannel deconvolution in the tau-p domain
by Lokshtanov (SEG 1995 Expanded Abstracts). Comparing
the interface and point-scatterer methods for attenuating
internal multiples: a study with synthetic data, Part 1 by
Verschuur et al. (SEG 1998 Expanded Abstracts). Comparing
the interface and point-scatterer methods for attenuating
internal multiples: a study with synthetic data, Part 2 by
Matson et al. (SEG 1998 Expanded Abstracts). Wave equation prediction and removal of interbed multiples by
Jakubowicz (SEG 1998 Expanded Abstracts). Wavelet estimation for surface-related multiple attenuation using a
simulated annealing algorithm by Carvalho and Weglein
(SEG 1994 Expanded Abstracts). 3-D surface-related multiple prediction by Nekut (SEG 1998 Expanded Abstracts).
Deghosting and free-surface multiple attenuation of multicomponent OBC data by Ikelle (SEG 1998 Expanded
Abstracts). Removal of water-layer multiples from multicomponent sea-bottom data by Osen et al. (SEG 1996
Expanded Abstracts). Multiple wavefields: separating incident from scattered, up from down, and primaries from
multiples by Ziolkowski et al. (SEG 1998 Expanded
Abstracts). 2-D multiple attenuation operators in t-p
domain by Liu et al. (SEG 1998 Expanded Abstracts).
Hough transform based multiple removal in the XT
domain by Pipan et al. (SEG 1998 Expanded Abstracts).
Dereverberation after water migration by Parrish (SEG
1998 Expanded Abstracts). Multiple suppression: beyond
2-D. Part 1: theory by Ross (SEG 1997 Expanded Abstracts).
Multiple suppression: beyond 2-D. Part 2: application to
subsalt multiples by Ross et al. (SEG 1998 Expanded
Abstracts). Estimation of multiple scattering by iterative
inversion. Part 1: Theoretical considerations by Berkhout
and Verschuur (GEOPHYSICS, 1997). Internal multiple attenuation using inverse scattering: Results from prestack 1-D and 2-D acoustic and elastic synthetics by Coates and
Weglein (SEG 1996 Expanded Abstracts). Removal of elastic interface multiples from land and ocean bottom seismic data using inverse scattering by Matson and Weglein
(SEG 1996 Expanded Abstracts). An inverse scattering series
method for attenuating multiples in seismic reflection
data by Weglein et al. (GEOPHYSICS, 1997). Removal of
internal multiples: Field data example by Hadidi and
Verschuur (SEG 1997 Expanded Abstracts). LE
Acknowledgments: I once again express my sincere appreciation to all
of the contributors to my 1995 talk which formed the nucleus of this
paper. In addition, I thank A. J. Berkhout, Eric Verschuur, Ken Matson,
Chi Young, and Helmut Jakubowicz for the truly outstanding collaborative effort and joint research papers presented at SEG's 1998 Annual
Meeting. Many views expressed in this paper were gleaned from the
conclusions and comparisons of that study. That collaboration and
analysis continue. I offer my congratulations to the Delft group for its
outstanding model of integrity and openness and to Verschuur and
Matson for leading the comparison studies and analysis. I thank the
ARCO management for supporting this effort, in particular, J. K. O'Connell for his considered and astute technical analysis and perspective
and Steve Moore for his strong interest and constant support. Dennis
Corrigan, L. Peardon, Paulo Carvalho, F. Gasparotto, T. Ulrych, David
Campbell, Andre Romanelli, W. Beydoun, B. Davis, H. Akpati, Vandemir
Oliveira, P. Stoffa, Luc Ikelle, Steve Hill, and Bill Dragoset are thanked
for helpful and constructive suggestions.
Corresponding author: A. Weglein, aweglein@arco.com