
MSc Team 1: Bishop Wood Seismic Reflection Processing EARS5165

3.1 Pre-processing

Figure 3.1.1 shows the raw shot gathers with all identified events. This forms the basis
for designing the pre-processing flow used to enhance the data before the main
processing (Section 3.2). It is noted that corrections for field statics (resulting from
variable overburden characteristics along the survey line) had already been
applied (by Dr. Andy Carter) and the shot gathers desampled at this stage of pre-
processing. The following sections summarise the procedures used to
enhance the signal-to-noise ratio of the shot gathers before sorting into CDPs and
carrying out the subsequent processing.

Figure 3.1.1 Raw shot gathers showing all main events: pre-signal arrivals, air-blast, refracted first arrivals, ground roll, shallow reflections and main reflections.


3.1.1 BAND PASS (Kirstin)

3.1.2 AUTOMATIC GAIN CONTROL (AGC)


AGC, which varies the gain applied to trace samples as a function of sample
amplitude within an AGC time window, was used to enhance the appearance of the
gathers. Although a ‘cosmetic’ operation with no physical meaning, it enabled a rough
estimate of, and correction for, the loss in amplitude of the seismic wave with time and offset.
Since no velocity information was available at this stage of processing, AGC was
considered the best operator for easy viewing of events in the gathers during the
initial phase of processing. Several AGC operator lengths (20 ms to 120 ms) were
tested (Figure 3.1.2.1) and a window length of 100 ms was chosen as the optimum.
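As an illustration of how such an AGC operates (a centred, mean-amplitude scalar, as listed in Table 3.1.2.1 below), a minimal sketch is given here. The sample interval, window length and synthetic trace are illustrative, not the survey's actual values:

    import numpy as np

    def agc(trace, dt_ms=1.0, window_ms=100.0):
        """Centred mean-scalar AGC: divide each sample by the mean absolute
        amplitude of the live (non-zero) samples in a window centred on it."""
        n = len(trace)
        half = max(1, int(round(window_ms / dt_ms / 2)))
        out = np.zeros_like(trace, dtype=float)
        for i in range(n):
            window = trace[max(0, i - half):min(n, i + half + 1)]
            live = window[window != 0]                       # 'hard zeroes' excluded
            mean_amp = np.mean(np.abs(live)) if live.size else 0.0
            out[i] = trace[i] / mean_amp if mean_amp > 0 else 0.0
        return out

    # Example: balance a decaying synthetic trace sampled at 1 ms
    t = np.arange(0, 0.5, 0.001)
    trace = np.exp(-5 * t) * np.sin(2 * np.pi * 60 * t)
    balanced = agc(trace, dt_ms=1.0, window_ms=100.0)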

Figure 3.1.2.1 Diagram showing parameter (operator length) tests for the optimum AGC.
The 100ms operator length appears to give the best possible enhancement to the shot
gathers without creating excessive artefacts in the gathers.

The final AGC parameters used are given in Table 3.1.2.1 below.
Table 3.1.2.1 Summary table of AGC parameters
Operator length: 100 ms
Type of AGC scalar: Mean
Basis for scalar application: Centred
Note: Robust scaling was not used and ‘hard zeroes’ were excluded.


Figure 3.1.2.2(c) shows the shot gathers after application of the AGC and band-pass
filter. A comparison of this with Figure 3.1.2.2(b) shows that most of the
noise in the raw gathers (Figure 3.1.2.2(a)) has been suppressed while the major
reflections have been enhanced.

TRACE KILLS
After the initial pre-processing (band-pass and AGC) of the shot gathers, most of the
traces that initially appeared unusable were greatly enhanced, but a few noisy traces
(probably resulting from improper geophone-ground coupling, a high level of
noise around geophone locations or bad cable channels) persisted. Trace killing was
used to turn these bad traces into hard zeroes (the ‘killed’ traces are not used in
subsequent processing). Figure 3.1.2.2(d) shows examples of bad traces (traces 21
and 23) in shot gathers 29 and 30. This ensures that only actual events (and
not noise) are subsequently stacked.
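Conceptually, the kill operation is just the replacement of flagged traces by hard zeroes before sorting and stacking; a minimal sketch (the gather dimensions and bad-trace numbers below are illustrative only) is:

    import numpy as np

    def kill_traces(gather, bad_traces):
        """Set the listed traces (columns of a samples-by-traces gather) to hard
        zeroes so they contribute nothing to subsequent stacking."""
        killed = gather.copy()
        killed[:, bad_traces] = 0.0
        return killed

    # Hypothetical 500-sample, 48-trace shot gather with two noisy traces
    gather = np.random.randn(500, 48)
    clean = kill_traces(gather, bad_traces=[20, 22])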

TOPMUTE
Pre-signal arrivals (noise) were identified in the raw gathers (Figures 3.1.2.2a and
3.1.2.3a). The refracted first arrivals should normally be the first events on each trace
and across each gather; signals arriving before these were therefore considered noise and
set to ‘hard zeroes’ using the top mute, as shown in Figures 3.1.2.2b and 3.1.2.3b. A
starting ramp of 30 ms was used for the top mute in order to prevent ‘ringing’.

BOTTOM MUTE
One of the final processing flows utilised the bottom mute, which effectively removed
most of the highly prevalent ground roll from the data. As is seen in the main
processing flow, this cuts out part of the reflected arrivals at the near offsets and also
reduces the amount of data available in CMPs with low fold (at the beginning and end
of the survey line), resulting in areas with no data in the final stacked section. This set
of gathers was not subjected to much further pre-processing (f-k filtering for
example). Although part of the useful signal was lost in this process, the final
stack from this set of gathers was compared with the one from the other
processing flows. Like the top mute, an ending ramp of 30 ms was used in order to
prevent ‘ringing’.
Figure 3.1.2.3b shows the shot gathers after application of the top and bottom
mutes and the optimum band-pass filtering.
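Both mutes amount to zeroing samples on one side of a picked time on each trace, tapered over a linear ramp so that no sharp truncation (‘ringing’) is introduced. A minimal sketch covering both the top and bottom mutes is given below; the 30 ms ramp follows the text, while the gather and the picked times are illustrative:

    import numpy as np

    def apply_mute(gather, mute_times_ms, dt_ms=1.0, ramp_ms=30.0, top=True):
        """Zero samples above (top mute) or below (bottom mute) a picked time on
        each trace, tapering linearly over ramp_ms to avoid ringing."""
        nsamp, ntr = gather.shape
        t = np.arange(nsamp) * dt_ms                   # sample times in ms
        out = gather.copy()
        for j in range(ntr):
            tm = mute_times_ms[j]
            if top:
                taper = np.clip((t - (tm - ramp_ms)) / ramp_ms, 0.0, 1.0)   # 0 above pick, 1 below
            else:
                taper = np.clip(((tm + ramp_ms) - t) / ramp_ms, 0.0, 1.0)   # 1 above pick, 0 below
            out[:, j] *= taper
        return out

    # Hypothetical gather: top-mute everything before the picked first breaks
    gather = np.random.randn(500, 48)
    first_breaks_ms = np.linspace(20.0, 120.0, 48)     # illustrative picks
    muted = apply_mute(gather, first_breaks_ms, dt_ms=1.0, ramp_ms=30.0, top=True)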




Figure 3.1.2.2 Diagrams showing (a) raw shot gathers (SIN 1 & 2); (b) shot gathers after application of the top mute to remove pre-signal noise; (c) shot gathers after application of the optimum band-pass filter (50-100-200-400 Hz) and optimum AGC (100 ms operator length), with evidence of residual air-blast and ground roll; and (d) shot gathers (SIN 29 & 30) showing bad traces (traces 21 and 23) that were killed (i.e. not used in subsequent processing).



Figure 3.1.2.3 Diagrams showing (a) raw gathers with pre-signal noise and ground roll, and (b) gathers after application of the top and bottom mutes, with the pre-signal noise and ground roll removed.


3.1.5 Prestack Deconvolution


The recorded seismic signal may be considered as the convolution of the source
signal with the instruments, the geophones, and the response of the Earth. The Earth
response includes some undesirable effects, such as reverberation, attenuation, and
ghosting. The objective of the Spiking deconvolution before stack (DBS) used in this
stage of pre-processing is to estimate these effects as linear filters, and then design
and apply inverse filters. Yilmaz (1987) gives an in-depth description of the
deconvolution process. Figure 3.1.5.1 below summarises the input/output time
functions and procedure for the deconvolution process.

We start from the convolution model:

    f(t) = x(t) * h(t)

where:
x is the input time function,
h is the time-domain representation of the filter,
f is the output, and
* indicates convolution - a combination of multiplication and addition.
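A one-line numerical illustration of this model, using NumPy's discrete convolution (the reflectivity series and wavelet below are arbitrary):

    import numpy as np

    x = np.zeros(100); x[[20, 45, 70]] = [1.0, -0.6, 0.4]   # input time function (reflectivity)
    h = np.array([0.3, 1.0, 0.5, -0.4, -0.2])               # time-domain filter (wavelet)
    f = np.convolve(x, h)                                    # output: f = x * h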

The ‘minimum phase’ predictive deconvolution process was used to collapse the
wavelet back towards a spike, effectively removing the effect of other events (e.g. short-
path multiples) which tend to broaden the wavelet. Equation 3.1.5.1 summarises the
least-squares definition on which the predictive deconvolution is based:

    Σ_{i=0..n-1} a_i r(|i − j|) = r(G + j),   j = 0, 1, ..., n − 1        (3.1.5.1)

where r is the autocorrelation of the trace, G is the prediction gap (lag), n is the operator
length and a_i are the prediction-filter coefficients; the resulting prediction-error filter
(1, 0, ..., 0, −a_0, ..., −a_{n−1}) is convolved with the trace.
Figure 3.1.5.1 is an example of the autocorrelation function (ACF) for a shot gather
(FFID 1110). The ACF for 24 shot gathers (FFID 1100-1124) is shown in Figure
3.1.5.2. The gap, operator length and pre-whitening ratio (%) were defined from the ACF
and the parameter tests.


Figure 3.1.5.1: Autocorrelation function (ACF) for FFID 1100. The position of the chosen lag (gap) is shown in red.

Figure 3.1.5.2: Autocorrelation function (ACF) for FFID 1100-1124. The red line shows the position of the chosen lag (at the second zero crossing).

Figure 3.1.5.3 shows the parameter tests for different operator lengths (6-14 ms) with
a constant gap of 11 ms. The parameter test for different gap values (at a constant
operator length of 10 ms) is shown in Figure 3.1.5.4. Samples of shot gathers (SIN 5
and 6) before and after deconvolution are presented in Figure 3.1.5.5.


Figure 3.1.5.3: Parameter test for different operator lengths (6-14 ms). An operator length of 10 ms appears to give the best result without introducing many artefacts into the original gather.

Figure 3.1.5.4: Parameter test for different gaps (6-14 ms). Gaps of 10 ms and 12 ms appear to show good results, therefore an 11 ms gap/lag was chosen as the optimum.


o The gap G determines which part of the ACF will be untouched by the
deconvolution; the part from lag = 0 to lag = G represents the primary reflection.
A gap of 11 ms was found to give the best result.
o The operator length L determines how many points are in the filter and what
extent of the ACF, from lag = G+1 to G+L, will be zeroed by the
deconvolution. An operator length of 10 ms was found to be optimum.
o The (%) pre-whitening is a small adjustment to the ACF at lag = 0 which effectively
ensures numerical stability. A pre-whitening of 0.1% was used in the
deconvolution (see the design sketch below).
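To make the roles of these three parameters concrete, the sketch below designs and applies a gapped prediction-error filter from the trace autocorrelation using a Toeplitz least-squares solve (SciPy assumed available). The gap, operator length and pre-whitening values are those quoted above; the trace itself is synthetic and the ‘multiple’ is only a crude shifted copy:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def predictive_decon(trace, dt_ms, gap_ms=11.0, oper_ms=10.0, prewhite=0.001):
        """Design a prediction filter from the trace autocorrelation (least squares),
        build the prediction-error filter and convolve it with the trace."""
        gap = max(1, int(round(gap_ms / dt_ms)))        # prediction distance in samples
        n = max(1, int(round(oper_ms / dt_ms)))         # operator length in samples
        acf = np.correlate(trace, trace, mode='full')[len(trace) - 1:]   # one-sided ACF
        col = acf[:n].copy()
        col[0] *= (1.0 + prewhite)                      # pre-whitening for numerical stability
        a = solve_toeplitz(col, acf[gap:gap + n])       # normal equations: R a = r(G..G+n-1)
        pef = np.concatenate(([1.0], np.zeros(gap - 1), -a))   # prediction-error filter
        return np.convolve(trace, pef)[:len(trace)]

    # Synthetic trace with a crude short-period repetition to be collapsed
    dt_ms = 1.0
    t = np.arange(500) * dt_ms
    trace = np.sin(2 * np.pi * 0.06 * t) * np.exp(-t / 200.0)
    trace = trace + 0.5 * np.roll(trace, 11)            # 11 ms delayed copy
    decon = predictive_decon(trace, dt_ms, gap_ms=11.0, oper_ms=10.0, prewhite=0.001)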

Figure 3.1.5.5 Shot gathers (SIN 5 and 6) (a) before and (b) after the predictive/spiking deconvolution (operator length: 10 ms; gap: 11 ms). Reflections are sharper after deconvolution.


3.1.6 F-K Analysis (Kirstin)


3.2 Main Processing


Figure 3.2.0.1 below shows the flowchart used for the main processing of the pre-
processed dataset. The following sections give a summary of each step used at this
stage of processing. It should be noted that although only results from the best
dataset, obtained from one of the pre-processing flows (the one which utilised f-k
filtering), are displayed, the dataset from the other pre-processing flow (the Bottom Mute
route) was also subjected to the same processing and its final stack (which is of
inferior quality) is displayed in Appendix 6.

Figure 3.2.0.1: Flow Chart showing the processing flow involved in Main Processing

3.2.1 Velocity Analysis


Aims
1. To prepare the pre-processed data for velocity analysis
2. To perform velocity analysis on the data to obtain a velocity field for the
dataset that is representative of the true velocity field.
3. To use this velocity field to perform an NMO correction on the data and
produce a stacked section suitable for residual statics and subsequent
processing.

Formation of Supergathers
Before velocity analysis can be performed the data must first be organised into
Supergathers. A Supergather is formed by combining a number of CDPs; these
Supergathers are generated at regular spacings along the survey line. The benefits of
using Supergathers in velocity analysis are two-fold: firstly, anomalous reflectors and
noise are attenuated while continuous reflectors are enhanced; secondly, the
number of gathers to be analysed is decreased, making the process more time-
efficient. The pre-processed data was arranged into Supergathers using the 2D
Supergather Formation module in ProMAX. The key parameters used in the flow are
shown in Table 3.2.1.1 below:

Table 3.2.1.1: Key Supergather formation parameters

Maximum CDP fold: 38 - Maximum fold present in the dataset.
Minimum centre CDP number: 7 - The centre position of the initial Supergather.
Maximum centre CDP number: 350 - The centre position of the final Supergather.
CDP increment: 15 - The increment between adjacent Supergather points, in CDPs.
CDPs to combine: 5 - Number of CDPs to combine in a Supergather.

This means that the Supergathers used for the velocity analysis were formed from 5
CDPs at 15-CDP increments along the survey line. A CDP smash of 5 was chosen
as it was sufficient to suppress the influence of any anomalous data without smoothing
out any lateral variation. An increment of 15 was chosen as it gave a good, regular
sampling of the data along the line (giving 22 Supergathers). The 2D Supergather
Formation module was applied to the data and the output saved to disk, ready to be read
in by the Velocity Analysis Precompute module. Note: the actual stacking of the
CDPs to form the Supergathers is performed in the Velocity Analysis Precompute
module using the trace headers defined by the 2D Supergather Formation module.
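As a schematic of what the module's trace-header bookkeeping achieves, the sketch below simply groups adjacent CDP gathers around regularly spaced centre CDPs (the gathers are invented; the numbers follow Table 3.2.1.1):

    import numpy as np

    def form_supergathers(cdp_gathers, first_cdp=7, last_cdp=350, increment=15, combine=5):
        """Group 'combine' adjacent CDP gathers around centre CDPs spaced every
        'increment' CDPs. cdp_gathers maps CDP number -> (nsamp, ntraces) array."""
        supergathers = {}
        half = combine // 2
        for centre in range(first_cdp, last_cdp + 1, increment):
            members = [cdp_gathers[c] for c in range(centre - half, centre + half + 1)
                       if c in cdp_gathers]
            if members:
                supergathers[centre] = np.concatenate(members, axis=1)   # traces side by side
        return supergathers

    # Illustrative input: CDPs 1-360, each a 500-sample, 12-trace gather
    cdps = {c: np.random.randn(500, 12) for c in range(1, 361)}
    sgs = form_supergathers(cdps)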

Velocity analysis pre-processing


Once the Supergather geometries had been defined, the Velocity Analysis
Precompute module was used to stack the Supergathers, calculate
semblance values for them and generate a set of velocity function
stacks (VFS). The velocity analysis pre-processing is necessary to form the visual
displays used to manually pick the velocity function for the data. The key
parameters used in this module are shown in Table 3.2.1.2 below:

Table 3.2.1.2: Key Velocity Analysis pre-processing parameters

Semblance
Minimum semblance analysis value: 200 m/s - Minimum stacking velocity used.
Maximum semblance analysis value: 3500 m/s - Maximum stacking velocity used.
Number of semblance calculations: 50 - Number of semblance scans performed between the minimum and maximum velocities.
Semblance velocity axis: Equal velocity - Defines the spacing style of the semblance scans within the minimum and maximum velocities.
Semblance sample rate: 2.5 ms - Spacing in time of the semblance scans.
Semblance calculation window: 5 ms - Size of the semblance calculation window.
Semblance normalization mode: Scale Time Slice - Selects the scaling style of the semblance display.

Velocity Function Stacks
Number of stack velocity functions: 7 - Number of stack velocity functions to be computed.
Number of CDPs per stack strip: 5 - Number of CDPs within each stack velocity function.
Guide minimum value: 1000 m/s - Minimum value of the central stack velocity function.
Guide maximum value: 3000 m/s - Maximum value of the central stack velocity function.
Velocity variation at time 0: 250 m/s - Variation of adjacent stack velocity functions at time = 0.
Velocity variation at maximum time: 750 m/s - Variation of adjacent stack velocity functions at maximum time.

NMO
Maximum NMO stretch percentage: 100 - Maximum NMO stretch allowed.

Semblance
The semblance panel display is central to the subsequent manual velocity picking
process and hence it is vital that the parameters used to define its computation are
correctly chosen. A semblance panel is generated by the following process:
1. At a given zero-offset travel time t0 and velocity v, a hyperbolic trajectory can
be defined through the data.
2. A window of finite length w is passed through the data along this trajectory.
3. The data within the window along each hyperbolic trajectory is compared for
similarity/coherency. A statistical measure of the ‘sameness’ of the seismic
information within each window is calculated. Semblance is one way of
performing this comparison, defined below:

    Semblance = Σ_K ( Σ_m a )² / ( M Σ_K Σ_m a² )

where K indexes the samples in the window of length w, m indexes the M traces
across the gather and a is the recorded amplitude at each sample. A
high semblance value indicates a high level of ‘sameness’ while a low
value indicates little similarity. Therefore, for a reflection with zero-
offset time t0, if the semblance is calculated for the correct velocity v then a
high value will be obtained.
4. Semblance values are calculated for a range of t0 and v values for each
Supergather and the values are plotted and contoured in a velocity (x axis) versus
zero-offset time (y axis) display (a numerical sketch of this calculation is given below).
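The sketch below is a direct numerical version of steps 1-4 under the definition above; offsets, velocities and the single-reflection CMP gather are illustrative, and amplitudes are simply read at the nearest sample along each hyperbola:

    import numpy as np

    def semblance_panel(gather, offsets, dt, t0s, vels, window=5):
        """Semblance S = sum_K (sum_m a)^2 / (M sum_K sum_m a^2), evaluated along
        hyperbolic trajectories t(x) = sqrt(t0^2 + x^2 / v^2)."""
        nsamp, M = gather.shape
        panel = np.zeros((len(t0s), len(vels)))
        half = window // 2
        for i, t0 in enumerate(t0s):
            for j, v in enumerate(vels):
                num = den = 0.0
                for k in range(-half, half + 1):                    # samples in the window
                    tk = np.sqrt((t0 + k * dt) ** 2 + (offsets / v) ** 2)
                    idx = np.clip((tk / dt).astype(int), 0, nsamp - 1)
                    a = gather[idx, np.arange(M)]                   # amplitude on each trace
                    num += a.sum() ** 2
                    den += (a ** 2).sum()
                panel[i, j] = num / (M * den) if den > 0 else 0.0
        return panel

    # Illustrative CMP gather with one reflection at t0 = 0.10 s, v = 1800 m/s
    dt, offsets = 0.001, np.arange(24) * 10.0
    gather = np.zeros((400, len(offsets)))
    t_ref = np.sqrt(0.10 ** 2 + (offsets / 1800.0) ** 2)
    gather[(t_ref / dt).astype(int), np.arange(len(offsets))] = 1.0
    panel = semblance_panel(gather, offsets, dt,
                            t0s=np.arange(0.02, 0.30, 0.0025),
                            vels=np.linspace(200.0, 3500.0, 50))    # peak appears near 1800 m/s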

Justification of semblance parameters: The parameters were chosen to
adequately resolve the stacking velocity as a function of zero-offset travel time for the
data. The key parameters controlling resolution are the number of semblance
calculations (resolution in velocity) and the semblance sample rate (resolution in
time). A time sampling of 2.5 ms was used, which is adequate to sample and resolve
any events in time for the length of the data (the source wavelet from the autocorrelation was
approximately 10 ms, see earlier). 50 different velocities were used for each t0; this
provides good velocity resolution without requiring excessive computation time. A
velocity range of 200 m/s to 3500 m/s was chosen as it includes all likely velocities
found in near-surface sedimentary rocks. A semblance window of 5 ms was chosen
to avoid temporal smoothing. These values were used to produce all semblance
panels for use in velocity analysis. Figure 3.2.1.1 below shows a semblance panel
obtained using these parameters for a representative Supergather.

Velocity function stack (VFS)


A set of velocity function stacks was generated to aid picking of the velocity function.
A VFS consists of a small number of CMPs displayed side by side, corrected for
NMO using (in this case) a velocity function that increases linearly with travel time. Several of
these VFSs are displayed side by side, each with a slightly varying linear velocity
function. In principle, if the correct NMO velocity has been applied for a given event,
no move-out should be observed on the VFS. Therefore, if the correct stacking
velocity for an event lies on or close to one of these synthetic velocity functions then
zero move-out will be observed on the respective VFS. The parameters chosen are
shown in Table 3.2.1.2. Here 7 VFSs are displayed, each consisting of 5 CMPs,
with the central VFS (number 4) having a velocity function beginning at 1000 m/s at t = 0
and increasing linearly to 3000 m/s at maximum time. These parameters were chosen as
they span a significant velocity range, with enough CMPs to observe move-out but
without cluttering the display by over-sampling.

Figure 3.2.1.1: Semblance panel and associated CMP gather: Supergather 187

Velocity Analysis
Velocity analysis was performed using the Velocity Analysis module. This
module takes the pre-computed information and provides an interactive
velocity-picking environment in which both the semblance panel and the VFSs are displayed. The
picked velocities are saved to an external velocity table specified in the module
parameters. As the module uses pre-processed data, the only key parameter
specified in this module is the NMO stretch percentage, which here dictates the
percentage stretch that can occur on the dynamic CMP (shown to the right of the
semblance display). This parameter was set to 100% for the velocity analysis. This is
justified as the data is from the near surface and contains some relatively low
velocities due to unconsolidated layers overlying the bedrock. As a result, significant
stretching occurs at early times; therefore, to preserve fold of cover at these early
times, a more lenient NMO stretch mute is required. Figure 3.2.1.2 shows the typical
velocity analysis display for the first iteration of velocity analysis for a representative
Supergather.
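For reference, the NMO correction and stretch mute behind the dynamic CMP display amount to the following sketch; the linear velocity function and gather are illustrative, while the 100% stretch limit follows the text:

    import numpy as np

    def nmo_correct(gather, offsets, dt, vel_of_t0, max_stretch_pct=100.0):
        """Flatten hyperbolic events: for each zero-offset time t0 and offset x, pull
        the sample from t = sqrt(t0^2 + x^2 / v(t0)^2). Samples whose NMO stretch
        (t - t0)/t0 exceeds the limit are muted."""
        nsamp, ntr = gather.shape
        out = np.zeros_like(gather)
        t0 = np.arange(nsamp) * dt
        for j, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / vel_of_t0(t0)) ** 2)
            idx = np.minimum((t / dt).astype(int), nsamp - 1)
            trace = gather[idx, j]                                  # NMO-corrected trace
            stretch = np.divide(t - t0, t0, out=np.zeros_like(t0), where=t0 > 0)
            trace[stretch * 100.0 > max_stretch_pct] = 0.0          # stretch mute
            out[:, j] = trace
        return out

    # Illustrative: velocity increasing linearly from 1000 to 3000 m/s over 0.3 s
    vel = lambda t0: 1000.0 + 2000.0 * np.clip(t0 / 0.3, 0.0, 1.0)
    gather = np.random.randn(300, 24)
    offsets = np.arange(24) * 10.0
    corrected = nmo_correct(gather, offsets, dt=0.001, vel_of_t0=vel, max_stretch_pct=100.0)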

Figure 3.2.1.2: Typical Velocity Analysis Display: Supergather 187

Velocity Analysis Method


The Stacking velocity function was obtained by using the interactive velocity analysis
module to pick a velocity-time section at each Supergather position. The following
method was used to pick the velocities:
1. Locate a clear peak on the semblance plot, indicating that an event at the
respective zero-offset time is coherent along the trajectory specified by the
respective velocity.
2. Check the velocity at that time on the VFSs to see what velocity provided a
good alignment and cross-reference it with the velocity obtained from the
semblance panel.
3. If there was a good correlation the velocity was picked.


4. The dynamic CMP gather would then be updated to show the effect of a NMO
correction at that velocity.
5. Check the Dynamic stack profile to see the estimation of the amplitude of the
respective event in the final stacked section.
6. Check the velocity with other events at that depth and against previously
picked interpolated velocities (shown in orange on Figure 3.2.1.2) for
consistency and possible multiples.
7. If acceptable select another semblance peak and repeat.
This process was repeated for all Supergathers along the profile to produce a
velocity-time section for the survey line. The velocity-time section for the 1st iteration
of velocity analysis is shown in Figure 3.2.1.3.

Figure 3.2.1.3: Velocity-Time section after 1st iteration of velocity analysis (N.B. pre-
residual statics).

The velocity-time section shows a clear increase in velocity with depth, with two
regions of higher velocity increase at approximate times of 60-80 ms and 180-220 ms.
The higher velocities seen at later times also seem to decrease towards the later
CDP numbers. The velocity field appears to be quite variable with some large
discontinuities; these are most likely due to remaining static shifts in the data
that need to be corrected at a later stage (see Section 3.2.2).


Problems and considerations

Low fold on end Supergathers: Owing to the available data and the geometry of the
survey, the Supergathers at either end of the line contained a lower fold of cover
than the central Supergathers. As a result, the semblance plot was
poorly defined for these Supergathers (no far offsets). To overcome this problem, the
first velocity profile was not picked on the first Supergather but several Supergathers
into the line, to ensure higher-quality initial picks. This allowed the end
Supergathers to be interpreted while considering similar structure nearby. However,
due to the poor quality of the semblance panel, the velocity picks at the ends of the
profile are subject to larger degrees of error.

Multiples: Multiples, both short and long path, will give rise to peaks on the
semblance panel as they stack coherently along a given velocity and zero-offset
travel time. Short-path multiples in the data (ideally removed by deconvolution - see
Section 3.1.5) act to reduce the sharpness of the primary reflection
events as they give rise to peaks very close to the primary events. This smearing
of the data leads to a reduction in accuracy when picking. Long-
path multiples appear some time after the associated primary event. As a result
they can be easily identified, as they have a characteristically slow velocity (the same
as the primary) for the time at which they are observed. All efforts were made to ensure
that multiple events were not picked during the velocity analysis, as this would give
rise to an incorrect stacking velocity and ultimately attenuation of primary events.
See Figure 3.2.1.4 for an example of a likely multiple event.


Figure 3.2.1.4: Example of a likely multiple event: Supergather 187 (N.B. 3rd iteration semblance panel).

Velocity Iterations: The velocity section shown in Figure 3.2.1.3 is not the final
stacking velocity section for the data. To produce a final velocity section, corrections
for residual statics must first be made to the data (detailed in Section 3.2.2). As these
corrections are dependent on the quality of the velocity model, the residual
statics and velocity analysis are repeated iteratively. This iterative process and the
subsequent velocity models are detailed in Section 3.2.3.

3.2.2 Residual Statics


Aims
1. To pick a suitable auto-statics time gate on a stacked section
2. To calculate residual statics corrections using an optimised residual statics
algorithm.
3. To apply and assess the performance of these residual statics corrections.


Residual statics is the process by which any small-scale static errors in the data,
which arise as a result of near-surface inhomogeneities, small-scale topography and
variable geophone spacing, are corrected. These static errors give rise to
slight time mismatches in the data that are not a result of the target geological
structure; it is therefore desirable to remove them. If residual statics are not
corrected for, the following detrimental effects will occur in the data:
1. Poor amplitude of primary events: if reflectors are not aligned properly then
they will not stack effectively, resulting in poor amplitude
enhancement and noise reduction.
2. Non-representative subsurface image: if any effects of topography or near-
surface discontinuities are not corrected for, then all events below the affected
area will also be affected. The target horizons will therefore contain artefacts
that do not occur in reality.
3. Poor alignment of hyperbolae in velocity analysis: if the traces are poorly
aligned then they will not give a clear peak on semblance analysis and hence
the velocities chosen will be less accurate. In addition, the NMO
correction applied to the CMPs will not result in the best possible alignment of
the target events.
Therefore it is critical that residual statics corrections are applied and repeated
iteratively to improve both the velocity-time section and the residual statics
corrections themselves.

Picking the Auto-statics horizon


The first stage in the calculation of the statics is to pick the auto-statics horizon. The
auto-statics horizon was picked on a stacked section which had been assembled
after applying an NMO correction based on the velocity-time section picked in the
first iteration of velocity analysis (see Section 3.2.1). Full details of the stacking
process and NMO correction can be found in Section 3.2.4. The maximum NMO
stretch percentage used in the stack was 100%, for the reasons given in Section 3.2.1.
Figure 3.2.2.1 shows the stacked section that was used to
pick the auto-statics horizon. It can clearly be seen from the stacked image that
there are small mismatches between traces for apparently coherent, continuous
events.


Figure 3.2.2.1: Stack with no residual statics corrections applied, compiled using 1st iteration velocities, AGC (50 ms window applied for cosmetic purposes).

The residual statics process requires a manually chosen time gate that acts as the
auto-statics reference horizon. This horizon is picked manually and the subsequent
processes are based on the assumption that the event is continuous. The reflection
chosen to be the reference horizon was the upper strong event occurring
between 100 ms and 110 ms. This event was chosen as it appears continuous
on the initial stack, with no evidence of faults or other discontinuities; it is also
high in amplitude and will therefore give good results from the auto-statics
algorithm. Figure 3.2.2.2 shows a zoomed-in region of the reflector where the
horizon has been picked (shown in red); again the static time shifts are clearly visible.

Calculating and applying the static shifts


Once the auto-statics horizon had been manually picked, the residual statics could be
computed and applied to the data. This was done using the Maximum Power Autostatics
module. This module calculates a static time shift using a modified version of the
method described by Ronen and Claerbout (1985). The process generates a pilot trace
by summing a number of CDPs along the specified horizon over a specified
finite time gate. This pilot trace is then systematically used with each CDP to define
source and receiver statics for each trace.
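At its core the per-trace estimate rests on a cross-correlation with the pilot trace over the statics gate; a minimal sketch (the +/- 4 ms search range matches Table 3.2.2.1, and the traces are synthetic) is:

    import numpy as np

    def residual_static(trace, pilot, dt_ms, max_shift_ms=4.0):
        """Return the lag (ms) at which 'trace' best matches 'pilot', found from the
        cross-correlation peak within +/- max_shift_ms. A positive lag means the
        trace arrives later than the pilot and should be shifted earlier."""
        max_lag = int(round(max_shift_ms / dt_ms))
        xcorr = np.correlate(trace, pilot, mode='full')     # lags -(N-1) .. +(N-1)
        centre = len(pilot) - 1                             # index of zero lag
        lags = np.arange(-max_lag, max_lag + 1)
        best = lags[np.argmax(xcorr[centre + lags])]
        return best * dt_ms

    # Illustrative: a trace equal to the pilot delayed by 3 ms, plus noise
    dt_ms = 1.0
    pilot = np.sin(2 * np.pi * 0.05 * np.arange(200))
    trace = np.roll(pilot, 3) + 0.1 * np.random.randn(200)
    shift = residual_static(trace, pilot, dt_ms)            # close to +3 ms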


Figure 3.2.2.2: Detail of the manually picked auto-statics horizon; no auto-statics applied to the stack, AGC (50 ms window applied for cosmetic purposes).

Table 3.2.2.1: Key residual statics calculation parameters

Smash: 7 - The number of CDPs that are stacked together to form the pilot trace.
Time gate: 40 ms - Time gate over which the pilot trace is computed.
RMS static change convergence criterion: 0.05 - During the iterative process to compute the statics, if the difference between successive iterations is less than this value the iteration is halted.
Maximum number of iterations: 4 - Maximum number of iterations to perform.
Minimum live samples in a gate (%): 60 - The minimum percentage of non-zero samples required in a gate; traces below this limit are excluded.
Maximum static correction allowed (ms): 4 - The maximum static shift that can be computed for the data.

Justification for parameters: The smash for the generation of the pilot trace was
chosen as 7, as this included enough CDPs to provide a representative trace without
smoothing it so much that it was no longer comparable with the surrounding CDPs
(crucial, as the specific form of the wavelet is used in the time-series techniques that
calculate the statics). The time gate was chosen as it was large enough to encapsulate
the picked horizon but not so large that it significantly increased processing time. The
RMS convergence criterion and maximum number of iterations were specified to avoid
excessive computation time being spent on a non-converging sequence. The maximum
static correction was chosen so as to allow a large enough time shift to adequately
correct the data without causing incorrect matching of horizons by over-correcting;
over-correction results in artefacts in the data such as ‘leg-jumps’ and ‘breaks’, which
appear similar to faults on the section.
Once the residual statics had been computed they were applied to the data using the
Apply Residual Statics module. Figure 3.2.2.3 shows the stacked profile after the 1st
round of residual statics corrections. It can clearly be observed that the coherency of
the events was improved, especially in the near-surface section of the data.

Figure 3.2.2.3: Stack after the 1st iteration of residual statics, AGC (50 ms window applied for cosmetic purposes).

3.2.3 Iterations of Velocity Analysis and Residual Statics

It has been highlighted earlier that the accuracy of both the velocity analysis and
residual statics corrections is greatly improved if they are done iteratively in a cyclic
fashion. The reasons for the improvement in the quality of the final velocity-time
section and the final statics corrections are:
1. Events corrected for residual statics give rise to sharper semblance peaks,
as the energy on each trace is better aligned along hyperbolic paths (used
by the semblance algorithm). This improvement in alignment is most
noticeable in the near-surface events. Hence data that has been corrected
for residual statics gives rise to a more representative velocity-time section.
2. The process of calculating residual statics requires a stacked section and
hence a representative velocity model. If this velocity model is improved then
the calculated statics will in turn be improved.

The process of velocity analysis and residual statics was iterated three times; this
resulted in a considerable improvement in both the calculated statics and the
velocity-time section. The following figures detail the improvements in the velocity-
time sections and stacked sections as a result of the iterations.


Figure 3.2.3.1: Improvements in the velocity-time section: (a) the 2nd iteration velocity-time section and (b) the final velocity-time section.


Figure 3.2.3.2: Successive improvements in both the semblance display and the associated auto-statics horizon at successive iterations. (a) Semblance panels for successive iterations; notice the sharpening of peaks, especially at shallow depths. (b) Picked auto-statics horizon at successive iterations; notice the clear improvement in event alignment.

Main Processing Final Stacked Section


Figure 3.2.3.3 below shows the final stacked section after three iterations of velocity
analysis and residual statics. On comparison with Figure 3.2.2.1 (the initial stack with
the 1st iteration velocities), it can clearly be seen that the alignment of events has
been greatly improved and that the near-surface information has been significantly
enhanced. The final stacked section and associated final velocity-time section were
then saved to disk ready for use in post-processing.

Figure 3.2.3.3: Main Processing Final Stacked section, stacked using final velocity
field, AGC (window 50ms applied for cosmetic purposes)

The final stacked section obtained from the ‘Bottom-Mute Route’ is displayed in
Appendix 6.

3.2.4 Stacking/NMO Correction


3.3 Post-Processing
Figure 3.3.0.1 shows the final stacked section obtained from the iterative velocity
analysis/residual statics described in Section 3.2 above.

Figure 3.3.0.1 Final Stacked section obtained from main processing of the dataset.

A smoothed velocity field (Figure 3.3.0.2) was obtained from the final refined
velocities and used for most of the post-processing.

Figure 3.3.0.2 Final smoothed velocity field used for post-processing of the final
stack (Figure 3.3.0.1).


3.3.1 Migration
Several non-geological diffracted/dipping coherent and incoherent events are
seen in this stacked section. The migration process, defined as “an inversion
operation involving rearrangement of seismic information elements so that reflections
and diffractions are plotted at their true locations” (Sheriff, 1999), was used to
correctly position these events.
The final dataset (both pre-stack and post-stack) was migrated using different
migration algorithms (Kirchhoff Post-stack Time Migration, Stolt/Phase Shift 2D
Migration, Fast Explicit Finite Difference Time Migration, and Prestack Kirchhoff Time
Migration) in order to determine the best possible migration algorithm. The
following sections outline the different migration algorithms applied. Results from
tests carried out using various parameters for each algorithm are presented in
Appendices 1-4.

Kirchhoff Time Migration


This migration algorithm appears to give the best migrated section, and the stack
obtained from it was used for the final depth conversion. The algorithm repositions
all points in the 2D time section by applying a Green’s function to each CDP location
using a travel-time map. It uses a vertically and laterally variant RMS velocity field,
VRMS(x,t), in time and provides good handling of steep dips, up to 90 degrees.
Yilmaz (1987) gives a detailed overview of the technique.
Figure 3.3.1.1 shows the best migrated section obtained using this algorithm. Several
other parameters tested for this algorithm resulted in geologically less plausible
sections, presented in Appendix 1(a-h).
The final stack from the Bottom Mute processing route (Appendix 6a) was also
migrated using this algorithm. The migrated section is presented in Appendix 6b.
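The essence of the Kirchhoff summation (stripped of the amplitude, obliquity and anti-aliasing corrections a production algorithm applies) is to sum each output sample along its diffraction hyperbola. A sketch, with an illustrative constant velocity and a single synthetic diffractor, is:

    import numpy as np

    def kirchhoff_time_migration(section, dx, dt, vrms, aperture_m):
        """For each output location (x0, t0), sum input samples along the zero-offset
        diffraction curve t(x) = sqrt(t0^2 + 4*(x - x0)^2 / vrms(t0)^2) within the
        aperture. 'section' is a (nsamp, ntraces) stacked time section."""
        nsamp, ntr = section.shape
        out = np.zeros_like(section)
        t0 = np.arange(nsamp) * dt
        for ix0 in range(ntr):
            for it0 in range(1, nsamp):
                v = vrms(t0[it0])
                n_ap = int(aperture_m / dx)
                total = 0.0
                for ix in range(max(0, ix0 - n_ap), min(ntr, ix0 + n_ap + 1)):
                    h = abs(ix - ix0) * dx
                    it = int(round(np.sqrt(t0[it0] ** 2 + 4.0 * h ** 2 / v ** 2) / dt))
                    if it < nsamp:
                        total += section[it, ix]
                out[it0, ix0] = total
        return out

    # Illustrative input: the diffraction hyperbola of a point scatterer at x = 150 m, t0 = 0.08 s
    dt, dx, v0 = 0.001, 5.0, 2000.0
    section = np.zeros((200, 60))
    for ix in range(60):
        it = int(round(np.sqrt(0.08 ** 2 + 4.0 * (ix * dx - 150.0) ** 2 / v0 ** 2) / dt))
        if it < 200:
            section[it, ix] = 1.0
    migrated = kirchhoff_time_migration(section, dx, dt, vrms=lambda t: v0, aperture_m=100.0)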

Stolt/Phase Shift 2D Migration


Figure 3.3.1.2 shows the best stack obtained using the Stolt/Phase Shift 2D
Migration algorithm. This migration was carried out on the stacked seismic section
and utilises Stolt’s f-k method or a variation of Gazdag’s phase-shift method. Yilmaz
(1987) gives a detailed overview of the Stolt/Phase Shift technique and suggests that
the algorithm provides consistent geophysical treatment of the input data. The best
section obtained from this algorithm (Figure 3.3.1.2) is comparable to the one
obtained from the Kirchhoff Time Migration but it shows features (highlighted) that are
likely to be non-geological. Appendix 2(a-d) shows the different stacks obtained
using various parameters in this algorithm.


Figure 3.3.1.1 Final Migrated Section obtained using the Kirchhoff Time Migration algorithm.
[Max. Frequency Migrated: 400Hz; Migration Aperture: 15ms; Maximum Dip Migrated: 30deg.]

Figure 3.3.1.2 Diagram showing the Migrated Section obtained using the Stolt/Phase Shift Migration Algorithm [Migration Algorithm: Phase Shift; Migration Dips: Up to 90 degrees only]. Highlighted features are suspected to be non-geological.


Fast Explicit Finite Difference Time Migration


The best section obtained from the Finite Difference Time Migration is presented in
Figure 3.3.1.3. The algorithm migrates stacked data in the time domain using a
modification of explicit finite-difference extrapolators. The interval velocities used in
the algorithm were obtained from direct manipulation of the final stacking velocities.
Like other finite-difference methods, this algorithm can handle fully variable interval
velocity fields in time. The best stack obtained from this migration algorithm is
comparable to the one from the Kirchhoff migration but only slightly inferior in the
continuity of events. Appendix 3 (a-d) shows the stacked sections obtained from
various parameter tests in the Finite Difference algorithm.

Prestack Kirchhoff Time Migration


This algorithm was used to migrate the prestack data. It migrates the dataset by
applying a Green’s function to each CDP location using an analytic RMS-velocity
NMO curve. The final stack (Figure 3.3.1.4) appears worse than those from all the
other algorithms used to migrate the dataset. This is likely due to an improper choice
of some of the parameters used in this algorithm. Appendix 4(a-d) contains the
resulting stacks from the other parameters tested in this algorithm.

Figure 3.3.1.3 Diagram showing the Migrated Section obtained using the Fast Explicit Finite
Difference Time Migration Algorithm. [Time Step:10ms; Percentage padding: 100]. This
section is almost as good as the result obtained from Kirchhoff Time Migration (Figure
3.3.1.1).


Figure 3.3.1.4 Diagram showing the Migrated Section obtained using the Prestack Kirchhoff Time Migration Algorithm [Migration Aperture: 30; Maximum Frequency to Migrate: 400Hz]. Highlighted regions show highly deteriorated zones. The fold of cover (top panel) is also dissimilar to the original, which further makes the resulting migrated stack suspect, although no firm explanation is available for this.

3.3.2 Multiple Suppression

Prestack Deconvolution (Section 3.1.5) had been used to suppress the short-path
multiples. The following sub-sections describe the strategies utilised to further
suppress potential long path multiples in the data. Most of the coherent (and
incoherent) events below the major reflections (around 100ms) in the final stacked
section are suspected to be multiple events.

3.3.2.1 F-K demultiple


The following steps were used to implement the f-k demultiple (a sketch of the
wavenumber-rejection step is given after this list):
o A semblance analysis (as described in Section 3.2.1) was carried out, but picking a
stacking velocity function between the primary and multiple semblance peaks.
o These “intermediate” velocities were used for NMO correction.
o As shown in Figure 3.3.2.1, the primaries were overcorrected and curve upward,
while suspected multiples were under-corrected and remain curved downward.
o The NMO-corrected gathers were then transformed into f-k space: primaries now
fall in the negative wavenumber segment, multiples in the positive segment (with
overlap on/around the k = 0 axis, because primaries and multiples both have
minimal moveout at the nearest offsets).
o The positive (+ve) wavenumber half was then rejected, keeping the energy
along/near k = 0.
o An inverse transform was then used to return to the original gathers (with
overcorrected primaries).
o The “intermediate” velocities were backed off and the original (correct) NMO
velocities applied to obtain a new stacked section.
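The rejection step can be sketched with a 2D FFT and a quadrant (dip) filter. Which pair of quadrants carries the under-corrected multiples depends on the sign conventions of the transform and of the offset axis, so the sketch below is illustrative only; it assumes the gather has already been NMO-corrected with the “intermediate” velocities:

    import numpy as np

    def fk_dip_reject(gather):
        """Zero one dip direction in f-k space: the quadrants (f > 0, k > 0) and
        (f < 0, k < 0), here taken to hold the under-corrected multiples. Energy on
        the k = 0 column (and the opposite dip direction) is kept."""
        FK = np.fft.fft2(gather)
        f = np.fft.fftfreq(gather.shape[0])[:, None]     # temporal frequency axis
        k = np.fft.fftfreq(gather.shape[1])[None, :]     # spatial wavenumber axis
        FK[((f > 0) & (k > 0)) | ((f < 0) & (k < 0))] = 0.0
        return np.real(np.fft.ifft2(FK))                 # small Nyquist residues discarded

    # Illustrative NMO-corrected CMP gather (samples x traces)
    gather = np.random.randn(500, 48)
    demultipled = fk_dip_reject(gather)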

Figure 3.3.2.1: CMPs with slightly overcorrected primaries. Very little evidence of multiples
was observed in the shot gathers.
Figure 3.3.0.1 shows the final


Amplitude recovery application

Theory:
All of the energy initially contained in the shot is spread out over a larger and larger area as
time passes. This causes one of the possible losses of energy on a field record, and is
generally referred to as spherical divergence.
Another cause of energy loss is known as inelastic attenuation. This is simply the energy lost
because the particles of earth through which the wave travels are not perfectly elastic - some
of the energy is absorbed and permanently alters the position of the particles.

Other more complex forms of energy loss (some of which are frequency dependent) include
that caused by the friction of particles moving against each other, and losses at each interface
through which the wave travels and is refracted. (Some of the energy in the original seismic
P-wave is converted into an S-wave at each interface and not recorded - more on this later!)

In all cases this results in a total signal amplitude that decreases with time, approximating to:

    A ∝ (1/r) e^(−xr)

where r is the radius of the wavefront and x is an absorption coefficient.
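A simple gain of the kind a velocity-based scaling option applies can be sketched as follows; the g(t) = t · v(t)² form is the textbook spherical-divergence correction (normalised here for display), and the velocity function and trace are illustrative:

    import numpy as np

    def spherical_divergence_gain(trace, dt, vrms_of_t):
        """Multiply each sample by g(t) = t * v(t)^2 (normalised) to compensate for
        geometrical spreading; absorption would need an additional exponential term."""
        t = np.arange(len(trace)) * dt
        g = t * vrms_of_t(t) ** 2
        g = g / g.max() if g.max() > 0 else g
        return trace * g

    # Illustrative: velocity rising from 1000 to 3000 m/s over 0.3 s
    vel = lambda t: 1000.0 + 2000.0 * np.clip(t / 0.3, 0.0, 1.0)
    trace = np.exp(-np.arange(300) * 0.01) * np.random.randn(300)
    recovered = spherical_divergence_gain(trace, dt=0.001, vrms_of_t=vel)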


Practical procedures:
After obtaining velocities from the velocity analysis, gain recovery was applied using the
VELOCITY BASED SCALING option. The application did not give the expected results (no
visible improvement, as can be seen from Fig. ( )); the amplitude recovery function did not
work as expected.

Fig. ( ): Before and after applying the amplitude recovery function, using the VELOCITY BASED
SCALING algorithm.


3.3.3 Depth Conversion

The final migrated section (Figure 3.3.1.1), obtained from the Kirchhoff Time
Migration, was depth converted using the smoothed velocity field. Figure 3.3.3.1 is
the final depth section obtained from the depth conversion. The depth sections
obtained using the non-smoothed final velocity field and using interval
velocities derived from the smoothed velocity field are presented in Appendix 5.
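The conversion itself amounts to turning the RMS field into interval velocities (Dix equation) and integrating them down each trace; a sketch, with an illustrative smoothed RMS function, is:

    import numpy as np

    def dix_interval_velocity(t0, vrms):
        """Interval velocities from RMS velocities via the Dix equation."""
        num = vrms[1:] ** 2 * t0[1:] - vrms[:-1] ** 2 * t0[:-1]
        vint = np.sqrt(np.maximum(num / np.diff(t0), 0.0))
        return np.concatenate(([vrms[0]], vint))

    def time_to_depth(t0, vrms):
        """Depth of each two-way time sample: z(t0) = cumulative sum of vint * dt / 2."""
        vint = dix_interval_velocity(t0, vrms)
        dt = np.diff(np.concatenate(([0.0], t0)))
        return np.cumsum(vint * dt / 2.0)

    # Illustrative smoothed RMS velocity function on a 1 ms two-way-time grid
    t0 = np.arange(0.001, 0.301, 0.001)
    vrms = 1200.0 + 4000.0 * t0                  # m/s, purely illustrative
    depth = time_to_depth(t0, vrms)              # depth in metres for each time sample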

Figure 3.3.3.1 Final depth-converted section (using smoothed RMS velocities).


4.0 INTERPRETATION
4.1 GEOPHYSICAL INTERPRETATION
4.2 GEOLOGICAL INTERPRETATION


5.0 DISCUSSION AND CONCLUSION


4.3 REFERENCES

Sheriff, R. E., 1999.

Yilmaz, O., 1987, Seismic Data Processing: SEG, Tulsa, 240-353.

Appendix 1 Kirchhoff Time Migration
(a) Max. Dip: 5deg, Max. Frequency: 400Hz, Migration Aperture: 5m (b) Max. Dip: 90deg, Max. Frequency: 400Hz, Migration Aperture: 5m

(c) Max. Dip: 90deg, Max. Frequency: 400Hz, Migration Aperture: 1000m (d) Max. Dip: 90deg, Max. Frequency: 400Hz, Migration Aperture: 20m

(e) Max. Dip: 40deg, Max. Frequency: 400Hz, Migration Aperture: 4m (f) Max. Dip: 25deg, Max. Frequency: 400Hz, Migration Aperture: 10m

(g) Max. Dip: 30deg, Max. Frequency: 400Hz, Migration Aperture: 30m (h) Max. Dip: 50deg, Max. Frequency: 400Hz, Migration Aperture: 30m

Appendix 2 Stolt/Phase Shift Migration
(a) Migrated Dips: Up to 90deg only; Stretch: 0.6; No AGC (b) Migrated Dips: Beyond 90deg only; AGC Length: 100

(c) Migrated Dips: Up to 90deg only; AGC Length: 100 (d) Migrated Dips: 90deg and Beyond; AGC Length: 100

Appendix 3 Fast Explicit Finite Difference Time Migration
(a) Time Step: 1000ms; Percentage Padding: 30 (b) Time Step: 1000ms; Percentage Padding: 100

(c) Time Step: 10ms; Percentage Padding: 100 (d) Time Step: 1ms; Percentage Padding: 30

Appendix 4 Prestack Kirchhoff Time Migration
(a) Migration Aperture: 0m (b) Migration Aperture: 30m

(c) Migration Aperture: 100m (d) Migration Aperture: 1000m

Appendix 5 Depth Conversion
(a) Kirchhoff Time Migration; Non-Smoothed RMS Velocities; Max. Freq.: 400Hz (b) Kirchhoff Time Migration; Smoothed RMS Velocities; Max. Freq.: 400Hz

(c) Kirchhoff Time Migration; Smoothed Interval Velocities; Max. Freq.: 400Hz (d) Kirchhoff Time Migration; Smoothed Interval Velocities; Max. Freq.: 200Hz

Depth Conversion (Contd)
(e) Finite Difference Migration; Smoothed Interval Velocities (f) Stolt Migration; Smoothed Interval Velocities

Prestack Bottom Muted

Appendix 6 Bottom Mute Stacks/Migration
(a) Final Stack (Unmigrated) (b) Final Stack; Migrated (Kirchhoff Time Migration); Residual Statics corrected

