
Increasing Seismic Resolution by Post-stack Processing Procedures in Postle Field, Oklahoma

Mohsen Minaei* and Thomas L. Davis, Colorado School of Mines


Summary

The thickness of the producing sand layer in Postle field is below tuning thickness. In reflection seismology, tuning occurs when the ratio of seismic wavelength to bed thickness is equal to or greater than four. When tuning occurs, the amplitude of the overlying reflection shows a linear relationship with the thickness of the underlying layer (in this case, the reservoir layer). This relationship is used in studying thin reservoirs. However, to perform more sophisticated characterization of a thin reservoir, such as time-lapse analysis, that linear relationship is not enough and the reservoir characteristics must be measured directly. To see the reservoir layer, it is necessary to increase the frequency content of the seismic data while keeping the noise level low at high frequencies. The current industry methods of frequency enhancement cannot increase the frequency content enough to make the reservoir layer visible.

In this study, I have created a workflow to increase the frequency content of seismic data without introducing noise. The bandwidth-extension procedure begins with a zero-phase spiking deconvolution. This step increases the frequency content, but it tends to decrease the signal-to-noise ratio at high frequencies. To make sure that no incoherent noise is extrapolated, the original dataset is split into seven frequency subsets using a band-pass filter. A mild smoothing filter on each subset prepares it for extrapolation and removes incoherent noise. Sparse-spike deconvolution is then applied to each subset separately. When the subsets are stacked back together, any extrapolated noise is suppressed. Sparse-spike deconvolution can detect very subtle changes in the waveform that are related to thin layers, and the frequency-splitting method helps it detect those spikes more effectively. Lastly, zero-phase spiking deconvolution increases the power of the subtle spikes and makes them visible in the seismic image. The workflow is linear, so the result is reversible: if the high-frequency data are filtered back to the original bandwidth, the result is the same as the original product. The final product of my workflow has a flat spectrum while showing greater spatial and temporal resolution than the original data.

Introduction

There have been numerous efforts to calculate the minimum thickness (tuning thickness) visible in seismic data. Widess (1973) showed that the tuning thickness is the wavelength divided by four; this relationship is now widely used and accepted by most geophysicists. Partyka (2001) showed that seismic amplitude changes linearly with thickness when the reservoir thickness is below tuning thickness. More recently, Pierle (2009) argued that every sample counts in resolving thickness and that we are no longer limited by wavelength. However, all of these methods have limitations. Although there is a linear relationship between thickness and amplitude below tuning thickness, the thickness cannot be determined directly below the wavelength limit, only inferred. In addition, Pierle's method consists of examining the changes in the slope of the wavelet, and it is so sophisticated and hard to implement that it is impractical for everyday use. To see thin layers, it is necessary to have a greater bandwidth and higher frequencies, but the ability of seismic data to distinguish thin geological layers is ultimately limited by the signal-to-noise ratio (S/N) at high frequencies (Helmore, 2009).
My proposed workflow is easy to implement and increases the frequency content of the data while suppressing noise and keeping S/N high. This enables us to resolve very thin layers effectively.

Theory and Method

Postle field is located in Texas County, Oklahoma. The producing layer, called the Morrow A sandstone, has a maximum thickness of 75 ft in the study area. This study is focused on the Hovey Morrow Unit (HMU) in the northern part of the field. The 3D-9C survey was shot using vibroseis with a sweep frequency of 6-100 Hz and a sampling rate of 2 ms. The survey area of 2.5 by 2.5 miles is covered by 120 inlines and 121 crosslines. For this study, only PP data are shown, but the methodology is as effective for SS and PS data as it is for PP data. Figure 1 shows the interpolated gross sand map from well data with the injection pattern and well locations. There are 77 wells in the area. Based on the well logs in the study area, the reservoir thickness ranges from zero to 75 ft, and the P-wave velocity at the reservoir level is 13000 ft/s. The conventionally processed data, shown in Figure 2, have a peak frequency around 20 Hz. Thus, the tuning thickness is equal to:

tuning thickness = λ/4 = V_P / (4 f_peak) = 13000 / (4 × 20) = 162.5 ft        (1)

This value is more than twice the maximum thickness of the reservoir; therefore, I do not expect to see the reservoir layer distinctly in the seismic data.
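As a sanity check on equation (1), the quarter-wavelength estimate can be reproduced with a few lines of Python. This is a minimal sketch using the values stated above (V_P = 13000 ft/s, peak frequency = 20 Hz); the function name and interface are my own, not part of the processing software.

```python
# Minimal sketch of the tuning-thickness estimate in equation (1).

def tuning_thickness(velocity_ft_s: float, peak_frequency_hz: float) -> float:
    """Quarter-wavelength tuning thickness (Widess, 1973)."""
    wavelength = velocity_ft_s / peak_frequency_hz
    return wavelength / 4.0

if __name__ == "__main__":
    b_tune = tuning_thickness(13000.0, 20.0)    # 162.5 ft
    print(f"Tuning thickness: {b_tune:.1f} ft") # well above the 75-ft maximum reservoir thickness
```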


Transform software from Transform Software and Services, Inc. was used to run sparse-spike deconvolution (SSD) and the other processing steps in this study. The following methodology is used to implement SSD in Transform. Applying SSD to the data reveals subtle spikes that are related to thin layers. Those subtle spikes are enhanced with zero-phase spiking deconvolution (ZPSD), which is performed in the frequency domain and therefore does not alter the phase (Yilmaz, 2001). Since SSD extrapolates the bandwidth from the original band, the original data should be as noise-free as possible. A mild smoothing filter is therefore beneficial for removing incoherent noise; it also helps satisfy the sparsity assumption required for SSD. Any remaining incoherent noise is suppressed using the frequency-splitting technique. With this workflow, SSD extrapolates only the signal, not the noise. The high-frequency portion of the spectrum is therefore noise-free, and ZPSD can be applied without concern about accentuating noise.
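ZPSD as used here is a frequency-domain spectral-whitening operation. The sketch below illustrates the principle in Python with NumPy/SciPy: a real, positive gain derived from a smoothed amplitude spectrum is applied to the complex spectrum, so the phase is untouched. It is an illustration only, not Transform's implementation; the function name, smoothing length, and prewhitening level are my own assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def zero_phase_whiten(trace, dt, smooth_hz=10.0, prewhiten=0.01):
    """Flatten the amplitude spectrum of a trace without touching its phase.

    A real gain is computed from a smoothed version of the amplitude
    spectrum and applied to the complex spectrum; because the gain is
    real and positive, the operation is zero phase.
    """
    n = len(trace)
    spec = np.fft.rfft(trace)
    amp = np.abs(spec)

    # Smooth the amplitude spectrum over ~smooth_hz to estimate its envelope.
    df = 1.0 / (n * dt)                            # frequency bin spacing (Hz)
    width = max(int(round(smooth_hz / df)), 1)
    envelope = uniform_filter1d(amp, size=width)

    gain = 1.0 / (envelope + prewhiten * envelope.max() + 1e-12)
    return np.fft.irfft(spec * gain, n=n)
```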


Figure 1: The interpolated gross sand thickness map from well data with well locations and injection pattern.

SSD is a process in which the bandwidth of the data is extrapolated to the Nyquist frequency, constrained by forming the fewest (i.e., sparsest) large absolute-amplitude events (spikes) in the resultant output. This spectral extrapolation is achieved using a prediction-error methodology, minimum entropy (Wiggins, 1978), where the filter is designed over the retained pass band (below the extrapolation frequency) and then predicted forward to the Nyquist frequency. The extrapolation frequency is autodetermined from a local ensemble spectral analysis (eight traces on either side of the central trace), thereby capturing spatial variations in spectral content. Note that SSD does not try to recover the low-amplitude, high-frequency portion of the spectrum; it merely predicts frequency values beyond the original spectrum using an iterative procedure. The main objective of sparse-spike deconvolution methods is to provide a significant increase in bandwidth from band-limited seismic observations (Velis, 2008). It has been shown that as little as 20-25% of the bandwidth is sufficient in practice for a high-quality reconstruction (Levy and Fullagar, 1981). While maximizing the spikiness of the output traces, SSD selectively suppresses frequency bands over which the ratio of coherent signal to random noise is lowest, thereby emphasizing the bands in which coherent signals dominate (Wiggins, 1978). Thorough explanations of the method can be found in Velis (2008), Dossal and Mallat (2005), Walker and Ulrych (1983), and Wiggins (1978).
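The minimum-entropy scheme described above is part of the Transform software and is not reproduced here. To illustrate the general sparse-spike idea (explaining a band-limited trace with as few large spikes as possible), the sketch below solves a generic L1-regularized deconvolution with iterative soft thresholding (ISTA), assuming a known zero-phase wavelet. The function, its parameters, and the dense-matrix formulation are illustrative assumptions, not the Wiggins (1978) algorithm.

```python
import numpy as np

def sparse_spike_decon(trace, wavelet, lam=0.1, n_iter=200):
    """Illustrative sparse-spike estimate via ISTA.

    Solves min_r 0.5*||W r - trace||^2 + lam*||r||_1, where W is
    convolution with the given wavelet. This is a generic
    sparsity-promoting deconvolution for illustration only.
    """
    n = len(trace)
    # Build the convolution operator as a dense matrix (fine for short traces).
    W = np.zeros((n, n))
    half = len(wavelet) // 2
    for i in range(n):
        for j, w in enumerate(wavelet):
            k = i + j - half
            if 0 <= k < n:
                W[k, i] = w

    step = 1.0 / np.linalg.norm(W, 2) ** 2      # 1 / Lipschitz constant of the gradient
    r = np.zeros(n)
    for _ in range(n_iter):
        grad = W.T @ (W @ r - trace)            # gradient of the data-misfit term
        z = r - step * grad
        r = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return r
```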

Figure 2: Conventionally processed dataset with its power spectrum. Note that the peak frequency is around 20 Hz. Also note that the spectrum is not flat. The reservoir zone is indicated with an arrow, but it does not have a distinct peak. The blue color indicates peak and the red color indicates trough in the data.

The workflow starts with ZPSD to flatten the original spectrum. ZPSD is an essential step in this workflow because it sets the base for SSD and for the prediction of high frequencies. ZPSD increases the peak frequency from 20 Hz to 35 Hz, but that is still not enough to resolve the reservoir layer. It should also be noted that ZPSD tends to boost high-frequency noise. To fight the noise, I split the data into several subsets using a band-pass filter and apply a Gaussian smoothing filter to each subset. Then, SSD is applied to each subset to predict the high frequencies. The next step is to combine the subsets so that the commonly predicted frequencies are reinforced while random predictions are suppressed. The result is a dataset whose frequency spectrum is extended to the Nyquist frequency but is not flat. To flatten the spectrum, ZPSD is applied to the combined dataset. The final ZPSD is restricted to the sweep frequency range (6-100 Hz). Since all the high frequencies are predicted, this ZPSD does not bring up noise. Figure 3 shows the result of this workflow.
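Putting the pieces together, a rough skeleton of the band-split / smooth / SSD / stack / whiten sequence might look as follows. The seven band edges, filter order, and smoothing width are hypothetical (the abstract does not list them), and the helpers zero_phase_whiten and sparse_spike_decon refer to the illustrative sketches above rather than to Transform's routines; the sketch is meant only to show the order of operations.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.ndimage import gaussian_filter1d

# Hypothetical band edges (Hz) for the seven frequency subsets.
BANDS = [(6, 20), (15, 30), (25, 40), (35, 50), (45, 60), (55, 70), (65, 80)]

def bandpass(trace, lo, hi, dt, order=4):
    """Zero-phase Butterworth band-pass filter."""
    nyq = 0.5 / dt
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, trace)

def extend_bandwidth(trace, dt, wavelet):
    """Sketch of the bandwidth-extension flow described in the text.

    Uses the illustrative zero_phase_whiten and sparse_spike_decon
    sketches defined earlier; in practice a per-band operator would be
    designed rather than a single wavelet.
    """
    trace = zero_phase_whiten(trace, dt)            # initial spectral flattening
    subsets = []
    for lo, hi in BANDS:
        band = bandpass(trace, lo, hi, dt)          # frequency splitting
        band = gaussian_filter1d(band, sigma=1.0)   # mild smoothing of each subset
        subsets.append(sparse_spike_decon(band, wavelet))
    stacked = np.mean(subsets, axis=0)              # common spikes reinforce, random ones cancel
    out = zero_phase_whiten(stacked, dt)            # final whitening
    return bandpass(out, 6, 100, dt)                # restrict to the sweep band
```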


Figure 4: Comparison between the original data (left) and the final product of my workflow (right). Note that the top of the reservoir channel is visible because of the high peak frequency (90 Hz).

Figure 5: Amplitude maps of the original data (right) and final product of my workflow (left). The original dataset shows a very smooth map which does not have enough variability to see characteristics of a thin channel. The bandwidth extended map is more consistent with the interpolated gross sand thickness from well data (Figure 1).

Figure 3: Final result of the application of my workflow; it has a flat spectrum and improved temporal and spatial resolution.

Results

Figure 4 compares the final product of conventional processing with that of my workflow, along with the respective power spectra over the reservoir zone. Note the presence of the new horizon (indicated by an arrow), which could not be seen in the conventionally processed data; that horizon is the top of the reservoir. The new dataset is used to calculate a set of attributes for comparison purposes. On the original dataset, a 16 ms window was used for the calculations. On the bandwidth-extended dataset, the top and bottom of the reservoir were picked and all the calculations were done within that window.


Figure 5 compares amplitude maps of the two datasets. The conventionally processed data show a very smooth map with low variability, whereas the map from my workflow shows a higher degree of detail. This degree of detail is required to characterize the thin reservoir effectively. The conventionally processed data are so smooth that they cannot show the variation in thickness or other characteristics between two wells (the distance between two wells is approximately half a mile). Such a map is adequate for a regional image, but it is not enough for studying injection patterns and deciding on new drilling locations. Also note that the original data fail to map the sand in the northern area, whereas the bandwidth-extended data show high amplitudes there that are consistent with the interpolated gross sand thickness (Figure 1). That is why the correlation coefficient for the new dataset is higher. Figure 6 shows a comparison between the correlations of amplitude to gross sand thickness for the original and bandwidth-extended datasets. The bandwidth-extended dataset shows an 11% higher correlation than the original dataset, with lower deviation.
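For reference, the kind of amplitude-versus-thickness comparison summarized in Figure 6 can be computed with a simple least-squares fit. This is a minimal sketch; the array names are placeholders for amplitudes extracted at the well locations and the corresponding gross sand thicknesses from the logs.

```python
import numpy as np

def amplitude_vs_thickness(amplitude, thickness):
    """Least-squares fit and correlation of map amplitude against gross sand thickness."""
    slope, intercept = np.polyfit(thickness, amplitude, 1)
    corr = np.corrcoef(thickness, amplitude)[0, 1]
    return slope, intercept, corr

# Hypothetical usage with values extracted at the 77 well locations:
# slope, intercept, corr = amplitude_vs_thickness(amps_at_wells, gross_sand_ft)
```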

Figure 6: Comparison between the correlation of the original (left) and the bandwidth-extended (right) datasets. The crossplots show amplitude versus sand thickness; the regression fit for the original data is y = 0.0157x + 1.445 (correlation coefficient 0.61) and for the bandwidth-extended data y = 1.114x - 20.49 (correlation coefficient 0.72).

Another way to evaluate the validity of the bandwidth-extended data is to calculate synthetic seismograms. Figure 7 shows the calculated synthetic seismogram along with the extracted wavelet and its amplitude spectrum for one of the wells. The synthetic seismogram shows 60% correlation with the bandwidth-extended data in the reservoir zone; the original data show 44% correlation in the same window. The synthetic seismogram and the bandwidth-extended data are shown in blue and black, respectively, and the red traces are a copy of the trace passing through the well. The gamma ray log shows low values at the bottom of the display, indicating the reservoir sand. The extracted wavelet is zero-phase, which is consistent with the assumption required for SSD. It also shows a flat spectrum, which confirms that the high-frequency portion of the spectrum contains actual signal, not noise.
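A hedged sketch of this kind of synthetic-seismogram check is shown below: convolve the well-log reflectivity with the extracted wavelet and correlate the result with the seismic trace over the reservoir window. The function and variable names are illustrative; the abstract does not describe the exact well-tie procedure used.

```python
import numpy as np

def synthetic_correlation(reflectivity, wavelet, seismic_trace, window):
    """Convolve reflectivity with the extracted wavelet and correlate the
    synthetic with the seismic trace over a sample window."""
    synthetic = np.convolve(reflectivity, wavelet, mode="same")
    i0, i1 = window                       # sample indices of the reservoir zone
    return np.corrcoef(synthetic[i0:i1], seismic_trace[i0:i1])[0, 1]

# Hypothetical usage: reflectivity from the sonic/density logs resampled to
# seismic time, wavelet extracted from the bandwidth-extended data.
# r = synthetic_correlation(refl, wavelet, trace_at_well, (top_sample, base_sample))
```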

Figure 7: Synthetic seismogram with the extracted wavelet and its amplitude spectrum. The synthetic seismogram shows 60% correlation with the bandwidth-extended data. The synthetic seismogram is shown in blue and the bandwidth-extended data in black. The red traces are merely a copy of the trace passing through the well. Note that the extracted wavelet is zero-phase and shows a flat spectrum.

Conclusion

The proposed workflow improved spatial and temporal resolution by increasing the frequency content with a minimal amount of noise. It boosted the peak frequency from 20 Hz to around 90 Hz, thus increasing the seismic resolution. The frequency-splitting technique suppressed the random noise generated by the enhancement procedures, especially zero-phase spiking deconvolution. A mild Gaussian smoothing filter further removed incoherent noise and conditioned the data for the application of sparse-spike deconvolution. The frequency spectrum was extrapolated to the Nyquist frequency by applying sparse-spike deconvolution. Finally, zero-phase spiking deconvolution boosted the power of the weak spikes associated with thin layers and made them visible in the seismic image. The workflow does not change the phase of the original data, which makes it more desirable than methods that alter the phase. It is also a linear process, which means that if a band-pass filter is applied to return to the original frequency band, the result is the same as the original data; the reason is that sparse-spike deconvolution does not change the original spectrum and keeps it intact. The workflow was applied to three different datasets, and all of them showed improvements. This means that the workflow is not data dependent and could be applied to other seismic datasets.

Acknowledgements

I would like to acknowledge Dr. Davis and Dr. Benson from Colorado School of Mines for their great mentorship. I also appreciate Dr. Lynn for very helpful discussions on bandwidth extension and processing flows. I thank Mike Raines from Whiting Company for his great help and for providing me with the well data. A special thanks goes to my friends on the Postle team of RCP, including Naser Tamimi, Aaron Wandler, Paritosh Singh, Rafael Pinto, and Nataly Zerpa. Finally, I would like to thank Amelia Webster, David Forel, Bill Bashore, and Murray Roth from Transform Software and Services, Inc. for providing me with a license to Transform software and for very constructive talks.


References

Dossal, C., and S. Mallat, 2005, Sparse spike deconvolution with minimum scale: Proceedings of Signal Processing with Adaptive Sparse Structured Representations, 123–126.
Helmore, S., 2009, Dealing with the noise - Improving seismic whitening and seismic inversion workflows using frequency split structurally oriented filters: 79th Annual International Meeting, SEG, Expanded Abstracts, 28, 3367–3371.
Levy, S., and P. K. Fullagar, 1981, Reconstruction of a sparse spike train from a portion of its spectrum and application to high-resolution deconvolution: Geophysics, 46, 1235–1243, doi:10.1190/1.1441261.
Ooe, M., and T. J. Ulrych, 1979, Minimum entropy deconvolution with an exponential transformation: Geophysical Prospecting, 27, no. 2, 458–473, doi:10.1111/j.1365-2478.1979.tb00979.x.
Partyka, G. A., 2001, Seismic thickness estimation: Three approaches, pros and cons: 71st Annual International Meeting, SEG, Expanded Abstracts, 503–506.
Pierle, T. A., 2009, Seismic resolution: Thinner than first believed: 79th Annual International Meeting, SEG, Expanded Abstracts, 28, 1014–1019.
Velis, D. R., 2008, Stochastic sparse-spike deconvolution: Geophysics, 73, no. 1, R1–R9, doi:10.1190/1.2790584.
Walker, C., and T. J. Ulrych, 1983, Autoregressive recovery of the acoustic impedance: Geophysics, 48, 1338–1350, doi:10.1190/1.1441414.
Widess, M. B., 1973, How thin is a thin bed?: Geophysics, 38, 1176–1180, doi:10.1190/1.1440403.
Wiggins, R. A., 1978, Minimum entropy deconvolution: Geoexploration, 16, no. 1–2, 21–35, doi:10.1016/0016-7142(78)90005-4.
Yilmaz, O., 2001, Seismic data analysis: SEG, Vol. 1.

