
Structural Overview and Skeleton Interpretations

Every interpretation project begins with a literature search and a review of the regional or basin geology. Armed with an overall understanding of the geology, tectonics, depositional history, production history and geochemistry of the survey area, the interpreter can proceed to set up his or her interpretation. The next step is the same as in a 2-D interpretation--to gain a perspective on the structure of the prospect (including its horizons, how they express themselves and what the faults look like), identify any areas of bad data and generally become familiar with the data.

One good tool for gaining this perspective is the movie, or data animation. By stepping through the data one inline at a time, then one crossline at a time and, finally, one time slice at a time, you can gain a very good understanding of how the reflectors behave, where the major faults are and what the major fault blocks are. You can also get a preliminary idea about any areas which exhibit anomalies or features of particular interest.

Tying Well Logs: Synthetic Seismograms


We start our interpretation with the absolutely critical step of displaying our well logs and the synthetic seismograms we have generated for each well. With time and effort, it is possible to get good correlations between well and seismic data, and it is worth the effort to do so at the beginning of the interpretation. Wells should be tied to the seismic data nearest to the actual well location.

Ensuring the correct position of all well heads and deviated well bores on a display is as much the responsibility of the interpreter as it is the responsibility of the individual who loads the data. Also, be sure to set your projection limits to reasonable distances to avoid confusion and unnecessary delays in extracting and posting well information for sections as they are displayed. Attention to detail at the beginning will save you countless headaches later. If your initial positions are wrong, nothing else will ever be right. Take the necessary time and effort to check locations and make sure that they are correct on both the maps and the seismic displays.

For the most part, synthetic seismograms are generated on a software package different from the interpretation system. This can be a bit annoying if changes need to be made to fit the synthetic to the data, but it is the norm at this time. We are not going to illustrate any of the synthetic generation capabilities here, since that is outside the scope of this discussion. Instead, we will assume that synthetic seismograms have already been created and loaded for each well. Synthetic seismograms are displayed at the well location either as wiggle traces or as color variable density traces that are overlaid on or inserted into the seismic data at that point. Some systems can even display the synthetic trace down the track of a deviated well bore for more accurate matching of depth events to time.

Correlation capabilities for synthetic seismograms vary from system to system. The ability to correlate is, however, predicated on the basic paper function of being able to move the synthetic around on the section and to shift it up and down to match the reflections derived from the well logs with those that appear on the seismic section. Some systems allow the interpreter to fix or "peg" particular events on the synthetic to events on the seismic and to stretch and/or compress either the entire synthetic or segments of the synthetic between defined points to make it correlate to the seismic data. This process, of course, implies velocity changes in the synthetic, but, at this time, no program keeps track of these changes to test for their reasonableness. In reality, modifying the synthetic to match the seismic is a little backwards--the synthetic represents the best correlation of real, measured well log data with the seismic. However, it is normal to adjust the synthetic to the seismic rather than reprocess the seismic to match the synthetic. As workstations become faster and interactive processing techniques improve, it may one day be possible to reverse the process.
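Although the synthetic-generation package itself is outside our scope, a minimal sketch of what such a package computes may be helpful. This Python fragment (the function names, and the assumption that the sonic-derived velocity and density logs have already been resampled to a regular two-way-time grid, are ours, not any particular vendor's) builds a normal-incidence synthetic by converting the logs to reflection coefficients and convolving them with a Ricker wavelet:

    import numpy as np

    def ricker(f0, dt, n):
        # Zero-phase Ricker wavelet with peak frequency f0 (Hz), n samples.
        t = (np.arange(n) - n // 2) * dt
        a = (np.pi * f0 * t) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    def synthetic_from_logs(vp, rho, dt=0.002, f0=30.0):
        # vp, rho: velocity and density logs, already resampled to a
        # regular two-way-time grid with sample interval dt (seconds).
        z = vp * rho                                  # acoustic impedance
        rc = (z[1:] - z[:-1]) / (z[1:] + z[:-1])      # reflection coefficients
        wavelet = ricker(f0, dt, 101)
        return np.convolve(rc, wavelet, mode="same")  # the synthetic trace

Shifting, stretching or compressing the synthetic, as described above, amounts to editing the time grid on which rc is defined before convolving.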

Well-processed seismic data and synthetics derived from carefully edited and calibrated log data will usually tie remarkably well. Poor correlations between synthetic and seismic are normally due to:

- the phase of the synthetic not being compatible with the seismic
- poor sonic log calibration
- poor quality sonic or density log data
- poorly processed seismic data

Preliminary Fault Correlation and Structural Blocking
The first active step in most interpretations is to get a sense of the faulting regime. The major faults in the data set are identified and roughed-in using inlines, crosslines and time slices to tie the data together. This does two things--first, it allows the interpreter to get a sense of the tectonics and structure of the prospect and, second, it helps to identify and delineate the major structural blocks that make up the area. Fault digitizing at this point can be as detailed as the interpreter chooses to make it, and faults may be stored in a single, generic file or as named faults. Fault picking is invariably done in manual, point mode.

A valuable tool in establishing horizon continuity is the character of the seismic data itself. This refers not only to sequences of reflectors, but also to the internal character of the reflectors, the intervals between dominant reflectors and those intervals which might correspond to depositional facies or have other stratigraphic significance. Sometimes the correlation of wavelet character is subjective; sometimes it can be substantiated by well logs, or even by synthetic seismograms. In any event, it is important to note the characteristics of the data you are interpreting, because they will be important criteria in constructing a complete and coherent three-dimensional model of the subsurface.

Generally, we rough in fault patterns beginning with those which are obvious on time slices and then carry them to a selected grid of inlines and crosslines. This gives a good starting point for interpreting the prospect horizons.

Structural blocking is the next step. This means identifying the major blocks within the survey wherein data will be related. In general, the blocks outline areas which can be picked together. Faulting will frequently be the dominant factor in blocking out the major sections of the data prior to interpretation. Structural blocks can be vertical as well as horizontal, and events such as major unconformities that mark long hiatuses in the geologic sequence, layers such as anhydrites which tend to obscure deeper data, areas with clusters of bright spots, etc., can all be used to vertically define the blocks of the survey. Block identification can be as subjective as you want.

Interpreting the Skeleton


The 3-D interpretation proceeds by roughing in the limits of the survey for the horizons being picked. This process is sometimes called a skeleton interpretation because it forms the bare bones of the seismic interpretation, which will be fleshed out through the remainder of the interpretation process. There may be areas, or structural blocks, which cannot be easily tied in to the overall skeleton due to lack of data, poor data, or large discontinuities. However, the process does its best to establish the interpretation framework for the important horizons throughout the area of the 3-D survey.

The order of picking the skeleton framework given here is not cast in stone, but rather it is offered as a guideline. Use whatever displays are necessary in whatever order they are needed to establish the correlations of the skeleton interpretation and to tie data across the survey. I like to begin by picking in the inline direction. If the survey was designed and oriented correctly, the inlines are likely to represent the direction of primary dip. Start the skeleton interpretation by bringing up an inline which either intersects or is within one line spacing of a synthetic seismogram. Use the synthetic and the well log data to determine which reflectors correspond with which geologic horizons. Extend the horizons from the well, one at a time, by picking short lateral segments on the inline for each horizon. Repeat this step for each well which falls on the line. The next step is to extend the interpretation of the horizons across the inline section ( Figure 1 ).

Figure 1

Because you have a firm tie between the well and the seismic as well as a preliminary fault pattern analysis, it should not be too difficult to pick the horizons across the full extent of the section. If necessary, leave uninterpreted those areas around faults where the horizon termination cannot be accurately determined, and simply rough in the faults. Having picked the full inline section for all of the horizons of current interest, proceed to the crossline which either intersects the well you just used to tie the inline or passes nearest to it. Display the well log and synthetic seismogram, and make sure that the display of the intersections of the picks (penetration points) made on the inline is activated on the display. Now proceed to make the correlation between the picks on the inline section and the corresponding horizon reflections on the crossline section ( Figure 2 ).

Figure 2

After you are certain of the correspondence of the horizon ties, complete picking each horizon across the full extent of the crossline section. Check the database and redisplay the saved horizons to make sure that the system is storing your picks correctly. After completing this step, you will have established correlations for all of your major horizons between the well log and lines that span the entire extent of your data volume in both directions. Repeat this process for every well in the survey, tying the logs and the synthetic seismograms to the seismic data along the nearest inline and crossline. You will begin to intersect previously picked lines, tied to synthetic seismograms in other wells, as you work through this process. Take careful note of whether or not your picks tie correctly with the picks on the other lines. Also, make sure that your posted basemap display is activated so that you can watch your picks being posted in color-coded time values on the basemap as you work. This can also help you identify interpretation errors when they occur ( Figure 3 ).

Figure 3

Now, you have a loose grid of interpreted lines tying together each well with lines that extend across the entire survey area. If inconsistencies and misties are found at the line ties, it will be necessary to go back and compare the lines where these problems occur and, perhaps, recorrelate the synthetics in order to resolve these problems. Once the orthogonal vertical section skeleton structure is in place, proceed to interpreting time slices ( Figure 4 ).

Figure 4

Looking at a typical vertical inline, select a time slice which intersects one of the interpreted horizons and bring it to the screen. The intersections of the picks made on the vertical sections (penetration points) will be displayed on the time slice to give you a guide for picking the horizon across the time slice. Carefully pick the desired phase for the horizon across the time slice. Note that the time slice interpretation looks like a rough contour line connecting all of the other picks from the vertical sections. Make allowances for faults and note the fault offsets to help clarify the movement patterns for your data set. Pick the obvious faults either as named fault planes or store them in the generic fault file. Repeat this process on at least one time slice for each horizon you are picking. When this step is complete, you will have tied together all three dimensions of the data and tied all of the reflectors directly to the well data. An excellent quality control tool, and the final arbiter of the correctness of the ties of one well to another, is provided by an arbitrary well tie line ( Figure 5 ).

Figure 5

This line offers the opportunity to create a seismic cross section between wells. It is not necessary to wait until this point in the skeleton interpretation process to bring in these arbitrary lines. They should be used as needed to gain a full and complete picture of the data and how the wells tie together. Bring up one or more well tie lines, with however many panels are necessary. Interpret them as you did the inlines and crosslines, tying them to the data in the well logs and on the synthetic seismograms. Try to include single panel lines connecting each of the wells in combinations of two; this is much more effective than trying to tie everything on one, massive, multi-panel line. Extend the lines all the way to the limits of the survey, if possible, to gain the full advantage of the well-tie correlations throughout the data set ( Figure 6 ).

Figure 6

The next step in the skeleton interpretation is to extend your interpretation to the limits of the survey. If your survey has several distinct structural blocks that are being interpreted separately, each block should be as well-defined as possible, its limits and boundaries should be determined and the interpretation should be extended to these limits. For the purposes of our survey example, we have treated the entire data volume as a single unit, but the principle is the same, regardless of how many different blocks there are in the survey. Pick inline #1, if possible, and inline N, the last line in the survey. Do the same for the crosslines, tying in all of the picks made on other tying inlines, crosslines, time slices and arbitrary lines. When you are finished, you will have a complete skeleton interpretation, incorporating all of the inlines and crosslines that tie directly to wells, lines of arbitrary orientation tying all of the wells together, and the outermost lines of each block in the survey.

You can now use the computer to create a couple of maps--a contour map of the surface you are building (so that you can get an idea of how accurate your model is) and a mistie map (to identify any picks which do not correctly tie together). If a mistie is found, go to the lines in question, re-tie them to wells and resolve the problem. Mistie conflicts resolved at this point will save countless headaches later. When you have finished, the framework of your interpretation will be done, and it will be time to proceed to the details of fine picking. All picking from this point on will be inside sections where you already know the location of your horizons.
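As an illustration of what a mistie check involves, here is a minimal Python sketch. The pick store (a dictionary keyed by basemap location, holding one picked time per section on which that point was interpreted) is a hypothetical structure of our own, not any system's actual database:

    def mistie_points(pick_store, tol_ms=4.0):
        # pick_store: {(inline, crossline): [t1, t2, ...]} -- one picked
        # time (ms) per section on which this basemap point was picked.
        misties = []
        for (il, xl), times in pick_store.items():
            spread = max(times) - min(times)
            if len(times) > 1 and spread > tol_ms:
                misties.append((il, xl, spread))  # post as a colored dot
        return misties

The tolerance simply reflects how large a disagreement you are willing to ignore; anything flagged still has to be resolved on the seismic data itself.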

Preliminary Fault Interpretation


In all probability, you have been picking the major faults as you have been going along in the skeleton interpretation process. These picks, combined with any done during the fault assessment and structural blocking phase, can be displayed on the basemap (or in the fault correlation module, if available) to gain a fairly good understanding of what the major faulting looks like in the data volume. Understanding the faults will be a major factor in determining how you proceed with picking individual blocks of data and how you correlate data across the faults. When you are interpreting faults, picking too many faults in a generic fault color can often be confusing on the screen display. So, perform fault correlations and assign faults to named fault planes as you work.

Using Maps During the Interpretation


Mapping has already been mentioned as a quality control tool. Now, however, we take a more detailed look at the interactive mapping products which are valuable during an interpretation. We present four types of maps: ribbon, quick, interpretive and mistie maps.

Most systems can produce maps which are posted with color-coded time values in real time as the interpreter works. These are called ribbon maps. This type of map is an excellent quality control tool, because gross errors show up on the map as incompatible color changes. It is not very accurate, however, for small misties like those usually found in a 3-D interpretation.

Another type of map, the quick map, allows the interpreter to look at the interpreted data as color-filled, contoured surfaces or sometimes as plain contours. These algorithms use a quick gridding process, often triangulation, to produce the map surface. Contours are then calculated and displayed, so the interpreter can get a map view of what the surface looks like and how the interpretation is progressing.

Another map style, the interpretive map, is a contour map on which you may edit control points and draw contours by hand. The valuable feature of these maps is that the edited contours, either calculated or hand-drawn, may be loaded back into the database and displayed as a picked surface over the seismic data. This gives you the opportunity to iterate between the rather sterile world of picking sections and the art form and geological expression of mapping. It also allows you to test whether the data support what you feel the map should look like. Conversely, the map information can help point out trends and possibilities that you might otherwise overlook in the seismic data. Ideally, the connectivity between mapping and interpretation should be transparent and quick, and the interpreter should be able to interactively modify a surface, mold it into whatever form is desired and have the changes plotted over the data instantly. We are a ways from this state yet, but we are getting closer.

In correctly interpreted 3-D data, there should be no misties between picks on inlines, crosslines, time slices, arbitrary lines, etc. But, as we all know, errors occasionally happen. Mistie maps have been mentioned as a method for isolating and highlighting interpretation differences. Most mistie maps post colored dots at the line intersections where the picked values on the intersecting sections disagree by more than some minimum value set by the interpreter. Using the mistie map, the interpreter still has to go back to the seismic data and resolve each conflict on a case-by-case basis, since no automated system for adjusting misties can possibly take into account the interpreter's subjective judgment regarding the accuracy and quality of the picks.
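To make the quick-map idea concrete, here is a sketch of triangulation-based gridding using SciPy's Delaunay interpolator. The function name and grid dimensions are illustrative assumptions:

    import numpy as np
    from scipy.interpolate import griddata

    def quick_map(x, y, t, nx=200, ny=200):
        # Grid scattered horizon picks (x, y, t) onto a regular mesh
        # using Delaunay triangulation, as a quick-map algorithm might.
        xi = np.linspace(x.min(), x.max(), nx)
        yi = np.linspace(y.min(), y.max(), ny)
        XI, YI = np.meshgrid(xi, yi)
        TI = griddata((x, y), t, (XI, YI), method="linear")
        return XI, YI, TI  # contour TI to see the surface taking shape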

Infill Picking
At this point, we have a correlated and consistent skeleton interpretation which ties together all of our wells and carries the correlation to the edges of the data volume. Now we begin a process called infill picking to complete the interpretation. Infill picking sounds like a trivial task, but it is not. In the skeleton phase, we glossed over small faults, anomalies and difficult spots in the data. Infill picking, however, is done at a much greater level of detail than the skeleton picking because you are going to pick each inline, crossline and time slice for every horizon. This is where we resolve conflicts, correlate all portions of the data and finally determine the subsurface geology of our prospect. If you have "simple" horizons on good quality data, you may want to use automatic spatial tracking algorithms to save the time required to hand-pick the horizons on every line.

Infill picking does not necessarily have to be done on all inlines, crosslines and time slices for all the horizons, if the prospect does not warrant it. However, the detail possible from using all of the data available is much greater than will be gained by subsampling and skipping lines, time slices, etc. Just how dense the infill grid will be depends on the level of detail needed in any particular area of the survey. All seismic data are governed by the rules of seismic resolution, and we cannot resolve features smaller than our frequency content and Fresnel zone will allow. In areas of difficult or ambiguous data or in areas of high interest, however, the subtle changes in data from trace to trace might be significant in unraveling the interpretation puzzle. The detail level of infill picking also allows you to more accurately identify the positions of structural boundaries and fault planes. It will usually be to your advantage to use the full resolution of the data when picking along faults. Within the limitations of seismic resolution, it is usually preferable to use all of the data available.
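As a rough idea of how the automatic spatial tracking mentioned above works, consider this simplified single-section sketch. It is our own simplification, not a vendor's algorithm: it assumes the horizon is a peak and only searches a small vertical window on each neighboring trace, whereas real trackers add correlation tests and quality thresholds:

    import numpy as np

    def autotrack_line(section, seed_trace, seed_sample, win=5):
        # section: 2-D array (samples x traces). Follow a peak away from
        # a seed pick, searching +/- win samples on each neighbor trace.
        n_samples, n_traces = section.shape
        picks = np.empty(n_traces, dtype=int)
        picks[seed_trace] = seed_sample
        for j in range(seed_trace + 1, n_traces):   # track to the right
            lo = max(picks[j - 1] - win, 0)
            hi = min(picks[j - 1] + win + 1, n_samples)
            picks[j] = lo + np.argmax(section[lo:hi, j])
        for j in range(seed_trace - 1, -1, -1):     # track to the left
            lo = max(picks[j + 1] - win, 0)
            hi = min(picks[j + 1] + win + 1, n_samples)
            picks[j] = lo + np.argmax(section[lo:hi, j])
        return picks  # one picked sample per trace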

Horizons
Generally, it is best to concentrate on one horizon at a time when interpreting a survey. We normally focus on one segment of the geology at a time, so that we can have an undistracted view of what we are trying to interpret and correlate. Usually we begin infill picking on one boundary of the survey and step through the sections in order, picking the horizon on each section in turn. This process is greatly facilitated on the workstation by the step function (sometimes called a roll function), which allows the interpreter to display the next line of the same general orientation (i.e., the next inline or the next crossline) by selecting a single menu function. An increment may also be specified so that every other line, every fifth line, etc., will be the next display called, depending upon the density of picking the interpreter is pursuing. Step functions work in all orthogonal directions, and some systems even support stepping through a data set by extracting sections at some interval parallel to an arbitrary line.

Composite displays combining sections of more than one orientation are also extremely useful for infill picking. These displays allow the interpreter to carry correlations from one seismic display domain into another while confirming the validity of the ties at the same time. Vertical line-to-time slice and concertina displays are the most common composite displays for tracking horizons over multiple sections. Concertina displays (multi-window displays with different sections or different attributes displayed in each panel) are frequently used to track horizons over parallel sections or to define them more accurately by using more than one attribute of each section. Some systems allow the interpreter to project horizon and fault picks from other picked sections onto the current section to help reference and correlate horizons. These reference lines may also be copied as live picks, ready to be used or edited to fit the data on the new section more accurately. This is a real time-saver in areas with little variation between sections and relatively flat reflectors.

Concertina displays, when incorporating grids of arbitrarily oriented lines, are also used to look at features which do not fall along orthogonal lines. For example, a suite of displays with an inline, a crossline, a 45-degree left line and a 45-degree right line, like the rotational movie, gives a feel for dip, strike and features that change angle or position over some range in the data. Chair displays (incorporating tying sections of two lines and a time slice) are also useful infill picking tools because they tie lines as well as time slices to define the horizon.

Faults
Faults are usually picked on sections at the same time we pick horizons. Many interpreters like to pick faults first in order to establish structural boundaries on the section before trying to decipher how the horizons correlate. Often, however, it is necessary to concentrate on picking only faults for a few lines in order to resolve small faults or complex fault movements. It is also helpful to associate fault picks with fault planes. We have already discussed the mechanics of picking faults, but let's look at how we deal with the faults themselves.

Most systems use a special fault symbol for horizons which terminate against faults. Some systems even calculate the fault heave and throw based on the placement of these fault symbols! Some faults are obvious, but some are very difficult to resolve and define. Usually a fault can be represented by a line along the discontinuity, but sometimes an indeterminate zone is left in the interpretation because the precise definition of the fault is not possible. Also, fault geometries change, and we frequently see branching and healing along faults. But how do we know when a discontinuity is significant enough to pick as a fault? I prefer to pick everything fault-like and resolve, later on, whether or not there is continuity and sufficient grounds to identify a fault or fault segment independently.

Some faults have zones where the rock is broken up, and it is possible to trace hundreds of tiny fault segments and the chunks of rock that are isolated by them. In cases like these, it is often best to define a fault zone and to approximate it for mapping purposes as a fault polygon. Fault correlation routines can also help in such cases: the various small fault segments may be associated together using the fault correlation routines provided in the software.
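As a sketch of what a fault correlation routine might do internally, the fragment below groups generic fault segments into candidate named planes by clustering their centroids. The distance cutoff and the (x, y, t) centroid representation are illustrative assumptions of ours, not a description of any particular system:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def correlate_fault_segments(centroids, cut_dist=250.0):
        # centroids: (n_segments, 3) array of (x, y, t) segment centroids.
        # Single-linkage clustering groups segments whose chains of
        # neighbors all lie within cut_dist into one candidate plane.
        Z = linkage(centroids, method="single")
        return fcluster(Z, t=cut_dist, criterion="distance")  # plane labels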

Correlating Data Across Faults


Once a fault is identified, it is given a name and treated like a surface. These named fault planes are just as important as horizon picks (or more so). Fault plane mapping and tracing the intersections of fault planes with horizon surfaces are basic techniques for isolating hydrocarbon traps which are due to or controlled by faulting.

The correlation of seismic data across faults is a much more intense problem, requiring original thought and some flexibility on the part of the interpreter. Occasionally, correlating across a fault is simple because of small amounts of dip-slip movement and some dominant characteristic of a horizon that permits it to be identified on both sides of the fault. More common, however, is movement along all spatial axes of a fault, which results in changes in layer thickness and reflection character across the fault. In this case, you will need to use alternate methods of correlation. These usually take the form of cut-and-slide, palinspastic reconstruction, drop correlation and fault plane mapping methods. Most methods take advantage of known correlations to identify horizons; these methods are described below.

First, let's discuss the cut-and-slide method. Most systems are capable of defining a polygon along a fault and extracting the data displayed within it into a movable segment. The interpreter takes a section of known correlation and moves it around on the screen, looking for a data match on the other side of the fault. This process is often complicated by lateral movements that have transported the actual correlation some meters away and possibly even onto another line. Not many systems allow the interpreter to section out a piece of data and display it on another line, although more advanced systems offer other types of correlation capabilities. One of these capabilities allows you not only to move the data polygon but also to rotate it, thereby approximating the slippage of the section down the fault plane. This is particularly valuable along growth faults, which tend to curve. An additional capability is the ability to stretch and/or compress the data within the polygon. This can help you correct for differential rates of deposition on the upthrown and downthrown sides of faults.
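A crude automated counterpart to sliding a data polygon by eye is to cross-correlate a trace from each side of the fault over a range of vertical lags. This sketch is our own simplification and handles only the dip-slip component; it returns the lag of best match as a first guess at throw:

    import numpy as np

    def estimate_throw(upthrown, downthrown, max_lag=50):
        # upthrown, downthrown: equal-length trace segments from either
        # side of the fault. Try every vertical lag and keep the one
        # with the highest normalized cross-correlation.
        n = len(upthrown)
        best_lag, best_cc = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            a = upthrown[max(lag, 0): n + min(lag, 0)]
            b = downthrown[max(-lag, 0): n + min(-lag, 0)]
            cc = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if cc > best_cc:
                best_lag, best_cc = lag, cc
        return best_lag, best_cc  # lag in samples, match quality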

The second correlation method, palinspastic reconstruction, allows you to rebuild a section to a condition approximating what it might have looked like before deformation and faulting took place. This means removing the effects of all fault movement and the effects of differential compaction and/or deposition. The horizons are restored to a flat or near-flat condition, such as the conditions that might have existed when the layers were laid down, and the data between the horizons are adjusted to make allowance for these processes. This is a difficult technique to do properly, and the results are speculative at best. The technique can identify features in the data which have been masked by deformation, but it takes a great deal of work to generate such a display.

Drop correlation is an extension of the polygon techniques described above. Here, however, the interpreter defines a window around a segment of an interpreted line. The polygon is then moved around the seismic line. When it is placed in a position where the seismic data correlate with it, a button is pressed on the mouse and the horizons picked in the polygon are placed as points on the section. The interpreter can then go back to these points and use them as starting points for interpreting the rest of the line.

A final technique that is useful in determining where fault planes intersect horizons, and what the data on both sides look like, is fault plane mapping. Mapping produces a fault plane surface that can be brought back into the seismic interpretation system and compared against the data. Filling in the areas between picked faults with a reasonable fault plane surface helps the interpreter to define where the fault plane actually lies and how to interpret the data around it.
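Returning to palinspastic reconstruction for a moment: full restoration is well beyond a few lines of code, but its simplest ingredient--restoring a picked horizon to a flat datum--can be sketched as below. This handles only the vertical component and ignores compaction and lateral movement:

    import numpy as np

    def flatten_on_horizon(section, horizon_idx):
        # section: 2-D array (samples x traces); horizon_idx: the picked
        # sample index on each trace. Shift every trace so the horizon
        # becomes a flat datum -- a crude vertical-only restoration.
        # (np.roll wraps samples around the ends; acceptable for a sketch.)
        datum = int(np.median(horizon_idx))
        flat = np.zeros_like(section)
        for j in range(section.shape[1]):
            flat[:, j] = np.roll(section[:, j], datum - int(horizon_idx[j]))
        return flat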

Specialized Techniques
So far, we have dealt with seismic data which has been fairly unambiguous and easy to interpret. But what about complex areas in the 3-D data set? What tools exist for dealing with areas that are less clear? In general, we work from the known to the unknown, from the areas with good data and correlations into the areas without. In addition, 3-D data allows us to work all around a problem--above, below and on both sides. We can also drop arbitrary lines through bad data areas to gain additional correlations. We keep working the data until the correlations have been extended all the way through it. Here, we will look at several common techniques to help interpreters work through difficult data areas. They include the use of complex trace attributes, multi-attribute data displays and multi-attribute map displays. We will also consider direct hydrocarbon detection and interpreting amplitude anomalies.

Complex Trace Attributes


Composite displays of complex trace attributes are excellent for understanding what is going on in difficult data areas. The fact that two or more views of the same data are available in a single display gives a different perspective on the data and how they tie together. Of course, the attributes are no better than the data used to generate them, but it is still possible to use Hilbert attributes to clarify many ambiguities. The interpreter is advised to be extremely careful when picking horizons or faults on attribute displays, especially in manual mode. Slight mispicks made on attribute sections--slightly off the minimum and maximum amplitudes of the data--can result in erroneous extracted amplitude values later.

A frequently used attribute display contains panels of instantaneous phase, instantaneous frequency and reflection strength. Phase gives coherency for peaks, troughs and zero crossings, and it highlights unconformities and faults. Phase can also be particularly valuable in resolving picks on time slices, because the color class intervals of a color phase display can be set to change colors exactly on the peak, trough, etc., which are not always discernible on a time slice. Sometimes phase cannot be displayed for a time slice due to the complexity of the calculation process, which is based on wavelets in time. To calculate a phase slice, it is necessary to process data for each trace in the survey and then select the one sample needed from each trace to create the display. This is beyond the capability of a workstation, so if phase is used for a horizontal slice, it is necessary to process an instantaneous phase attribute volume and load it in parallel with the amplitude data volume. Then, phase displays for any line, composite or time slice may be used at will. Most systems now support multiple attribute volumes that are accessible at the same time.

Instantaneous frequency plots are invaluable for isolating velocity anomalies and the effects of attenuation caused by changing rock properties. Low-velocity zones (such as gas-filled sands) seriously attenuate the acoustic energy, causing a severe loss of high frequencies. On a frequency plot, these sands stand out like neon lights! Reflection strength is a measure of acoustic impedance contrast across horizons. It helps us isolate small features of differing composition and can help to resolve attributes of layers and rocks. Reflection strength displays are also excellent for resolving small faults.
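The standard complex-trace attributes are straightforward to compute from the analytic signal. This sketch uses SciPy's Hilbert transform; the function name is ours, and the sampling interval dt is assumed to be in seconds:

    import numpy as np
    from scipy.signal import hilbert

    def complex_attributes(trace, dt):
        # Analytic signal of one trace; dt is the sample interval (s).
        analytic = hilbert(trace)
        strength = np.abs(analytic)                # reflection strength
        phase = np.angle(analytic)                 # instantaneous phase (rad)
        freq = np.gradient(np.unwrap(phase), dt) / (2.0 * np.pi)  # Hz
        return strength, phase, freq

Running this over every trace of a volume (or loading a precomputed attribute volume, as described above) provides the panels for the phase/frequency/strength composite display.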

Multi-Attribute Data Displays


Displays which incorporate a number of different data types in a single image are not in common use at this time, but they should be. Color amplitude/phase displays are a prime example of a multiple attribute data display. The instantaneous phase angle used for this display emphasizes horizon continuity and shows faults and pinchouts extremely well. In a color amplitude/phase display, a seismic section is shown in color variable density. Four color class intervals are defined, and colors are applied so that there are transitions in color at every major phase break-point: peak, trough and both zero crossings. Within each color class interval, the intensity of the color is modulated by the amplitude value of the original seismic data sample. In this manner, the plot retains all of the horizon continuity and desirable characteristics of a phase display without giving up the high and low amplitude information of the original seismic data. This type of display is ideal for the workstation, and several algorithms have been developed and patented for using it.

Other types of modulated color displays are possible, limited only by your imagination and what you need to see in the sections. Ideally, the workstation should allow you to interactively define display types, just as it allows multidimensional cross plots in petrophysics. It appears that software developers are heading in this direction.
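A minimal version of the amplitude/phase modulation idea can be sketched as follows: four hues are assigned to phase class intervals centered on the trough, the peak and the two zero crossings, and each hue's intensity is scaled by the envelope. The color choices are arbitrary, and the function name is ours:

    import numpy as np
    from scipy.signal import hilbert

    def amp_phase_image(section):
        # section: 2-D array (samples x traces). Returns an RGB image in
        # which hue encodes the phase class, intensity the envelope.
        analytic = hilbert(section, axis=0)
        phase = np.angle(analytic)
        # Four class intervals centered on trough, zero crossing,
        # peak, zero crossing.
        klass = ((phase + np.pi + np.pi / 4) // (np.pi / 2)).astype(int) % 4
        hues = np.array([[0.0, 0.0, 1.0],   # trough: blue
                         [0.0, 1.0, 0.0],   # zero crossing: green
                         [1.0, 0.0, 0.0],   # peak: red
                         [1.0, 1.0, 0.0]])  # zero crossing: yellow
        envelope = np.abs(analytic)
        intensity = envelope / (envelope.max() + 1e-12)
        return hues[klass] * intensity[..., np.newaxis]

The result can be shown with any image display (e.g., matplotlib's imshow); low-amplitude zones fade toward black while the phase-driven color boundaries preserve horizon continuity.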

Multi-Attribute Map Displays


More familiar to today's interpreter are multi-attribute maps which overlay structural contours--either in time or depth--onto another surface display. Structural contours overlaid on extracted amplitude maps give a great deal of information with very little effort. The contours show the highs and lows of the surface, and the amplitude data show the variation of amplitudes across the surface. This highlights bright spots and shows amplitude changes associated with changes in fluid type, etc. Another valuable multiple attribute map display is obtained by superimposing structural contours on velocity maps. This display is often used for quality control during processing, but with the expansion of interpretive processing on workstations, velocity mapping is receiving more attention. There are also many other types of multiple attribute displays and maps which you can use to highlight particular characteristics of the data and to correlate the distribution of different attributes of a surface.
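Producing such an overlay takes only a few lines in any plotting package. As a sketch, this fragment (function name ours) draws time contours over an extracted-amplitude surface, assuming both are gridded to the same basemap mesh:

    import matplotlib.pyplot as plt

    def contours_over_amplitude(times, amplitudes):
        # times, amplitudes: 2-D arrays on the same basemap grid.
        fig, ax = plt.subplots()
        ax.imshow(amplitudes, origin="lower", aspect="auto", cmap="viridis")
        cs = ax.contour(times, colors="black", linewidths=0.5)
        ax.clabel(cs, fmt="%.0f")  # label contours with time values
        return fig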

Direct Hydrocarbon Detection

The previous discussion leads directly into how we identify anomalies in the data which are due to hydrocarbon accumulations. We refer, of course, to bright spots and flat spots. Bright spots are amplitude anomalies that result from extremely high contrasts in acoustic impedance over a limited area. These anomalies are often associated with gas-filled sandstones, which have a significantly lower velocity than the rocks around them. Bright spot mapping, or mapping of amplitude anomalies in general, is easier in 3-D because of the dense data coverage provided by close line and trace spacings. It is often possible, therefore, to map the tops, bottoms, lateral extents and internal reflections of a bright spot in a 3-D data set. In the event the bright spot is drilled and is productive, these bright spot maps become extremely valuable.

Flat spots are anomalous, flat-lying reflections which can sometimes mark the contact zones between different fluid types in a trap--gas/oil, oil/water, etc. Flat spots are not as easy to pick as bright spots, but they can be just as significant in indicating potential hydrocarbon zones. Parallel extraction, or horizon slicing, can be helpful in detecting both bright spots and flat spots.

As an aside, one of the effects of a bright spot, or any other zone of anomalously low velocity, is a "pull down" effect on all the data beneath it. Due to the longer transit time through the velocity anomaly, which cannot easily be compensated for in processing, these velocity effects can create false structures in deeper layers. The attenuation effects of low velocity anomalies can be seen in a general dimming of amplitudes beneath the anomaly. Due to redundancy and stacking, however, these effects tend to "heal" themselves in the section below the anomaly.
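Extracting the amplitude along a picked horizon--the basic ingredient of a bright spot map--can be sketched as follows. The nearest-sample lookup, the 4 ms sample interval and the function name are illustrative assumptions:

    import numpy as np

    def horizon_amplitude(volume, horizon_ms, dt_ms=4.0):
        # volume: 3-D array (inline, crossline, sample); horizon_ms:
        # picked two-way time (ms) at each (inline, crossline) position.
        idx = np.round(horizon_ms / dt_ms).astype(int)
        idx = np.clip(idx, 0, volume.shape[2] - 1)
        il, xl = np.indices(idx.shape)
        return volume[il, xl, idx]  # amplitude map to scan for bright spots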

Picking Amplitude Anomalies


Picking amplitude anomalies on maps is basically the same as picking horizons. When interpreting anomalies on vertical sections, however, it is best to create separate horizon files for the top and bottom reflectors. Be prepared to do a detailed examination of every inline, crossline and time slice in the area. You will also need to create many arbitrary lines in order to determine and map the extent of the anomaly accurately. In addition, complex attribute displays help to determine the extent of the anomaly, and the horizons may be used in volumetric mapping programs.

Mapping
Throughout this discussion, we have emphasized mapping as an interpretation tool, as a quality control aid used while picking horizons, and as an aid in understanding the structure and characteristics of the prospect. In this final segment on mapping, we examine these and other roles of mapping as they relate to 3-D surveys. Mapping should never be considered as an end product only. Creating and using maps while interpreting lines and time slices helps to clarify what a horizon surface looks like, how it is broken by faults and which segments are likely to be related. Analyzing maps early in the interpretation process also provides valuable information on how to proceed with the interpretation.

Contour Parameters and Map Editing


In defining the way maps are generated and displayed, you should adhere to two general rules of thumb. First, the gridding increments should never be much larger than the interval between interpreted lines. Second, a contouring interval should be selected which allows the surface relief to encompass the range of colors available to display it. It is also helpful, particularly in areas of complex faulting, to project the surface in space as a movable and rotatable wire mesh or solid surface.

Almost all mapping systems allow you to edit the surface control points (the picks) both before and after applying gridding algorithms to the map. Smoothing is also an accepted technique to enhance the appearance of the maps. However, each change made to the map is a change away from what you originally thought was the real position of the data. Therefore, wholesale changes to improve the appearance of a map should not be made without considering what is implied by the changes. In any case, all changes made to the map must be compared against the seismic data to ensure that the changes are reasonable and can be substantiated by the seismic data.

Iterating Between Maps and the Seismic Data


The ability to iterate between creating or editing maps and interpreting the seismic data moves mapping from being only a presentation device to being a modeling tool. This helps us visualize our interpretation as we go along, because we can see how our interpretation of the seismic data affects our maps. Each map datum adds something to the entire model, and we continually revise and upgrade our three-dimensional model as we work. Each time we change a map or change an interpretation of a line, we set up conditions to test and revise our model. Modeling points out small errors and alternative interpretations that help to make the interpretation more reasonable and more geologically accurate. It is the interplay between the maps and seismic data that creates a valuable interpretation and allows us to meld an art form with hard science.

Producing Computer Maps


Computer mapping has long been regarded as inferior to hand-contoured mapping, because a hand-drawn map (supposedly) reflects the individual's understanding of the prospect, while computer contouring algorithms have no such understanding and creativity. This is essentially true. Even today's sophisticated computer mapping programs cannot take the place of an individual's understanding and competence. However, systems are on the horizon which will allow the interpreter to interface with the seismic data and maps on a creative level, using the computer to test possibilities and produce high-quality final output products that reflect the desires and opinions of the interpreter.

Most mapping systems today allow the interpreter either to enter the data into a file that is transported to a mapping program or to use an interpretation software mapping program that is able to read the horizon and fault files directly from the database. Once the data are in the program, the interpreter controls all of the operations involved in producing the map and displaying the final copy. We will not discuss the details of any specific computer mapping packages. However, it is important for you to become familiar with the various options in your mapping package and to learn its strengths and idiosyncrasies.

One final word about mapping: a 3-D survey is different from a 2-D survey in terms of the density and distribution of data. This actually makes maps based on 3-D data easier to produce and more accurate, because there is less interpolation and more data used in creating accurate contours. The oversampling along seismic lines, versus the large areas of no data in a traditional 2-D survey, is not a problem for 3-D data mapping. In other words, in 3-D surveys, interpretation and interpolation are done on the data, not on the map. Presentation mapping is essentially the same for 3-D data as it is for 2-D. The number and types of maps may be the same, but the detail possible in the maps is much higher for a properly interpreted 3-D survey.

The Workstation as a Presentation Tool


Our last item of interest involves using the workstation as part of the interpretation presentation. Traditional methods of displaying maps and sections are still the primary vehicle for communicating with the management groups who make decisions on drilling and AFEs. With 3-D data and a little creativity, however, the workstation can be used to create and show more innovative displays. It is still unusual to get a management team to view an interpretation on the workstation. However, the use of color, multimedia presentations and animated video on the workstation is becoming more common in presentations. The day is coming when video screens and taped presentations will be a standard part of the management review process.
