
Basic Processing



BASIC PROCESSING OVERVIEW

OBJECTIVE OF SEISMIC PROCESSING

The main goal of seismic processing is to obtain the best image of the subsurface. To achieve this goal, seismic processing should improve the signal-to-noise ratio and locate the reflections in their true spatial positions.

We refer to as noise any recorded energy that does not come from the primary reflections. There are two types of noise:

Coherent noise: Seismic energy that is consistent from trace to

trace. The most common sources of coherent noise are interbed multiples, ground

roll, power lines and surface vibrations.

Random or ambient noise: Energy that lacks any relationship between traces. Random noise is usually caused by instrument noise, wind, and geophone coupling problems.

The most effective noise attenuation method (especially for random noise) is CMP stacking. Coherent noise is usually more difficult to suppress, and needs more specialized processes such as radon filters (multiple suppression), notch filters (power-line noise), f-k filters (wind noise), etc.

The second task of seismic processing is to locate the reflections in their true spatial position; this task is known as imaging. The method used to achieve this task depends on the acquisition geometry. Today the most widely used acquisition geometry is multifold geometry. Figure 1 shows a schematic representation of the multifold geometry, which consists of a number of receiver stations separated by the same distance (the station interval). Each receiver station records the wavefront produced by the seismic source; after the wavefront is recorded (during a fixed time interval known as the record length), the receivers are moved to the next shot location.


Figure 1

After the data is loaded and some initial processes are applied, it is sorted from the acquisition domain (the shot-gather domain) to the common-mid-point (CMP) domain. The CMP domain is explained in Figure 2; the data is sorted into groups (gathers) of traces that have the same source-receiver mid-point. It is in this domain that the most important imaging processes are applied.

Figure 2


After stacking, the seismic section usually does not accurately represent the locations of the reflectors, because the normal-incidence travel path is only valid for horizontal seismic interfaces. The process used to correct this effect is called seismic migration. Seismic migration improves the seismic image because the locations of subsurface structures (especially faults) are correct in migrated seismic data. Migration collapses diffractions from discontinuities and corrects bow ties to form synclines.

The most important decision to be taken during a seismic processing project is the processing flow. The processing flow should be adapted to the characteristics of the seismic data. The ability of the processor to find the best combination of processes is critical to the quality of the final section.

BASIC SEISMIC PROCESSING SEQUENCE

Although seismic processing flows must be adapted according to the characteristics of the data, they typically include three major steps:

1. Preprocessing and Deconvolution: The objectives of these steps are to:

1. Sort the data in the channel domain (demultiplexing)

2. Delete defective traces (trace editing)

3. Correct the amplitudes for wavefront divergence (gain recovery), apply datum corrections (elevation statics), and remove the seismic source effects (deconvolution).

2. Stacking and Velocity Analysis: During this step:

1. The data is sorted to the CMP domain (CMP sorting)

2. The moveout velocity is estimated (velocity analysis)

3. The moveout is removed (NMO correction) and the reverberations are suppressed (multiple attenuation).

3. Migration: The goal of this step is to locate the reflections at their correct spatial locations. This process is called seismic migration; it is a very important step because the imaged locations of subsurface structures depend on a correct selection of the migration parameters.

Figure 1 shows a flow chart of a basic processing sequence. It is important to mention that static corrections are applied to land data, while multiple attenuation is a process mainly designed for marine seismic data.


Figure 1

THE UTILITY OF CMP DATA

In discussing the improvement of the signal-to-noise ratio of a section, we keep as our data set the near-trace section that we use just to get a first look at the data. We remember, however, that we can construct essentially zero-offset sections from common-mid-point data. Furthermore, the multiplicity of data acquired by the CMP technique gives us √N-type signal enhancement as one of our benefits. The second stage of sophistication in processing, after improving the signal-to-noise ratio, is an analysis of how CMP data can be used to our best advantage.

The technique of common-mid-point recording is summarized briefly here. In Figure 1, a shot is recorded into the many geophone groups of a spread. Each recorded trace then represents a source-receiver pair, and each trace is represented, in turn, by its mid-point. The mid-point, we remember,

is a surface position halfway between the shot and the center of the geophone group. One such

mid-point and the source-receiver pair it represents are shown in Figure   2 .


Figure 1

Figure 2


By the time the next shot is taken, both shot and spread have moved, or rolled along. We

find that the mid-point of Figure   2 now lies halfway between another source and receiver

(Figure   3 ). In addition, as the line progresses, other source-receiver pairs are similarly disposed

about this mid-point. This one surface position is now the common mid-point for many source-receiver pairs (Figure 4), each of which is represented by a recorded trace.

Figure 3

Figure 4


While all the traces from Figure   4 have the same mid-point, they can be characterized by

different offsets. This fact permits a new definition of signal — that which is common to recordings

made at different offsets. Noise is, as usual, everything else.

Before we can take advantage of our CMP multiplicity, we have to arrange all the traces

by their mid-point coordinates. This CMP sort is done early in the processing; the suite of traces

thus brought together is a CMP gather (Figure   5 ). We mute the first breaks as we did on the near-

trace section, and are now ready to see the strength of the CMP technique.

Figure 5

FREQUENCY FILTERING

Filtering is a selective deletion of information passing through a system and, unless otherwise specified, indicates discrimination based on frequency. In processing, this is a useful approach, since signal and coherent noise (and some random noise) often have different, albeit possibly overlapping, spectra. (Ambient noise pervades the entire signal spectrum, and therefore poses a different problem.) We therefore define signal in the present context as that which falls within a desired frequency band, and noise as anything outside that range.


There are analog filters built into the recording instruments. For a variety of reasons,

however, these are designed deliberately to be wide-band. Therefore, we need to do our own

digital filtering in the processing center. This is most easily performed in the frequency domain,

which we get to by passing a trace through a digital "prism." In much the same way that an optical

prism breaks up a ray of light into its color (or frequency) components, so does a digital prism

reveal the amplitude spectrum, or frequency composition, of the trace (Figure   1 ).

Specification of a filter is in terms of its frequency response (Figure   2 ). Filtering is then

simply a matter of multiplying, frequency by frequency, the amplitude spectrum of the input signal

and the frequency response of the filter. After the filtering (Figure   3 ), we return to the trace as a

time series by passing it, backwards, through the digital prism.

Figure 1


Figure 2

Figure 3


The manner of specifying a digital filter varies with its type. For many commonly used

types, a low-cut corner frequency, a high-cut corner frequency, and the low-cut and high-cut

slopes may specify the filters. In Figure   4 , for example, the low-cut corner frequency is 12 Hz and

the high-cut is 128 Hz.

The corresponding slopes are 20 and 50 dB/octave. Many processors write this response

as 12/20-128/50; others write it as 12(20)-128(50).

For other types of digital filters, the specification is in terms of the corner frequencies and

the null frequencies. Thus, the response of the filter of Figure   5 might be written as 6-12-80-120

Hz.
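To make the corner/null specification concrete, here is a minimal Python sketch (an illustration, not a routine from the text) of a zero-phase trapezoidal bandpass filter applied through the frequency-domain "prism" described above. The 6-12-80-120 Hz values follow the Figure 5 example; the function assumes the Nyquist frequency exceeds the upper null.

    import numpy as np

    def bandpass(trace, dt, f1=6.0, f2=12.0, f3=80.0, f4=120.0):
        """Zero-phase trapezoidal bandpass: null-corner-corner-null, in Hz."""
        n = len(trace)
        freqs = np.fft.rfftfreq(n, d=dt)              # frequency axis, Hz
        # Amplitude response: 0 below f1, ramp up f1-f2, flat f2-f3,
        # ramp down f3-f4, 0 above f4 (assumes Nyquist > f4).
        response = np.interp(freqs,
                             [0.0, f1, f2, f3, f4, freqs[-1]],
                             [0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
        spectrum = np.fft.rfft(trace)                 # through the "prism"
        return np.fft.irfft(spectrum * response, n)   # back to a time series

Multiplying the spectrum by the response and inverse-transforming is exactly the frequency-by-frequency multiplication described in the preceding paragraphs.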

Figure 4


Figure 5

What types of noise does this frequency selectivity reduce? One is ground roll, the

surface waves that contaminate a section and provide no direct subsurface information. A near-

trace section is particularly vulnerable to ground roll. Because it does not use a spread of

geophones, the near-trace section does not afford recognition of the ground roll by its distinctive

low velocity (Figure   6 and Figure   7 ).

If the surface wave velocity is constant, the ground roll arrives on each trace of the near-

trace section at the same time, and so may appear as a flat, low-frequency reflection.


Figure 6


Figure 7

If the velocity varies, this "reflection" appears to show structure. f-k filtering is the most effective means of removing ground roll.

Because ground roll has many frequencies lower than those of the signal, filtering is a

useful device. The spectra do overlap, however, so we must take care not to filter out the surface waves indiscriminately; otherwise, we also eliminate some good signal.

Pervading the whole spectrum is wind noise, either directly on the geophones, or

indirectly by blowing dust or rustling vegetation. Digital filtering can help to attenuate those

components of the wind noise that are outside of the signal spectrum.

Finally, there is a host of other noise-producing sources, whether natural, such as animals

and raindrops, or artificial, such as machinery and power lines. Power-line noise, at 50 or 60 Hz, is

induced into the cables or geophones or injected into the cables at points of leakage to the

ground. One measure for counteracting power-line noise is the use of notch filters, having a very

narrow rejection region at the power-line frequency, in the field. Power-line noise is therefore

rarely a problem for processors, although digital notch filters are generally available should the

need arise. The other incidences of noise are random, and are usually susceptible to the √N-type attenuation effected by summing.
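As an illustration of the idea, a digital notch filter can be sketched with SciPy's iirnotch; the sampling rate, synthetic trace, and quality factor below are illustrative assumptions, not values from the text.

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 500.0                                  # sampling rate, Hz (2 ms samples)
    t = np.arange(0, 4.0, 1.0 / fs)             # 4 s trace
    signal = np.sin(2 * np.pi * 25 * t)         # 25 Hz "reflection" energy
    noisy = signal + 0.5 * np.sin(2 * np.pi * 60 * t)  # 60 Hz power-line hum

    # Very narrow rejection region centred on the power-line frequency.
    b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)
    clean = filtfilt(b, a, noisy)               # zero-phase application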

Wave action, marine organisms, streamer friction and turbulence, and streamer jerk

generate noise in marine work. Much of this can be dealt with only in processing. Given the

continuous nature of seismic operations at sea, any wait for conditions to improve cannot be less

than the time for a full circle; this is a serious loss of production. So, contracts generally specify

that recording be suspended when noise reaches a certain threshold, or when a certain number of

traces are noisy, or when air gun capacity falls below a certain volume. Shots from other boats

(Figure   8 ) and sideswipe from offshore structures (Figure   9 ) are additional causes of noise.

We do not hope to eliminate noise completely; rather, we want to reduce its harmful effects in processing, having done all we reasonably can in the field to minimize recording it.

Figure 8


Figure 9

ARRAY SIMULATION

Another common way to improve the signal-to-noise ratio is array simulation, or trace mixing, a useful method for improving the lateral continuity of seismic sections. Within the context of a near-trace section, we may choose to define signal as that which is common from trace to trace. Intertrace randomness, therefore, defines noise. Consequently, we can add adjacent traces representing adjacent reflection points to gain a measure of signal enhancement. In effect, this sort of summing (Figure 1: three input traces to one output trace yields a 3:1 mix) is analogous to the source and receiver arrays used in the field. When we perform the array simulation depicted in Figure 1, we usually obtain a signal-to-noise benefit (Figure 2, a conventional 3:1 mix).
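A minimal sketch of such a 3:1 mix, assuming the section is stored as a (traces x samples) array; equal weights are used here, though tapered weights are equally common.

    import numpy as np

    def trace_mix(section, weights=(1.0, 1.0, 1.0)):
        """3:1 mix: each output trace is a weighted sum of an input
        trace and its two neighbours; edge traces are left unmixed."""
        w = np.asarray(weights, dtype=float)
        w /= w.sum()                          # preserve overall amplitude level
        section = np.asarray(section, dtype=float)
        mixed = section.copy()
        mixed[1:-1] = (w[0] * section[:-2] +
                       w[1] * section[1:-1] +
                       w[2] * section[2:])
        return mixed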


Figure 1


Figure 2

We pay a price for this benefit, however, if the geology changes rapidly from trace to

trace. The process of adding part of the first input trace into the second output trace and, in this illustration, into the third output trace smears the data; we have sacrificed lateral resolution.

One situation in which the geology changes rapidly from trace to trace occurs in areas of

steep dip (Figure   3 ).

Figure 3

Here, a legitimate and valuable signal is not common (in the sense of being aligned) from

trace to trace. The output from an array simulation, therefore, can actually reduce the amplitude of

the reflection. Obviously, we need to strike some compromise between our desire for enhanced

signal to noise and the need to maintain the geologic validity of the data.

The prime determinant is the section itself. If the target horizon has steep dip, then trace

summing should be avoided. If, on the other hand, there is little dip at the target depth, and if the

noise is a serious problem, then this summing, or mixing, may be warranted.


PRE-PROCESSING AND DECONVOLUTION

DATA LOADING AND DEMULTIPLEXING

Seismic data is usually recorded on field tapes using the Digital Field Tape Standard (SEG-D) prepared by the Society of Exploration Geophysicists (SEG). This is a multiplexed format that sorts the data by scan time (Figure 1); that is, all recording channels are scanned in sequential order at each time sample. When field tapes arrive at the processing center, they are loaded into the processing package and demultiplexed (Figure 2). Demultiplexing is a simple mathematical operation that changes the sorting from scan time to recording channel, which is the conventional format for seismic processing. The easiest way to perform the demultiplexing is to transpose the field-data matrix sorted by scan time, so that each recording channel emerges as a complete trace.
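In array terms the operation is literally a transpose; a minimal sketch, with the record dimensions chosen arbitrarily for illustration.

    import numpy as np

    # Multiplexed field data: one row per scan time, one column per channel.
    n_scans, n_channels = 2001, 96
    multiplexed = np.zeros((n_scans, n_channels))  # stand-in for a SEG-D record

    # After transposing, each row is a complete trace (one recording
    # channel), which is the conventional order for seismic processing.
    demultiplexed = multiplexed.T                  # shape (96, 2001)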

Figure 1


Figure 2

TRACE EDITING

During this process, noisy traces are deleted to avoid their negative effect on the final stack. A noisy trace can be defined as a trace in which the noise level is higher than the dynamic range of the recording system. In this case, the trace is saturated with noise, and the signal cannot be enhanced by any digital filtering method. Experienced analysts are able to identify noisy traces in a shot gather display.

Most noisy traces are associated with problems in the recording system or, in the case of land seismic data, defects in the geophone-surface coupling. In the past, monochromatic noise from power lines used to saturate the limited dynamic range of older recording systems, forcing the editing of traces close to the power lines.

Figure 1 shows a synthetic shot gather with monochromatic noise in trace 5. If the frequency of the noise is 60 Hz or 50 Hz, it can be associated with a power-line source. Modern recording systems have a dynamic range large enough to allow the analyst to attenuate the monochromatic noise by applying digital filters (notch filters).

A trace with bad geophone-surface coupling can be observed in the synthetic shot gather shown in Figure 2. This kind of trace is usually called a dead trace; such traces should be deleted because they only add random noise to the final section.

Usually, during seismic acquisition the crew must repeat any shot that exceeds a maximum number of dead traces. The client or the acquisition QC officer specifies the maximum number of acceptable dead traces.


Figure 1


Figure 2

GAIN RECOVERY AND TRACE EQUALIZATION

SPHERICAL SPREADING

Seismic amplitude decays with time (Figure 1). The most readily determined cause of this decay is the phenomenon of geometrical spreading, whereby the propagation of energy occurs with an ever-expanding, curved wavefront. In the simplest case, that of a constant-velocity medium, the wavefront is a sphere. The seismic energy per unit surface area of this sphere is inversely proportional to the square of the distance from the shot. Energy is also proportional to the square of amplitude. It follows that seismic amplitude is inversely proportional to distance and, in a constant-velocity medium, to time.

Most of the decay due to spherical spreading occurs early; at later times, the slope of the

decay becomes successively smaller. Finally, the amplitude of the reflection drops below the level

of the ambient noise.

We wish to compensate for the known effect of spherical spreading; we want to suppress the amplitudes of the earlier arrivals and amplify the later ones. To do this, we first establish an appropriate reference amplitude A0 and determine its time t0. Because of the inverse relationship between amplitude and time, the product A0t0 equals the product Antn for all times. So, bringing An up to the reference amplitude A0 is simply a matter of multiplying it by the factor tn/t0. For example, if t0 is equal to one second, the spreading correction would be effected by multiplying each amplitude by its time.

We must be careful to stop our compensation at the time when reflection amplitude drops

below ambient noise amplitude, lest we increase the noise. If this time is tN, we simply leave the

multiplication factor at tN/t0 for all times greater than tN.
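A minimal sketch of this gain function, with the reference time t0 and the noise time tN as arguments:

    import numpy as np

    def spreading_gain(trace, dt, t0=1.0, t_noise=4.0):
        """Multiply each sample by t/t0, holding the factor at
        t_noise/t0 beyond the time where reflections drop below
        the ambient noise."""
        t = np.arange(len(trace)) * dt
        factor = np.minimum(t, t_noise) / t0   # linear gain, clamped at t_noise
        return np.asarray(trace, dtype=float) * factor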

Corrected shot records (Figure 2) and a corrected section (Figure 3) demonstrate much more evenness than before (Figure 4, two raw shot records with the near traces arrowed, and Figure 5, the near-trace section). A certain amount of amplitude decay remains (Figure 6), but it is less pronounced. In addition to correcting for spherical spreading, we may also apply an exponential gain to compensate for absorption losses.

Figure 1


Figure 2

Figure 3


Figure 4


Figure 5

Figure 6

AMPLITUDE ADJUSTMENTS

Besides spherical spreading, there are other reasons for the observable decay in seismic amplitudes. One cause is the fact that velocity is not constant, but ordinarily increases with depth.

Because of Snell's law, this increase means that the growth of the expanding wavefront is also not

constant, but accelerating. For this and other reasons, the observed decay of reflection amplitude

normally exceeds that imposed strictly by spherical spreading.

Amplitude can also vary from trace to trace. These inconsistencies arise not only from

genuine lateral inhomogeneities, but also from conditions in the field. Charge size and depth can vary along a line (and not always because we want them to; cultural impediments often impose restrictions on charge size in particular). When low-power surface sources such as Vibroseis are

used, a unit may fail, reducing the size of the source array. In marine work, air guns also

occasionally fail, reducing the source volume somewhat. The result is a nonuniform suite of

bangs. Moreover, on the receiving end, surface conditions can affect geophone plants.

The combined effect is a section where the traces are uneven, across and down. Deep,

weak reflections may be hard to see. A known reflector may appear to come and go across the

section. The solution involves normalizing the traces and then balancing them.

TRACE EQUALIZATION


Trace equalization is an amplitude adjustment applied to the entire trace. It is directly

applicable to the case of a weak shot or a poor geophone plant. We start with two traces that have

been corrected only for spherical spreading (Figure   7 ). Clearly, one trace has higher amplitudes

than the other does, so our task is to bring them both to the same level.

Figure 7

First, we specify a time window for each trace. Here, in the context of a near-trace

section, the windows are likely to be the same, say, 0.0 to 4.0 seconds. Then, we add the

(absolute) amplitude values of all the samples in the window for each trace. Division by the

number of samples within the window yields the mean amplitude of the trace. (As we apply this

process for all the traces of the section, we note the variability of the mean amplitudes.)

The next step is to determine a scaler, or multiplier, which brings the mean amplitudes up

or down to a predetermined value. If, for instance, this desired value is 1000, and the calculated

mean amplitudes are 1700 and 500, our scalers are 0.6 and 2.0; each scaler is applied to the

whole trace for which it is calculated (Figure   7 ).

Equalization enhances the appearance of continuity, and provides partial compensation

for the quirks in the fieldwork that might otherwise degrade data quality.
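A minimal sketch of the procedure, assuming the section is stored as a (traces x samples) array; the window and target level follow the example above.

    import numpy as np

    def equalize(section, dt, window=(0.0, 4.0), target=1000.0):
        """Scale each whole trace so that its mean absolute amplitude
        in the window equals `target`."""
        section = np.asarray(section, dtype=float)
        i0, i1 = int(window[0] / dt), int(window[1] / dt)
        out = np.empty_like(section)
        for k, trace in enumerate(section):
            mean_amp = np.mean(np.abs(trace[i0:i1]))
            scaler = target / mean_amp if mean_amp > 0 else 1.0
            out[k] = trace * scaler        # one scaler for the whole trace
        return out

With the values of the example, mean amplitudes of 1700 and 500 give scalers of about 0.6 and 2.0.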

TRACE BALANCING


Trace balancing is the adjustment of amplitudes within a trace, as opposed to among

traces. Its effect is, again, the suppression of stronger arrivals, coupled with the enhancement of

weaker ones, and its goal is the improvement of event continuity and visual standout. Two trace

balancing processes are Automatic Gain Control (AGC) and time-variant scaling.

As with trace equalization, trace balancing requires the calculation of the mean amplitude

in a given time window. In this step, however, there are numerous successive windows within

each trace (Figure   8 ), and the scalers apply only within those windows.

Figure 8

So, if our first calculated mean amplitude in Figure 8 is 5000, and the last is 500, and if

we want them both scaled to 1000, our initial approach might be to multiply the amplitudes in the

first window by 0.2, and those in the last by 2.0.

This process, however, would introduce discontinuous steps of amplitude at the junction of two windows. Two solutions to this are in common use. One solution, known as time-variant scaling (Figure 9), applies the computed scaler at the center of each window, and interpolates between these scalers at the intermediate points.

Figure 9

Another approach, automatic gain control (AGC), uses a sliding time window (Figure   10 ),

such that each window begins and ends one sample later than the one before. Again, the scaling

is applied to the amplitude of the sample at the center of the window. In this manner, we effect a

smooth scaling that reacts to major amplitude variations while maintaining sensitivity to local

fluctuations. We also ensure that peculiarly large amplitudes do not have undue influence

throughout the entire trace.
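A minimal (and deliberately unoptimized) sketch of such a sliding-window AGC; the window length and target amplitude are illustrative assumptions.

    import numpy as np

    def agc(trace, dt, window=0.5, target=1000.0):
        """AGC: the scaler computed from the mean absolute amplitude
        in each sliding window is applied at its centre sample."""
        trace = np.asarray(trace, dtype=float)
        half = int(window / (2 * dt))
        out = np.empty_like(trace)
        for i in range(len(trace)):
            lo, hi = max(0, i - half), min(len(trace), i + half + 1)
            mean_amp = np.mean(np.abs(trace[lo:hi]))
            out[i] = trace[i] * (target / mean_amp if mean_amp > 0 else 1.0)
        return out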


Figure 10

SOME FURTHER CONSIDERATIONS

We now have to think about some problems we may encounter. The problem of ambient noise first arose in our discussion of the spherical spreading correction. We encounter it again when we have to determine the length of our trace normalization window. Particularly for weaker shots, ambient noise can dominate signal at later reflection times. In settling on a normalization window, then, we may choose to use the weakest shot in our data set.

We need also to determine a reasonable window length to be used in trace balancing.

Ambient noise is not relevant here; rather, the prime determinant is the falsification of relative

reflection strengths.

Consider the trace of Figure 11; it has a high-amplitude reflection at 1.8 s, but is otherwise

reasonably well behaved. The center of the first time window we consider is at 1.6 s, at which time

there is a reflection whose amplitude is, say, 625. Because the anomalous reflection is included in

this window, the mean amplitude of the window is higher than it would otherwise be, perhaps 714.

If the desired average is 1000, the required scaler for the window is 1.4, and the reflection at 1.6 s

is brought up to 875.


Further down the trace, the sample at the center of a later window has an amplitude of 400. Within this window, there are no abnormal reflections, and the mean amplitude is 400. Therefore, a scaler of 2.5 does indeed bring the reflection up to an amplitude of 1000.

We see the problem: the large amplitude at 1.8 s causes a falsification of relative

reflection strengths by suppressing those amplitudes within a half window length of it. We reduce

this effect in several ways, acting separately or in concert.

Figure 11

First, we may weight the samples within each window (Figure   12 ). This reduces the

contribution to the mean absolute amplitude of every sample not at the center of the window.

Alternatively, we may reduce the window length (Figure   13 ), thereby reducing the number of

samples affected by the anomalous amplitude.

It varies with the data area, but a sensible trace-balancing window (provided a spherical spreading correction is applied first) is from 500 to 1000 ms. Our guide must be the data: if the amplitudes are fairly uniform, there is less of a need to balance, and we can get away with using longer windows.

A third method is to make the scaler some nonlinear function of the mean absolute

amplitude. We might, for instance, scale to an amplitude of 500 if the mean absolute amplitude in


a window is below 500; to 650 for mean amplitudes between 500 and 800; to 1000 for mean

amplitudes between 800 and 1200; to 1350 for mean amplitudes between 1200 and 1500; and to

1500 for mean amplitudes above 1500.

A fourth method is to ignore, in the calculation of mean amplitude, any values which

exceed the previous mean by some arbitrary ratio (perhaps 3:1). Thus, the scalers are derived

from what we might call the "background" level of reflection activity; the background levels are

balanced, but individual strong reflections maintain their relative strength.
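A minimal sketch of this fourth method, computing the "background" mean for one window; the 3:1 ratio follows the text.

    import numpy as np

    def background_mean(window_samples, prev_mean, ratio=3.0):
        """Mean absolute amplitude that ignores samples exceeding the
        previous window's mean by `ratio`, so isolated strong
        reflections do not distort the background level."""
        amps = np.abs(np.asarray(window_samples, dtype=float))
        background = amps[amps <= ratio * prev_mean]
        return background.mean() if background.size else prev_mean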

Figure 12


Figure 13

DATUM CORRECTIONS

Ideally, our near-trace section should represent accurately the configuration of the subsurface. Due to topographic and near-surface irregularities, this is not immediately the case. A line shot across a valley (Figure 1) can make a flat reflector appear as an anticline (Figure 2, the effect of elevation only).


Figure 1

Figure 2


Matters are further confused when the propagation path goes through the low-velocity

weathered layer. The thickness and velocity of this layer can change from shot-point to shot-point

(and, in the rainy season, from day to day). The resulting section demonstrates a lack of event

continuity as well as false structure (Figure   3 , the combined effect of elevation and weathering).

The key to resolving this problem is to select an arbitrary (but reasoned) datum plane,

such as sea level, and subtract that part of the travel time due to propagation above it. In effect,

this amounts to a "removal" of all material above the datum, and simulates the case of shots and

receivers on the plane. The time shifts that effect this removal are called datum corrections,

because they set zero time at the datum plane.

Alternatively, they are sometimes called field static corrections (field, because they are

calculated directly from field data: elevations, shot depths, weathering depths, etc. and static,

because they are applied over the entire length of the trace) and sometimes they are simply called

field statics.

Figure 3

The simplest of the datum corrections is the elevation correction (Figure   4 ). This

correction is appropriate when the bedrock outcrops at the surface, or is covered by a negligible

layer of soil or fill. We divide the surface elevation (above datum) by the bedrock velocity for each

shot-point (the source static) and its corresponding geophone group (the receiver static). The sum

of these quantities is the total static, and is subtracted from the total travel time to yield the travel

time below the datum.


Figure 4

The situation is slightly different with a buried source (Figure   5 ). The receiver static is the

same, of course, but a source static calculated as above removes too much time; we have to put

some of it back. The amount we restore to the travel time is the source depth (below the surface)

divided by the bedrock velocity. We are now in a position to make the proper corrections despite

variations in the elevations, shot depths, or both.
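A minimal sketch of these corrections; the elevations, shot depth, and bedrock velocity are arbitrary illustrative values (meters and meters per second).

    def source_static(surface_elev, datum_elev, shot_depth, v_bedrock):
        """Time (s) to subtract for a buried source: the elevation
        correction minus the time already saved by burying the shot."""
        return (surface_elev - datum_elev - shot_depth) / v_bedrock

    def receiver_static(surface_elev, datum_elev, v_bedrock):
        """Elevation correction for a surface receiver."""
        return (surface_elev - datum_elev) / v_bedrock

    # Total static subtracted from the trace's travel time:
    total = (source_static(350.0, 200.0, 15.0, 2500.0)
             + receiver_static(340.0, 200.0, 2500.0))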

Figure 5


Let us now introduce some real-world complications (Figure   6 ). In addition to changing

elevations and shot depths, we now have a near-surface layer of unconsolidated sediment above

the bedrock. This material, sometimes called the low-velocity layer, sometimes the weathered

layer, and sometimes just the weathering, is characterized by variability in thickness and velocity.

Further complications may be introduced by the presence of a water table, which is itself subject

to variations in depth. Whatever the case, the effect of this low-velocity layer is to slow down the

seismic wave, so that a simple elevation correction is inadequate. In effect, we have to correct the

correction. This compensation is called the weathering correction.

To determine time corrections (which is what statics are), we need both layer thickness

and velocity. The thickness of the low-velocity layer sometimes becomes apparent as each shot

hole is drilled. The weathering velocity, however, does not, unless we conduct some kind of

velocity survey. The variability of the material may require that such a survey be done at each shot

and receiver location, a procedure that is seldom economically viable. Fortunately, we can get a

direct reading of travel time in the near surface by using an uphole geophone, placed a few meters

from the shot-point, which records the arrival of a direct wave from the shot to the surface.

Figure 6

Figure   7 illustrates a common situation: a shot buried some 10 m below the weathering

and repeated at every receiver location. In this example, the wave generated by a shot at ground

position 1 bounces off a deep horizontal reflector beneath ground position 2, and is recorded by a

receiver at ground position 3. Subsequent shots go through similar travel paths. The total datum


correction for the first trace, then, consists of the source static at ground position 1 and the

receiver static at ground position 3 (Figure   8 ).

Calculation of the source static follows the example of Figure   5 . The receiver static,

however, can be broken down into two parts. The first part is clearly the same as the source static

for the shot at ground position 3. The second part is the travel time recorded by the uphole

geophone; this is the uphole time for the shot at ground position 3.

Some comments are now in order. First, the method of Figure   8 applies only to

subsurface sources. For surface sources such as Vibroseis, the best we can do is an elevation

correction plus whatever weathering information we have. Quite often, a large Vibroseis survey

will have strategically located weathering surveys conducted over it. Or, if it is a mature prospect,

or ties with dynamite lines, the velocity information of other vintages can be brought to bear. In

some areas, the weathering corrections can be derived from first breaks across the spread; this

approach is detailed in GP404 Static Corrections. Whatever the case, the datum corrections will

probably need refining in a later step.

We also find many lines where shots are drilled at alternate group locations, or even

every third. What we do here is simply interpolate uphole times when we do not have a shot at a

receiver location. It may not be correct, but it is easy, and in most cases will not be far wrong.

Figure 7


Figure 8

DECONVOLUTION

Deconvolution is a process that improves the vertical resolution of the seismic data by removing the effect of the seismic source from each trace. The process is based on the convolutional model of the seismic trace (Figure 1), which assumes that a seismic trace s(t) is the convolution of a reflectivity series r(t) and a seismic wavelet w(t). The convolutional equation can be written:

s(t) = r(t) * w(t)    (1)

where s(t) is the seismic trace, r(t) the reflectivity series, and w(t) the seismic wavelet. In seismic exploration, we are interested in the reflectivity series, which is the factor that carries the information about the subsurface properties. Equation (1) can be written in the frequency domain as:

S(ω) = R(ω)W(ω)


Figure 1

The simplest way to obtain the reflectivity series is by applying a filter that compresses the wavelet into a Dirac delta (a seismic spike). This process can be written:

δ(t) = w(t) * f(t)

where δ(t) is the Dirac delta and f(t) is the filter in the time domain. In the frequency domain, the filter can be obtained by:

F(ω) = 1/W(ω)

In practice, we are not able to compress the seismic wavelet into a spike due to noise and

bandwidth limitations. However, the vertical resolution is always enhanced. Figure 2 and Figure 3 show a seismic section without and with deconvolution, respectively. It is obvious that the vertical resolution and frequency content have been improved by the deconvolution.
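As a sketch of the idea (not of any specific algorithm in use here), the inverse filter F(ω) = 1/W(ω) is usually stabilized with a small "white noise" (water-level) term before being applied, precisely because of the noise and bandwidth limitations just mentioned; the wavelet estimate and the factor eps below are assumptions.

    import numpy as np

    def decon(trace, wavelet, eps=0.01):
        """Frequency-domain deconvolution with a stabilised inverse
        filter, returning an estimate of the reflectivity r(t)."""
        n = len(trace)
        S = np.fft.rfft(trace, n)
        W = np.fft.rfft(wavelet, n)
        stab = eps * np.max(np.abs(W)) ** 2          # water-level term
        F = np.conj(W) / (np.abs(W) ** 2 + stab)     # stabilised 1/W
        return np.fft.irfft(S * F, n)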


Figure 2

Figure 3

STACK AND VELOCITY ANALYSIS

CMP SORTING


Seismic data is usually recorded in multifold coverage geometry to improve the signal-to-noise ratio. The natural domain for field acquisition is the shot-receiver domain; Figure 1 shows a schematic representation of the recording geometry. After demultiplexing, the data is sorted by channel, or shot-receiver trace, as described in previous chapters. The next step is to convert the data from the shot-receiver domain to the common-mid-point domain (Figure 2); this process is called CMP sorting. During CMP sorting, traces with the same midpoint coordinates are grouped. Such a trace group is called a CMP gather.

In order to perform the CMP sorting, the processor should know the acquisition geometry and the source-receiver coordinates. With this information, midpoint coordinates for each trace are calculated. Finally, the traces are resorted to group those with the same midpoint coordinates.

The multifold acquisition technique, or CMP technique, has many advantages. First, the redundancy is the best tool for random noise attenuation. Second, the CMP domain is an effective domain for velocity analysis; and third, it is an effective tool for multiple suppression.
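A minimal sketch of CMP sorting, assuming straight-line 2-D geometry and a hypothetical bin size.

    import numpy as np

    def cmp_sort(traces, src_x, rcv_x, bin_size=12.5):
        """Group traces by midpoint. `traces` is (n_traces, n_samples);
        src_x and rcv_x hold one coordinate per trace."""
        midpoints = 0.5 * (np.asarray(src_x) + np.asarray(rcv_x))
        bins = np.round(midpoints / bin_size).astype(int)  # CMP bin index
        gathers = {}
        for i, b in enumerate(bins):
            gathers.setdefault(b, []).append(traces[i])    # one gather per CMP
        return gathers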

Figure 1


Figure 2

VELOCITY ANALYSIS

The nature of a common-mid-point gather is such that reflections must be common from trace to trace. After all, except in the case of strong dips, they come from substantially the same area of the reflector. In Figure 1, we see the traces of a synthetic CMP gather; the gather contains 50 versions (traces) of the reflection information derived from different source-receiver pairs.


Figure 1

We can see reflections common from trace to trace of the gather, but they are not yet

aligned in time; we have yet to account for the effect of the different offsets. As source-to-receiver

offset increases, so does the length of the travel path (Figure   2 ) and, therefore, the travel time. We

remember that this increase in reflection time (Figure   3 ) is known as normal moveout (NMO).

Before we can sum the traces of a gather, we need to make all the reflections align horizontally;

we need to remove the moveout. First, we have to determine just what the NMO is.


Figure 2

Figure 3


At a particular offset, the zero-offset reflection time and the appropriate velocity define

NMO. To see why this is, we modify Figure   2 slightly by moving the source from its actual position

to that of its image in the reflector (Figure   4 ). In the simple case of a reflector with no dip, this

manipulation yields a right triangle, the sides of which are source-receiver offset (x), zero-offset

reflection time (t0), and source-receiver reflection time (tx). If we assume a constant velocity, then

we can easily get all three sides into units of time (since x is in units of length), and then apply the

Pythagorean theorem governing right triangles. In this manner, we calculate the source-receiver

reflection time, from which we subtract the zero-offset reflection time to get the normal

moveout Δt. The usual form of the derived equation is:

tx² = t0² + x²/v²,  so that  Δt = tx − t0 = √(t0² + x²/v²) − t0

This equation is fundamental, and we must either memorize it or be ready to derive it

instantly.
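As a quick numeric check of the equation, with arbitrarily chosen values:

    import numpy as np

    t0 = 1.000   # zero-offset reflection time, s
    x = 1000.0   # source-receiver offset, m
    v = 2000.0   # velocity, m/s

    tx = np.sqrt(t0**2 + (x / v)**2)   # hyperbolic travel time: 1.118 s
    dt = tx - t0                       # normal moveout: about 118 ms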

Figure 4

For horizontal reflectors, then, the determination of normal moveout (at a certain reflection

time and offset) is tantamount to the determination of a velocity. The operation of determining the

normal moveout for purposes of NMO correction is therefore called velocity analysis. The name is

sometimes deceptive; what we are really determining is a normal moveout. The term velocity

analysis is widely used, however, and we shall use it here. The procedure is one of selecting a


trial velocity, determining the NMO pattern that results, removing this amount of NMO from the

gather, and measuring, in some way, the resulting alignment. We do this for a number of trial

velocities and select the one that gives the best alignment. This one provides the best fit to the

observed moveout pattern at that zero-offset time.

Although it would be ideal to make NMO determinations for every CMP, this is not

ordinarily done. Velocity analysis is an iterative procedure that must manipulate many reflections

on many traces — it takes a long time and is expensive. Instead, we space the analysis

locations out, performing each one at appropriate places along the line, perhaps every kilometer

or so. For the intermediate CMP locations, we interpolate our results. Although there are hazards

in doing this, it is usually acceptable.

NORMAL MOVEOUT CORRECTION

Once we determine the normal moveout, we apply the correction by removing that amount of time from the reflection time. The NMO correction is a time shift, and always negative. Furthermore, since moveout and, therefore, its removal are time varying, we call these time shifts dynamic corrections. After correction, events line up across the gather (Figure 1). By removing the effect of the offset, we simulate having all the sources and receivers at the mid-point.

Figure 1


Because NMO corrections are dynamic, it is possible that the amount of time-shift varies

within one reflection wavelet. At a given offset, NMO decreases with time; the early part of a

waveform is therefore subject to a greater time-shift than the later part. Figure 2 compares the near and the far traces of the NMO-corrected gather; the differences are due to NMO stretch, which has the appearance of a change in frequency. It is worst at early times and long offsets, where the rate of change of NMO is largest.

Clearly, NMO stretch is a problem when it comes time to sum the traces of the gather.

The solution is to apply a mute to the corrected gather (Figure   3 ), so that reflections with an

unacceptable degree of stretch are not included in the summation. As with all mutes, this is

applied with a ramp. Optionally, we might automatically mute any data for which the NMO stretch

exceeds a given value (e.g. 30%).

The final gather (Figure   3 ) is now ready for summation, to yield one stacked trace. The

summation procedure is known as CMP stacking or, usually, simply stacking; the entire suite of

stacked traces constitutes a stacked section. Moreover, the velocity implicit in the NMO correction

applied before stacking is called the stacking velocity.
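Pulling the last few steps together, here is a minimal single-velocity sketch of NMO correction, stretch mute, and stack. The interpolation-based correction and the 30% mute threshold follow the description above; everything else is an illustrative assumption.

    import numpy as np

    def nmo_stack(gather, offsets, v, dt, max_stretch=0.30):
        """NMO-correct a CMP gather (n_traces, n_samples) with one
        stacking velocity, mute over-stretched samples, and stack."""
        gather = np.asarray(gather, dtype=float)
        n_tr, n_s = gather.shape
        t0 = np.arange(n_s) * dt
        stacked = np.zeros(n_s)
        fold = np.zeros(n_s)
        for k in range(n_tr):
            tx = np.sqrt(t0**2 + (offsets[k] / v)**2)   # where each t0 sample sits
            corrected = np.interp(tx, t0, gather[k])    # dynamic time shift
            stretch = np.full(n_s, np.inf)
            np.divide(tx - t0, t0, out=stretch, where=t0 > 0)
            live = stretch <= max_stretch
            corrected[~live] = 0.0                      # stretch mute
            stacked += corrected
            fold += live
        return stacked / np.maximum(fold, 1)            # normalise by live fold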

Figure 2


Figure 3

MULTIPLE ATTENUATION

DEREVERBERATION

There is a noise category that is immune to both trace mixing and frequency discrimination. It is common from trace to trace, so summing enhances it. Its frequency range is the same as that of the signal, so filtering does not help. It is the water-layer "ringing" that arises because of the large acoustic contrasts at both the air-water interface and the sea floor. The effect (Figure 1) is that reflections from the ocean bottom come in repeatedly.


Figure 1

Figure 2


This phenomenon actually has two parts. The first part (Figure   2 ) is that portion of the

downgoing energy that never manages to penetrate the sea floor. The second occurs after the

recording of a primary reflection; this energy bounces off the air-water boundary and heads

downward again, to begin a new interlayer bouncing (Figure 2). A train of reverberations, therefore, follows the primary sea-floor reflection and all other primary reflections; this train is aggravated if the sea floor is hard.

Dereverberation is the process of counteracting the water-layer ringing. Although the

details are complicated, the process is conceptually simple. In effect, we determine the water

depth (and hence the reverberation times) and estimate the reverberation decay; this allows us to

synthesize the train of reverberations for each reflection, and simply subtract it from the section.

The result (Figure   3 ) is a cleaner, more interpretable, and better-resolved section.
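One classical way to synthesize and remove the reverberation train is the three-term inverse operator of Backus; this sketch assumes a known water depth and a water-bottom reflection coefficient r, both of which would in practice be estimated from the data.

    import numpy as np

    def dereverberate(trace, dt, water_depth, v_water=1500.0, r=0.3):
        """Convolve the trace with the Backus operator (1, 2r, r^2)
        at the water-layer two-way time, which cancels a first-order
        reverberation train."""
        lag = int(round(2 * water_depth / v_water / dt))  # two-way time, samples
        f = np.zeros(2 * lag + 1)
        f[0], f[lag], f[2 * lag] = 1.0, 2.0 * r, r * r    # (1 + r z^n)^2
        return np.convolve(np.asarray(trace, float), f)[:len(trace)]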

Figure 3

MIGRATION AND POST PROCESSES

SEISMIC MIGRATION

Quite often, our plotted reflection does not represent accurately the location of the reflector. This is because of the principle that the angle of reflection must equal the angle of incidence. In the case of coincident source and receiver (which our methods simulate), this means perpendicular incidence and reflection (Figure 1, the travel path for dipping reflectors). Since all events derived from one CMP are plotted vertically below that CMP, we need to move all reflections that are not horizontal. The process is called migration.

Figure 1

Migration is the repositioning of reflections so that their spatial relationships are correct. In

a sense, we move updip that part of the trace with the dipping reflector on it. Effectively, we move

the reflection from the trace that recorded it to the trace that would have recorded it if the source had been on the reflector and the travel path were vertically upward (Figure 2).


Figure 2

The process of migration is accomplished in several ways, one of which is easy to

understand. In Figure   3 , we see the actual seismic path to a dipping reflector, and in Figure   4 , we

see the error introduced when we plot the reflection below the observation point.

Figure 3


Figure 4


If all we have is the one observation at mid-point 1, and we have nothing else to tell us

the dip, then all we know is that we have a reflection at a certain time. In that case, it could have

come from anywhere along a surface (for now, let us say a circle, assuming zero offset and

constant velocity) representing that constant time. Therefore, in Figure   5 , we actually put that

reflector at all its possible sources around the circle.

Then, in Figure   6 , we do the same for the trace from mid-point 2, mid-point 3, and so on.


Figure 5

Figure 6

Our first thought is that this would produce total confusion. Nevertheless, we find that the

circles reinforce in just one zone, and that zone is the true position of the reflector. In practice, we

obtain a migrated section (Figure   7 ) , in which we see only the reflection moved to its correct

position; the distracting "smiles" of Figure   6 , which we would expect to see all over the section,

are usually seen only after the last reflection, when the section becomes dominated by noise. The

noise is a migration artifact, caused by a lack of destructive interference. The relationship between the true dip α and the apparent dip β on the time section is given by sin α = tan β (Robinson, 1983).
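A minimal brute-force sketch of the semicircle-superposition method just described, assuming zero offset and a constant velocity.

    import numpy as np

    def semicircle_migrate(section, dx, dt, v):
        """Spread every input sample along the constant-time semicircle
        of subsurface points that could have produced it; reflections
        reinforce at their true positions, and the "smiles" elsewhere
        tend to cancel."""
        section = np.asarray(section, dtype=float)
        n_traces, n_samples = section.shape
        migrated = np.zeros_like(section)
        xs = np.arange(n_traces) * dx
        for i in range(n_traces):                    # input mid-point
            for j in range(1, n_samples):            # input time sample
                radius = v * j * dt / 2.0            # one-way distance
                for k in range(n_traces):            # output trace
                    dz2 = radius**2 - (xs[k] - xs[i])**2
                    if dz2 > 0.0:
                        m = int(round(2.0 * np.sqrt(dz2) / v / dt))
                        if m < n_samples:
                            migrated[k, m] += section[i, j]
        return migrated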


In areas of gentle dip, failure to migrate a line is seldom so egregious an error that whole fields are missed. Migration does, however, enable the proper display of steeply dipping reflections, and is critical for the clarification of faults and unconformities. Still, we must be on guard: with 2-D migration, the migrated reflection stays in the plane of the section. If the dip has a component transverse to the line, this is erroneous. Proper migration of such data is afforded by 3-D techniques.

Figure 7

POST-PROCESSES

After the CMP stacking, some processes are applied to improve the interpretability of the final seismic section. Post-processes have two main goals: the first is to reduce the noise level, and the second is to equalize the amplitudes of the final section.

By the time we get to stack, we know the real signal bandwidth of the data. In general, the next step is to design a bandpass filter with an output/input response of one over that bandwidth, and a decreasing response on either side of it. The final frequency filter is designed to shape the amplitude spectrum of the final seismic section, as well as to reduce the numerical artifacts caused by the migration and the previous processes applied during the processing flow.

Another process used to reduce the random noise of the final stack is F-X deconvolution, which is applied as a final step of the signal-enhancement sequence. This filter uses the concept that the complexity of the wavefront can be modeled by many plane waves, which can be predicted accurately in the F-X domain; everything else is rejected as noise. This effective filter reduces the noise level of the seismic section and increases the lateral continuity of the reflectors. The problem with this kind of process is that small discontinuities and lateral amplitude variations are affected and, in some cases, the filter generates an artificial lateral continuity.

Amplitude equalization is applied to generate a well-balanced section, for eye appeal.

Interpreters usually use equalized sections for structural interpretation purposes. However,

equalized sections are not suitable for amplitude interpretation, because the relative amplitude

contrast is affected.

For structural purposes, we may then wish to perform trace equalization for exploration,

but never for detailed work. Where we do so, the goal is the balancing of amplitudes, so a time

and space variant automatic gain control (AGC) is preferred; a typical window length is 0.5 s. In

that case, it is still a good idea to provide a comparison section without equalization (Figure 1 and Figure 2 show a seismic section without and with AGC applied).

Figure 1


Figure 2

DISPLAY

Once we have made adjustments to the data amplitude (allowing us to see the data throughout the record length) and have compensated for the near surface (eliminating false structure and improving reflection continuity), we can view the section. This brings us to the matter of display, an aspect of processing that is sometimes given inadequate thought. From the processor's point of view, this is because the display variables are given by the interpreter. The interpreter, likewise, is forced to conform to a large suite of existing data and other company standards. We will not try, therefore, to outline one "correct" type of display. There are, however, some aspects of the display that all companies and contractors treat in a uniform manner. We examine these first, and then discuss matters that are truly variable.

STANDARD PLOT CONSIDERATIONS

Numbering the shot-points of a seismic section is often regarded as a strictly mechanical operation; after all, the shots themselves have already been numbered in the field such that there is some uniformity over the prospect area. In transferring these numbers to the section, however, the processor should remember that there is usually an offset between the shot and the near group. The field setup of Figure 1 (the weathering correction, the field method), illustrated here by Figure 2, is a case in point.


Figure 1

Certainly, Trace 1 comes from Shot 1, so the immediate temptation is to label it 1 on the

section. By the same token, however, Trace 1 comes also from Receiver 3, so our previous logic

makes labeling it 3 on the section just as valid. However, the mid-point for the first shot (and the

reflection point in the simple case of horizontal reflectors) falls directly beneath ground position 2.

We see, then, that Trace 1 is correctly numbered 2 (Figure   3 ).

Figure 2


Figure 3

The direction of plotting conforms to the way maps are drawn: east is to the right. The

lone exception is a line that trends due north, in which case north is to the right.

Any plot given to an interpreter — even a preliminary print — should have a side

label containing the field variables and the processing history. Inclusion of the latter becomes

more important as we increase our processing sophistication. In the steps we have taken so far,

we also need to include the processing variables we have used: the datum and the elevation

velocity; the cutoff time of the spreading correction; the trace normalization window; and the length

of the trace balancing window. Providing the interpreter with all these details allows him to

evaluate the data in light of how the line was shot and processed, and to compare the section

against older vintages. An interpreter can also use this information to decide what he does not

want to do again!

SCALES


When we choose plotting scales for a seismic section, we need first to decide the purpose

of the section. Is this a line from which we want to extract fine stratigraphic detail? On the other

hand, shall we be content to map regional trends and large structures?

If we plan to use the section for detailed work, we find that a vertical (time) scale of ten

centimeters to one second of (two-way) reflection time is usually adequate. This approximates to

four inches per second, and is usually enough to accommodate the frequencies normally

associated with this level of detail. When the effort is more on a regional level, that is, for

reconnaissance lines, we may choose to halve the time scale, so that now we have five

centimeters (about two inches) per second. (When high frequencies, more than 50 or 60 Hz, and

the subtlest traps are the objectives, some interpreters use a time scale of 20 cm/s.)

The choice of horizontal scale depends on the vertical scale. For a "full-scale section,"

one with a time scale of 10 cm/s, we find it convenient to make the horizontal scale equal to the

scale of the map on which the interpreter is working. For typical prospects, this is usually 1:25,000,

which means that four centimeters on the section (or map) represents one kilometer on the

ground. In the U.S., the comparable scale is 1:24,000; one inch on the map represents 2000 ft on

the ground. This relationship is useful should the interpreter wish to construct a fence diagram

(Figure   4 ) , a network of sections, aligned as on a map, to illustrate variations in three dimensions.

Figure 4


For the "half-scale section," where the time scale is 5 cm/s, we keep proportions the same

as on the full-scale section by using a horizontal scale of 1:50,000 (two centimeters equals one

kilometer). In both cases, the effect, for typical velocities, is a vertical exaggeration (or horizontal

compression) of about 2:1 (more commonly nowadays, the 3-D volume would be viewed and

interpreted at a workstation).

A hardship may arise when we try to tie our section to those of older vintages that used

different scales. Moreover, the older lines may have been shot with different field variables, which

may mean that a constant horizontal scale results in different trace widths. In either case,

comparisons are difficult. In critically important cases, the solution is to reprocess the pertinent

older data — particularly as analysis techniques continue to improve; we then have an opportunity

to plot these sections to conform to our new standards.

Irrespective of the choice of scales, these variables must be annotated on the section,

preferably across the top of the section (Figure   5 ) , and certainly on the side label (Figure   6 ).

Figure 5


Figure 6

Trace Display Modes

Now to the matter of the traces themselves. We can use five conventional display modes:

wiggles only;

wiggles with variable area;

variable area only; and

variable density, with and without wiggles.

The first, wiggles only (Figure 7), is the sort of display that comes out of the field camera

monitor.


Figure 7

Subtle features are hard to see, trace overlap can cause confusion, and zero crossings

are not readily apparent. Generally, a display of this sort is appropriate only for a preliminary,

comparative quality check of the data, of the sort the field observer performs.

A sophistication of the wiggles-only plot comes with blacking in one side (Figure   7 ) ; this

is the variable-area-wiggles (v-a-w) plot. The full waveform is plotted, but this mode gives

emphasis only to the blacked-in peaks. Still, the full-waveform plotting is important when we

undertake wavelet processing, and when we want to pick and time an event properly. The variable

area wiggles plot has proven to be the most popular display mode in current practice.

A plot without wiggles but with variable area (v-a) only may be arranged to give equal

emphasis to positive and negative excursions of the trace (Figure   8 ).


Figure 8

In other words, we see the trace swing to the left as clearly as we see it swing to the right.

Such a plot also prints better than a v-a-w plot. The problem with this mode is eyestrain,

particularly for a person with astigmatism. This is no small consideration for someone who must

pore over these sections many hours a day.

With the rectified variable-area plot (Figure   9 ) , peaks (black) and troughs (gray) are

plotted on the same side of the trace. For clarity of faults, these are very effective displays.

Figure 9

The choice of display mode is a matter of personal taste, but is also affected by company

dictum. Currently, the most popular medium of data exchange is the variable area wiggles plot.

With so much data being sold, traded, and processed for partnerships, chances are we shall all

work in this mode at one time or another.