Basic Data Processing Sequence


  • 8/10/2019 Basic Data Processing Sequence


    BASIC DATA PROCESSING SEQUENCE

There are three primary steps in processing seismic data: deconvolution, stacking, and migration, in their usual order of application.

Deconvolution acts along the time axis. It removes the basic seismic wavelet (the source time function modified by various effects of the earth and the recording system) from the recorded seismic trace and thereby increases temporal resolution. Deconvolution achieves this goal by compressing the wavelet.

Stacking also is a process of compression. In particular, the data volume is reduced to a plane of midpoint-time at zero offset (the frontal face of the prism), first by applying normal moveout correction to traces from each CMP gather, then by summing them along the offset axis. The result is a stacked section.

Finally, migration commonly is applied to stacked data. It is a process that collapses diffractions and maps dipping events on a stacked section to their supposedly true subsurface locations. In this respect, migration is a spatial deconvolution process that improves spatial resolution.

FIG. 1. Seismic data volume represented in processing coordinates midpoint-offset-time. Deconvolution acts on the data along the time axis and increases temporal resolution. Stacking compresses the data volume in the offset direction and yields the plane of the stacked section (the frontal face of the prism). Migration then moves dipping events to their true subsurface positions and collapses diffractions, thus increasing lateral resolution.

All other processing techniques may be considered secondary in that they help improve the effectiveness of the primary processes. For example, dip filtering may need to be applied before deconvolution to remove coherent noise so that the autocorrelation estimate is based on reflection energy that is free from such noise.


Wide band-pass filtering also may be needed to remove very low- and high-frequency noise. Before deconvolution, correction for geometric spreading is necessary to compensate for the loss of amplitude caused by wavefront divergence. Velocity analysis, which is an essential step for stacking, is improved by multiple attenuation and residual statics corrections.

    Preprocessing Sequence

    Demultiplexing

Field data are recorded in a multiplexed mode using a certain type of format. The data first are demultiplexed as described in Figure 2. Mathematically, demultiplexing is seen as transposing a big matrix so that the columns of the resulting matrix can be read as seismic traces recorded at different offsets with a common shot point. At this stage, the data are converted to a convenient format that is used throughout processing. This format is determined by the type of processing system and the individual company. A common format used in the seismic industry for data exchange is SEG-Y, established by the Society of Exploration Geophysicists.

FIG. 2. Seismic data are recorded in rows of samples taken at the same time at consecutive channels. Demultiplexing involves sorting the data into columns of samples: all the time samples in one channel followed by those in the next channel.
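The matrix-transpose view of demultiplexing can be sketched in a few lines. The layout below is a toy illustration; real field formats such as SEG-D also interleave headers and auxiliary channels.

```python
import numpy as np

# Toy multiplexed record (assumed layout): element [i, j] is time sample i
# recorded on channel j; the recorder writes it row by row as it scans channels.
multiplexed = np.array([[ 0,  1,  2],
                        [10, 11, 12],
                        [20, 21, 22],
                        [30, 31, 32]])

# Demultiplexing transposes the matrix so that each row of the result holds
# all the time samples of one channel, i.e. one seismic trace, contiguously.
traces = multiplexed.T
```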

    Editing

Preprocessing also involves trace editing. Noisy traces, traces with transient glitches, and monofrequency signals are deleted; polarity reversals are corrected. In the case of very shallow marine data, guided waves are muted since they travel horizontally within the water layer and do not contain reflections from the substratum.

Marine data are contaminated by swell noise and cable noise. These types of noise carry very low-frequency energy but can be high in amplitude. They can be recognized by their distinctive linear pattern and vertical streaks. The swell noise and cable noise are removed from shot records by low-cut


filtering. Attenuation of coherent linear noise associated with side scatterers and ground roll may require techniques based on dip filtering.

    Gain Recovery

Following the trace editing and prefiltering, a gain recovery function is applied to the data to correct for the amplitude effects of spherical wavefront divergence. This amounts to applying a geometric spreading function, which depends on traveltime.

Optionally, this amplitude correction is made dependent on a spatially averaged velocity function, which is associated with primary reflections in a particular survey area. Additionally, an exponential gain function may be used to compensate for attenuation losses.

Geometric spreading correction:

The earth has two effects on a propagating wavefield:

a- In a homogeneous medium, energy density decays proportionately to 1/r², where r is the radius of the wavefront. (In practice, velocity usually increases with depth, which causes further divergence of the wavefront and a more rapid decay in amplitudes with distance.)

b- The frequency content of the initial source signal changes in a time-variant manner as it propagates. (In practice, high frequencies are absorbed more rapidly than low frequencies.)

The gain function for geometric spreading compensation is defined by

g(t) = [v²(t) t] / [v²(t₀) t₀]

where v(t₀) is the reference velocity at a specified time t₀.
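A minimal sketch of a traveltime-dependent spreading gain, assuming the common g(t) = v²(t)·t / (v²(t₀)·t₀) form with a user-supplied velocity function (function and argument names are illustrative):

```python
import numpy as np

def spreading_gain(t, v, t0, v0):
    # Geometric-spreading gain g(t) = (v(t)^2 * t) / (v0^2 * t0), where v0 is
    # a reference velocity at a reference time t0.
    return (v ** 2 * t) / (v0 ** 2 * t0)

# With a constant velocity, the gain grows linearly with traveltime.
t = np.array([1.0, 2.0, 4.0])
g = spreading_gain(t, v=np.full(3, 2000.0), t0=1.0, v0=2000.0)
```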

Programmed gain control (PGC):

PGC is the simplest type of gain. The gain function is defined by interpolation between scalar values specified at particular time samples. A single PGC function is applied to all traces in a gather or stacked section to preserve relative amplitude variations in the lateral direction.
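The interpolation of specified scalars can be sketched with a linear interpolant; the knot times and gain values below are hypothetical picks, not from the text:

```python
import numpy as np

# PGC sketch: gain scalars specified at a few times, linearly interpolated
# to every output time.
t_knots = np.array([0.0, 1.0, 3.0])   # times of the specified scalars (s)
g_knots = np.array([1.0, 2.0, 6.0])   # gain scalars at those times
t = np.arange(0.0, 3.0, 0.5)
gain = np.interp(t, t_knots, g_knots)
# e.g. the gain at t = 1.5 s is 3.0, halfway between the 1.0 s and 3.0 s knots
```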

RMS Amplitude AGC:

The RMS amplitude AGC gain function is based on the rms amplitude within a specified time gate on an input trace. The gain function is computed as follows: the input trace is subdivided into fixed time gates; the amplitude of each sample in a gate is squared; the mean of these values is computed and its square root is taken, giving the rms amplitude over that gate; and the gain for the gate is the ratio of the desired rms level to this actual rms amplitude.
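The per-gate computation can be sketched as follows (blockwise gains only; production code would interpolate the scalars between gate centres):

```python
import numpy as np

def rms_agc_gain(trace, gate_len, desired_rms=1.0):
    # One gain scalar per fixed gate: desired_rms / rms(gate).
    gains = []
    for start in range(0, len(trace), gate_len):
        gate = trace[start:start + gate_len]
        rms = np.sqrt(np.mean(gate ** 2))
        gains.append(desired_rms / rms if rms > 0 else 0.0)
    return np.array(gains)

# First gate has rms 2 (gain 0.5); second gate has rms 0.5 (gain 2).
trace = np.array([2.0, -2.0, 2.0, -2.0, 0.5, -0.5, 0.5, -0.5])
gains = rms_agc_gain(trace, gate_len=4)
```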


Instantaneous AGC:

Instantaneous AGC is one of the most common gain types used. The gain function is computed as follows: the mean absolute value of trace amplitudes is computed within a specified time gate, and the ratio of the desired rms level to this mean value is assigned as the value of the gain function.
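A sliding-gate sketch of the recipe above; the one-sample-at-a-time gate movement is an assumption about how "instantaneous" is realized, and edge handling is a simple clamp:

```python
import numpy as np

def instantaneous_agc(trace, gate_len, desired_level=1.0):
    # For each output sample, the gain is the ratio of the desired level to
    # the mean absolute amplitude in a gate centred on that sample.
    half = gate_len // 2
    out = np.empty_like(trace)
    for i in range(len(trace)):
        gate = trace[max(0, i - half): i + half + 1]
        mean_abs = np.mean(np.abs(gate))
        out[i] = trace[i] * (desired_level / mean_abs if mean_abs > 0 else 0.0)
    return out
```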

    Field Geometry

Finally, field geometry is merged with the seismic data. This precedes any gain correction that is offset-dependent. Based on survey information for land data or navigation information for marine data, coordinates of shot and receiver locations for all traces are stored on trace headers. Changes in shot and receiver locations are handled properly based on the information available in the observer's log. Many types of processing problems arise from setting up the field geometry incorrectly. As a result, the quality of a stacked section can be degraded severely.

    Elevation Statics

For land data, elevation statics are applied at this stage to reduce traveltimes to a common datum level. This level may be flat or vary (floating datum) along the line. Reduction of traveltimes to a datum usually requires correction for the near-surface weathering layer in addition to differences in elevation of source and receiver stations. Estimation and correction for the near-surface effects usually are performed using refracted arrivals associated with the base of the weathering layer.

The statics corrections require knowledge of the near-surface model. The near-surface often consists of a low-velocity weathering layer. However, there are exceptions to this simplified model for the near-surface. Areas covered with glacial tills, volcanic stringers, and sand dunes often have a near-surface that may consist of more than one layer with different velocities. Layer boundaries can vary significantly from a flat interface to an arbitrarily irregular shape. The single-layer assumption for the near-surface also is violated when there is a lateral change in rock composition associated with outcrops, pinchouts, or a flood plain along a seismic profile.

In practice, a single-layer near-surface model often is sufficient for resolving long-wavelength statics anomalies. Complexities in a single-layer near-surface model can be due to one or more of the following:

(a) Rapid variations in shot and receiver station elevations,
(b) Lateral variations in weathering velocity, and
(c) Lateral variations in the geometry of the refractor, which, for refraction statics, is defined as the interface between the weathering layer above and the bedrock below.


    Processing Sequence

    Deconvolution

Deconvolution compresses the wavelet in the recorded seismogram and attenuates reverberations and short-period multiples, thus increasing temporal resolution and yielding a representation of the subsurface reflectivity.

Typically, prestack deconvolution is aimed at improving temporal resolution by compressing the effective source wavelet contained in the seismic trace to a spike (spiking deconvolution). Predictive deconvolution with a prediction lag (commonly termed gap) that is equal to the first or second zero crossing of the autocorrelation function also is used commonly.

Although deconvolution usually is applied to prestack data trace by trace, it is not uncommon to design a single deconvolution operator and apply it to all the traces on a shot record. Deconvolution techniques used in conventional processing are based on optimum Wiener filtering.

    Optimum Wiener Filtering

In signal processing, the Wiener filter is a filter proposed by Norbert Wiener. Its purpose is to reduce the amount of noise present in a signal by comparison with an estimation of the desired noiseless signal.

Typical filters are designed for a desired frequency response. However, the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output would come as close to the original signal as possible. Wiener filters are characterized by the following:

1. Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation.

2. Requirement: the filter must be physically realizable/causal (this requirement can be dropped, resulting in a non-causal solution).

3. Performance criterion: minimum mean-square error (MMSE).

Wiener deconvolution is an application of the Wiener filter to the noise problems inherent in deconvolution. It works in the frequency domain, attempting to minimize the impact of deconvolved noise at frequencies which have a poor signal-to-noise ratio.

Given a system

y(t) = h(t) * x(t) + n(t)

where * denotes convolution and:

x(t) is some input signal (unknown) at time t,


h(t) is the known impulse response of a linear time-invariant system,
n(t) is some unknown additive noise, independent of x(t), and
y(t) is our observed signal.

Our goal is to find some g(t) so that we can estimate x(t) as follows:

x̂(t) = g(t) * y(t)

where x̂(t) is an estimate of x(t) that minimizes the mean square error. The Wiener deconvolution filter provides such a g(t). The filter is most easily described in the frequency domain:

G(f) = H*(f) S(f) / [ |H(f)|² S(f) + N(f) ]

where * denotes complex conjugation and:

G(f) and H(f) are the Fourier transforms of g(t) and h(t), respectively,
S(f) is the mean power spectral density of the input signal x(t), and
N(f) is the mean power spectral density of the noise n(t).

The filtering operation may be carried out in the time domain, or in the frequency domain:

X̂(f) = G(f) Y(f)

where Y(f) is the Fourier transform of y(t); an inverse Fourier transform on X̂(f) then yields x̂(t).
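The frequency-domain filter can be sketched with FFTs, approximating the unknown ratio N(f)/S(f) by a constant 1/snr (a common simplification, not from the text):

```python
import numpy as np

def wiener_deconv(y, h, snr):
    # Frequency-domain Wiener deconvolution:
    #   G(f) = H*(f) / (|H(f)|^2 + 1/snr)
    # i.e. N(f)/S(f) replaced by the constant 1/snr (an assumption; the true
    # signal and noise spectra are rarely known in practice).
    n = len(y)
    H = np.fft.fft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(G * np.fft.fft(y)))

# Noise-free check: blur a spike with a short wavelet, then deconvolve.
x = np.zeros(32)
x[5] = 1.0
h = np.array([1.0, -0.5])  # assumed short wavelet
y = np.real(np.fft.ifft(np.fft.fft(h, 32) * np.fft.fft(x)))  # circular h * x
x_hat = wiener_deconv(y, h, snr=1e6)
```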

The Wiener filter applies to a large class of problems in which any desired output can be considered, not just the zero-lag spike. Five choices for the desired output are:

    Type 1: Zero-lag spike,

    Type 2: Spike at arbitrary lag,

    Type 3: Time-advanced form of input series,

    Type 4: Zero-phase wavelet,

    Type 5: Any desired arbitrary shape.

    Spiking Deconvolution

The process with type 1 desired output (zero-lag spike) is called spiking deconvolution. Crosscorrelation of the desired spike (1, 0, 0, ..., 0) with the input wavelet (x₀, x₁, x₂, ..., xₙ₋₁) yields the series (x₀, 0, 0, ..., 0).

A flowchart for Wiener filter design and application
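Designing the spiking operator from the Toeplitz normal equations can be sketched as follows; the operator length and prewhitening percentage are illustrative choices:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(wavelet, n, prewhitening=0.001):
    # Least-squares spiking-deconvolution operator of length n: solve the
    # Toeplitz normal equations R a = g, where R is built from the wavelet's
    # autocorrelation (zero lag scaled up slightly for prewhitening) and g is
    # the crosscorrelation of the desired zero-lag spike with the wavelet,
    # i.e. (x0, 0, 0, ..., 0).
    r = np.correlate(wavelet, wavelet, mode='full')[len(wavelet) - 1:]
    r = np.concatenate([r, np.zeros(max(0, n - len(r)))])[:n]
    r[0] *= 1.0 + prewhitening
    g = np.zeros(n)
    g[0] = wavelet[0]
    return solve_toeplitz(r, g)

w = np.array([1.0, -0.5])      # minimum-phase test wavelet
a = spiking_decon(w, n=20)
spike = np.convolve(w, a)      # approximately (1, 0, 0, ...)
```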


In conclusion, if the input wavelet is not minimum phase, then spiking deconvolution cannot convert it to a perfect zero-lag spike. Although the amplitude spectrum is virtually flat, as shown in frame (a), the phase spectrum of the output is not minimum phase, as shown in frame (m). Finally, note that the spiking deconvolution operator is the inverse of the minimum-phase equivalent of the input wavelet. This wavelet may or may not be minimum phase.

    Prewhitening

As we mentioned in the previous section, spiking deconvolution cannot convert a wavelet that is not minimum phase to a perfect zero-lag spike. What if we had zeroes in the amplitude spectrum of the input wavelet? To study this, we apply a minimum-phase band-pass filter with a wide passband (30-108 Hz) to the minimum-phase wavelet. Deconvolution of the filtered wavelet does not produce a perfect spike; instead, a spike accompanied by high-frequency pre- and post-cursors results. This poor result occurs because the deconvolution operator tries to boost the absent frequencies, as seen from the amplitude spectrum of the output. Prewhitening stabilizes the operator: a small percentage of white noise is added to the zero lag of the autocorrelation (equivalently, a constant bias to the amplitude spectrum) before computing the inverse, so that absent frequencies are not boosted without bound.

    Predictive deconvolution

The type 3 desired output, a time-advanced form of the input series, suggests a prediction process. Given the input x(t), we want to predict its value x(t + α) at some future time (t + α), where α is the prediction lag. Wiener showed that the filter used to estimate x(t + α) can be computed by using a special form of the matrix equation, derived by Robinson and Treitel.
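A sketch of the prediction-error filter design from the trace autocorrelation; the filter length, lag, and the synthetic reverberating trace in the check are illustrative, and this Toeplitz normal-equations route is a generic least-squares formulation rather than Robinson and Treitel's exact derivation:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def prediction_error_filter(trace, n, lag, prewhitening=0.001):
    # Design an n-point prediction filter a from the trace autocorrelation r
    # via Toeplitz normal equations R a = (r[lag], ..., r[lag+n-1]), then form
    # the prediction-error filter (1, 0, ..., 0, -a): convolving it with the
    # trace removes energy that is predictable `lag` samples ahead
    # (e.g. periodic reverberations and short-period multiples).
    r = np.correlate(trace, trace, mode='full')[len(trace) - 1:]
    c = r[:n].copy()
    c[0] *= 1.0 + prewhitening   # prewhitening stabilizes the inversion
    a = solve_toeplitz(c, r[lag:lag + n])
    return np.concatenate([[1.0], np.zeros(lag - 1), -a])
```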

    CMP Sorting

Seismic data acquisition with multifold coverage is done in shot-receiver (s, g) coordinates. Figure 4a is a schematic depiction of the recording geometry and raypaths associated with a flat reflector. Seismic data processing, on the other hand, conventionally is done in midpoint-offset (y, h) coordinates. The required coordinate transformation is achieved by sorting the data into CMP gathers. Based on the field geometry information, each individual trace is assigned to the midpoint between the shot and receiver locations associated with that trace. Those traces with the same midpoint location are grouped together, making up a CMP gather.

Figure 4b depicts the geometry of a CMP gather and raypaths associated with a flat reflector. Note that a CDP gather is equivalent to a CMP gather only when reflectors are horizontal and velocities do not vary horizontally. When there are dipping reflectors in the subsurface, however, the two gathers are not equivalent and only the term CMP gather should be used.
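The midpoint assignment and grouping can be sketched with a dictionary keyed by midpoint; the header values below are toy numbers:

```python
from collections import defaultdict

# Toy trace headers: (shot_x, receiver_x) surface positions for five traces.
headers = [(0, 100), (0, 200), (50, 150), (50, 250), (100, 100)]

# Assign each trace to the midpoint of its shot and receiver positions, then
# group traces sharing a midpoint into a CMP gather (offset kept for NMO).
cmp_gathers = defaultdict(list)
for trace_id, (s, g) in enumerate(headers):
    midpoint = (s + g) / 2
    offset = g - s
    cmp_gathers[midpoint].append((trace_id, offset))

# Traces 1, 2 and 4 share midpoint 100 and form a three-fold CMP gather.
```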

The following gather types are identified in Figure 5:

A) Common-shot gather (shot record, field record),
B) Common-receiver gather,
C) Common-midpoint gather (CMP gather, CDP gather),
D) Common-offset section (constant-offset section),
E) CMP-stacked section (zero-offset section).


FIG. 4. (a) Seismic data acquisition is done in shot-receiver (s, g) coordinates. The processing coordinates, midpoint-(half) offset (y, h), are defined in terms of (s, g): y = (g + s)/2, h = (g - s)/2. The shot axis here points opposite the profiling direction, which is to the left. On a flat reflector, the subsurface is sampled by reflection points that span a length equal to half the cable length. (b) Seismic data processing is done in midpoint-offset (y, h) coordinates. The raypaths are associated with a single CMP gather at midpoint location M. A CMP gather is identical to a CDP gather if the depth point were on a horizontally flat reflector and if the medium above were horizontally layered.


    Normal Moveout

Consider a reflection event on a CMP gather. The difference between the two-way time at a given offset and the two-way zero-offset time is called normal moveout (NMO). Reflection traveltimes must be corrected for NMO prior to summing the traces in the CMP gather along the offset axis.

    The normal moveout depends on:

    Velocity above the reflector,

    Offset,

    Two-way zero-offset time associated with the reflection event,

    Dip of the reflector,

    The source-receiver azimuth with respect to the true-dip direction, and

    The degree of complexity of the near-surface and the medium above the reflector.

NMO for a Flat Reflector

Figure 6 shows the simple case of a single horizontal layer. At a given midpoint location M, we want to compute the reflection traveltime t along the raypath from shot position S to depth point D, then back to receiver position G. Using the Pythagorean theorem, the traveltime equation as a function of offset is:

t²(x) = t²(0) + x²/v²

where x is the distance (offset) between the source and receiver positions, v is the velocity of the medium above the reflecting interface, and t(0) is twice the traveltime along the vertical path MD.
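The hyperbolic traveltime t²(x) = t²(0) + x²/v² and the NMO correction it implies can be sketched directly:

```python
import numpy as np

def nmo_traveltime(t0, x, v):
    # Hyperbolic traveltime t(x) = sqrt(t0^2 + x^2 / v^2) for a flat
    # reflector: t0 is the two-way zero-offset time, x the offset, v the
    # velocity above the reflector.
    return np.sqrt(t0 ** 2 + (x / v) ** 2)

def nmo_correction(t0, x, v):
    # Normal moveout is the time difference removed before stacking.
    return nmo_traveltime(t0, x, v) - t0

# 3-4-5 check: t0 = 3 s and x/v = 4 s of moveout give t(x) = 5 s, NMO = 2 s.
```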

NMO for a Dipping Reflector

Figure 7 depicts a medium with a single dipping reflector. We want to compute the traveltime from source location S to the reflector at depth point D, then back to receiver location G. For the dipping reflector, midpoint M is no longer a vertical projection of the depth point to the surface. The terms CDP gather and CMP gather are equivalent only when the earth is horizontally stratified. When there is subsurface dip or lateral velocity variation, the two gathers are different. Midpoint M and the normal-incidence reflection point D' remain common to all of the source-receiver pairs within the gather, regardless of dip.

FIG. 6. The NMO geometry for a single horizontal reflector. The traveltime is described by the hyperbola t²(x) = t²(0) + x²/v².

FIG. 7. The NMO geometry for a single dipping reflector.


The traveltime equation for a dipping reflector is:

t²(x) = t²(0) + x² cos²φ / v²

From the geometry of the dipping reflector, the moveout velocity is then

v_NMO = v / cos φ

where φ is the dip angle of the reflector.

Moveout Velocity versus Stacking Velocity

Table 3-3 summarizes the NMO velocity obtained from various earth models. After making the small-spread and small-dip approximations, moveout is hyperbolic for all cases and given by

t²(x) = t²(0) + x² / v_NMO²

The hyperbolic moveout velocity should be distinguished from the stacking velocity that optimally allows stacking of traces in a CMP gather. The hyperbolic form is used to define the best stacking path as

t²(x) = t²(0) + x² / v_stk²

where v_stk is the velocity value that produces the maximum amplitude of the reflection event in the stacked trace.

    Velocity Analysis

In addition to providing an improved signal-to-noise ratio, multifold coverage with nonzero-offset recording yields velocity information about the subsurface. Velocity analysis is performed on selected CMP gathers or groups of gathers. The output from one type of velocity analysis is a table of numbers as a function of velocity versus two-way zero-offset time (a velocity spectrum). These numbers represent some measure of signal coherency along the hyperbolic trajectories governed by velocity, offset, and traveltime.

In areas with complex structure, velocity spectra often fail to provide sufficient accuracy in velocity picks. When this is the case, the data are stacked with a range of constant velocities, and the constant-velocity stacks themselves are used in picking velocities.
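One coherency measure, semblance along trial hyperbolas, can be sketched as follows. This is a single-sample, nearest-neighbour version; real velocity spectra evaluate a time gate around each trajectory and scan many zero-offset times:

```python
import numpy as np

def semblance(gather, offsets, dt, t0, velocities):
    # For each trial velocity, follow t(x) = sqrt(t0^2 + x^2/v^2) across the
    # CMP gather (shape: n_samples x n_traces) and measure coherency as
    # (sum of amplitudes)^2 / (N * sum of squared amplitudes).
    scores = []
    for v in velocities:
        t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)
        idx = np.round(t / dt).astype(int)
        valid = idx < gather.shape[0]
        amps = gather[idx[valid], np.nonzero(valid)[0]]
        scores.append(np.sum(amps) ** 2 /
                      (len(amps) * np.sum(amps ** 2) + 1e-12))
    return np.array(scores)
```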


Factors Affecting Velocity Estimates

Velocity estimation from seismic data is limited in accuracy and resolution for the following reasons:

(a) Spread length,
(b) Stacking fold,
(c) Signal-to-noise ratio,
(d) Muting,
(e) Time gate length,
(f) Velocity sampling,
(g) Choice of coherency measure,
(h) True departures from hyperbolic moveout, and
(i) Bandwidth of data.

Multiple Attenuation

Multiple reflections and reverberations are attenuated using techniques based on their periodicity or on differences in moveout velocity between multiples and primaries. These techniques are applied to data in various domains, including the CMP domain, to best exploit the periodicity and velocity-discrimination criteria.

Deconvolution is one of the methods of multiple attenuation that exploits the periodicity criterion. Often, however, the power of conventional deconvolution in attenuating multiples is underestimated. CMP stacking facilitates attenuation of multiples based on velocity discrimination between primaries and multiples. This criterion also can be exploited in the f-k, τ-p, and Radon-transform domains. The degree of success depends on the moveout difference between primaries and multiples, and hence on the velocities and arrival times of primary reflections, and on the cable length. Specifically, the moveout difference between primaries and multiples decreases at shallow times, low velocities, and near offsets.

Frequency-Wavenumber Filtering (f-k filtering)

Coherent linear events in the t-x domain can be separated in the f-k domain by their dips. This allows us to eliminate certain types of unwanted energy from the data. In particular, coherent linear noise in the form of ground roll, guided waves, and side-scattered energy commonly obscures primary reflections in recorded data. These types of noise usually are isolated from the reflection energy in the f-k domain. Ground roll is a type of dispersive waveform that propagates along the surface and is low-frequency, large-amplitude in character. Typically, ground roll is suppressed in the field by using a suitable receiver array.

A seismic pulse travelling with velocity v at angle θ to the vertical propagates across the spread with an apparent velocity

v_a = v / sin θ


Along the spread direction, each individual sinusoidal component of the pulse will have an apparent wavenumber k_a related to its individual frequency f, where

k_a = f / v_a

Hence, a plot of frequency f against apparent wavenumber k_a for the pulse will yield a straight line with a gradient of v_a.

f-k filtering involves a two-dimensional Fourier transformation of the seismic data from the t-x domain to the f-k domain, filtering the f-k plot by removing a wedge-shaped zone or zones containing the unwanted noise events, and finally transforming back to the t-x domain.

The following are the steps involved in f-k filtering:

(a) Starting with a common-shot gather, a CMP gather, or a CMP-stacked section, apply a 2-D Fourier transform.
(b) Define a 2-D reject zone in the f-k domain by setting the 2-D amplitude spectrum of the f-k filter to zero within that zone, and set its phase spectrum to zero.
(c) Apply the 2-D f-k filter by multiplying its amplitude spectrum with that of the input data set.
(d) Apply a 2-D inverse Fourier transform to the filtered data.
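The four steps can be sketched with NumPy's 2-D FFT, rejecting a dip wedge where the apparent velocity falls below a cutoff; parameterizing the reject zone by an apparent-velocity cutoff is one common choice, not the only one:

```python
import numpy as np

def fk_fan_filter(data, dt, dx, v_cut):
    # 2-D FFT of a t-x panel, zero the wedge where the apparent velocity
    # |f/k| is below v_cut (steeply dipping linear noise), inverse FFT back
    # to t-x. Zero phase: only the amplitude spectrum is edited.
    F = np.fft.fft2(data)
    f = np.fft.fftfreq(data.shape[0], dt)[:, None]   # temporal frequency
    k = np.fft.fftfreq(data.shape[1], dx)[None, :]   # spatial wavenumber
    reject = np.abs(f) < v_cut * np.abs(k)           # wedge: |f/k| < v_cut
    F[reject] = 0.0
    return np.real(np.fft.ifft2(F))
```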

    Statics Corrections and Frequency-Wavenumber Filtering

It should be noted that coherent linear noise on shot gathers can be influenced kinematically by surface topography and near-surface refractor geometry. Specifically, the linearity of the coherent noise may be distorted across a shot record. Distortions along a linear event in the t-x domain cause smearing of energy over a broad range of wavenumbers in the f-k domain. This, in turn, would make it difficult to specify a pass-fan for reflection energy. It can be concluded that statics corrections, at least in the form of field statics, should be applied to shot records prior to f-k filtering.

    The Slant-Stack Transform

    The Radon Transform

    Velocity Stack Transformation

    Linear Uncorrelated Noise Attenuation


    Migration

A seismic section is assumed to represent a cross-section of the earth. The assumption works well when layers are flat, and fairly well when they have gentle dips. With steeper dip the assumption breaks down; the reflections are in the wrong places and have the wrong dips.

In estimating the hydrocarbons in place, one of the variables is the areal extent of the trap. Whether the trap is structural or stratigraphic, the seismic section should represent the earth model.

Dip migration, or simply migration, is the process of moving the reflections to their proper places with their correct amounts of dip. This results in a section that more accurately represents a cross-section of the earth, delineating subsurface details such as fault planes. Migration also collapses diffractions.

    Migration Methods

The objective of seismic data processing is to produce as accurate an image as possible of the subsurface target, within the constraints imposed by the time and money provided. In a few cases the CMP stack, in time or depth, may suffice. In almost every case today, some sort of migration is required to produce a satisfactory image. There are two general approaches to migration: post-stack and pre-stack. Post-stack migration is acceptable when the stacked data approximate a zero-offset section. If there are conflicting dips with varying velocities or a large lateral velocity gradient, a prestack partial migration is used to resolve these conflicting dips.

    Pre-Stack Partial Migration (PSPM)

This process, also called dip moveout or DMO, applied before stack provides a better stacked section and an improved migration after stack. Figure 8 shows how this occurs. After NMO, the trace is effectively moved to the midpoint position, but if there is significant dip, the reflection from the dipping reflector is at neither the right place nor the right time. Prestack partial migration moves the reflection to the zero-offset point (ZOP). The reflection is still not quite at the right place and time, but the zero-offset assumption of post-stack migration is satisfied. Thus, post-stack migration completes the imaging to the right place and time.

FIG. 8. Relationship between zero-offset point and midpoint for a dipping reflector.


    Kirchhoff Migration

Diffraction migration, or Kirchhoff migration, is a statistical approach. It is based on the observation that a zero-offset section consists of diffraction hyperbolas, each of which migrates to a single point. Migration involves summation of amplitudes along a hyperbolic path. The advantage of this method is its good performance in the case of steep-dip structures. The method performs poorly when the signal-to-noise ratio is low.
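The summation along diffraction hyperbolas can be sketched for a zero-offset section. This uses nearest-sample interpolation and no obliquity or amplitude weighting, so it is a bare illustration of the summation rather than a production migration:

```python
import numpy as np

def kirchhoff_migrate(section, dt, dx, v):
    # Diffraction summation for a zero-offset section: each output sample
    # (t0, x0) is the sum of input amplitudes along the diffraction hyperbola
    #   t(x) = sqrt(t0^2 + (2 * (x - x0) / v)^2)
    # (two-way time, hence the factor 2).
    nt, nx = section.shape
    out = np.zeros_like(section)
    x = np.arange(nx) * dx
    for ix0 in range(nx):
        for it0 in range(nt):
            t0 = it0 * dt
            t = np.sqrt(t0 ** 2 + (2 * (x - x[ix0]) / v) ** 2)
            it = np.round(t / dt).astype(int)
            valid = it < nt
            out[it0, ix0] = section[it[valid], np.nonzero(valid)[0]].sum()
    return out
```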

    Finite Difference Migration

This is a deterministic approach that recalculates the section using an approximation of the wave equation suitable for use with computers. One advantage of the finite-difference method is its ability to perform well under low signal-to-noise-ratio conditions. Its disadvantages include long computing time and difficulties in handling steep dips.

    Frequency Domain or F-K Domain Migration

Stolt and phase-shift migration operate in the f-k domain. Phase-shift migration is considered to be the most accurate method of migration but is also the most expensive. It is a deterministic approach via the wave equation instead of using the finite-difference approximation. The 2-D Fourier transform is the main technique used in this method. Some of the advantages of the f-k method are fast computing time, good performance under low signal-to-noise ratio, and excellent handling of steep dips. Disadvantages of this method include difficulties with widely varying velocities.