
Abstract:

To automate seismic data analysis, we design a framework that analyzes seismic data and identifies locations of interest within a large database of images. The framework takes an image as input, identifies the seismic regions and displays the seismic data in a user interface. It first converts the input image to gray scale, then performs background subtraction, and finally applies histogram processing and edge detection to obtain the edges.

Scope of Project:

This project addresses the data mining problem in seismic data analysis: once the analysis is automated, regions of interest must be identified by searching the content of the large database of 3D images. To achieve this, texture analysis is used. A 3D orientation histogram is taken as the basis, which is then used first to find the regions present in the data and next to represent them accordingly.

Introduction:

Structural analysis is a very broad topic, and our topic, seismic analysis, is a subset of it. It takes earthquakes into consideration and calculates a building's response when one occurs in the region where the building is located. This analysis is part of the process of assessing the structures present in regions where earthquakes take place.

Earthquakes are one of the most destructive natural events that occur on planet Earth. They strike without any prior warning and cause huge losses to both human life and the economy. They occur naturally when there is movement in parts of the earth's crust. Whether an earthquake is small or large depends on the extent of the movement: bigger ones occur when the movement is a metre or two, whereas smaller ones occur with movements of millimetres. The earth's outer surface is broken into plates. Plates move under, over or even slide past each other, causing earthquakes. To be more precise, movements in the earth's crust cause stress to build up at points of weakness, and rocks deform. Stored energy builds up in the same way as energy builds up in the spring of a watch when it is wound. When the stress finally exceeds the strength of the rock, the rock fractures along a fault, often at a zone of existing weakness within the rock. The stored energy is suddenly released, causing the earthquake.

To mention the most devastating earthquakes of the recent past, the earthquake in Haiti on Tuesday, 12 January 2010 is considered one of the most destructive of the past decade. According to the BBC website, up to 230,000 people died and more than 1 million were left homeless. The Haiti earthquake had a magnitude of 7.0 on the Richter scale. The most devastating earthquakes of the last century are considered to be the one in Kansu, China, in 1920, where at least 200,000 people were killed by a magnitude 8.5 event, and the one in Tangshan, China, in 1976, where the death toll reached 255,000 with a magnitude of 8.0. Apart from these, the earthquakes in Tokyo-Yokohama, Japan, in 1923 and in Armenia, USSR, in 1988 were also major ones.

There have been tens of incidents in this century alone in which earthquakes affected different parts of the world. Coming back to the technical aspects of the occurrence of earthquakes, there are different terms to be explained. When an earthquake fault ruptures, it causes two types of deformation: static and dynamic. If there is a permanent displacement of the ground due to the event, it is referred to as static deformation. The earthquake cycle progresses from a fault that is not under stress, to a stressed fault as the plate tectonic motions driving the fault slowly proceed, to rupture during an earthquake, and finally to a newly relaxed but deformed state. The second type of deformation, dynamic motion, is essentially the sound waves radiated from the earthquake as it ruptures. While most of the plate-tectonic energy driving fault ruptures is taken up by static deformation, up to 10% may dissipate immediately in the form of seismic waves. Seismic waves are of different kinds, which propagate in different ways. The major types are body waves and surface waves. Surface waves move only on the surface of the planet, whereas body waves can travel through the inner layers of the earth.


Body waves are further divided into two types: compressional or P (primary) waves and S (secondary) waves.

Primary waves are the fastest kind of waves and the first to arrive at a seismic station. They travel through both solid rock and fluids. P waves push the rock they travel through just as sound waves push the air. Scientists believe that animals can hear the P waves of an earthquake. In a P wave the particles move in the same direction that the wave is moving in, which is the direction the energy travels. The diagram below illustrates how a P wave travels through a medium.


The secondary wave in an earthquake is slower and can only move through solids, not through a liquid medium. It is this property of the S wave that led seismologists to conclude that the earth's outer core is a liquid. S waves move rock particles up and down, or side to side, perpendicular to the direction in which the wave is travelling. The diagram below illustrates the S wave motion.


Surface waves are low-frequency waves that travel only through the crust. They may arrive after the body waves, but they are chiefly responsible for the damage and destruction caused during earthquakes. In deeper earthquakes the strength of the surface waves, and the damage they cause, is reduced. The first kind of surface wave is the Love wave, named after A.E.H. Love, a British mathematician. Confined to the surface of the crust, Love waves produce entirely horizontal motion.

(http://www.geo.mtu.edu/UPSeis/waves.html)


The other kind of surface wave is the Rayleigh wave, named after Lord Rayleigh. This type of wave rolls along the ground just as a wave rolls across a lake or an ocean. As it rolls, it moves the ground up and down and side to side in the direction the wave is moving. Most of the shaking felt during earthquakes is due to Rayleigh waves.


P waves and S waves indirectly allow scientists to study the internal structure of the earth, and because of their different speeds and the different materials they travel through, they make it possible to determine the location of an earthquake.

Sensitive seismographs are the principal tool of the scientists who study earthquakes. There are now thousands of seismograph stations in operation, and such instruments have even been installed on the Moon, Mars and Venus. A simple seismograph works like a pendulum. Whenever an earthquake shakes the ground, the base and frame of the instrument move with it, but inertia keeps the pendulum bob in place. The bob then moves relative to the shaking ground, and the pendulum displacements are recorded as they change with time, tracing out a record called a seismogram. Each seismograph station consists of three pendulums sensitive to north-south, east-west and vertical motions of the ground. The recorded seismograms allow scientists to estimate the distance, direction, Richter magnitude and type of faulting of the earthquake, and a network of seismograph stations allows them to determine its location.

As seen in the figure, a building has the potential to ‘wave’ back and forth during an

earthquake (or even a severe wind storm). This is called the ‘fundamental mode’, and is

the lowest frequency of building response. Most buildings, however, have higher modes

of response, which are uniquely activated during earthquakes. The figure just shows the

second mode, but there are higher ‘shimmy’ (abnormal vibration) modes. Nevertheless,

the first and second modes tend to cause the most damage in most cases.

The earliest provisions for seismic resistance were the requirement to design for a lateral

force equal to a proportion of the building weight (applied at each floor level). This

approach was adopted in the appendix of the 1927 Uniform Building Code (UBC), which

was used on the west coast of the USA. It later became clear that the dynamic properties

of the structure affected the loads generated during an earthquake. In the Los Angeles

County Building Code of 1943 a provision to vary the load based on the number of floor

levels was adopted (based on research carried out at Caltech in collaboration with

Stanford University and the U.S. Coast and Geodetic Survey, which started in 1937). The

concept of "response spectra" was developed in the 1930s, but it wasn't until 1952 that a

joint committee of the San Francisco Section of the ASCE and the Structural Engineers

Association of Northern California (SEAONC) proposed using the building period (the

inverse of the frequency) to determine lateral forces.

The University of California, Berkeley was an early base for computer-based seismic analysis of structures, led by Professor Ray Clough (who coined the term "finite element"). Students included Ed Wilson, who went on to write the program SAP in 1970, an early finite element analysis program.

Earthquake engineering has developed a lot since the early days, and some of the more

complex designs now use special earthquake protective elements either just in the

foundation (base isolation) or distributed throughout the structure. Analyzing these types

of structures requires specialized explicit finite element computer code, which divides

time into very small slices and models the actual physics, much like common video


games often have "physics engines". Very large and complex buildings can be modeled

in this way (such as the Osaka International Convention Center).

Structural analysis methods can be divided into the following five categories.

Equivalent Static Analysis:

This approach defines a series of forces acting on a building to represent the effect

of earthquake ground motion, typically defined by a seismic design response spectrum. It

assumes that the building responds in its fundamental mode. For this to be true, the

building must be low-rise and must not twist significantly when the ground moves. The

response is read from a design response spectrum, given the natural frequency of the

building (either calculated or defined by the building code). The applicability of this

method is extended in many building codes by applying factors to account for higher

buildings with some higher modes, and for low levels of twisting. To account for effects

due to "yielding" of the structure, many codes apply modification factors that reduce the

design forces (e.g. force reduction factors).
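As an illustration of the idea of applying a proportion of the building weight as a lateral force at each floor level, the following Java sketch distributes an assumed base shear over the floors in proportion to weight times height. The seismic coefficient, floor weights and heights are made-up values, and the linear distribution is an assumption for illustration rather than the provision of any particular building code.

// Illustrative sketch only: a simple equivalent static lateral force distribution.
public class EquivalentStaticSketch {
    public static void main(String[] args) {
        double[] floorWeight = {500.0, 500.0, 400.0}; // kN, storeys 1..3 (assumed)
        double[] floorHeight = {3.0, 6.0, 9.0};       // m above base (assumed)
        double cs = 0.1;                              // assumed seismic coefficient

        double totalWeight = 0.0;
        for (double w : floorWeight) totalWeight += w;
        double baseShear = cs * totalWeight;          // V = Cs * W

        // Distribute V over the floors in proportion to w_x * h_x
        double sumWh = 0.0;
        for (int i = 0; i < floorWeight.length; i++) sumWh += floorWeight[i] * floorHeight[i];
        for (int i = 0; i < floorWeight.length; i++) {
            double fx = baseShear * floorWeight[i] * floorHeight[i] / sumWh;
            System.out.printf("Floor %d: lateral force = %.1f kN%n", i + 1, fx);
        }
    }
}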

Response Spectrum Analysis:

This approach permits the multiple modes of response of a building to be taken

into account (in the frequency domain). This is required in many building codes for all

except for very simple or very complex structures. The response of a structure can be

defined as a combination of many special shapes (modes) that in a vibrating string

correspond to the "harmonics". Computer analysis can be used to determine these modes

for a structure. For each mode, a response is read from the design spectrum, based on the

modal frequency and the modal mass, and they are then combined to provide an estimate

of the total response of the structure. Combination methods include the following:

absolute - peak values are added together;

square root of the sum of the squares (SRSS);

complete quadratic combination (CQC) - a method that is an improvement on SRSS for closely spaced modes (a short numerical sketch of the absolute and SRSS rules follows below).
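The following Java sketch applies the absolute-sum and SRSS rules to made-up peak modal responses. CQC is omitted here because it additionally requires modal correlation coefficients.

// Illustrative sketch of two of the modal combination rules named above.
public class ModalCombinationSketch {
    public static void main(String[] args) {
        double[] peakModalResponse = {12.0, 5.0, 2.0}; // assumed peak responses per mode

        double absoluteSum = 0.0;
        double sumOfSquares = 0.0;
        for (double r : peakModalResponse) {
            absoluteSum += Math.abs(r);   // absolute: add peak values
            sumOfSquares += r * r;        // SRSS: square and sum ...
        }
        double srss = Math.sqrt(sumOfSquares); // ... then take the square root

        System.out.println("Absolute sum = " + absoluteSum); // conservative upper bound
        System.out.println("SRSS         = " + srss);        // usual estimate for well-separated modes
    }
}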

It should be noted that the result of a response spectrum analysis using the response

spectrum from a ground motion is typically different from that which would be calculated

directly from a linear dynamic analysis using that ground motion directly, since phase

information is lost in the process of generating the response spectrum.

In cases where structures are either too irregular, too tall or of significance to a

community in disaster response, the response spectrum approach is no longer appropriate,

and more complex analysis is often required, such as non-linear static or dynamic

analysis.

Linear Dynamic Analysis:

Static procedures are appropriate when higher mode effects are not significant. This is

generally true for short, regular buildings. Therefore, for tall buildings, buildings with

torsional irregularities, or non-orthogonal systems, a dynamic procedure is required. In

the linear dynamic procedure, the building is modeled as a multi-degree-of-freedom

(MDOF) system with a linear elastic stiffness matrix and an equivalent viscous damping

matrix.

The seismic input is modeled using either modal spectral analysis or time history analysis

but in both cases, the corresponding internal forces and displacements are determined

using linear elastic analysis. The advantage of these linear dynamic procedures with

respect to linear static procedures is that higher modes can be considered. However, they


are based on linear elastic response and hence the applicability decreases with increasing

nonlinear behavior, which is approximated by global force reduction factors.

In linear dynamic analysis, the response of the structure to ground motion is calculated in

the time domain, and all phase information is therefore maintained. Only linear properties

are assumed. The analytical method can use modal decomposition as a means of reducing

the degrees of freedom in the analysis.
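As a minimal sketch of such a time-domain calculation, the Java program below steps a single-degree-of-freedom oscillator through a short synthetic ground-acceleration pulse using Newmark's average-acceleration method. The mass, stiffness, damping and input pulse are assumed values; an MDOF analysis would apply the same recurrence to matrices, or to each mode after modal decomposition.

// Minimal linear time-history sketch: SDOF oscillator, Newmark average acceleration.
public class NewmarkSdofSketch {
    public static void main(String[] args) {
        double m = 1000.0;          // kg (assumed)
        double k = 4.0e5;           // N/m (assumed)
        double zeta = 0.05;         // 5% damping ratio (assumed)
        double c = 2.0 * zeta * Math.sqrt(k * m);

        double gamma = 0.5, beta = 0.25, dt = 0.01;
        int steps = 400;

        double u = 0.0, v = 0.0;
        double p0 = -m * groundAccel(0.0);          // effective load for base excitation
        double a = (p0 - c * v - k * u) / m;

        double kHat = k + gamma / (beta * dt) * c + m / (beta * dt * dt);
        double A = m / (beta * dt) + gamma / beta * c;
        double B = m / (2.0 * beta) + dt * (gamma / (2.0 * beta) - 1.0) * c;

        double uMax = 0.0;
        for (int i = 0; i < steps; i++) {
            double t = i * dt;
            double dp = -m * (groundAccel(t + dt) - groundAccel(t));
            double dpHat = dp + A * v + B * a;      // incremental effective load
            double du = dpHat / kHat;
            double dv = gamma / (beta * dt) * du - gamma / beta * v
                        + dt * (1.0 - gamma / (2.0 * beta)) * a;
            double da = du / (beta * dt * dt) - v / (beta * dt) - a / (2.0 * beta);
            u += du; v += dv; a += da;
            uMax = Math.max(uMax, Math.abs(u));
        }
        System.out.printf("Peak displacement = %.4f m%n", uMax);
    }

    // A short half-sine ground-acceleration pulse (purely synthetic input).
    static double groundAccel(double t) {
        return (t < 0.5) ? 2.0 * Math.sin(Math.PI * t / 0.5) : 0.0;
    }
}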

Non-Linear Static Analysis:

In general, linear procedures are applicable when the structure is expected to remain

nearly elastic for the level of ground motion or when the design results in nearly uniform

distribution of nonlinear response throughout the structure. As the performance objective

of the structure implies greater inelastic demands, the uncertainty with linear procedures

increases to a point that requires a high level of conservatism in demand assumptions and

acceptability criteria to avoid unintended performance. Therefore, procedures

incorporating inelastic analysis can reduce the uncertainty and conservatism.

This approach is also known as "pushover" analysis. A pattern of forces is applied to a

structural model that includes non-linear properties (such as steel yield), and the total

force is plotted against a reference displacement to define a capacity curve. This can then

be combined with a demand curve (typically in the form of an acceleration-displacement

response spectrum (ADRS)). This essentially reduces the problem to a single degree of

freedom system.

Nonlinear static procedures use equivalent SDOF structural models and represent seismic

ground motion with response spectra. Story drifts and component actions are related

subsequently to the global demand parameter by the pushover or capacity curves that are

the basis of the non-linear static procedures.

Nonlinear Dynamic Analysis:


Nonlinear dynamic analysis utilizes the combination of ground motion records with a

detailed structural model, therefore is capable of producing results with relatively low

uncertainty. In nonlinear dynamic analyses, the detailed structural model subjected to a

ground-motion record produces estimates of component deformations for each degree of

freedom in the model and the modal responses are combined using schemes such as the

square-root-sum-of-squares.

In non-linear dynamic analysis, the non-linear properties of the structure are considered

as part of a time domain analysis. This approach is the most rigorous, and is required by

some building codes for buildings of unusual configuration or of special importance.

However, the calculated response can be very sensitive to the characteristics of the

individual ground motion used as seismic input; therefore, several analyses are required

using different ground motion records.

Literature review:

Abstract

We apply a probabilistic method for learning efficient image codes to the problem of

unsupervised classification, segmentation and de-noising of images. The method is based

on the Independent Component Analysis (ICA) mixture model proposed for unsupervised

classification and automatic context switching in blind source separation [1]. In this

paper, we demonstrate that this algorithm is effective in classifying complex image

textures such as trees and rocks in natural scenes. The algorithm is useful for de-noising

and filling in missing pixels in images with complex structures. The advantage of this

model is that image codes can be learned with increasing numbers of basis function

classes. Our results suggest that the ICA mixture model provides greater flexibility in

modeling structure and in finding more image features than in either Gaussian mixture

models or standard ICA algorithms.


The efficient encoding of visual sensory information is an important task for image processing systems as well as for the understanding of coding principles in the visual cortex. Barlow [2] proposed that the goal of sensory coding is to transform the input signals such that the redundancy between the inputs is reduced. Recently, several methods have been proposed to learn image codes that utilize a set of linear basis functions. Olshausen and Field [3] used a sparseness criterion and found codes that were similar to localized and oriented receptive fields. Similar results were obtained by Bell and Sejnowski [4] and Lewicki and Olshausen [5] using the infomax ICA algorithm and a Bayesian approach respectively.

The results in this paper are along the lines of research on finding efficient codes. The main difference is the modeling of the underlying structure in mutually exclusive classes with an ICA mixture model proposed in [1]. The model is a generalization of the well-known Gaussian mixture model and assumes that the observed data in each class were generated linearly by independent components with non-Gaussian densities. The ICA mixture model uses a gradient-based expectation maximization (EM) algorithm in which the basis functions for each class are updated using an ICA algorithm. Within each ICA class the data are transformed such that the variables are as statistically independent from each other as possible [6, 7]. In this paper, the ICA mixture model is applied to images with the goal of learning classes of basis functions capturing underlying structures of the image. The learned model can be used in many image processing applications such as image classification, segmentation, and de-noising. The results demonstrate that the ICA mixture model provides greater flexibility in modeling structure and in finding more image features than either Gaussian mixture models or standard ICA algorithms.

The ICA Mixture Model:

A mixture density is defined as:
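Written out, this is the standard finite mixture form over the K classes C_1, ..., C_K:

\[ p(x_t \mid \Theta) = \sum_{k=1}^{K} p(x_t \mid C_k, \theta_k)\, p(C_k) \]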


where Θ = (θ_1, ..., θ_K) are the unknown parameters for the component densities p(x_t | C_k, θ_k). Assume that the component densities are non-Gaussian and that the data within each class are described by:
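In the ICA mixture model each class is a linear generative model with its own basis (mixing) matrix A_k, independent coefficients s_k and bias b_k:

\[ x_t = A_k s_k + b_k \]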

We use gradient ascent of the log-likelihood to estimate the parameters for each class k [1]. The log-likelihood function in eq. 3 is the log-likelihood for each class. For the present model, the class log-likelihood is given by the log-likelihood of the standard ICA model:
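For the linear data model above, this standard ICA class log-likelihood can be written (for the de-biased data) as:

\[ \log p(x_t \mid C_k, \theta_k) = \log p(s_k) - \log \lvert \det A_k \rvert, \qquad s_k = A_k^{-1}(x_t - b_k) \]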

The adaptation is performed by using gradient ascent. In the basis function adaptation, the gradient of the component density with respect to the basis functions A_k is weighted by p(C_k | x_t, Θ). In the image processing applications the mean of the images was removed and the bias vector was set to zero; however, b_k can be adapted as in [1]. Because our primary interest is to learn efficient codes, we choose a Laplacian prior (p(s) ∝ exp(-|s|)) because it captures the sparse structure of the coefficients s_k for natural images. This leads to the simple infomax learning rule:
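One common way to write this posterior-weighted natural-gradient update, stated here for the unmixing matrices W_k = A_k^{-1} under the Laplacian prior, is:

\[ \Delta W_k \propto p(C_k \mid x_t, \Theta)\, \bigl( I - \operatorname{sign}(s_k)\, s_k^{\top} \bigr)\, W_k \]

The exact parameterization used in [1, 8] may differ, but the posterior weighting p(C_k | x_t, Θ) is the ingredient that distinguishes the mixture update from plain ICA.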

Equations 2, 3 and 6 are the learning rules employed for this application. The complete

derivation of the learning rules for the ICA mixture model can be found in [8].

Unsupervised image classification and segmentation:

In [1] we applied the ICA mixture model to learn two classes of basis functions for newspaper text images and images of natural scenes. The same approach can be used to identify multiple classes in a single image. The learned classes are mutually exclusive, and by dividing the whole image into small patches and classifying them we can identify a cluster of patches that encode a certain region or texture of the image. Two examples illustrate how the algorithm can identify texture in images by unsupervised classification. In the first example, four texture images were taken from the Brodatz texture dataset and put into one image. Figure 1 (a) shows the texture of four different materials: (top-left) herringbone weave, (top-right) woolen cloth, (bottom-left) calf leather and (bottom-right) raffia. Four classes of basis functions were adapted using the ICA mixture model by randomly sampling 8 by 8 pixel patches from the whole image, i.e. no label information was taken into account. One million patches were processed, which took five hours on a Pentium II 400 MHz processor. The learned classes corresponded to the true classes 95% of the time. The automatic classification of the image shown in figure 1 (b) was done by dividing the image into adjacent non-overlapping 16 by 16 pixel patches. The mis-classified patches are shown in different gray levels than the square region of the texture. On larger problems (up to 10 classes and textures), the classification error rate was not significantly different. In all experiments we used the merge and split procedure in [9], which helped to speed up convergence and avoid local minima. Another example of unsupervised image classification using the ICA mixture model is the segmentation of natural scenes. Figure 2 (left) shows an example of a natural scene with trees and rocks. The 8 by 8 pixel patches were randomly sampled from the image and used as inputs to the ICA mixture model. Two classes of basis functions were adapted. The classification of the patches is shown in figure 2 (right). The cluster of class labels can be used to roughly segment the image into trees and rocks. Note that the segmentation may have been caused by brightness; however, very similar results were obtained on the whitened image.

Figure 1: (a) Texture of four different materials: (top-left) herringbone weave, (top-right)

woolen cloth, (bottom-left) calf leather and (bottom-right) raffia. (b) The labels found by

the algorithm are shown in different gray levels. Mis-classified patches of size 16 by 16

pixels are isolated patches in a different gray level than the square region of the texture.

Image enhancement:

The ICA mixture model provides a good framework for encoding different image types. The learned basis functions can be used for de-noising images and filling in missing pixels. Each image patch is assumed to be a linear combination of basis functions plus additive noise: x_t = A_k s_k + n. Our goal is to infer the class probability of the image patch as well as the coefficients s_k for each class that generate the image. Thus, s_k is inferred from x_t by maximizing the conditional probability density p(s_k | A_k, x_t), as shown for a single class in [5]:
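With Gaussian noise of precision λ_k and a Laplacian coefficient prior of width σ_k (the quantities defined just below), this MAP inference amounts to:

\[ \hat{s}_k = \arg\max_{s} p(s \mid A_k, x_t) = \arg\min_{s} \left[ \frac{\lambda_k}{2} \lVert x_t - A_k s \rVert^2 + \frac{1}{\sigma_k} \sum_i \lvert s_i \rvert \right] \]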


where σ_k is the width of the Laplacian p.d.f. and λ_k is the precision (inverse variance) of the noise for each class. The inference model in eq. 8 computes the coefficients ŝ_k for each class A_k, reconstructs the image using x̂_t = A_k ŝ_k and computes the class probability p(C_k | A_k, x̂_t). For signal-to-noise ratios above 20 dB the mis-classification of image patches was less than 2%. However, the error rate was higher when the noise variance was half the variance of the signal.

Figure 2: (Left) Example of a natural scene with trees and rocks. (Right) The classification of patches (8 by 8 pixels) using the two learned sets of basis functions. The cluster of class labels can be used to roughly segment the image into trees and rocks.

De-Noising:

To demonstrate how well the basis functions capture the structure of the data, we applied the algorithm to the problem of removing noise in two different image types. In figure 3 (a) a small image was taken from a natural scene and a newspaper text. The whole image was corrupted with additive Gaussian noise that had half of the variance of the original image. The Gaussian noise changes the statistics of the observed image such that the underlying coefficients s are less sparse than the original data. By adapting the noise level it is possible to infer the original source density by using eq. 8. The adaptation using the ICA mixture model is better than the standard ICA model because the ICA mixture model is allowed to switch between different image models and is therefore more flexible in reconstructing the image. In this example, we used the two sets of basis functions learned from natural scenes and newspaper text. For de-noising, the image was divided into small 12 by 12 pixel image patches. Each patch was first de-noised within each class and then classified by comparing the likelihood of the two classes. Figure 3 (a) shows the original image, (b) the noisy image with a signal-to-noise ratio (SNR) of 13 dB, (c) the reconstructed image using Wiener filtering, a standard de-noising method, with SNR = 15 dB, and (d) the result of the ICA mixture model (SNR = 21 dB). The classification error was 10%.

Filling in missing data:

In some image processing applications pixel values may be missing. This problem is

similar to the de-noising problem and the ICA mixture model can be used as a technique

to solve this problem. In filling in missing pixels, the missing information can be viewed

as another form of noise. Figure 3 (e) shows the same image with now 50% of the pixels

missing. The SNR improved from 7dB to 14dB using the ICA mixture model (figure 3

(f)). The reconstruction by interpolating with splines gave SNR = 11 dB. The classification

error was 20%.

Conclusion:

We have investigated the application of the ICA mixture model to the problem of

unsupervised classification and segmentation of images as well as de-noising, and filling-

in missing pixels. Our results suggest that the method is capable of handling the problems

successfully. Furthermore, the ICA mixture model is able to increase the performance

over Gaussian mixture models or standard ICA models when a variety of image types are

present in the data. The unsupervised segmentation of images by discovering image


textures remains a difficult problem. Since the segmentation technique presented here is

based on the classification of small image patches, the global information of the image is

not taken into consideration. The multi-resolution problem may be overcome by

including a multi-scale hierarchical structure into the algorithm or by re-applying the

algorithm with different scales of the basis functions and combining the results. This

additional process would smooth the image segmentation, and the ICA mixture model could serve as a baseline segmentation algorithm. These results need to be compared with other methods, such as the one proposed by De Bonet and Viola [10], which measured statistical properties of textures coded with a large-scale, fixed wavelet basis. In contrast, the approach here models image structure by adapting the basis functions themselves.

The application of ICA for noise removal in images as well as filling in missing pixels will result in significant improvement when several different classes of images are present in the image. Fax machines, for example, transmit text as well as images. Since the basis functions of the two image models are significantly different [1], the ICA mixture model will improve the coding and enhancement of the images. The technique used here for de-noising and filling in missing pixels was proposed in [11, 5]. The same technique can be applied to multiple classes as demonstrated in this paper. The main concern of this

technique is the accuracy of the coefficient prior. A different technique for de-noising

using the fixed point ICA algorithm was proposed in [12] which may be intuitively sound

but requires some tweaking of the parameters. Another issue not addressed in this paper

is the relevance of the learned codes to neuroscience. The principle of redundancy

reduction for neural codes is preserved by this model and some properties of V1 receptive

fields are consistent with recent observations [3, 4, 5]. It is possible that the visual cortex uses overcomplete basis sets for representing images; this raises the issue of whether

there are cortical mechanisms that would allow switching to occur between these bases

depending on the input. The ICA mixture model has the advantage that the basis

functions of several image types can be learned simultaneously. Compared with

algorithms that use one fixed set of basis functions, the results presented here are

promising and may provide further insights in designing improved image processing

systems.


Background:

There exist various systems to analyze seismic data; one such system analyzes the phases of a water flood. The shot spacing was 25 m and the offset spacing was 50 m. Pressure data were collected in marine streamer geometry with a near offset of 250 m and a maximum offset of 3.2 km. The traces were generated with a record length of 4 seconds at a 4 ms sample interval, with signal content in the 10-60 Hz frequency bandwidth. Random, Gaussian distributed noise was added to each trace independently for each of the three simulated surveys, with a 2:1 S/N ratio at the reservoir reflection pre-flood and a 1:1 S/N ratio post-flood. This provided us with three synthetic prestack time-lapse seismic monitor surveys, which we analyzed for signs of water flood activity.

Stacked sections

Figure shows the stacked section from the base survey, before any water injection.

The plot is enlarged to focus on the reservoir reflections at 2 km depth. Stacked reflection

amplitudes are roughly proportional to the P impedance contrast at a reflector, as long as

no anomalous AVO is present. Before water flood, there are no lateral variations in

stacked reflection character along the reservoir, except those due to the added noise.

Figure shows the stacked section from the survey after one time step of water flood

injection. There is some slight dimming in the stacked reflection amplitude centered

about the well locations at 2 and 3 km distance along the line. Figure shows the stacked

section from the survey generated after two time steps of water flood. The dimming in

reflection character is more apparent than after the first water flood, since the water

invasion zone has expanded to a greater distance, thereby creating a larger zone of

lowered P impedance contrast. Figure shows a close-up of the stacked data at reservoir

depth, showing more clearly the dimming and lateral spread of stacked reflection

amplitudes due to the diffusive water flood.

Figure shows the stacked difference section, obtained by subtracting the stacked pre-

flood base section from the stacked water flood section after one time step. The zones of

water invasion are clearly evident in that they give rise to differential reflections,


including diffusive diffraction tails at the diffuse edge of the water slug front. Figure

shows the stacked difference section comparing the base pre-flood survey to the survey

taken after two time steps of water flood. The water invasion zone looks larger in spatial extent and therefore stronger in amplitude. Again, some diffuse diffractions are evident at the edge of the water flood. We note that the stacked sections give poor lateral resolution of the water flood front because the stacked reflections are smeared laterally over a large Fresnel zone. Prestack wave-equation migration of the raw CMP data can collapse the Fresnel zone down to a spatial resolution on the order of a dominant seismic wavelength, i.e., from hundreds of meters unmigrated to tens of meters after migration.


Prestack migrated sections

Figure shows the prestack migrated section corresponding to the survey after the

first time step of water flood. Figure shows the same for the survey after two time steps

of water flood. The dimming of the reflection events at the reservoir depths of 2 km is

imaged more clearly than on the stacked sections, and the lateral and vertical resolution is

better. This increase in apparent resolution results from the Fresnel zones having been

collapsed to a spatial wavelength and the diffraction tails having been correctly

positioned at the diffuse edges of the water invaded zone.

Figure shows a close-up of the reservoir zone prestack migrations after one and two time steps of water flood. This plot should be compared with the stacked section counterpart in Figure. The prestack migrated images show more correct amplitude variation and lateral

extent than the stacked images, as is physically intuitive.

Figure shows the difference section of the prestack migrated sections before and after one

time step of water injection. This figure can be compared directly to the stacked section

counterpart of Figure. Note that the diffraction tails have been collapsed, and the spatial

resolution is good enough to closely match the true extent of the P impedance water


invasion zone as depicted in the true model of Figures and . Figure shows the difference

section of the prestack migrated sections before and after two time steps of water injection.

This figure can be compared directly to the stacked section counterpart of Figure. Again,

the diffractions have been correctly positioned to image the diffusive edge of the water

slug, and the spatial resolution has increased from a Fresnel width down to a seismic

wavelength. Note that the boundary of the migrated reflections match closely with the P

and S impedance fronts in the true models of Figures and.

They performed a model study to simulate water flood production in a light oil reservoir

of Ottawa sand, and generate synthetic time-lapse monitor seismic data both pre-flood,

and at two subsequent water flood phases. Pore pressure and oil/water pore saturation

levels are simulated in the reservoir due to two water injection well galleries by diffusive

fluid flow modeling. The pressure and saturation data are converted to rock density and

both bulk and shear module, using rock physics calibration curves derived from

laboratory data. Synthetic seismic reflection data are generated from the resulting

spatially variable rock physics properties at three separate water flood stages. In the


presence of realistic noise levels, stacked and prestack migrated reflection images clearly

show the extent of the water-invaded zone after production. Furthermore, we apply a

prestack seismic impedance inversion method and accurately track the relative P and S

impedance changes in the reservoir rock caused by the varying petrophysical conditions

associated with the water flood production process.

Architecture Diagram:

Seismic Analysis Framework: Preprocessing → Feature Extraction & Background Subtraction → Histogram Generation → Results Analysis


Modules:

Preprocessing: read the input image, construct the gray-scale image, extract the gray-scale values, and store the results.


Feature extraction and background subtraction: read the extracted features, remove noise, remove the background, and store the result (a condensed sketch of this step follows below).
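A condensed view of this step, based on the background-subtraction loop in the appendix code (the fixed offset of 100 and the clamping to the 8-bit range are taken from there), is:

// Subtract the estimated background from the gray image, shift by a fixed
// offset and clamp to the 8-bit range (condensed from the appendix code).
public class BackgroundSubtractionSketch {
    static int[][] subtractBackground(int[][] gray, int[][] background) {
        int h = gray.length, w = gray[0].length;
        int[][] diff = new int[h][w];
        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                int t = gray[i][j] - background[i][j] + 100; // offset keeps mid-gray visible
                diff[i][j] = Math.max(0, Math.min(255, t));  // clamp to [0, 255]
            }
        }
        return diff;
    }

    public static void main(String[] args) {
        int[][] gray = {{120, 130}, {140, 150}};
        int[][] bg = {{100, 100}, {200, 40}};
        System.out.println(java.util.Arrays.deepToString(subtractBackground(gray, bg)));
    }
}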


Histogram generation: read the gray-scale image, extract points, identify high-dimensional features, and generate the histogram features (a condensed sketch of this step follows below).
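The histogram step in the appendix code is essentially histogram equalization of the background-subtracted gray values (there the normalizing factor appears as the hard-coded constant 0.000066137, roughly 1/15120 pixels). Condensed and generalized to any image size, it is:

// Histogram equalization of 8-bit gray values stored in a flat array
// (condensed from the appendix code, with the pixel count no longer hard-coded).
public class HistogramEqualizationSketch {
    static int[] equalize(int[] gray) {
        int[] hist = new int[256];
        for (int g : gray) hist[g]++;                                // gray-level histogram
        int[] cdf = new int[256];
        cdf[0] = hist[0];
        for (int i = 1; i < 256; i++) cdf[i] = cdf[i - 1] + hist[i]; // cumulative histogram
        int[] out = new int[gray.length];
        for (int i = 0; i < gray.length; i++) {
            out[i] = (int) (255.0 * cdf[gray[i]] / gray.length);     // rescale to 0..255
        }
        return out;
    }

    public static void main(String[] args) {
        int[] gray = {0, 0, 50, 50, 100, 200};
        System.out.println(java.util.Arrays.toString(equalize(gray)));
    }
}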


Result analysis: read the histogram, identify the seismic features, identify the edges, and display the points.

References:


1. Bahorich, M., Farmer, S.: 3D seismic discontinuity for faults and stratigraphic

features: the coherence cube. Leading Edge 14(10), 1053–1058 (1995)

2. Bakker, P.: Image structure analysis for seismic interpretation. PhD thesis, Delft

University of Technology (2002)

3. Bakker, P., van Vliet, L.J., Verbeek, P.W.: Confidence and curvature estimation

of curvilinear structures in 3D. Int. Conf. Comput. Vis. 2, 139–144 (2001).

4. Bakun, W., and W. Joyner, The ML scale in central California, Bull. Seis. Soc.

Am., 74, 1827-1843, 1984.

5. Boore, D., W. Joyner, and T. Fumal, Equations for estimating horizontal response

spectra and peak acceleration from western North American earthquakes: a

summary of recent work, Seismol. Res. Lett., 68, 128-153, 1997.

6. Cohee, B., and G. Beroza, A comparison of two methods for earthquake source

inversion using strong motion seismograms, Ann. Geophys., 37, 1515-1538, 1994.

7. Dreger, D. and A. Kaverina, Seismic remote sensing for the earthquake source

process and near-source strong shaking: A case study of the October 16, 1999

Hector Mine earthquake, Geophys. Res. Lett., 27, 1941-1944, 2000.

8. Dreger, D., and A. Kaverina, Development of procedures for the rapid estimation

of ground shaking, PGE-PEER Final Report, 1999.

9. Dreger, D., and B. Romanowicz, Source characteristics of events in the San

Francisco Bay region, USGS Open-File-Report 94-176 , 301-309, 1994.


10. Gee, L., D. Neuhauser, D. Dreger, M. Pasyanos, R. Uhrhammer, and B.

Romanowicz, The Rapid Earthquake Data Integration Project, Handbook of

Earthquake and Engineering Seismology, IASPEI, in press, 2001.

11. Gee, L., D. Neuhauser, D. Dreger, M. Pasyanos, B. Romanowicz, and R.

Uhrhammer, The Rapid Earthquake Data Integration System, Bull. Seis. Soc. Am.,

86, 936-945,1996a.

12. T-W. Lee, M. S. Lewicki, and T. J. Sejnowski. Unsupervised classification with

non-gaussian mixture models using ica. In Advances in Neural Information

Processing Systems 11, volume in press. MIT Press, 1999.

13. H. Barlow. Sensory Communication, chapter Possible principles underlying the

transformation of sensory messages, pages 217-234. MIT press, 1961.

14. B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by

learning a sparse code for natural images. Nature, 381:607-609, 1996.

15. A. J. Bell and T. J. Sejnowski. The 'independent components' of natural scenes are

edge filters. Vision Research, 37(23):3327-3338, 1997.

16. M.S. Lewicki and B. Olshausen. A probabilistic framework for the adaptation and

comparison of image codes. J. Opt.Soc., A: Optics, Image Science and Vision,

submitted, 1998.

17. A. J. Bell and T. J. Sejnowski. An Information-Maximization Approach to Blind

Separation and Blind Deconvolution. Neural Computation, 7:1129-1159, 1995.

18. J-F. Cardoso and B. Laheld. Equivariant adaptive source separation. IEEE Trans.

on S. P., 45(2):434-444, 1996.


19. T-W. Lee, M. S. Lewicki, and T. J. Sejnowski. Unsupervised classification with

non-Gaussian sources and automatic context switching in blind signal separation.

IEEE Transactions on Pattern Recognition and Machine Leaning, submitted,

1999.

20. Z. Ghahramani and S.T. Roweis. Learning nonlinear dynamical systems using an

em algorithm. In Advances in Neural Information Processing Systems 11, 1999.

21. J. S. De Bonet and P. Viola. A non-parametric multi-scale statistical model for

natural images. In Advances in Neural Information Processing Systems, volume 9,

1997.

22. M.S. Lewicki and T. J. Sejnowski. Learning overcomplete representations. Neural

Computation, to appear, 1999.

23. A. Hyvaerinen, P. Hoyer and E. Oja. Sparse code shrinkage: Denoising by

nonlinear maximum likelihood estimation. In Advances in Neural Information

Processing Systems 11, 1999


APPENDIX A:

Code:


import java.awt.*;

import java.awt.image.*;

import java.io.*;

import com.sun.image.codec.jpeg.*;

import java.util.*;

import java.math.*;

import java.lang.Math.*;

import java.awt.Color;

import java.net.*;

public class test

{

public static void main(String args[])throws ArrayIndexOutOfBoundsException

{

try

{

//Read image from source(only jpg files)

Image img = Toolkit.getDefaultToolkit().getImage("010_2_2(65).jpg");

MediaTracker media = new MediaTracker(new Container());

media.addImage(img,0);

media.waitForID(0);

int i=0,j=0,n=0;

//get width and height of the image


int imgwidth = img.getWidth(null);

int imgheight = img.getHeight(null);

// Declare array

//pixel values

int[] pel = new int[imgwidth*imgheight];

//RGB values

int[][][] rgb=new int [imgheight][imgwidth][4];

System.out.println("Img heigt"+imgheight+":"+imgwidth);

//gray values

int[][] rock = new int[imgheight][imgwidth];

int[][] rock1 = new int[imgheight][imgwidth];

int[][] rock2 = new int[imgheight][imgwidth];

int[][] rock3 = new int[imgheight][imgwidth];

//int[][] gray = new int[imgheight][imgwidth];

int[] gra=new int[256];

int[] grb=new int[imgheight*imgwidth];

//Initialize rock

for(i=0;i<imgheight;i++)

{

for( j=0;j<imgwidth;j++)

{

rock[i][j]=0;


}

}

// System.out.println("pixels"+rock1[][]);

/*int mm[]=new int[imgheight*imgwidth];

for(i=0;i<imgheight;i++)

{

for(j=0;j<imgwidth;j++)

{

mm[n++]=((255<<24)|((int)rock1[i][j]<<16)|((int)rock1[i][j]<<8)|((int)rock1[i][j]));

//System.out.println("pixels"+mm[n]);

}

}

*/

//read pixel values into the pel array from the loaded image

PixelGrabber pg = new

PixelGrabber(img,0,0,imgwidth,imgheight,pel,0,imgwidth);

pg.grabPixels();

int h=imgheight;

int w=imgwidth;

//System.out.println("height"+h);


//System.out.println("width"+w);

//Convert the image into gray scale using the intensity value of each pixel

for(i=0;i<imgheight;i++)

{

for(j=0;j<imgwidth;j++)

{

rgb[i][j][0] = (pel[n] >> 24) & 0xff;

rgb[i][j][1] = (pel[n] >> 16) & 0xff;

rgb[i][j][2] = (pel[n] >> 8) & 0xff;

rgb[i][j][3] = (pel[n] ) & 0xff;

rock[i][j]=(rgb[i][j][1]+rgb[i][j][2]+rgb[i][j][3])/3;

rock1[i][j]=rock[i][j];

grb[n++]=rock[i][j];

}

}

int[][][] bp = new int[270][7][8];
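//270 blocks of 7x8 pixels: (42/7) * (360/8) = 6 * 45 = 270 for the fixed 42x360 input image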

int m=0;int k,l,q,r;

for(k=0;k<42;k=k+7)

{

for(l=0;l<360;l=l+8)

{

q=k;

for(i=0;i<7;i++)

{


r=l;

for(j=0;j<8;j++)

{

bp[m][i][j]=rock[q][r];

r++;

}q++;

}m++;

}

}

int[] a1=new int[270];

for(m=0;m<270;m++)

{

l=0;

for(i=0;i<7;i++)

{

for(j=0;j<8;j++)

{

l=l+bp[m][i][j];

}

}

a1[m]=l/56;

System.out.println(a1[m]);

}

int[][][] bp1=new int[270][7][8];

for(m=0;m<270;m++)

{

for(i=0;i<7;i++)

{


for(j=0;j<8;j++)

{

bp1[m][i][j]=a1[m];

}

}

}

m=0;

for(k=0;k<42;k=k+7)

{

for(l=0;l<360;l=l+8)

{

q=k;

for(i=0;i<7;i++)

{

r=l;

for(j=0;j<8;j++)

{

rock[q][r]=bp1[m][i][j];

rock2[q][r]=rock[q][r];

r++;

}q++;

}m++;

}

}

for(i=1;i<41;i++)

{

for(j=1;j<359;j++)

{

q=(rock[i-1][j-1]+rock[i-1][j+1]+rock[i+1][j-1]+rock[i+1][j+1])/4;


rock2[i][j]=q;

}

}

double b;

int[] a2=new int[9];

for(i=1;i<41;i++)

{

for(j=1;j<359;j++)

{

b=0;

a2[0]=rock2[i-1][j-1];a2[1]=rock2[i-1][j]*2;a2[2]=rock2[i-1]

[j+1];a2[3]=2*rock2[i][j-1];a2[4]=4*rock2[i][j];a2[5]=2*rock2[i][j+1];a2[6]=rock2[i+1]

[j-1];a2[7]=2*rock2[i+1][j];a2[8]=rock2[i+1][j+1];

for(k=0;k<9;k++)

{

b=b+a2[k];

}

b=b*0.0625;

rock[i][j]=(int)b;

}

}

for(i=1;i<41;i++)

{

for(j=1;j<359;j++)

{

b=0;

a2[0]=rock[i-1][j-1];a2[1]=rock[i-1][j]*2;a2[2]=rock[i-1]

[j+1];a2[3]=2*rock[i][j-1];a2[4]=4*rock[i][j];a2[5]=2*rock[i][j+1];a2[6]=rock[i+1][j-

1];a2[7]=2*rock[i+1][j];a2[8]=rock[i+1][j+1];

for(k=0;k<9;k++)

{


b=b+a2[k];

}

b=b*0.0625;

rock2[i][j]=(int)b;

}

}

n=0;

for(i=0;i<42;i++)

{

for(j=0;j<360;j++)

{

pel[n++]=((255<<24)|(rock2[i][j]<<16)|(rock2[i][j]<<8)|(rock2[i]

[j]));

}

}

writeImage("bilinear(010_2_2(65)).jpg",pel,360,42,img);

double mean=0;

double m1=0;

for(i=0;i<imgheight;i++)

{

for( j=0;j<imgwidth;j++)

{

m1+=rock1[i][j];

}}


mean=m1/(imgheight*imgwidth);

System.out.println("mean value:"+mean);

/*int t=0;

int[][]m2=new int[imgheight][imgwidth];

for(i=0;i<imgheight;i++)

{

for( j=0;j<imgwidth;j++)

{

m2[i][j]=sum[t++];

// System.out.print(m2[i][j]+ " ");

}

}*/

/*

int mm[]=new int[imgheight*imgwidth];

for(i=0;i<imgheight;i++)

{

for( j=0;j<imgwidth;j++)

{

mm[n++]=((255<<24)|((int)m2[i][j]<<16)|((int)m2[i][j]<<8)|((int)m2[i][j]));

//System.out.println("meanfinal" +mm[n0]);

}


}

writeImage("meanimg.jpg",mm,imgwidth,imgheight,img);*/

//rock2[x][y] = f[0][0];

//background subtraction

int[] bi=new int[15120];

int t=0;

n=0;

for(i=0;i<imgheight;i++)

for(j=0;j<imgwidth;j++)

{

t=rock1[i][j]-rock2[i][j]+100;

//System.out.println("new"+t);

if(t>255)

bi[n++]=255;

else if(t<0)

bi[n++]=0;

else

bi[n++]=t;

}

for(i=0;i<15120;i++)

{

pel[i]=((255<<24)|(bi[i]<<16)|(bi[i]<<8)|(bi[i]));

}

writeImage("subt(010_2_2(65)).jpg",pel,360,42,img);


//histogram

int k1;

for(i=0;i<256;i++)

gra[i]=0;

for(i=0;i<(imgheight*imgwidth);i++)

{

k1=bi[i];

gra[k1]=gra[k1]+1;

}

for(i=1;i<256;i++)

{

gra[i]=gra[i-1]+gra[i];

//System.out.println(gra[i]);

}

double i1=0,i2=0,j1=0;

for( i=0;i<(imgheight*imgwidth);i++)

{

k1=bi[i];
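//0.000066137 is approximately 1/15120 = 1/(42*360): normalize the cumulative histogram by the pixel count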

i1=gra[k1]*0.000066137;

//System.out.println(i1);

i2=i1*255;

bi[i]=(int)i2;

}

int pell[]=new int[imgheight*imgwidth];

for(i=0;i<15120;i++)

pell[i]=((255<<24)|(bi[i]<<16)|(bi[i]<<8)|(bi[i]));


writeImage("histo(010_2_2(65)).jpg",pell,360,42,img);

n=0;

System.out.println("Monochrome Image");

/* for(int x=0;x<imgheight;x++)

{

for(int y=0;y<imgwidth;y++)

{

pel[n++] = ((255<<24)|(rock3[x][y]<<16)|(rock3[x][y]<<8)|(rock3[x][y]));

//pel1[n++] = ((255<<24)|(rock2[x][y]<<16)|(rock2[x][y]<<8)|(rock2[x]

[y]));

}

}

writeImage("Monochrome.jpg",pel,imgwidth,imgheight,img);

// writeImage("Mono.jpg",pel1,imgwidth,imgheight,img); */

}

catch(Exception e)

{

System.out.println(e);

}

}


static void writeImage(String name,int[] pel,int imgwidth,int imgheight,Image img)

{

try

{

PixelGrabber pgr = new PixelGrabber(img,0,0,1,1,false);

pgr.grabPixels();

ColorModel cm = pgr.getColorModel();

MemoryImageSource m = new

MemoryImageSource(imgwidth,imgheight,cm,pel,0,imgwidth);

Image img1 = Toolkit.getDefaultToolkit().createImage(m);

BufferedImage bi = new

BufferedImage(imgwidth,imgheight,BufferedImage.TYPE_INT_RGB);

Graphics2D g2d = bi.createGraphics();

g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,RenderingHints

.VALUE_INTERPOLATION_BILINEAR);

g2d.drawImage(img1,0,0,imgwidth,imgheight,null);

BufferedOutputStream out = new BufferedOutputStream(new

FileOutputStream(name));


JPEGImageEncoder jiee = JPEGCodec.createJPEGEncoder(out);

JPEGEncodeParam par = jiee.getDefaultJPEGEncodeParam(bi);

par.setQuality(100/100.0f,false);

jiee.setJPEGEncodeParam(par);

jiee.encode(bi);

out.close();

}

catch(Exception e)

{

System.out.println(e);

}

}

}

import java.awt.*;

import java.awt.image.*;

import java.io.*;

import com.sun.image.codec.jpeg.*;

import java.util.*;

import java.math.*;

import java.lang.Math.*;


import java.awt.Color;

import java.net.*;

import javax.swing.*;

import javax.imageio.*;

public class Seismic

{

public static void main(String args[])throws ArrayIndexOutOfBoundsException

{

try

{if(args.length<1)

{

System.out.println("Usage Seismic inputimagefilename");

System.exit(0);

}

//Read image from source(only jpg files)

Image img = Toolkit.getDefaultToolkit().getImage(args[0]);

MediaTracker media = new MediaTracker(new Container());

media.addImage(img,0);

media.waitForID(0);

int i=0,j=0,n=0;

//get width and height of the image

int imgwidth = img.getWidth(null);

int imgheight = img.getHeight(null);


// Declare array

//pixel values

int[] pel = new int[imgwidth*imgheight];

//RGB values

int[][][] rgb=new int [imgheight][imgwidth][4];

//gray values

int[][] rock = new int[imgheight][imgwidth];

int[][] rock1 = new int[imgheight][imgwidth];

int[][] rock2 = new int[imgheight][imgwidth];

int[][] rock3 = new int[imgheight][imgwidth];

//int[][] gray = new int[imgheight][imgwidth];

System.out.println(imgheight+":"+imgwidth);

Thread.currentThread().sleep(1000);

int[] gra=new int[256];

int[] grb=new int[imgheight*imgwidth];

//Initialize rock

for(i=0;i<imgheight;i++)

{

for( j=0;j<imgwidth;j++)

{

rock[i][j]=0;


}

}

System.out.println("Init");

// System.out.println("pixels"+rock1[][]);

/*int mm[]=new int[imgheight*imgwidth];

for(i=0;i<imgheight;i++)

{

for(j=0;j<imgwidth;j++)

{

mm[n++]=((255<<24)|((int)rock1[i][j]<<16)|((int)rock1[i][j]<<8)|((int)rock1[i][j]));

//System.out.println("pixels"+mm[n]);

}

}

*/

//read pixel values into the pel array from the loaded image

PixelGrabber pg = new

PixelGrabber(img,0,0,imgwidth,imgheight,pel,0,imgwidth);

pg.grabPixels();

int h=imgheight;

int w=imgwidth;

System.out.println("height"+h);


System.out.println("width"+w);

//Convert the image into gray scale using the intensity value of each pixel

for(i=0;i<imgheight;i++)

{

for(j=0;j<imgwidth;j++)

{

rgb[i][j][0] = (pel[n] >> 24) & 0xff;

rgb[i][j][1] = (pel[n] >> 16) & 0xff;

rgb[i][j][2] = (pel[n] >> 8) & 0xff;

rgb[i][j][3] = (pel[n] ) & 0xff;

rock[i][j]=(rgb[i][j][1]+rgb[i][j][2]+rgb[i][j][3])/3;

rock1[i][j]=rock[i][j];

grb[n++]=rock[i][j];

}

}

System.out.println("Comes");

int[][][] bp=new int[270][7][8];
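//bp can hold at most 270 blocks of 7x8 pixels; the loops below assume the input image yields no more than (height/7)*(width/8) = 270 such blocks, as for a 42x360 image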

int m=0;int k,l,q,r;

for(k=0;k<imgheight;k=k+7)

{

for(l=0;l<imgwidth;l=l+8)

{

q=k;


for(i=0;i<7;i++)

{

r=l;

for(j=0;j<8;j++)

{

bp[m][i][j]=rock[q][r];

//System.out.println("Hello"+q+":"+r+":"+m+":"+i+":"+j+":"+rock.length);

r++;

}

q++;

}

m++;

System.out.println("Here"+m+":"+bp.length);

}

// System.out.println("Here");

}

int[] a1=new int[270];

System.out.println("Comes again");

for(m=0;m<270;m++)

{

l=0;

for(i=0;i<7;i++)

{

for(j=0;j<8;j++)

{

l=l+bp[m][i][j];

}


}

a1[m]=l/56;

//System.out.println(a1[m]);

}

int[][][] bp1=new int[270][7][8];

for(m=0;m<270;m++)

{

for(i=0;i<7;i++)

{

for(j=0;j<8;j++)

{

bp1[m][i][j]=a1[m];

}

}

}

m=0;

System.out.println("Herere");

for(k=0;k<imgheight;k=k+7)

{

for(l=0;l<imgwidth;l=l+8)

{

q=k;

for(i=0;i<7;i++)

{

r=l;

for(j=0;j<8;j++)

{

rock[q][r]=bp1[m][i][j];

rock2[q][r]=rock[q][r];

r++;


}q++;

}m++;

}

}

System.out.println("Hep");

for(i=1;i<imgheight-1;i++)

{

for(j=1;j<imgwidth-1;j++)

{

q=(rock[i-1][j-1]+rock[i-1][j+1]+rock[i+1][j-1]+rock[i+1][j+1])/4;

rock2[i][j]=q;

}

}

double b;

int[] a2=new int[9];

for(i=1;i<imgheight-1;i++)

{

for(j=1;j<imgwidth-1;j++)

{

b=0;

a2[0]=rock2[i-1][j-1];a2[1]=rock2[i-1][j]*2;a2[2]=rock2[i-1]

[j+1];a2[3]=2*rock2[i][j-1];a2[4]=4*rock2[i][j];a2[5]=2*rock2[i][j+1];a2[6]=rock2[i+1]

[j-1];a2[7]=2*rock2[i+1][j];a2[8]=rock2[i+1][j+1];

for(k=0;k<9;k++)

{

b=b+a2[k];

}

b=b*0.0625;

rock[i][j]=(int)b;

}


}

for(i=1;i<imgheight-1;i++)

{

for(j=1;j<imgwidth-1;j++)

{

b=0;

a2[0]=rock[i-1][j-1];a2[1]=rock[i-1][j]*2;a2[2]=rock[i-1]

[j+1];a2[3]=2*rock[i][j-1];a2[4]=4*rock[i][j];a2[5]=2*rock[i][j+1];a2[6]=rock[i+1][j-

1];a2[7]=2*rock[i+1][j];a2[8]=rock[i+1][j+1];

for(k=0;k<9;k++)

{

b=b+a2[k];

}

b=b*0.0625;

rock2[i][j]=(int)b;

}

}

n=0;

for(i=0;i<imgheight;i++)

{

for(j=0;j<imgwidth;j++)

{

pel[n++]=((255<<24)|(rock2[i][j]<<16)|(rock2[i][j]<<8)|(rock2[i]

[j]));

}

}

writeImage("bilinear(010_2_2(65)).jpg",pel,imgwidth,imgheight,img);

System.out.println("Over");

double mean=0;


double m1=0;

for(i=0;i<imgheight;i++)

{

for( j=0;j<imgwidth;j++)

{

m1+=rock1[i][j];

}}

mean=m1/(imgheight*imgwidth);

System.out.println("mean value:"+mean);

/*int t=0;

int[][]m2=new int[imgheight][imgwidth];

for(i=0;i<imgheight;i++)

{

for( j=0;j<imgwidth;j++)

{

m2[i][j]=sum[t++];

// System.out.print(m2[i][j]+ " ");

}

}*/


/*

int mm[]=new int[imgheight*imgwidth];

for(i=0;i<imgheight;i++)

{

for( j=0;j<imgwidth;j++)

{

mm[n++]=((255<<24)|((int)m2[i][j]<<16)|((int)m2[i][j]<<8)|((int)m2[i][j]));

//System.out.println("meanfinal" +mm[n0]);

}

}

writeImage("meanimg.jpg",mm,imgwidth,imgheight,img);*/

//rock2[x][y] = f[0][0];

//background subtraction

int[] bi=new int[imgheight*imgwidth];

int t=0;

n=0;

for(i=0;i<imgheight;i++)

for(j=0;j<imgwidth;j++)

{

t=rock1[i][j]-rock2[i][j]+100;

//System.out.println("new"+t);


if(t>255)

bi[n++]=255;

else if(t<0)

bi[n++]=0;

else

bi[n++]=t;

}

System.out.println("Done");

for(i=0;i<(imgwidth*imgheight)-1;i++)

{

pel[i]=((255<<24)|(bi[i]<<16)|(bi[i]<<8)|(bi[i]));

}

writeImage("subt.jpg",pel,imgwidth,imgheight,img);

//histogram

int k1;

for(i=0;i<256;i++)

gra[i]=0;

for(i=0;i<(imgheight*imgwidth)-1;i++)

{

k1=bi[i];

gra[k1]=gra[k1]+1;

}

for(i=1;i<256;i++)

{

gra[i]=gra[i-1]+gra[i];

//System.out.println(gra[i]);

}


double i1=0,i2=0,j1=0;

for( i=0;i<(imgheight*imgwidth);i++)

{

k1=bi[i];
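//0.000066137 is approximately 1/15120 (i.e. 1/(42*360)); for other image sizes this should be 1.0/(imgwidth*imgheight)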

i1=gra[k1]*0.000066137;

//System.out.println(i1);

i2=i1*255;

bi[i]=(int)i2;

}

int pell[]=new int[imgheight*imgwidth];

for(i=0;i<(imgwidth*imgheight);i++)

pell[i]=((255<<24)|(bi[i]<<16)|(bi[i]<<8)|(bi[i]));

writeImage("histo.jpg",pell,imgwidth,imgheight,img);

      n = 0;
      ImageIcon icon = new ImageIcon("histo.jpg");
      Image image = icon.getImage();
      BufferedImage frame = new BufferedImage(image.getWidth(null), image.getHeight(null),
                                              BufferedImage.TYPE_INT_ARGB);
      Graphics g = frame.getGraphics();
      g.drawImage(image, 0, 0, null);

      CannyEdgeDetector detector = new CannyEdgeDetector();
      detector.setGaussianKernelRadius(1.75f);
      detector.setGaussianKernelWidth(32);
      detector.setSourceImage(frame);
      detector.process();
      BufferedImage edges = detector.getEdgesImage();
      File outputfile = new File("edged.jpg");
      ImageIO.write(edges, "jpg", outputfile);
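      // CannyEdgeDetector is the project's separate edge-detector class (not part of
      // the JDK); the Gaussian radius and kernel width set above control how strongly
      // the equalised image is smoothed before its edges are traced into edged.jpg.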

      ///JFrame window = new JFrame();
      ImageIcon icon1 = new ImageIcon(args[0]);
      ImageIcon icon2 = new ImageIcon("subt.jpg");
      ImageIcon icon3 = new ImageIcon("histo.jpg");
      NewJFrame nf = new NewJFrame();
      nf.jLabel2.setIcon(icon1);
      nf.jLabel3.setIcon(icon2);
      nf.jLabel4.setIcon(icon3);
      nf.jLabel5.setIcon(new ImageIcon(edges));
      nf.setVisible(true);
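      // The labels on NewJFrame show, in order: the source image given on the command
      // line, the background-subtracted image, the equalised histogram image, and the
      // detected edges.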

      // JLabel la = new JLabel("Source Image", icon1, SwingConstants.LEFT);
      // JLabel la1 = new JLabel("Background remo.Image", icon2, SwingConstants.CENTER);
      // JLabel la2 = new JLabel("Histogram Image", icon3, SwingConstants.RIGHT);
      // JLabel label = new JLabel("Detected Image", new ImageIcon(edges), SwingConstants.LEADING);
      // window.getContentPane().add(la);
      // window.getContentPane().add(la1);
      // window.getContentPane().add(la2);
      // window.getContentPane().add(label);
      // window.pack();
      // window.show();

    }
    catch (Exception e)
    {
      System.out.println(e);
    }
  }

  static void writeImage(String name, int[] pel, int imgwidth, int imgheight, Image img)
  {
    try
    {
      PixelGrabber pgr = new PixelGrabber(img, 0, 0, 1, 1, false);
      pgr.grabPixels();
      ColorModel cm = pgr.getColorModel();
      MemoryImageSource m = new MemoryImageSource(imgwidth, imgheight, cm, pel, 0, imgwidth);
      Image img1 = Toolkit.getDefaultToolkit().createImage(m);
      BufferedImage bi = new BufferedImage(imgwidth, imgheight, BufferedImage.TYPE_INT_RGB);
      Graphics2D g2d = bi.createGraphics();
      g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
      g2d.drawImage(img1, 0, 0, imgwidth, imgheight, null);
      BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(name));
      // JPEG encoding uses the Sun-internal com.sun.image.codec.jpeg classes,
      // which are only available on older Sun/Oracle JDKs
      JPEGImageEncoder jiee = JPEGCodec.createJPEGEncoder(out);
      JPEGEncodeParam par = jiee.getDefaultJPEGEncodeParam(bi);
      par.setQuality(1.0f, false);   // full quality
      jiee.setJPEGEncodeParam(par);
      jiee.encode(bi);
      out.close();
    }
    catch (Exception e)
    {
      System.out.println(e);
    }
  }

}

Screen Shots:

[Screenshots of the application output appear here.]
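The listing below, Histo, is a standalone variant of the same preprocessing pipeline: it converts temps.jpg to grey scale, averages it over 7x8 blocks, smooths the result, subtracts this background from the original, and equalises the histogram, writing each intermediate image to disk. It stops before the edge-detection and user-interface steps of the previous listing.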

import java.awt.*;
import java.awt.image.*;
import java.io.*;
import com.sun.image.codec.jpeg.*;
import java.util.*;
import java.math.*;
import java.lang.Math.*;
import java.awt.Color;
import java.net.*;

public class Histo
{


  public static void main(String args[]) throws ArrayIndexOutOfBoundsException
  {
    try
    {
      //Read image from source (only jpg files)
      Image img = Toolkit.getDefaultToolkit().getImage("temps.jpg");
      MediaTracker media = new MediaTracker(new Container());
      media.addImage(img, 0);
      media.waitForID(0);

      int i = 0, j = 0, n = 0;

      //get width and height of the image
      int imgwidth = img.getWidth(null);
      int imgheight = img.getHeight(null);

      // Declare arrays
      //pixel values
      int[] pel = new int[imgwidth * imgheight];
      //RGB values
      int[][][] rgb = new int[imgheight][imgwidth][4];
      //gray values
      int[][] rock  = new int[imgheight][imgwidth];
      int[][] rock1 = new int[imgheight][imgwidth];
      int[][] rock2 = new int[imgheight][imgwidth];
      int[][] rock3 = new int[imgheight][imgwidth];
      //int[][] gray = new int[imgheight][imgwidth];

      System.out.println(imgheight + ":" + imgwidth);
      Thread.currentThread().sleep(1000);

      int[] gra = new int[256];
      int[] grb = new int[imgheight * imgwidth];

      //Initialize rock
      for (i = 0; i < imgheight; i++)
      {
        for (j = 0; j < imgwidth; j++)
        {
          rock[i][j] = 0;
        }
      }
      System.out.println("Init");
      // System.out.println("pixels"+rock1[][]);

      /*
      int mm[] = new int[imgheight*imgwidth];
      for (i = 0; i < imgheight; i++)
      {
        for (j = 0; j < imgwidth; j++)
        {
          mm[n++] = (255<<24) | ((int)rock1[i][j]<<16) | ((int)rock1[i][j]<<8) | ((int)rock1[i][j]);
          //System.out.println("pixels"+mm[n]);
        }
      }
      */

      //read pixel values into the pel array from the loaded image
      PixelGrabber pg = new PixelGrabber(img, 0, 0, imgwidth, imgheight, pel, 0, imgwidth);
      pg.grabPixels();
      int h = imgheight;
      int w = imgwidth;
      System.out.println("height" + h);
      System.out.println("width" + w);

      //Convert the image into gray scale using the intensity value of each pixel
      for (i = 0; i < imgheight; i++)
      {
        for (j = 0; j < imgwidth; j++)
        {
          rgb[i][j][0] = (pel[n] >> 24) & 0xff;   // alpha
          rgb[i][j][1] = (pel[n] >> 16) & 0xff;   // red
          rgb[i][j][2] = (pel[n] >> 8)  & 0xff;   // green
          rgb[i][j][3] =  pel[n]        & 0xff;   // blue
          rock[i][j]  = (rgb[i][j][1] + rgb[i][j][2] + rgb[i][j][3]) / 3;
          rock1[i][j] = rock[i][j];
          grb[n++]    = rock[i][j];
        }
      }
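      // Grey level = (R + G + B) / 3 for every pixel; rock1 keeps an untouched copy of
      // the grey image so it can later be compared against the smoothed background.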

System.out.println("Comes");

int[][][] bp=new int[270][7][8];

int m=0;int k,l,q,r;

for(k=0;k<imgheight;k=k+7)

{

for(l=0;l<imgwidth;l=l+8)

{

q=k;

for(i=0;i<7;i++)

{

r=i;

for(j=0;j<8;j++)

{

bp[m][i][j]=rock[q][r];

//System.out.println("Hello"+q+":"+r+":"+m+":"+i+":"+j+":"+rock.length);

r++;

Page 68: Seismic Data Analysis-new[1]

}

//q++;

}

m++;

System.out.println("Here"+m+":"+bp.length);

}

// System.out.println("Here");

}

      int[] a1 = new int[270];
      System.out.println("Comes again");
      for (m = 0; m < 270; m++)
      {
        l = 0;                      // l is reused here as the running sum for block m
        for (i = 0; i < 7; i++)
        {
          for (j = 0; j < 8; j++)
          {
            l = l + bp[m][i][j];
          }
        }
        a1[m] = l / 56;             // block mean: 56 = 7*8 pixels per block
        //System.out.println(a1[m]);
      }

      int[][][] bp1 = new int[270][7][8];
      for (m = 0; m < 270; m++)
      {
        for (i = 0; i < 7; i++)
        {
          for (j = 0; j < 8; j++)
          {
            bp1[m][i][j] = a1[m];
          }
        }
      }

      m = 0;
      System.out.println("Herere");
      // Write each block mean back over its 7x8 tile
      for (k = 0; k < imgheight; k = k + 7)
      {
        for (l = 0; l < imgwidth; l = l + 8)
        {
          for (i = 0; i < 7; i++)
          {
            q = k + i;
            for (j = 0; j < 8; j++)
            {
              r = l + j;
              rock[q][r]  = bp1[m][i][j];
              rock2[q][r] = rock[q][r];
            }
          }
          m++;
        }
      }
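      // rock and rock2 now hold the block-averaged image: every pixel of a 7x8 tile
      // carries that tile's mean grey level. The interpolation and smoothing passes
      // below blur these tiles into a continuous background estimate.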

System.out.println("Hep");

for(i=1;i<imgheight-1;i++)

{

for(j=1;j<imgwidth-1;j++)

{

Page 70: Seismic Data Analysis-new[1]

q=(rock[i-1][j-1]+rock[i-1][j+1]+rock[i+1][j-1]+rock[i+1][j+1])/4;

rock2[i][j]=q;

}

}

      double b;
      int[] a2 = new int[9];
      for (i = 1; i < imgheight - 1; i++)
      {
        for (j = 1; j < imgwidth - 1; j++)
        {
          b = 0;
          a2[0] = rock2[i-1][j-1]; a2[1] = 2*rock2[i-1][j]; a2[2] = rock2[i-1][j+1];
          a2[3] = 2*rock2[i][j-1]; a2[4] = 4*rock2[i][j];   a2[5] = 2*rock2[i][j+1];
          a2[6] = rock2[i+1][j-1]; a2[7] = 2*rock2[i+1][j]; a2[8] = rock2[i+1][j+1];
          for (k = 0; k < 9; k++)
          {
            b = b + a2[k];
          }
          b = b * 0.0625;
          rock[i][j] = (int) b;
        }
      }

      for (i = 1; i < imgheight - 1; i++)
      {
        for (j = 1; j < imgwidth - 1; j++)
        {
          b = 0;
          a2[0] = rock[i-1][j-1]; a2[1] = 2*rock[i-1][j]; a2[2] = rock[i-1][j+1];
          a2[3] = 2*rock[i][j-1]; a2[4] = 4*rock[i][j];   a2[5] = 2*rock[i][j+1];
          a2[6] = rock[i+1][j-1]; a2[7] = 2*rock[i+1][j]; a2[8] = rock[i+1][j+1];
          for (k = 0; k < 9; k++)
          {
            b = b + a2[k];
          }
          b = b * 0.0625;
          rock2[i][j] = (int) b;
        }
      }

      n = 0;
      for (i = 0; i < imgheight; i++)
      {
        for (j = 0; j < imgwidth; j++)
        {
          pel[n++] = (255 << 24) | (rock2[i][j] << 16) | (rock2[i][j] << 8) | rock2[i][j];
        }
      }
      writeImage("bilinear(010_2_2(65)).jpg", pel, imgwidth, imgheight, img);
      System.out.println("Over");

      double mean = 0;
      double m1 = 0;
      for (i = 0; i < imgheight; i++)
      {
        for (j = 0; j < imgwidth; j++)
        {
          m1 += rock1[i][j];
        }
      }
      mean = m1 / (imgheight * imgwidth);
      System.out.println("mean value:" + mean);

      /*
      int t = 0;
      int[][] m2 = new int[imgheight][imgwidth];
      for (i = 0; i < imgheight; i++)
      {
        for (j = 0; j < imgwidth; j++)
        {
          m2[i][j] = sum[t++];
          // System.out.print(m2[i][j] + " ");
        }
      }
      */

      /*
      int mm[] = new int[imgheight*imgwidth];
      for (i = 0; i < imgheight; i++)
      {
        for (j = 0; j < imgwidth; j++)
        {
          mm[n++] = (255<<24) | ((int)m2[i][j]<<16) | ((int)m2[i][j]<<8) | ((int)m2[i][j]);
          //System.out.println("meanfinal" + mm[n0]);
        }
      }
      writeImage("meanimg.jpg", mm, imgwidth, imgheight, img);
      */

      //rock2[x][y] = f[0][0];

      // background subtraction
      int[] bi = new int[imgheight * imgwidth];
      int t = 0;
      n = 0;
      for (i = 0; i < imgheight; i++)
      {
        for (j = 0; j < imgwidth; j++)
        {
          t = rock1[i][j] - rock2[i][j] + 100;
          //System.out.println("new" + t);
          if (t > 255)
            bi[n++] = 255;
          else if (t < 0)
            bi[n++] = 0;
          else
            bi[n++] = t;
        }
      }

System.out.println("Done");

for(i=0;i<(imgwidth*imgheight)-1;i++)

{

Page 74: Seismic Data Analysis-new[1]

pel[i]=((255<<24)|(bi[i]<<16)|(bi[i]<<8)|(bi[i]));

}

writeImage("subt(010_2_2(65)).jpg",pel,imgwidth,imgheight,img);

      // histogram: count grey levels, then accumulate into a cumulative histogram
      int k1;
      for (i = 0; i < 256; i++)
        gra[i] = 0;
      for (i = 0; i < imgheight * imgwidth; i++)
      {
        k1 = bi[i];
        gra[k1] = gra[k1] + 1;
      }
      for (i = 1; i < 256; i++)
      {
        gra[i] = gra[i-1] + gra[i];
        //System.out.println(gra[i]);
      }

      double i1 = 0, i2 = 0, j1 = 0;
      for (i = 0; i < imgheight * imgwidth; i++)
      {
        k1 = bi[i];
        i1 = gra[k1] * (1.0 / (imgheight * imgwidth));   // cumulative count normalised by the pixel count (about 0.000066137 for a 15120-pixel image)
        //System.out.println(i1);
        i2 = i1 * 255;
        bi[i] = (int) i2;
      }


      int pell[] = new int[imgheight * imgwidth];
      for (i = 0; i < imgwidth * imgheight; i++)
        pell[i] = (255 << 24) | (bi[i] << 16) | (bi[i] << 8) | bi[i];
      writeImage("histo(010_2_2(65)).jpg", pell, imgwidth, imgheight, img);

      n = 0;
      System.out.println("Monochrome Image");
      /* for (int x = 0; x < imgheight; x++)
      {
        for (int y = 0; y < imgwidth; y++)
        {
          pel[n++] = (255<<24) | (rock3[x][y]<<16) | (rock3[x][y]<<8) | rock3[x][y];
          //pel1[n++] = (255<<24) | (rock2[x][y]<<16) | (rock2[x][y]<<8) | rock2[x][y];
        }
      }
      writeImage("Monochrome.jpg", pel, imgwidth, imgheight, img);
      // writeImage("Mono.jpg", pel1, imgwidth, imgheight, img); */

    }
    catch (Exception e)
    {
      System.out.println(e);
    }
  }

  static void writeImage(String name, int[] pel, int imgwidth, int imgheight, Image img)
  {
    try
    {
      PixelGrabber pgr = new PixelGrabber(img, 0, 0, 1, 1, false);
      pgr.grabPixels();
      ColorModel cm = pgr.getColorModel();
      MemoryImageSource m = new MemoryImageSource(imgwidth, imgheight, cm, pel, 0, imgwidth);
      Image img1 = Toolkit.getDefaultToolkit().createImage(m);
      BufferedImage bi = new BufferedImage(imgwidth, imgheight, BufferedImage.TYPE_INT_RGB);
      Graphics2D g2d = bi.createGraphics();
      g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
      g2d.drawImage(img1, 0, 0, imgwidth, imgheight, null);
      BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(name));
      // JPEG encoding uses the Sun-internal com.sun.image.codec.jpeg classes,
      // which are only available on older Sun/Oracle JDKs
      JPEGImageEncoder jiee = JPEGCodec.createJPEGEncoder(out);
      JPEGEncodeParam par = jiee.getDefaultJPEGEncodeParam(bi);
      par.setQuality(1.0f, false);   // full quality
      jiee.setJPEGEncodeParam(par);
      jiee.encode(bi);
      out.close();
    }
    catch (Exception e)
    {
      System.out.println(e);
    }
  }
}
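To reproduce the intermediate images, the listing can be compiled and run on an older Sun/Oracle JDK that still ships the com.sun.image.codec.jpeg classes (javac Histo.java, then java Histo), with temps.jpg placed in the working directory; the program then writes the block-averaged, background-subtracted, and histogram-equalised JPEGs next to it.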