Chapter 1
1.1. Introduction
The thesis is concerned with time series analysis of Indian rainfall data and of the Indian stock market indices BSE 100 and Nifty 50. In order to understand a physical phenomenon we use mathematical tools and techniques. There are two prevalent methods for studying such problems. One is to model the phenomenon by mathematical equations such as differential equations, partial differential equations, or stochastic differential equations with boundary conditions. The other is to observe the phenomenon, record it in the form of a time series, and then study its behavior by analyzing that time series. The present thesis follows the second approach: we consider physical phenomena represented by time series and study their behavior through well-known mathematical techniques.
The time series approach to analyzing and understanding real-world problems such as climate and financial data is quite popular in the scientific world. Before the invention of wavelet and fractal methods in the 1980s, Fourier analysis and statistical methods were used for studying the behavior of time series (see, for example, Bloomfield (1976)). Statistical and Fourier methods were used to detect long-time trends, times of abrupt change, and seasonality, and to study compression and denoising of such time series. Nowadays, however, wavelets and wavelet-based multifractal formalism are becoming quite popular throughout developed and developing countries for the scientific study of real-world problems, particularly climatic data. In this thesis we focus our attention on the application of wavelet and fractal methods to meteorological data and stock market fluctuations. We analyze the data using the wavelet toolbox in MATLAB, FracLab and Benoit, besides using our own codes.
The thesis is divided into six chapters. This chapter has been divided into eight sections. In the first section, a brief introduction to the thesis topic is presented. The second section is subdivided into two subsections. In the first subsection we give a historical review of wavelets. Wavelet concepts such as the definitions of wavelets (continuous and discrete) along with examples, wavelet coefficients, scaling functions, decomposition and reconstruction algorithms, and multiresolution analysis, the heart of wavelet theory, are described in the second subsection. In the third section basic results of fractal theory, such as fractals, multifractals, and various measures of fractal dimension including box dimension and Hausdorff dimension, are presented. In the fourth section we give a brief introduction to time series, whereas the Hurst exponent, another parameter related to fractal dimension, and its utility are discussed in the fifth section. In the sixth section we present a brief introduction to meteorology, stocks and stock markets. ANFIS (Adaptive Network Based Fuzzy Inference System) is introduced in the seventh section. In the last section we present the chapter-wise summary of the thesis.
1.2. Historical Development and Basic Concepts of Wavelet Theory
In this section we review the literature and basic concepts of wavelets.
1.2.1. Review of Literature
Wavelet theory is the outcome of a multidisciplinary endeavor that brought together mathematicians, physicists and engineers. This relationship created a flow of ideas that goes well beyond the construction of new bases or transforms. The theory of wavelets is a refinement of Fourier analysis which makes it possible to simplify the description of a cumbersome function in terms of a small number of coefficients. The wavelet transform has been found to be particularly useful for analyzing signals which can best be described as non-periodic, noisy, intermittent, transient, and so on. Its ability to examine the signal simultaneously in both the time and frequency domains is a distinct feature of wavelet analysis.
Fourier analysis has been used in diverse fields of science and technology such as earthquake prediction, voltage fluctuations, sharp variations in temperature, air pressure and stock market variations. Fourier analysis breaks down a signal into constituent sinusoids of different frequencies. Another way to think of Fourier analysis is as a mathematical technique for transforming our view of the signal from time-based to frequency-based. However, Fourier analysis does not provide an appropriate method to study trends, abrupt changes, breakdown points, etc. Moreover, the standard Fourier analysis is inadequate because the Fourier transform of a signal does not contain any local information: it tells us which frequencies are present, but not the time or location at which they occur. As a realization of this weakness, as far back as 1946, Dennis Gabor first introduced the windowed Fourier transform (the short-time Fourier transform, STFT), commonly known as the Gabor transform, using a Gaussian distribution function as the window function. The drawback of the STFT is that once we choose a particular size for the time window, that window is the same for all frequencies. Many signals require a more flexible approach in which we can vary the window size to determine more accurately either time or frequency. Wavelet analysis may be understood as localized Fourier analysis: the Fourier transform gives only frequency information, while the wavelet transform gives both time and frequency information.
A formal history of wavelets began in the early 1980s. In 1982 Jean Morlet, a French geophysicist, introduced the concept of a "wavelet", meaning a small wave, and studied the wavelet transform as a new tool for seismic signal analysis. Immediately after this, Alex Grossmann, a French theoretical physicist, studied the inversion formula for the wavelet transform. In 1984, the joint collaboration of Morlet and Grossmann yielded a detailed mathematical study of continuous wavelet transforms and their various applications, of course without realizing that similar results had already been obtained 20 to 50 years earlier by Calderón, Littlewood, Paley and Franklin. In 1985, Yves Meyer, a French mathematician, found the then existing literature on wavelets by chance and realized the potential of the wavelet technique. Meyer discovered a new kind of wavelet, with a mathematical property called orthogonality, that made wavelet transforms easy to work with and manipulate. During 1985-1986, the collaboration of Meyer and Lemarié yielded the construction of smooth orthonormal wavelet bases on $\mathbb{R}$ and $\mathbb{R}^n$. The anticipation of Meyer (1993) and Mallat (1989) that orthogonal wavelet bases could be constructed systematically from a general formalism led to the invention of multiresolution analysis in 1989. It was also Mallat who constructed the wavelet decomposition and reconstruction algorithms by applying multiresolution analysis.

Inspired by the work of Meyer, in 1988 Ingrid Daubechies (1988), a physicist by training and now a Professor of Mathematics, made a remarkable contribution to wavelet theory by constructing families of compactly supported orthonormal wavelets with some degree of smoothness.
Daubechies used the idea of multiresolution analysis to create her own family of wavelets, known as the Daubechies wavelets. The Daubechies wavelet family satisfies a number of wavelet properties: compact support, orthogonality, regularity and continuity. Her wavelets have surprising features, such as intimate connections with the theory of fractals. Ronald R. Coifman and Yves Meyer constructed a huge library of wavelets of various durations, oscillations and other behaviors. With a clear algorithm developed by Coifman and Victor Wickerhauser, it became possible to perform, very rapidly, computerized searches through an enormous range of signal representations in order to quickly find the most economical transcription of measured data (see Coifman and Wickerhauser (1992) and Wickerhauser (1994)). A generalization of wavelets called wavelet packets was introduced by Coifman et al. (1992a, 1992b) and Coifman and Wickerhauser (1992) in 1992. More recently, a system generated by all three operations, namely translation, modulation and dilation, has been introduced, which is called a wave packet.
The theory of wavelets and its applications to diverse fields have been extensively studied in the last two decades; for updated references we refer to Addison (2002), Urban (2002), Walnut (2002) and Siddiqi (2004). In particular, for wavelet-based multifractal formalism and its applications to real-world problems we refer to a large number of research papers by Alain Arneodo and his collaborators (Arneodo et al. (1992, 1995, 1998, 1999a, 1999b, 2002, 2003) and Audit et al. (2002)), as well as Meyer (1998) and Mallat (1999). Recently Nekka and Li (2002, 2003) have introduced a new concept, named the Hausdorff measure spectrum, to study singularities in a signal. This seems to be a better concept than fractal dimension for understanding some real-world problems, as it helps to distinguish between sets having the same fractal dimension. The wavelet transform modulus maxima method (Jaffard (2004)), developed by Arneodo and his collaborators, has been used to compute the spectrum of singularities, which in turn gives the fractal dimension of a signal.
Wavelets and wavelet-based multifractal formalism have become very popular for analyzing meteorological data and related problems in different parts of the world; see, for example, Hu and Nitta (1996) for wavelet analysis in north China and India; Baline et al. (1997) for temperature data in central England; Chapa et al. (1998) for South American data; Labat et al. (2000) for a rainfall-related problem in France and Tokgozlu et al. (2002) for meteorological data of the Isparta region in Turkey. Seather et al. (1994) have studied oil-water interfaces, and Redondo et al. (1994) have investigated microscopic airborne particles and the identification of some of their chemical components, sizes and shapes, relating them to the wind speed and direction and to atmospheric stability. A new wavelet-based technique to study atmospheric wind and turbulent fluids is presented in Yamda and Sasaki (1998).

Kumar and Foufoula-Georgio (1997) have given a comprehensive account of applications of wavelet transform methods to climatic data. Efi-Foufoula and Kumar (1994) contains research papers devoted to meteorological problems.
Recent papers by Zayed (2000, 2002) and Zayed and Dettori (2006) provide quite useful information on applications of wavelets. Wickerhauser (2004) deals with image compression. Fournier (2000) has studied orthonormal wavelet analysis with shift invariance to observe the spatial structure of atmospheric blocking. We refer to Abbry et al. (1998, 2003), Dimri (2000), Furati et al. (2006), Can et al. (2004, 2005), Turiel et al. (2006), Kumar et al. (2006), Manchanda et al. (2007a), Rangarajan and Sant (1997, 2004), Rangarajan and Ding (2003), Siddiqi et al. (2004, 2006) and Venugopal et al. (2006) for applications of wavelet and fractal methods to meteorological problems. Vincent and Mekis (2006) have examined the trends and variations in several indices of daily and extreme temperature and precipitation in Canada for the periods 1950-2003 and 1900-2003 respectively. Similar studies have been carried out for other parts of the world.
1.2.2. Basic Concepts of Wavelet Theory
Notations and Definitions
Throughout, $\mathbb{R}$ will denote the set of real numbers and $\mathbb{R}^n$ the $n$-dimensional Euclidean space. We write $\exists$ for "there exists", $\forall$ for "for every", $\in$ for "belongs to", $\langle \cdot,\cdot \rangle$ for the inner product, $\lfloor \cdot \rfloor$ for the largest integer not exceeding the argument, $L^2(\mathbb{R})$ for the space of all finite-energy functions $f$ for which $\int |f(t)|^2\,dt < \infty$, and $\ell^2(\mathbb{Z})$ for finite-energy discrete signals, that is, those with $\sum_n |f(n)|^2 < \infty$. STFT stands for the short-time Fourier transform, $\varphi(t)$ denotes the scaling function and $\psi(t)$ the wavelet, CWT stands for the continuous wavelet transform and DWT for the discrete wavelet transform.
6
Definition 1.2.1. The name wavelet means small wave (the sinusoids used in Fourier analysis are big waves), and in brief, a wavelet is an oscillation that decays quickly. In order to be classified as a wavelet, a function $\psi(t) \in L^2(\mathbb{R})$ must satisfy certain mathematical criteria. These are

$\int_{-\infty}^{\infty} |\psi(t)|^2\, dt < \infty$   (1.1)

$\int_{-\infty}^{\infty} \psi(t)\, dt = 0$   (1.2)

$C_\psi = \int_{-\infty}^{\infty} \dfrac{|\hat{\psi}(\omega)|^2}{|\omega|}\, d\omega < \infty$   (1.3)

where $C_\psi$ is called the admissibility constant and $\hat{\psi}$ denotes the Fourier transform of $\psi$.
Definition 1.2.2. The wavelet transform of a continuous signal $x(t)$ with respect to the wavelet function $\psi(t)$ is defined as

$T(a,b) = \dfrac{1}{\sqrt{a}} \int_{-\infty}^{\infty} x(t)\, \psi^*\!\left(\dfrac{t-b}{a}\right) dt$

where $\psi^*$ is the complex conjugate of the wavelet function $\psi(t)$. It is clear that $\psi^* = \psi$ for a real-valued function. The parameters $b$ and $a$ are called the translation (shifting) and dilation parameters respectively.

The normalized wavelet functions are often written more compactly as $\psi_{a,b}(t) = \dfrac{1}{\sqrt{a}}\, \psi\!\left(\dfrac{t-b}{a}\right)$, where the normalization is in the sense of wavelet energy. So the transform integral is

$T(a,b) = \int_{-\infty}^{\infty} x(t)\, \psi^*_{a,b}(t)\, dt$   (1.4)

In more compact form, the wavelet transform as an inner product is written as

$T(a,b) = \langle x(t), \psi_{a,b}(t) \rangle$
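As an illustration of the discretized transform (a minimal Python sketch, not part of the original text; the choice of the Mexican hat wavelet, the grids and the test signal are all assumptions made here), the wavelet transform $T(a,b)$ of Definition 1.2.2 can be approximated by a Riemann sum over a sampled signal:

```python
import numpy as np

def mexican_hat(t):
    """Mexican hat wavelet: psi(t) = (1 - t^2) * exp(-t^2 / 2)."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def cwt(x, t, scales, shifts, wavelet=mexican_hat):
    """Approximate T(a, b) = a^(-1/2) * integral of x(t) psi((t - b)/a) dt
    by a Riemann sum on the sample grid t (real-valued wavelet assumed)."""
    dt = t[1] - t[0]
    T = np.empty((len(scales), len(shifts)))
    for i, a in enumerate(scales):
        for j, b in enumerate(shifts):
            T[i, j] = np.sum(x * wavelet((t - b) / a)) * dt / np.sqrt(a)
    return T

t = np.linspace(-10.0, 10.0, 2001)
x = np.sin(2.0 * np.pi * 0.5 * t)          # simple oscillatory test signal
scales = np.array([0.25, 0.5, 1.0, 2.0])   # dilation parameter a
shifts = np.linspace(-5.0, 5.0, 41)        # translation parameter b
coeffs = cwt(x, t, scales, shifts)
```

Each row of `coeffs` holds the wavelet coefficients at one scale $a$, sampled over the shifts $b$; plotting the squared entries gives a crude scalogram.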
Definition 1.2.3. A function $\psi(t) \in L^2(\mathbb{R})$ is a discrete wavelet if the family of functions

$\psi_{j,k}(t) = 2^{j/2}\, \psi(2^j t - k)$   (1.5)

where $j$ and $k$ are arbitrary integers, is an orthonormal basis in the Hilbert space $L^2(\mathbb{R})$. We call $\psi(t)$ the mother wavelet.
Definition 1.2.4. The inverse wavelet transform is defined as

$x(t) = \dfrac{1}{C_\psi} \int_{-\infty}^{\infty} \int_{0}^{\infty} T(a,b)\, \psi_{a,b}(t)\, \dfrac{da\, db}{a^2}$

The inverse transform allows the original signal to be recovered from its wavelet transform by integrating over all scales $a$ and locations $b$. We note that for the inverse wavelet transform the original wavelet function is used, rather than its conjugate, which is used in the forward transformation.
Definition 1.2.5. The total energy contained in a signal $x(t)$ is defined as

$E = \int_{-\infty}^{\infty} |x(t)|^2\, dt = \|x(t)\|^2$

For example, the total energy of the Mexican hat wavelet $\psi(t) = (1 - t^2)\, e^{-t^2/2}$ is given by

$E = \int_{-\infty}^{\infty} |\psi(t)|^2\, dt = \int_{-\infty}^{\infty} \left[(1 - t^2)\, e^{-t^2/2}\right]^2 dt = \dfrac{3}{4}\sqrt{\pi}$

A plot of the squared magnitude of the Fourier transform against frequency for the wavelet gives its energy spectrum. For example, the Fourier energy spectrum of the Mexican hat wavelet is given by

$E_F(\omega) = |\hat{\psi}(\omega)|^2$

where the subscript $F$ is used to denote the Fourier spectrum.
Definition 1.2.6. The plot of the combination of the various vectors of coefficients at different scales (wavelengths) is called the scalogram. In other words, the scalogram is the energy density of the wavelet transform: it provides a good space-frequency representation of the signal, giving a graphical representation of the squares of the wavelet coefficients at the different scales. With a scalogram one can clearly see more details, identify the exact location at a particular depth, and detect low-frequency cyclicity in the signal. The scalogram surface highlights the location (depth) and scale (wavelength) of dominant energetic features within the signal.
8
Definition 1.2.7. A wavelet $\psi(t)$ is an orthonormal wavelet if

$\langle \psi_{j,k}, \psi_{l,m} \rangle = \int_{-\infty}^{\infty} \psi_{j,k}(t)\, \psi^*_{l,m}(t)\, dt = \begin{cases} 1, & j = l \text{ and } k = m \\ 0, & \text{otherwise} \end{cases}$   (1.6)

i.e. the product of each wavelet with every other wavelet in the same dyadic system integrates to zero.
Definition 1.2.8. The wavelet coefficients of a function $x(t)$, denoted by $d_{j,k}$ (also called detail coefficients), are defined as the inner product of $x(t)$ with $\psi_{j,k}(t)$; that is

$d_{j,k} = \langle x, \psi_{j,k} \rangle = \int_{-\infty}^{\infty} x(t)\, \psi_{j,k}(t)\, dt$   (1.7)

The series

$\sum_{j} \sum_{k} \langle x, \psi_{j,k} \rangle\, \psi_{j,k}(t)$

is called the wavelet series of $x$. The expression

$x(t) = \sum_{j} \sum_{k} \langle x, \psi_{j,k} \rangle\, \psi_{j,k}(t)$

is called the wavelet representation of $x$. $\psi_{j,k}(t)$ is more suited for representing the finer details of a signal as it oscillates rapidly. The wavelet coefficients $d_{j,k}$ measure the amount of fluctuation about the point $t = 2^{-j}k$ with a frequency determined by the dilation index $j$.
Definition 1.2.9. A sequence of vectors $\{x_n\}$ in a Hilbert space $H$ is called a Riesz basis if the following conditions are satisfied.

(i) There exist constants $A > 0$ and $B > 0$ such that

$A\, \|\{c_n\}\|_{\ell^2} \le \left\| \sum_n c_n x_n \right\| \le B\, \|\{c_n\}\|_{\ell^2}$

where $\|\{c_n\}\|_{\ell^2} = \left(\sum_n |c_n|^2\right)^{1/2}$ for every $\{c_n\} \in \ell^2(\mathbb{Z})$.

(ii) $\overline{\mathrm{span}}\{x_n\} = H$, that is, $H$ is spanned by $\{x_n\}$.

A sequence $\{x_n\}$ in $H$ satisfying (i) is called a Riesz sequence.
Definition 1.2.10. A sequence $\{V_j\}_{j \in \mathbb{Z}}$ of closed subspaces of $L^2(\mathbb{R})$ is a multiresolution analysis (MRA) if the following properties are satisfied:

(i) $V_j \subset V_{j+1}$ for all $j \in \mathbb{Z}$

(ii) $f(t) \in V_j \Leftrightarrow f(2t) \in V_{j+1}$ for all $j \in \mathbb{Z}$

(iii) $\bigcap_{j \in \mathbb{Z}} V_j = \{0\}$

(iv) $\overline{\bigcup_{j \in \mathbb{Z}} V_j} = L^2(\mathbb{R})$

(v) $f(t) \in V_0 \Rightarrow f(t-k) \in V_0$ for all $k \in \mathbb{Z}$.

(vi) There exists a function $\varphi \in V_0$, called the scaling function, such that the system $\{\varphi(t-k) : k \in \mathbb{Z}\}$ is an orthonormal basis in $V_0$.

For a given MRA $\{V_j\}$ in $L^2(\mathbb{R})$ with the scaling function $\varphi$, a wavelet is obtained in the following manner. Let the subspace $W_0$ of $L^2(\mathbb{R})$ be defined by the condition $V_1 = V_0 \oplus W_0$. For an integer $j$, we define $W_j$ by $V_{j+1} = V_j \oplus W_j$. Then $f(t) \in W_j \Leftrightarrow f(2t) \in W_{j+1}$ for all $j \in \mathbb{Z}$. Thus

$V_{j+1} = V_j \oplus W_j = V_{j-1} \oplus W_{j-1} \oplus W_j = \cdots$

From conditions (i) to (iv) of the above definition we obtain an orthogonal decomposition

$L^2(\mathbb{R}) = \bigoplus_{j \in \mathbb{Z}} W_j$

Let $\psi \in W_0$ be such that $\{\psi(t-k) : k \in \mathbb{Z}\}$ is an orthonormal basis in $W_0$. Any such function $\psi$ is a wavelet (this follows directly from the above two equations). If a wavelet is obtained from a multiresolution analysis in the way described above, then we say that it is associated with the multiresolution analysis. The following theorems provide a relationship between scaling functions, MRA, and wavelets.
Theorem 1.2.1. (Wojtaszczyk (1997)) Let $\varphi \in L^2(\mathbb{R})$ satisfy

(i) $\{\varphi(t-k) : k \in \mathbb{Z}\}$ is a Riesz sequence in $L^2(\mathbb{R})$,

(ii) $\varphi(t/2) = \sum_k a_k\, \varphi(t-k)$, with the series converging in $L^2(\mathbb{R})$, and

(iii) $\hat{\varphi}(\xi)$ is continuous at $0$ and $\hat{\varphi}(0) \neq 0$, where $\hat{\varphi}$ denotes the Fourier transform of $\varphi$.

Then the spaces $V_j = \overline{\mathrm{span}}\{\varphi(2^j t - k) : k \in \mathbb{Z}\}$, with $j \in \mathbb{Z}$, form an MRA.
Theorem 1.2.2. (Siddiqi (2004)) Let $\{V_j\}$ be an MRA with a scaling function $\varphi$. A function $\psi \in W_0$ is a wavelet if and only if

$\hat{\psi}(\xi) = e^{i\xi/2}\, \nu(\xi)\, \overline{m_\varphi(\xi/2 + \pi)}\, \hat{\varphi}(\xi/2)$

for some $2\pi$-periodic function $\nu(\xi)$ such that $|\nu(\xi)| = 1$, where

$m_\varphi(\xi) = \dfrac{1}{2} \sum_k a_k\, e^{-ik\xi}$

Each such wavelet has the property that $\overline{\mathrm{span}}\{\psi_{j,k} : k \in \mathbb{Z}\} = W_j$. Orthonormal dyadic discrete wavelets are associated with scaling functions and their dilation equations.
Theorem 1.2.3. (Walnut (2002)) There exists an $\ell^2$ sequence of coefficients $\{h(k)\}$ such that

$\varphi(t) = \sum_k h(k)\, \sqrt{2}\, \varphi(2t - k)$   (1.8)

in $L^2(\mathbb{R})$.

Definition 1.2.11. Let $\varphi(t)$ be the scaling function associated with an MRA $\{V_j\}$. The sequence $\{h(k)\}$ (or $\{h_k\}$) satisfying (1.8) is called the scaling filter associated with $\varphi(t)$.
There is a close connection between MRA and wavelets. The following theorem provides an algorithm for constructing a wavelet orthonormal basis given an MRA.

Theorem 1.2.4. (Walnut (2002)) Let $\{V_j\}$ be an MRA with scaling function $\varphi(t)$ and scaling filter $h(k)$. Define the wavelet filter $g(k)$ by

$g(k) = (-1)^k\, \overline{h(1-k)}$   (1.9)

and the wavelet $\psi(t)$ by

$\psi(t) = \sum_k g(k)\, \sqrt{2}\, \varphi(2t - k)$

Then $\{\psi_{j,k}(t)\}_{j,k \in \mathbb{Z}}$ is a wavelet orthonormal basis on $\mathbb{R}$. Alternatively, given any $J \in \mathbb{Z}$,

$\{\varphi_{J,k}(t)\}_{k \in \mathbb{Z}} \cup \{\psi_{j,k}(t)\}_{j \ge J,\, k \in \mathbb{Z}}$

is an orthonormal basis on $\mathbb{R}$.
Definition 1.2.12. We define, for each $j, k \in \mathbb{Z}$,

$\varphi_{j,k}(t) = 2^{j/2}\, \varphi(2^j t - k)$

It has the property

$\int_{-\infty}^{\infty} \varphi_{0,0}(t)\, dt = 1$

where $\varphi_{0,0}(t) = \varphi(t)$ is sometimes referred to as the father scaling function or father wavelet. The scaling function is associated with the smoothing of the signal.

Definition 1.2.13. The scaling function can be convolved with the signal to give the scaling (approximation) coefficients as

$S_{j,k} = \int_{-\infty}^{\infty} x(t)\, \varphi_{j,k}(t)\, dt = \langle x(t), \varphi_{j,k}(t) \rangle$   (1.10)

In the alternative form $\varphi(t) = \sum_k c_k\, \varphi(2t - k)$ of the dilation equation (with $c_k = \sqrt{2}\, h(k)$), the scaling coefficients must satisfy the constraint $\sum_k c_k = 2$.
Remark 1.2.1. In addition, to create an orthogonal system we require

$\sum_k c_k\, c_{k+2k'} = \begin{cases} 2, & k' = 0 \\ 0, & \text{otherwise} \end{cases}$

The same coefficients are used in reverse order with alternating signs to produce the differencing of the associated wavelet equation, i.e.

$\psi(t) = \sum_k (-1)^k\, c_{1-k}\, \varphi(2t - k)$

This construction ensures that the wavelets and their corresponding scaling functions are orthogonal.
The Mallat decomposition algorithm gives the approximation and detail coefficients as

$S_{j,k} = \sum_m h(m - 2k)\, S_{j+1,m}$   (1.11)

$d_{j,k} = \sum_m (-1)^m\, h(1 - m + 2k)\, S_{j+1,m}$   (1.12)

where $\{h(k)\}$ is the sequence characteristic of the associated wavelet. For example, for the Haar wavelet

$h(0) = h(1) = \dfrac{1}{\sqrt{2}}, \qquad h(k) = 0 \ \text{otherwise.}$

It is important to note that, given the scaling coefficients at any level $J$, all lower-level scaling function coefficients for $j < J$ can be computed recursively using (1.11), and all lower-level wavelet coefficients $d_{j,k}$ can be computed from the scaling function coefficients by applying (1.12).
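For the Haar filter the two decomposition formulas reduce to pairwise scaled sums and differences. The following is a minimal Python sketch (an illustration added here, not code from the thesis):

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_decompose(a):
    """One level of the Mallat decomposition with the Haar filter
    h(0) = h(1) = 1/sqrt(2): returns (approximation, detail) coefficients."""
    n = len(a) // 2
    approx = [(a[2 * k] + a[2 * k + 1]) / SQRT2 for k in range(n)]
    detail = [(a[2 * k] - a[2 * k + 1]) / SQRT2 for k in range(n)]
    return approx, detail

approx, detail = haar_decompose([4.0, 2.0, 5.0, 7.0])
# approx = [6/sqrt(2), 12/sqrt(2)], detail = [2/sqrt(2), -2/sqrt(2)]
```

Since the transform is orthonormal, the sum of the squared output coefficients equals the energy of the input sequence.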
Definition 1.2.14. By choosing an orthonormal wavelet basis $\psi_{j,k}(t)$, we can reconstruct the original signal in terms of the wavelet coefficients as

$x(t) = \sum_j \sum_k \langle x, \psi_{j,k} \rangle\, \psi_{j,k}(t)$

The discretization of the continuous wavelet transform, required for its practical implementation, involves a discrete approximation of the transform integral (i.e. a summation) computed on a discrete grid of scales $a$ and locations $b$. The inverse continuous wavelet transform is also computed as a discrete approximation. How close an approximation to the original signal is recovered depends mainly on the resolution of the discretization used.
The Mallat reconstruction algorithm is given by

$S_{j+1,m} = \sum_k h(m - 2k)\, S_{j,k} + \sum_k (-1)^m\, h(1 - m + 2k)\, d_{j,k}$   (1.13)

The scaling function coefficients at any level can be computed from only one set of lower-level scaling function coefficients and all the intermediate wavelet coefficients by applying (1.13).
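A short Python round-trip check (again an illustration with the Haar filter, not code from the thesis) shows that one reconstruction step exactly inverts one decomposition step:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_decompose(a):
    """One level of the Mallat decomposition with the Haar filter."""
    n = len(a) // 2
    return ([(a[2 * k] + a[2 * k + 1]) / SQRT2 for k in range(n)],
            [(a[2 * k] - a[2 * k + 1]) / SQRT2 for k in range(n)])

def haar_reconstruct(approx, detail):
    """One level of the Mallat reconstruction with the Haar filter:
    rebuilds the finer-level scaling coefficients."""
    a = []
    for s, d in zip(approx, detail):
        a.append((s + d) / SQRT2)  # even-indexed coefficient
        a.append((s - d) / SQRT2)  # odd-indexed coefficient
    return a

x = [4.0, 2.0, 5.0, 7.0, 1.0, 0.0, -3.0, 8.0]
roundtrip = haar_reconstruct(*haar_decompose(x))
# roundtrip recovers x up to floating-point rounding
```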
For wavelets with compact support, the sequences $\{h(k)\}$ and $\{g(k)\}$ contain only finitely many non-zero elements. For wavelets supported on all of $\mathbb{R}$, each sequence element is in general non-zero, but the elements decay exponentially. The discrete wavelet transform (DWT) is commonly introduced using a matrix or computational form. In matrix form we can represent the DWT through an orthogonal matrix $W$ whose rows are built from the wavelet filter at levels $j = 1, \ldots, J$ together with the scaling filter at the coarsest level, where $J$ is the largest level of the transform and the superscript $t$ indicates transpose. The DWT is applied to a vector $X$ of observations as $Z = WX$ and decomposes the data into sets of wavelet coefficients

$Z = (d_1^t, d_2^t, \ldots, d_J^t, a_J^t)^t$

where $d_j$ denotes the vector of detail coefficients at level $j$ and $a_J$ the vector of approximation coefficients at the coarsest level.
Definition 1.2.15. A wavelet $\psi(t)$ is said to have vanishing moments of order $N$ if

$\int_{-\infty}^{\infty} t^m\, \psi(t)\, dt = 0, \qquad m = 0, 1, \ldots, N-1$

If $\{\psi_{j,k}(t)\}$ is an orthonormal system on $\mathbb{R}$ and $\psi(t)$ is smooth, then it will have vanishing moments. The smoother $\psi(t)$ is, the greater the number of vanishing moments. The more scaling coefficients a wavelet has, the higher its number of vanishing moments and hence the higher the degree of polynomial it can suppress. However, the more scaling coefficients a wavelet has, the larger its support length and hence the less compact it becomes. This makes it less localized in the time domain and hence less able to isolate singularities in the signal.
Remark 1.2.2. A wavelet with many vanishing moments does a good job of
approximating smooth functions. By a smooth function, we mean one with a large
number of continuous derivatives.
Examples of Wavelets

Example 1.2.1. The Haar wavelet is

$\psi(t) = \begin{cases} 1, & 0 \le t < 1/2 \\ -1, & 1/2 \le t < 1 \\ 0, & \text{otherwise} \end{cases}$

The Haar wavelet is the simplest case of an orthonormal wavelet. Its scaling equation

$\varphi(t) = \varphi(2t) + \varphi(2t - 1)$

contains only two non-zero scaling coefficients, given by $c_0 = c_1 = 1$. The Haar scaling function is

$\varphi(t) = \begin{cases} 1, & 0 \le t < 1 \\ 0, & \text{otherwise} \end{cases}$

Here the domain of $\varphi(t)$ is $[0, 1]$. The corresponding Haar wavelet equation is

$\psi(t) = \varphi(2t) - \varphi(2t - 1)$

It is clear that $\int_{-\infty}^{\infty} \psi(t)\, dt = 0$ and $\psi(t)$ has compact support $[0, 1]$.
[Figure: Haar wavelet]
Example 1.2.2. The function defined by the equation

$\psi(t) = (1 - t^2)\, e^{-t^2/2}$

is the Mexican hat wavelet. This wavelet is, in fact, the negative of the second derivative of the Gaussian function $e^{-t^2/2}$, i.e.

$\psi(t) = -\dfrac{d^2}{dt^2}\, e^{-t^2/2} = (1 - t^2)\, e^{-t^2/2}$

[Figure: Mexican hat wavelet]

The Mexican hat wavelet has no discontinuity. This wavelet has no scaling function $\varphi$.
Example 1.2.3. The Morlet wavelet has the closed formula

$\psi(t) = C\, e^{-t^2/2} \cos(5t)$

The constant $C$ is used for normalization in view of reconstruction. As the scaling function for this wavelet does not exist, the analysis is not orthogonal.

[Figure: Morlet wavelet]
Neither of the above two wavelets, the Mexican hat and the Morlet, has compact support. They both have exponential decay and are appropriate for the continuous wavelet transform.
Example 1.2.4. The Daubechies wavelets (named after their inventor, the mathematician Ingrid Daubechies) are a family of orthogonal wavelets. This family defines a discrete wavelet transform which is characterized by a maximal number of vanishing moments for a given support. With each wavelet type of this class there is a scaling function (also called the father wavelet) which generates an orthogonal multiresolution analysis. Daubechies wavelets are widely used in solving a broad range of problems, e.g. self-similarity properties of a signal or fractal problems, signal discontinuities, etc. The Daubechies wavelets are not defined in terms of the resulting scaling and wavelet functions; in fact, these cannot be written down in closed form. Daubechies wavelets have compact support, that is, a finite number, $N$, of scaling coefficients. The Daubechies orthogonal wavelets D2-D20 are commonly used. The index number refers to the number of coefficients. Each wavelet has a number of zero moments or vanishing moments equal to half the number of coefficients. For example, D2 (which is nothing but the Haar wavelet) has one vanishing moment, D4 has two, etc. A vanishing moment limits the wavelet's ability to represent polynomial behaviour or information in a given signal. For example, D2, with one moment, easily encodes polynomials of one coefficient, or constant signal components. The Daubechies wavelet D4 encodes polynomials with two coefficients, i.e. constant and linear signal components, and D6 encodes polynomials with three coefficients, i.e. constant, linear and quadratic signal components.
The Daubechies D4 transform has four wavelet and scaling function coefficients. The scaling function coefficients are

$c_0 = \dfrac{1 + \sqrt{3}}{4\sqrt{2}}, \qquad c_1 = \dfrac{3 + \sqrt{3}}{4\sqrt{2}}, \qquad c_2 = \dfrac{3 - \sqrt{3}}{4\sqrt{2}}, \qquad c_3 = \dfrac{1 - \sqrt{3}}{4\sqrt{2}}$
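These coefficients can be checked numerically. The small Python sketch below (an illustration added here, not from the thesis) forms the wavelet coefficients from the scaling coefficients in reversed order with alternating signs, as in Remark 1.2.1, and verifies that the sum of the scaling coefficients is $\sqrt{2}$, that the double-shift orthogonality products cancel, and that the resulting wavelet filter has two vanishing moments, in agreement with the text:

```python
import math

s3 = math.sqrt(3.0)
d = 4.0 * math.sqrt(2.0)

# The four D4 scaling coefficients as listed above
c = [(1.0 + s3) / d, (3.0 + s3) / d, (3.0 - s3) / d, (1.0 - s3) / d]

# Wavelet coefficients: same numbers in reversed order, alternating signs
g = [c[3], -c[2], c[1], -c[0]]

sum_c = sum(c)                                   # equals sqrt(2)
shift2 = c[0] * c[2] + c[1] * c[3]               # double-shift orthogonality: 0
moment0 = sum(g)                                 # zeroth wavelet moment: 0
moment1 = sum(k * gk for k, gk in enumerate(g))  # first wavelet moment: 0
```

The two vanishing moments (`moment0` and `moment1` both zero) confirm that D4 suppresses constant and linear signal components.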
Note that Daubechies wavelets are also often defined by the number of zero moments they have, in which case the sequence runs as db1 (which is the Haar wavelet in this case), db2, etc.
[Figure: Daubechies wavelets]
Daubechies wavelets are quite asymmetric. To improve the symmetry while retaining the simplicity, Daubechies proposed symmlets as a modification of her original wavelets. She came up with symmlets by adjusting their phase during their construction. With $N$ coefficients, they have $N/2$ vanishing moments, support length $N - 1$ and filter length $N$. However, true symmetry (or antisymmetry) cannot be achieved for orthonormal bases with compact support, with one exception: the Haar wavelet, which is antisymmetric.
[Figure: Symmlets]
Example 1.2.5. Daubechies gave another family of wavelets called Coiflets. Coiflets (denoted by Coif$N$) are nearly symmetrical and have vanishing moments for both the scaling function and the wavelet.

An orthonormal wavelet system with compact support is called a Coifman wavelet system of degree $N$ if the moments of the associated scaling function $\varphi$ and wavelet $\psi$ satisfy the conditions

(i) $\int_{-\infty}^{\infty} \varphi(t)\, dt = 1$

(ii) $\int_{-\infty}^{\infty} t^m\, \varphi(t)\, dt = 0, \qquad m = 1, \ldots, N-1$

(iii) $\int_{-\infty}^{\infty} t^m\, \psi(t)\, dt = 0, \qquad m = 0, 1, \ldots, N-1$

[Figure: Coif wavelets]

In Coif$N$, $N$ denotes the order of the Coifman wavelets. The wavelet function has $2N$ moments equal to zero and the scaling function has $2N - 1$ moments equal to zero. The two functions have a support of length $6N - 1$. The Coiflets are much more symmetrical than the Daubechies wavelets. If $x$ is a smooth continuous-time signal, then for large $j$ the coefficient

$\langle x, \varphi_{j,k} \rangle \approx 2^{-j/2}\, x(2^{-j}k)$

If $x$ is a polynomial of sufficiently low degree, the approximation becomes an equality.
Example 1.2.6. The Meyer wavelet $\psi$ and its scaling function $\varphi$ are defined in the frequency domain.

[Figure: Meyer wavelet]
For certain applications, real symmetric wavelets are required. The biorthogonal family of wavelets (which come in pairs) exhibits the property of linear phase, which is needed for signal and image reconstruction; two wavelets are used, one for decomposition and the other for reconstruction, instead of the same single one. Using biorthogonal wavelets allows us to have perfectly symmetric and antisymmetric wavelets. Biorthogonal wavelets satisfy the biorthogonality condition, namely

$\int_{-\infty}^{\infty} \psi_{j,k}(t)\, \tilde{\psi}_{l,m}(t)\, dt = \begin{cases} 1, & j = l \text{ and } k = m \\ 0, & \text{otherwise} \end{cases}$

where $\tilde{\psi}$ denotes the dual wavelet.
1.3. Fractals
The word fractal was coined by Mandelbrot (1982, 1997) in his fundamental essay, from the Latin fractus, meaning broken, to describe objects that were too irregular to fit into a traditional geometrical setting. A fractal object is a geometrical structure which is everywhere continuous but nowhere differentiable. Broadly speaking, a fractal is an object which appears self-similar under varying degrees of magnification, in effect possessing symmetry across scale, with each small part of the object replicating the structure of the whole. The von Koch curve, the Sierpinski triangle and the Cantor set are among the most common examples of fractals. Fractals also occur as graphs of functions. Various phenomena such as atmospheric pressure, reservoir levels and stock market prices display fractal features when plotted as functions of time over fairly long time spans.
Methods of classical geometry and calculus are not suitable for the study of fractals, so better techniques were needed. The main tool of fractal geometry is dimension, in its many forms. Roughly, a dimension provides a description of how much space a set fills. It is a measure of the prominence of the irregularities of a set when viewed at very small scales, and it contains a lot of information about the geometrical properties of the set. It has become common practice to assign a fractal dimension to irregular surfaces in various fields such as topography, defect and fracture studies, growth phenomena, erosion and corrosion processes, catalysis and a large number of other areas in physics, chemistry, biology, geology, meteorology and materials science.
Heterogeneous media such as composites, porous materials and polymer blends are known to have complex microstructures. Although these physical structures look different in appearance, they can share the same fractal dimension. There are many definitions of dimension which give a non-integer, or fractal, dimension. These dimensions are particularly useful in characterizing fractal objects. The concept of dimension is closely associated with that of scaling. The two most common and most comprehensive definitions of dimension are the Euclidean dimension and the topological dimension. The Euclidean dimension is simply the number of coordinates required to specify the object. The topological dimension derives from the ability to cover the object with discs of small radius. The similarity dimension is used to characterize the construction of regular fractal objects. In general, if the similarity dimension is greater than the topological dimension of the object, then the object is a fractal and, more often than not, the fractal dimension is a non-integer value.
There are, however, many more concepts of dimension which produce fractal dimensions. The box-counting dimension is widely used in practice for estimating the dimension of a variety of fractal objects. The technique is not confined to estimating the dimension of objects in the plane, such as a coastline curve; it may be extended to probe fractal objects of high fractal dimension in multidimensional spaces. The box dimension, also called capacity, was introduced by Kolmogorov in 1958.
Definition 1.3.1. Let $F$ be any bounded subset of $\mathbb{R}^n$ and let $N_\delta(F)$ be the smallest number of sets of diameter at most $\delta$ which can cover $F$. Then the lower and upper box-counting dimensions of $F$ respectively are defined as

$\underline{\dim}_B F = \liminf_{\delta \to 0} \dfrac{\log N_\delta(F)}{-\log \delta}$

$\overline{\dim}_B F = \limsup_{\delta \to 0} \dfrac{\log N_\delta(F)}{-\log \delta}$

If these two dimensions are equal, then the common value is called the box-counting dimension or box dimension of $F$, that is

$\dim_B F = \lim_{\delta \to 0} \dfrac{\log N_\delta(F)}{-\log \delta}$

The box dimension quantifies how the size of a set varies when one changes the unit of measure.
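The definition can be illustrated numerically on the middle third Cantor set (a sketch added here, not part of the thesis; exact integer arithmetic is used to avoid floating-point boundary effects): counting boxes of side $\delta = 3^{-k}$ recovers $\log 2 / \log 3 \approx 0.6309$.

```python
import math

def cantor_endpoints(level):
    """Left endpoints of the 2**level intervals of the middle third Cantor
    construction, represented as integers in units of 3**(-level)."""
    pts = [0]
    for _ in range(level):
        pts = [3 * p for p in pts] + [3 * p + 2 for p in pts]
    return pts

def box_count(pts, level, k):
    """Number of boxes of side 3**(-k) needed to cover the point set."""
    scale = 3 ** (level - k)
    return len({p // scale for p in pts})

level = 10
pts = cantor_endpoints(level)
k = 6
n_boxes = box_count(pts, level, k)   # exactly 2**k boxes at scale 3**(-k)
estimate = math.log(n_boxes) / (k * math.log(3.0))
# estimate equals log 2 / log 3 = 0.6309...
```

Because $N_\delta(F) = 2^k$ exactly at the scale $\delta = 3^{-k}$, the ratio $\log N_\delta / (-\log \delta)$ is constant here, which is the self-similar structure that makes the limit exist.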
21
The special form of curves gives rise to several further definitions of dimension.

Definition 1.3.2. We define a curve or Jordan curve $C$ to be the image of an interval $[a, b]$ under a continuous bijection $f : [a, b] \to \mathbb{R}^n$. If $C$ is a curve and $\delta > 0$, we define $M_\delta(C)$ to be the maximum number of points $x_0, x_1, \ldots, x_m$ on the curve, in that order, such that $|x_k - x_{k-1}| = \delta$ for $k = 1, 2, \ldots, m$. Thus $(M_\delta(C) - 1)\,\delta$ may be thought of as the length of the curve $C$ measured using a pair of dividers with points set a distance $\delta$ apart. The divider dimension is defined as

$\lim_{\delta \to 0} \dfrac{\log M_\delta(C)}{-\log \delta}$

assuming that the limit exists.

Remark 1.3.1. The divider dimension of a curve is at least equal to the box dimension (assuming that they both exist). For example, in the case of the self-similar structure of the von Koch curve, these two dimensions are equal.
Definition 1.3.3. Let $U$ be any non-empty subset of $n$-dimensional Euclidean space $\mathbb{R}^n$. The diameter of $U$, denoted by $|U|$, is defined as

$|U| = \sup\{|x - y| : x, y \in U\},$

i.e. the greatest distance apart of any pair of points in $U$. If $\{U_i\}$ is a countable (or finite) collection of sets of diameter at most $\delta$ that covers $F$, that is

$F \subset \bigcup_{i=1}^{\infty} U_i \quad \text{with} \quad 0 < |U_i| \le \delta \ \text{for each } i,$

we say that $\{U_i\}$ is a $\delta$-cover of $F$.
Definition 1.3.4. Suppose that $F$ is a subset of $\mathbb{R}^n$ and $s$ is a non-negative number. For any $\delta > 0$ we define

$H_\delta^s(F) = \inf \left\{ \sum_{i=1}^{\infty} |U_i|^s : \{U_i\} \text{ is a } \delta\text{-cover of } F \right\}$   (1.14)

Thus one looks at all covers of $F$ by sets of diameter at most $\delta$ and seeks to minimize the sum of the $s$-th powers of the diameters. As $\delta$ decreases, the class of permissible covers of $F$ in (1.14) is reduced. Therefore, the infimum $H_\delta^s(F)$ increases, and so approaches a limit as $\delta \to 0$.
We write

$H^s(F) = \lim_{\delta \to 0} H_\delta^s(F)$   (1.15)

This limit exists for any subset $F$ of $\mathbb{R}^n$, though the limiting value can be $0$ or $\infty$. $H^s(F)$ is called the $s$-dimensional Hausdorff measure of $F$.

Definition 1.3.5. The Hausdorff dimension of $F$, denoted by $\dim_H F$, is defined as

$\dim_H F = \inf\{s \ge 0 : H^s(F) = 0\} = \sup\{s : H^s(F) = \infty\}$   (1.16)

so that

$H^s(F) = \begin{cases} \infty, & s < \dim_H F \\ 0, & s > \dim_H F \end{cases}$

If $s = \dim_H F$, then $H^s(F)$ may be zero or infinite, or may satisfy $0 < H^s(F) < \infty$. The dimension of the middle third Cantor set $F$ is $\dim_H F = \log 2 / \log 3 \approx 0.6309$, with $\frac{1}{2} \le H^s(F) \le 1$ if $s = \log 2 / \log 3$; that is, the Hausdorff measure of the middle third Cantor set at this critical exponent lies between 0.5 and 1.
Multifractals are fractal objects which cannot be completely described using a
single fractal dimension (monofractals). They have in fact an infinite number of
dimension measures associated with them. Signals that are singular at almost every
point are multi-fractals and they are encountered in the maintenance of economic
records, physiological data including heart records, electromagnetic fluctuations in
galactic radiation noise, textures in images of natural terrain, variations of traffic flow,
etc. Geometrically, the only qualitative difference between a fractal and a multifractal is that a multifractal appears to be equally complex, but with different characteristics, at different resolutions. The wavelet transform takes advantage of multifractal self-similarities in
order to compute the distribution of their singularities. This singularity spectrum is
used to analyze multifractal properties. The zooming capability of the wavelet
transform not only locates isolated singular events, but can also characterize more
complex multi-fractal signals having non-isolated singularities.
Definition 1.3.6. A function \(f\) is pointwise Lipschitz \(\alpha \ge 0\) at \(v\) if there exist \(K > 0\) and a polynomial \(p_v\) of degree \(m = \lfloor \alpha \rfloor\) such that
\[ |f(t) - p_v(t)| \le K |t - v|^{\alpha} \quad \text{for all } t \in \mathbb{R}. \]
A function \(f\) is uniformly Lipschitz \(\alpha\) over \([a, b]\) if it satisfies this condition for all \(v \in [a, b]\), with a constant \(K\) that is independent of \(v\).
The Lipschitz regularity of \(f\) at \(v\) or over \([a, b]\) is the supremum of the \(\alpha\) such that \(f\) is Lipschitz \(\alpha\).
If \(0 \le \alpha < 1\), then \(p_v(t) = f(v)\) and the Lipschitz condition becomes
\[ |f(t) - f(v)| \le K |t - v|^{\alpha} \quad \text{for all } t \in \mathbb{R}. \]
A function that is bounded but discontinuous at \(v\) is Lipschitz 0 at \(v\). If the Lipschitz regularity is \(\alpha < 1\) at \(v\), then \(f\) is not differentiable at \(v\), and \(\alpha\) characterizes the singularity type.
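As a concrete check of the pointwise condition, \(f(t) = \sqrt{t}\) has Lipschitz regularity \(1/2\) at \(v = 0\): the ratio \(|f(t) - f(0)| / |t|^{\alpha}\) stays bounded for \(\alpha = 1/2\) but blows up for any larger exponent. A small numerical sketch (the grid and the comparison exponent 0.8 are arbitrary choices for illustration):

```python
import numpy as np

# f(t) = sqrt(t) has a singularity of Lipschitz regularity 1/2 at v = 0:
# |f(t) - f(0)| <= K |t|**alpha holds with K = 1 for alpha = 1/2,
# but fails for any alpha > 1/2 as t -> 0.
f = np.sqrt
t = np.logspace(-12, 0, 200)          # points approaching the singularity

ratio_half = np.abs(f(t) - f(0)) / t**0.5   # bounded (identically 1 here)
ratio_08   = np.abs(f(t) - f(0)) / t**0.8   # diverges as t -> 0

print(ratio_half.max())
print(ratio_08.max())
```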
The following theorem gives a necessary and sufficient condition on the wavelet transform for estimating the Lipschitz regularity of \(f\) at a point \(v\). Here \(\psi\) is a wavelet with \(n\) vanishing moments.
Theorem 1.3.1. (Mallat (1999)) If \(f \in L^2(\mathbb{R})\) is Lipschitz \(\alpha \le n\) at \(v\), then there exists \(A > 0\) such that
\[ |Wf(u, s)| \le A s^{\alpha + 1/2} \left( 1 + \left| \frac{u - v}{s} \right|^{\alpha} \right) \quad \text{for all } (u, s) \in \mathbb{R} \times \mathbb{R}^{+}. \]
Conversely, if \(\alpha < n\) is not an integer and there exist \(A > 0\) and \(\alpha' < \alpha\) such that
\[ |Wf(u, s)| \le A s^{\alpha + 1/2} \left( 1 + \left| \frac{u - v}{s} \right|^{\alpha'} \right) \quad \text{for all } (u, s) \in \mathbb{R} \times \mathbb{R}^{+}, \]
then \(f\) is Lipschitz \(\alpha\) at \(v\).
Definition 1.3.7. A set \(S \subset \mathbb{R}^n\) is said to be self-similar if it is the union of disjoint subsets \(S_1, \ldots, S_k\) that can be obtained from \(S\) by a scaling, translation and rotation. This self-similarity often implies an infinite multiplication of details, which creates irregular structures. The triadic Cantor set and the von Koch curve are simple examples.
Definition 1.3.8. Let \(S_\alpha\) be the set of all points \(t \in \mathbb{R}\) where the pointwise Lipschitz regularity of \(f\) is equal to \(\alpha\). The spectrum of singularity \(D(\alpha)\) of \(f\) is defined as the fractal dimension of \(S_\alpha\). The support of \(D(\alpha)\) is the set of \(\alpha\) such that \(S_\alpha\) is not empty.
The spectrum of singularity measures the global repartition of singularities having
different Lipschitz regularity.
This spectrum was originally introduced by Frisch and Parisi (1985) to analyze the homogeneity of multifractal measures that model the energy dissipation of turbulent fluids. It was then extended by Muzy, Bacry and Arneodo (Muzy et al. (1994)) to multifractal signals. The fractal dimension interpretation shows that if we make a disjoint cover of the support of \(f\) with intervals of size \(s\), then the number of intervals that intersect \(S_\alpha\) is
\[ N_\alpha(s) \sim s^{-D(\alpha)}. \]
The singularity spectrum gives the proportion of Lipschitz \(\alpha\) singularities that appear at any scale \(s\).
One cannot compute pointwise Lipschitz regularity of a multifractal because its
singularities are not isolated, and the finite numerical resolution is not sufficient to
discriminate them. It is however possible to measure the singularity spectrum of
multifractals from the wavelet transform local maxima, using a global partition
function introduced by Arneodo, Bacry and Muzy (Muzy et al. (1994)).
Let \(\psi\) be a wavelet with \(n\) vanishing moments. Mallat states that if \(f\) has a pointwise Lipschitz regularity \(\alpha_0 < n\) at \(v\), then the wavelet transform \(Wf(u, s)\) has a sequence of modulus maxima that converges towards \(v\) at fine scales. The set of maxima at the scale \(s\) can thus be interpreted as a covering of the singular support of \(f\) with wavelets of scale \(s\).
At these maxima locations,
\[ |Wf(u, s)| \sim s^{\alpha_0 + 1/2}. \]
Let \(\{u_p(s)\}_{p \in \mathbb{Z}}\) be the positions of all local maxima of \(|Wf(u, s)|\) at a fixed scale \(s\). The partition function \(Z\) measures the sum at a power \(q\) of all these wavelet modulus maxima:
\[ Z(q, s) = \sum_p |Wf(u_p, s)|^q. \]
For each \(q \in \mathbb{R}\), the scaling exponent \(\tau(q)\) measures the asymptotic decay of \(Z(q, s)\) at fine scales \(s\):
\[ \tau(q) = \liminf_{s \to 0} \frac{\log Z(q, s)}{\log s}. \]
This typically means that \(Z(q, s) \sim s^{\tau(q)}\).
The following theorem relates \(D(\alpha)\) to the Legendre transform of \(\tau(q)\) for self-similar signals. This result was established in Bacry et al. (1993) for a particular class of fractal signals and generalized by Jaffard (1997).
Theorem 1.3.2. (Arneodo, Bacry, Jaffard, Muzy) (Mallat (1999))
Let \(\Lambda = [\alpha_{\min}, \alpha_{\max}]\) be the support of \(D(\alpha)\). Let \(\psi\) be a wavelet with \(n > \alpha_{\max}\) vanishing moments. If \(f\) is a self-similar signal, then
\[ \tau(q) = \min_{\alpha \in \Lambda} \left( q\left(\alpha + \tfrac{1}{2}\right) - D(\alpha) \right). \quad (1.20) \]
Theorem 1.3.3. (Mallat (1999)) (a) The scaling exponent \(\tau(q)\) is a convex and increasing function of \(q\).
(b) The Legendre transform (1.20) is invertible if and only if \(D(\alpha)\) is convex, in which case
\[ D(\alpha) = \min_{q \in \mathbb{R}} \left( q\left(\alpha + \tfrac{1}{2}\right) - \tau(q) \right). \]
The spectrum \(D(\alpha)\) of self-similar signals is convex.
Theorem 1.3.4. (Mallat (1999)) Let \(\psi = (-1)^p \theta^{(p)}\), where \(\theta\) is a Gaussian. For any \(f \in L^2(\mathbb{R})\), the modulus maxima of \(Wf(u, s)\) belong to connected curves that are never interrupted when the scale decreases.
To compute \(D(\alpha)\), we first calculate \(Z(q, s) = \sum_p |Wf(u_p, s)|^q\), then derive the decay scaling exponent \(\tau(q)\), and finally compute \(D(\alpha)\) with a Legendre transform. If \(q < 0\), then the value of \(Z(q, s)\) depends mostly on the small-amplitude maxima \(|Wf(u_p, s)|\). Numerical calculations may then become unstable. To avoid introducing spurious modulus maxima created by numerical errors in regions where \(f\) is nearly constant, wavelet maxima are chained to produce maxima curves across scales. If \(\psi = (-1)^p \theta^{(p)}\), where \(\theta\) is a Gaussian, then by Theorem 1.3.4 all maxima lines \(u_p(s)\) define curves that propagate up to the limit \(s = 0\). All maxima lines that do not propagate up to the finest scale are thus removed in the calculation of \(Z(q, s)\).
The calculation of the spectrum \(D(\alpha)\) proceeds as follows.
1. For the maxima, we compute \(Wf(u, s)\) and the modulus maxima at each scale \(s\). Then chain the wavelet maxima across scales.
2. The partition function is computed as
\[ Z(q, s) = \sum_p |Wf(u_p, s)|^q. \]
3. For the scaling, compute \(\tau(q)\) with a linear regression of \(\log_2 Z(q, s)\) as a function of \(\log_2 s\):
\[ \log_2 Z(q, s) \approx \tau(q) \log_2 s + C(q). \]
4. The spectrum is computed as
\[ D(\alpha) = \min_{q \in \mathbb{R}} \left( q\left(\alpha + \tfrac{1}{2}\right) - \tau(q) \right). \]
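The steps above can be sketched on synthetic data. The thesis performs this computation with a Morlet CWT in MATLAB/Fraclab; the Python fragment below instead fabricates the modulus maxima of an idealized monofractal (Lipschitz regularity \(\alpha_0 = 0.4\) at every point, so \(|Wf(u_p, s)| \sim s^{\alpha_0 + 1/2}\) with about \(1/s\) maxima per scale — all assumptions of the example) and then runs steps 2-4 verbatim:

```python
import numpy as np

# Synthetic modulus maxima of a monofractal: at scale s there are ~1/s
# maxima, each of amplitude s**(alpha0 + 1/2). This stands in for a CWT.
alpha0 = 0.4
scales = 2.0 ** -np.arange(4, 12)            # dyadic fine scales
maxima = {s: np.full(int(round(1 / s)), s ** (alpha0 + 0.5)) for s in scales}

qs = np.linspace(-5, 5, 41)

# Step 2: partition function Z(q, s) = sum_p |Wf(u_p, s)|**q.
Z = np.array([[np.sum(maxima[s] ** q) for s in scales] for q in qs])

# Step 3: tau(q) from a linear regression of log2 Z(q, s) on log2 s.
log_s = np.log2(scales)
tau = np.array([np.polyfit(log_s, np.log2(Zq), 1)[0] for Zq in Z])

# Step 4: Legendre transform D(alpha) = min_q (q(alpha + 1/2) - tau(q)).
alphas = np.linspace(0.0, 1.0, 101)
D = np.array([np.min(qs * (a + 0.5) - tau) for a in alphas])

i2 = int(np.argmin(np.abs(qs - 2.0)))
print(tau[i2])               # ~ q(alpha0 + 1/2) - 1 = 0.8 at q = 2
print(alphas[np.argmax(D)])  # spectrum peaks at alpha0 = 0.4
```

The regression recovers \(\tau(q) = q(\alpha_0 + \tfrac{1}{2}) - 1\), and the Legendre transform collapses to a single point \(D(\alpha_0) = 1\), as expected for a monofractal.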
Definition 1.3.9. Let \(f\) be a multifractal whose spectrum of singularity \(D(\alpha)\) is calculated from \(\tau(q)\). If a regular signal \(g\) is added to \(f\), then the singularities are not modified and the singularity spectrum of \(\tilde{f} = f + g\) remains unchanged. We study the effect of this smooth perturbation on the spectrum calculation.
The wavelet transform of \(\tilde{f}\) is
\[ W\tilde{f}(u, s) = Wf(u, s) + Wg(u, s). \]
Let \(\tau(q)\) and \(\tilde{\tau}(q)\) be the scaling exponents of the partition functions \(Z(q, s)\) and \(\tilde{Z}(q, s)\) calculated from the modulus maxima of \(Wf(u, s)\) and \(W\tilde{f}(u, s)\), respectively. We denote by \(D(\alpha)\) and \(\tilde{D}(\alpha)\) the Legendre transforms of \(\tau(q)\) and \(\tilde{\tau}(q)\), respectively.
The following theorem gives a relation between \(\tilde{\tau}(q)\) and \(\tau(q)\).
Theorem 1.3.5. (Arneodo, Bacry, Muzy) (Mallat 1999) Let \(\psi\) be a wavelet with exactly \(n\) vanishing moments. Suppose that \(f\) is a self-similar function.
(a) If \(g\) is a polynomial of degree \(p < n\), then \(\tilde{\tau}(q) = \tau(q)\) for all \(q \in \mathbb{R}\).
(b) If \(g^{(n)}\) is almost everywhere non-zero, then
\[ \tilde{\tau}(q) = \begin{cases} \tau(q) & \text{if } q > q_c, \\ \left(n + \tfrac{1}{2}\right) q & \text{if } q \le q_c, \end{cases} \]
where \(q_c\) is defined by \(\tau(q_c) = \left(n + \tfrac{1}{2}\right) q_c\).
1.4. Time Series
A time series (or time sequence) is a sequence of real numbers, each number
representing a value at a time point. In its simplest form, a time series is a collection
of numerical observations made at discrete and equal time intervals. Usually each
observation is associated with a particular instant of time or interval of time, and it is
this that provides the ordering. The observations could equally well be associated with points along a line, but whenever they are ordered by a single variable, we conventionally refer to the sequence as a time series. We generally assume that time series values are equally spaced. One may observe various properties of time series signals, such as trends, abrupt changes, drifts and self-similarities. Some of the important situations where time series occur are in the field of economics, where one encounters stock market quotations, weekly inflation rates or foreign exchange rates. In medical sciences one
needs to study time series of influenza cases during certain period of time, blood
pressure measurements observed over time, electrocardiogram data and functional
magnetic resonance imaging of brain wave time series patterns to study how the brain
reacts to certain stimuli under various experimental situations. The other important
applications are in the field of physical engineering and environmental sciences, for
example time series measurements acquired in the atmospheric boundary layer, of
rainfall for agriculture and flood control, of temperature variation and wind pressure.
Time series are also observed in the case of nuclear reactors, global warming, earthquakes in a given area, fish populations, El Niño and speech data. In brief, time series have applications in practically all fields of modern life.
The analysis of the experimental data that is observed at different points in time is
called time series analysis. Wavelet analysis of a time series (signal) is the study of properties such as trends, abrupt changes, drift, denoising (removing unwanted components), seasonality (looking for periodicity) and self-similarity, by breaking the time series signal into scaled and shifted versions of the original (mother) wavelet.
1.5. Hurst Exponent
There are many processes, which have a random (stochastic) component, but
also exhibit some predictability between an element and the next. In statistics, this is
sometimes described by the autocorrelation function (the correlation of data set with
shifted version of the data set). The autocorrelation is one measure of whether a past
value can be used to predict a future value. A random process that has some degree of
autocorrelation is referred to as a long memory process (one with long-range dependence). River flow exhibits this kind of long-term dependence. The hydrologist Hurst (1951) studied Nile river flows and reservoir modeling. In the recent past, applications of the Hurst exponent have attracted the attention of researchers working in different fields.
Estimation of the Hurst exponent for a time series data set provides a measure of
whether the data is a pure random walk or has some underlying trends. Another way
to state this is that a random process with an underlying trend has some degree of
autocorrelation. When the autocorrelation has a very long (or mathematically infinite)
decay, this kind of Gaussian process is sometimes referred to as a long memory
process.
The values of the Hurst exponent range between 0 and 1. A value of 0.5 indicates a
true random walk (a Brownian time series). In a random walk there is no correlation
between any element and a future element. A Hurst exponent value H, 0.5 < H < 1, indicates “persistent behavior” (or a positive autocorrelation): if there is an increase from step \(t_{i-1}\) to \(t_i\), there will probably be an increase from step \(t_i\) to \(t_{i+1}\). The same is true for decreases, where a decrease will tend to follow a decrease. A Hurst exponent value 0 < H < 0.5 will exist for a time series with “anti-persistent behavior” (or negative autocorrelation): here an increase will tend to be followed by a decrease, or a decrease by an increase. This behavior is also sometimes called “mean reversion”.
The Hurst exponent is also directly related to the fractal dimension, which gives a measure of the roughness of a surface. Fractal dimensions and their refinements have been used in diverse fields. The relationship between the Hausdorff fractal dimension D and the Hurst exponent H is \(D = 2 - H\). The larger the Hurst exponent, the smaller is the fractal dimension and the smoother is the surface.
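A minimal sketch of the rescaled range (R/S) estimator follows (illustrative Python, not the Benoit implementation used in Chapter 5; the window sizes and the white-noise test series are arbitrary choices of the example): the series is split into windows of size n, the range of the cumulative deviations in each window is divided by the window's standard deviation, and the slope of log(R/S) against log n estimates H.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a series by the rescaled range (R/S)
    method: average R/S over windows of size n, then regress log(R/S) on log n."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    sizes = [n for n in 2 ** np.arange(3, int(np.log2(N))) if n >= min_chunk]
    log_n, log_rs = [], []
    for n in sizes:
        rs_vals = []
        for start in range(0, N - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviation from the mean
            R = dev.max() - dev.min()       # range of the cumulative deviation
            S = w.std()                     # standard deviation of the window
            if S > 0:
                rs_vals.append(R / S)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)  # the slope estimates H
    return slope

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)  # increments of a Brownian walk: H near 0.5
print(hurst_rs(noise))
```

On uncorrelated Gaussian increments the estimate comes out near 0.5 (slightly above, owing to the well-known small-sample bias of R/S).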
Cross-correlation and autocorrelation are closely related to the Hurst exponent and to the analysis of real-world systems.
Definition 1.5.1. We say that a data set exhibits autocorrelation if the value at time \(t_i\) is correlated with the value at time \(t_{i+\tau}\), where \(\tau\) is some time increment in the future. In a long memory process the autocorrelation decays over time, and this decay follows a power law, namely
\[ \rho(\tau) = C \tau^{-\gamma}, \]
where \(C\) is a constant and \(\rho(\tau)\) is the autocorrelation function with lag \(\tau\).
For the given data \(x_1, x_2, \ldots, x_n\) observed at times \(t_1, t_2, \ldots, t_n\), the autocorrelation function is defined as
\[ \rho(\tau) = \frac{\sum_{t=1}^{n-\tau} (x_t - \bar{x})(x_{t+\tau} - \bar{x})}{\sum_{t=1}^{n} (x_t - \bar{x})^2}, \]
where \(\bar{x}\) is the mean of the observations.
The data observations used here are equally spaced.
However, instead of being a correlation between two different variables, autocorrelation is the correlation between two values of the same variable, at times \(t_i\) and \(t_{i+\tau}\). The autocorrelation function is used to detect non-randomness in the data and to identify an appropriate time series model if the data are not random. When autocorrelation is used to detect non-randomness, usually only the first (lag \(\tau = 1\)) autocorrelation is of interest. When the autocorrelation is used to identify an appropriate time series model, the autocorrelations are plotted for many lags \(\tau\).
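The estimator above can be sketched as follows (illustrative Python; the period-20 sine test signal is an assumption of the example):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation for lags 0..max_lag, using the estimator
    rho(tau) = sum (x_t - xbar)(x_{t+tau} - xbar) / sum (x_t - xbar)^2."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    denom = np.sum(d * d)
    return np.array([np.sum(d[:len(x) - k] * d[k:]) / denom
                     for k in range(max_lag + 1)])

# A signal with period 20: the autocorrelation is near +1 at lag 20
# (a full period) and near -1 at lag 10 (half a period).
t = np.arange(400)
x = np.sin(2 * np.pi * t / 20)
r = acf(x, 25)
print(r[0], r[10], r[20])
```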
In general the autocorrelation function (Mantegna and Stanley (2000)) \(\rho(\tau)\) is defined as
\[ \rho(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t)\, x(t + \tau)\, dt, \]
whereas in the discrete case it is defined as
\[ \rho(\tau) = \frac{1}{N} \sum_{t=1}^{N} x(t)\, x(t + \tau), \]
where the lags \(\tau\) are positive numbers. Here \(N\) represents the total number of data points.
The exponent \(\gamma\) (in the above definition of autocorrelation) is related to the Hurst exponent by the equation \(H = 1 - \gamma/2\).
Definition 1.5.2. The cross-correlation coefficient \(r\), which is a measure of the linear association between two variables \(x\) and \(y\), is defined as
\[ r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2 \sum_{i=1}^{n} (y_i - \bar{y})^2}}. \]
A positive value of the coefficient \(r\) indicates that as one variable increases the other tends to increase, whereas a negative value of \(r\) indicates that as one variable increases the other tends to decrease.
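A direct transcription of this definition (illustrative Python; the two linearly related test series are invented for the example):

```python
import numpy as np

def cross_corr(x, y):
    """Cross-correlation (Pearson) coefficient of two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(cross_corr(x, 2 * x + 3))  # exact increasing linear relation: +1
print(cross_corr(x, -x + 10))    # exact decreasing linear relation: -1
```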
1.6. Meteorology and Stock Market
In this section, we briefly discuss the concepts of meteorology and stock market.
1.6.1. Meteorology
Meteorology is a branch of science that deals with the atmosphere of a planet,
particularly that of the Earth. It is essentially an inter-disciplinary science as the
atmosphere, land and ocean constitute an integrated system. The most important application of meteorology is the analysis and prediction of weather: the state of the atmosphere at a given time and place with regard to temperature, air pressure, wind, humidity, cloudiness and precipitation. The term weather is restricted to conditions
over short periods of time. Conditions over long periods, generally at least 30-50 years, are referred to as climate. In other words, the climate is defined as the average condition of the atmosphere near the earth's surface over a long period of time, taking into account temperature, precipitation, humidity, wind speed, barometric pressure and other phenomena.
There are different kinds of routine meteorological observations. Some of them are
made with simple instruments like the thermometer for measuring temperature or the
anemometer for recording wind speed. The observing techniques have become
increasingly complex in recent years and satellites have now made it possible to
monitor the weather globally. Countries around the world exchange the weather
observations through fast telecommunication channels. These are plotted on weather
charts and analyzed by meteorologists at forecasting centers. Weather forecasts are
then made using modern computers and supercomputers. Weather information and
forecasts are of vital importance to many activities like agriculture, aviation, shipping,
fisheries, tourism, defence, industrial projects, water management and disaster
mitigation. Recent advances in satellite and computer technology have led to
significant progress in meteorology and its utility in various fields.
Dynamic meteorology deals with a wide range of hydrodynamical equations, from the global scale down to small turbulent eddies. It attempts to describe the
atmospheric processes through mathematical equations which together are called a
numerical model. After defining the initial state of the atmosphere and ocean, the
equations are solved to derive a final state, thus enabling a weather prediction to be
made. The process of solving the equations is very complicated and requires powerful
computers.
Like agriculture, there are many human activities which are affected by weather and
for which meteorologists can provide valuable inputs. Applied meteorologists use
weather information and adopt the findings of theoretical research to suit a specific
application; for example, design of aircraft, control of air pollution, architectural
design, urban planning, exploitation of wind and solar energy, air-conditioning,
development of tourism, etc. Individual studies within meteorology include aeronomy (the study of the physics of the upper atmosphere), aerology (the study of the free air not adjacent to the earth's surface), applied meteorology (the application of weather data to specific practical problems), dynamic meteorology (the study of atmospheric motions, which also includes the meteorology of other planets and satellites in the solar system), and physical meteorology, which focuses on the physical properties of the atmosphere.
The research on meteorology since its inception with Aristotle's Meteorologica (c. 340 B.C.) has reached new heights through various organizations carrying out research in this
field. The National Oceanic and Atmospheric Administration (NOAA) has the major
government responsibility in the United States for monitoring and forecasting the
weather and conducting meteorological research. The Air Weather Service and the Fleet Numerical Weather Central have similar responsibilities within the U.S. Air Force and U.S. Navy, respectively; space applications to meteorology are researched by the
National Aeronautics and Space Administration (NASA) as well as by the National
Environmental Satellite Service, which is under the auspices of NOAA. In addition to
a host of universities conducting meteorological research, there is the National Center
for Atmospheric Research, which is operated by an affiliation of universities and
sponsored by the U.S. National Science Foundation. The Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy, organizes various kinds of activities for promoting the teaching of and research in meteorology and the study of climatic change, in collaboration with institutions such as the Center for Ocean-Land-Atmosphere Studies (COLA), Institute of Global Environment and Society, U.S.A., the European Centre for Medium-Range Weather Forecasts (ECMWF), U.K., the Center for Global Atmospheric Modeling, U.K., the International Research Institute for Climate and Society, Palisades, U.S.A., the School of Earth and Environmental Sciences, National University, Republic of Korea, and the Indian Institute of Tropical Meteorology, Pune, India,
etc. An interesting account of applications of emerging areas in Mathematics to
meteorology is given by A. Fourier, Yale University, American Meteorological
Society AMS Journals online 57, Issue 23(D), 3856-3880. An updated account of
seasonal predictability regions can be seen in the report of activity SMR: 1767, ICTP,
Trieste, Italy held during 7 to 18 August 2006. A number of private companies also
engage in operational and research meteorological activities.
The agricultural industry and hydroelectric power generation in India are mainly dependent on rainfall, particularly during the rainy season (June to September) and the post-monsoon season (October to December). Using a wavelet approach, we have conducted a detailed study of the rainfall time series data of India for different seasons from 1813 to 1995, using different wavelets in the MATLAB wavelet toolbox.
1.6.2. Stocks and Stock Market
'Finance' is one of the fastest developing areas of the modern corporate world. This, together with the sophistication of modern financial products, provides a rapidly growing impetus for new mathematical models and modern mathematical methods; the area is an expanding source of novel and relevant real-world mathematics. There are many kinds of financial markets, but the most important ones are:
Stock markets, which deal in shares;
Bond markets, which deal in government and other bonds;
Currency markets or foreign exchange markets, where currencies are bought and sold;
33
Commodity markets, where physical assets such as oil, gold, copper, wheat etc. are
traded;
Futures and options markets, on which derivative products are traded.
Stocks are shares (also known as equities) in the ownership of a company. A company's shares are the many equal parts into which its ownership is divided. A company that needs to raise money can do so by selling shares in itself to investors. The company is then 'owned' by its shareholders; if the company makes a profit, part of this may be paid out to shareholders as a dividend of so much per share, and if the company is taken over or otherwise wound up, the proceeds (if any) are distributed to the shareholders. Shares thus have a value that reflects the views of investors about the likely future dividend payments and capital growth of the company; this value is quantified by the price at which they are bought and sold on the stock exchanges. Shares can be bought as an investment by people called shareholders. A share index is an indicator of the state of the stock market; it is based on the combined share prices of a set of companies.
As markets have become more sophisticated, contracts more complex than simple buy/sell trades have been introduced. Known as financial derivatives, derivative securities, derivative products, contingent claims or just derivatives, they can give investors of all kinds a great range of opportunities to tailor their dealings to their investment needs. Financial options are the most common example of derivative trading.
The stock exchange, 'share market' or 'bourse' is a mutual organization for traders or 'stock brokers' who trade in different company securities and stocks. Companies or businesses have to be 'listed' on the bourses in order for any trading or exchange in their 'shares' or equities to be carried out. Stock markets are also the place for trading in units and bonds issued by the government. New York, Chicago, Frankfurt, London and Bombay have well-known share markets.
The Bombay Stock Exchange (BSE) index and the National Stock Exchange (NSE) index belong to India. The BSE is the oldest stock exchange in Asia; it was started in 1875. The BSE index is a market-capitalization-weighted index of 30 stocks of sound Indian and multinational companies. The NSE was established to provide access to investors from all over India. The NSE started equity operations in November 1994 and operations in derivatives in June 2000. The NSE index, also known as the Nifty 50, is determined from 50 stocks of companies taken from 23 sectors of the economy. Of India's 23 stock exchanges, equity trading is most active on the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE).
Like most other markets, the stock market economy also depends on a number of factors, with investor confidence being one of the keys. The amount of money that an investor will put into a share of a particular company depends on his perception of whether the company will do well in the future or has been doing well in the past. By putting his money into the share of a company, the person becomes entitled to a share of the profit or loss the company makes. The initial offering of stocks and bonds is carried out in the primary market, whereas trading of securities happens in the secondary market. The exchange of stocks, however, is the most important function of the stock market.
The Stock Exchange Index is a barometer of the economy. At the Stock Exchange,
share prices rise and fall depending, largely, on market forces. Share prices tend to rise
or remain stable when companies and the economy in general show signs of stable
growth. Therefore the movement of share prices can be an indicator of the general
trend in the economy.
The development of India's equity capital markets has taken a sharply progressive trajectory, largely reflecting the government's laissez-faire approach in the segment. At 90% of GDP, its size is comparable to the stock markets of other emerging countries. Benchmarking the risk/return characteristics of India's stock markets against the world average shows that India's stock market has historically been more volatile while its returns have underperformed. Only since 2006 has India's stock market begun to outperform the world index, as the momentum to liberalize the economy gathered pace and investors began to take notice. India's economy is expected to benefit enormously from the process of gradual capital market liberalization. Empirical evidence has shown that emerging market economies that have heralded changes in their financial markets experienced higher growth and investment. India is no exception, with per-capita GDP and domestic investment rising post-liberalization. India's growth potential can experience a sustained pick-up if it stays on the path of reforming its capital markets. Improving macroeconomic fundamentals, a sizeable
skilled labour force and greater integration with the world economy have increased India's global competitiveness, placing the country on the radar screens of investors the world over. There are strong indications that India's economic transformation is irreversible.
The distribution of price fluctuations is important from a theoretical point of view and helpful in understanding the dynamics of the stock market. We have studied the behavior of the daily closings of two Indian stock market indices, the BSE 100 and the Nifty 50, along with the stocks of the British Bank and the Bank of Riyadh.
1.7. ANFIS (Adaptive Network Based Fuzzy Inference System)
System modeling based on conventional mathematical tools (e.g., differential
equations) is not well suited for dealing with ill-defined and uncertain systems. On the
other hand, a fuzzy inference system employing fuzzy if-then rules can model the
qualitative aspects of human knowledge and reasoning processes without employing
precise quantitative analyses. This fuzzy modeling or fuzzy identification, first
explored systematically by Takagi and Sugeno (1985), has found numerous practical
applications in control, prediction and inference (Kandel (1988, 1992)).
Fuzzy if-then rules or fuzzy conditional statements are expressions of the form IF A THEN B, where A and B are labels of fuzzy sets characterized by appropriate membership functions. Due to their concise form, fuzzy if-then rules are often employed to capture the imprecise modes of reasoning that play an essential role in the human ability to make decisions in an environment of uncertainty and imprecision. For example, in the statement 'If pressure is high, then volume is small', pressure and volume are linguistic variables (Zadeh (1973)), while high and small are linguistic values or labels that are characterized by membership functions.
An adaptive network, as its name implies, is a network structure consisting of nodes
and directional links through which the nodes are connected. Moreover, part or all of
the nodes are adaptive, which means each output of these nodes depends on the
parameter(s) pertaining to this node, and the learning rule specifies how these
parameters should be changed to minimize a prescribed error measure.
An adaptive network is a multi-layer feed forward network in which each node
performs a particular function (node function) on incoming signals as well as a set of
parameters pertaining to this node. The nature of the node functions may vary from
node to node, and the choice of each node function depends on the overall input-
output function which the adaptive network is required to carry out. To reflect
different adaptive capabilities, both circle and square nodes are used in an adaptive
network. The parameter set of an adaptive network is the union of parameter sets of
each adaptive node. In order to achieve a desired input-output mapping, these
parameters are updated according to given training data.
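A minimal first-order Sugeno inference step can be sketched as follows (the membership centers, widths and linear consequents are invented for illustration and are not the ANFIS configuration used in Chapter 6):

```python
import math

def gauss(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sugeno(x):
    """First-order Sugeno inference with two hypothetical rules:
       R1: IF x is LOW  (center 0)  THEN y = 2x + 1
       R2: IF x is HIGH (center 10) THEN y = x - 1
    The output is the firing-strength-weighted average of the consequents."""
    w1 = gauss(x, 0.0, 3.0)    # firing strength of rule 1
    w2 = gauss(x, 10.0, 3.0)   # firing strength of rule 2
    y1 = 2 * x + 1             # consequent of rule 1
    y2 = x - 1                 # consequent of rule 2
    return (w1 * y1 + w2 * y2) / (w1 + w2)

print(sugeno(0.0))    # dominated by rule 1: close to 1
print(sugeno(10.0))   # dominated by rule 2: close to 9
print(sugeno(5.0))    # halfway: the average of the two consequents
```

In ANFIS, the centers and widths of the membership functions and the coefficients of the linear consequents are exactly the parameters that the learning rule adapts against training data.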
The fuzzy modeling or fuzzy identification has a large number of practical
applications in prediction and inference. We have studied the financial time series of
BSE 100 on these lines using the concepts of wavelets and neuro-fuzzy.
1.8. Chapter Wise Summary
We now give a brief chapter wise summary of the results contained in the thesis.
Chapter 2 is devoted to the analysis of the Indian rainfall data using different
wavelets in the MATLAB wavelet toolbox. Using a wavelet approach, we have conducted a detailed study of the rainfall time series data from 1813 to 1995. This chapter has been divided into four sections. In the first section we review the importance of rainfall and other meteorological parameters in day-to-day life. We also present briefly the
work done by various researchers in the study and analysis of rainfall data. In the
second section the core theory of the wavelet concepts is presented. How wavelet
breaks a time series signal into approximation coefficients and detail coefficients at
different levels through the application of multiresolution analysis is discussed in this
section. The third section gives the experimental results of the various wavelets
(continuous and discrete) used for the Indian rainfall data (annual as well as seasonal).
In the last section of this chapter we present the conclusions made on our experimental
observations. The results of this chapter are given in Kumar, J. et al. (2006).
Chapter 3 of this thesis describes the work done on use of mathematical
methods for modeling price fluctuations of financial time series. In this chapter, we
focus our study on distribution of daily closing prices, distribution of normalized
prices, distribution of daily returns, signal to noise ratio, autocorrelation function, and
autocorrelation length for BSE 100, Nifty 50, stock of Riyadh Bank and stock of
British Bank. This chapter has been divided into six sections. The first one is an
introductory section in which we recapitulate the contributions by various researchers
on the financial markets. In the second section we explain some basic terms like stock
price distribution and daily returns of the stock market. In the third section we discuss
wavelet approximation to a continuous function, the discrete case and the signal to
noise ratio of a signal. In the fourth section, we perform MATLAB wavelet toolbox analysis of the BSE 100, Nifty 50, British Bank and Bank of Riyadh using the Haar-3 and db-4 wavelets, whereas in the second part of this section we discuss the autocorrelation function and correlation length. The correlation length is calculated
after removing the noise element in the signal. Discussion on the results obtained for
signal to noise ratio, autocorrelation function and correlation length has been
presented in the fifth section. In the last section we give conclusion to the analysis
performed. Results of this chapter are presented in Manchanda, P. et al. (2007b).
Chapter 4 deals with wavelet based multifractal analysis of Indian rainfall
data. We have divided this chapter into four sections. The first section is an
introductory section which describes the work done by various researchers to study
the behavior of time series by using wavelet and fractal methods. The second section
starts with the introduction to some preliminaries such as multifractal, singularity,
self-affine function, Hurst exponent and its connection to fractal dimension, and
wavelet based multifractal formalism. In this section, we further describe singularity
spectrum, Legendre transform, partition function and numerical procedure for
calculating the singularity spectrum. The third section deals with the analysis of
rainfall data. Here we present wavelet analysis, multifractal analysis, regularity
analysis and wavelet based multifractal analysis of the time series data under
consideration. For the wavelet analysis (annual and seasonal) we have used db2, db3
and Coif 5 wavelets. For multifractal analysis of Indian rainfall data we have
calculated box dimension and regularization dimension for different seasons. For the
regularity analysis we have used wavelet transform based parametric and non
parametric approach. For the wavelet based multifractal analysis of the rainfall time
series data (of four different seasons), we obtain a CWT (Morlet wavelet) based
estimation of the Legendre spectrum, which represents an approximation of the
Hausdorff spectrum. Conclusions of the analysis are drawn in the last section of this
chapter. The
results of this chapter are presented in Manchanda, P. et al. (2007a).
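The Legendre-spectrum estimation referred to above rests on the standard wavelet-based multifractal formalism; in its usual textbook form (stated here for orientation, not reproduced from the thesis), the partition function built from wavelet transform modulus maxima scales as

```latex
Z(q,a) \;=\; \sum_{i} \bigl| W_{\psi} f\bigl(a, x_i(a)\bigr) \bigr|^{q} \;\sim\; a^{\tau(q)},
\qquad a \to 0^{+},
```

where the sum runs over the maxima lines of the continuous wavelet transform at scale $a$, and the singularity (Legendre) spectrum is obtained from the scaling exponents $\tau(q)$ by the Legendre transform

```latex
D(h) \;=\; \min_{q}\bigl( qh - \tau(q) \bigr).
```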
In chapter 5, we study the behavior of the Indian stock market by calculating
the Hurst exponent. This chapter has been divided into five sections. In the first
section we review some literature and the work done in the field of econophysics to
study the behavior of financial markets by the application of Hurst exponent.
Important terms such as self-similar set and daily return related to the financial
markets are discussed in the second section. In the third section, we have estimated the
Hurst exponent for two Indian stock markets BSE 100 and Nifty 50 by using two
different approaches: first with the Rescaled Range (R/S) method and second by
using the Benoit software. In the fourth section we have compared the results obtained
for the Hurst exponent and the fractal dimension. We present our conclusions in the
last section. The results of this chapter are presented in Kumar, J. and Manchanda, P.
(2009).
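The rescaled-range procedure used in that chapter can be sketched in Python. This is a minimal textbook R/S implementation, not the thesis code: the window sizes, the synthetic white-noise input and the log-log least-squares fit are illustrative choices, and small-sample bias is known to push the estimate for uncorrelated data slightly above the theoretical 0.5.

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of one window: range of cumulative deviations over std."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())      # cumulative deviations from the window mean
    r = y.max() - y.min()            # range of the cumulative series
    s = x.std()                      # standard deviation of the window
    return r / s if s > 0 else 0.0

def hurst_rs(x, sizes):
    """Estimate the Hurst exponent as the slope of log(R/S) versus log(n)."""
    x = np.asarray(x, dtype=float)
    avg_rs = []
    for n in sizes:
        windows = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        avg_rs.append(np.mean([rescaled_range(w) for w in windows]))
    slope, _ = np.polyfit(np.log(sizes), np.log(avg_rs), 1)
    return slope

rng = np.random.default_rng(1)
increments = rng.standard_normal(4096)   # uncorrelated increments: H near 0.5
H = hurst_rs(increments, sizes=[16, 32, 64, 128, 256, 512])
print(H)
# For a self-affine profile, the fractal dimension follows as D = 2 - H,
# which is the relation compared against the Benoit results in the chapter.
```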
Chapter 6 is devoted to the study of financial time series using the concepts of
wavelets and neuro-fuzzy systems. The work in this chapter is an attempt to forecast
the closing values of the Indian stock market index BSE 100 using a wavelet
decomposition and neuro-fuzzy approach based on the past available data. The proposed
forecasting approach combines a neural network of back-propagation type for
generalization, the Sugeno inference technique for specialization, and wavelet
decomposition. We have
divided this chapter into five sections. The first section contains a brief introduction to
the stock market and time series. The elements of wavelet analysis, neuro-fuzzy and
ANFIS (adaptive network based fuzzy inference system) are described in the second
section. In the third section we discuss the procedure adopted for the prediction of
BSE closings using wavelets and neuro-fuzzy methods. Results obtained by applying
this procedure to the time series data are given in the fourth section. In the last section of
this chapter we make concluding remarks. The results obtained with this procedure
are presented in Kumar, J. and Manchanda, P. (accepted).
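The wavelet-decomposition stage of such a forecasting pipeline splits a series into a smooth approximation (trend) and detail (fluctuation) components. A minimal one-level Haar illustration follows; the synthetic series and the hand-rolled transform are illustrative stand-ins, not the thesis implementation (which uses other wavelets and an ANFIS forecaster on the components).

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: pairwise scaled averages and differences."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency trend component
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency fluctuation
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar step exactly (orthogonal transform)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

# Synthetic stand-in for a short run of index closing values
series = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])
a, d = haar_step(series)
print(np.allclose(haar_inverse(a, d), series))  # → True: exact reconstruction
```

In a forecasting setting, each component can be predicted separately and the forecasts recombined through the inverse transform, which is the general motivation for decomposing before training a predictor.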