
I. INTRODUCTION

Time-frequency analysis is essential for signal analysis. There are a number of traditional time-frequency methods, such as the Short-Time Fourier Transform (STFT), the wavelet transform, and the Wigner–Ville distribution. All of these traditional transforms are based on Fourier-transform concepts. These methods fail to self-adjust to the characteristics of the signal itself in order to perform optimally. Moreover, these methods are also restricted by the uncertainty principle.

The STFT is the most basic method; it is nothing but a Fourier spectral analysis over a limited-width time window. By successively sliding the window along the time axis, one can obtain a time-frequency distribution. Since it relies on traditional Fourier spectral analysis, one has to assume the data to be piecewise stationary. This assumption is not always justified for non-stationary data. Even if the data are piecewise stationary, how can we guarantee that the adopted window size always coincides with the stationary time scales? What can we learn about variations longer than the local stationary time scale? Will the collection of locally stationary pieces constitute some longer-period phenomenon? Furthermore, there are also practical difficulties in applying the method: in order to localize an event in time, the window width must be narrow, but, on the other hand, frequency resolution requires a longer time series. These conflicting requirements render the method of limited use. It is, however, extremely easy to implement with the fast Fourier transform; thus, it has attracted a wide following. Most applications of this method are for qualitative display in speech-pattern analysis [1].

The wavelet approach is essentially an adjustable-window Fourier spectral analysis with the following general definition:

W(a, b; X, ψ) = |a|^(−1/2) ∫_{−∞}^{∞} X(t) ψ*((t − b)/a) dt

Here ψ(·) is the basic wavelet function that satisfies certain very general conditions, a is the dilation factor, and b is the translation of the origin. Although time and frequency do not appear explicitly in the transformed result, the variable 1/a gives the frequency scale and b the temporal location of an event. An intuitive physical explanation of the above equation is very simple: W(a, b; X, ψ) is the ‘energy’ of X of scale a at t = b.
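To make the definition above concrete, here is a small NumPy sketch that evaluates W(a, b; X, ψ) for a Morlet-style wavelet by direct numerical integration; the function names, the choice ω0 = 5.5, and the test scales are illustrative assumptions rather than anything specified in this report.

import numpy as np

def morlet(u, omega0=5.5):
    # Gaussian-enveloped complex oscillation (Morlet-style mother wavelet)
    return np.pi ** -0.25 * np.exp(1j * omega0 * u - u ** 2 / 2.0)

def cwt(x, t, scales):
    # W(a, b) = |a|^(-1/2) * integral of X(t) * conj(psi((t - b) / a)) dt,
    # approximated by a Riemann sum on the uniform grid t
    dt = t[1] - t[0]
    W = np.zeros((len(scales), len(t)), dtype=complex)
    for i, a in enumerate(scales):
        for j, b in enumerate(t):
            psi = morlet((t - b) / a)
            W[i, j] = abs(a) ** -0.5 * np.sum(x * np.conj(psi)) * dt
    return W

# 50 Hz tone: the response is largest at the scale whose centre frequency
# (roughly omega0 / (2*pi*a)) matches 50 Hz, i.e. a of about 0.0175 s here
t = np.linspace(0.0, 0.2, 2000)
x = np.sin(2 * np.pi * 50 * t)
print(np.abs(cwt(x, t, np.array([0.005, 0.0175, 0.05]))).max(axis=1))

The double loop is written for clarity; in practice the inner integral is usually evaluated as a convolution (for example via the FFT).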

The problem with the most commonly used Morlet wavelet (defined as Gaussian-enveloped sine and cosine wave groups with 5.5 waves) is the leakage generated by the limited length of the basic wavelet function, which makes the quantitative definition of the energy–frequency–time distribution difficult. Sometimes the interpretation of the wavelet can also be counterintuitive. For example, to define a change occurring locally, one must look for the result in the high-frequency range, for the higher the frequency, the more localized the basic wavelet will be. If a local event occurs only in the low-frequency range, one will still be forced to look for its effects in the high-frequency range. Such an interpretation will be difficult, if it is possible at all. Another difficulty of wavelet analysis is its non-adaptive nature. Once the basic wavelet is selected, one has to use it to analyze all the data. Since the most commonly used Morlet wavelet is Fourier-based, it also suffers from many of the shortcomings of Fourier spectral analysis: it can only give a physically meaningful interpretation to linear phenomena, and it can resolve inter-wave frequency modulation provided the frequency variation is gradual, but it cannot resolve intra-wave frequency modulation because the basic wavelet has a length of 5.5 waves. In spite of all these problems, wavelet analysis is still the best available non-stationary data analysis method so far; therefore, we will use it in this paper as a reference to establish the validity and the calibration of the Hilbert spectrum [1].

The Wigner–Ville distribution is sometimes also referred to as the Heisenberg wavelet. By definition, it is the Fourier transform of the central covariance function. For any time series X(t), the central covariance is defined as

C_c(τ, t) = X(t − τ/2) X*(t + τ/2)

Then the Wigner–Ville distribution is

V(ω, t) = ∫_{−∞}^{∞} C_c(τ, t) e^(−iωτ) dτ

The difficulty with this method is the severe cross terms, as indicated by the existence of negative power for some frequency ranges.

The Hilbert–Huang transform (HHT) [1], proposed by N. E. Huang in 1998, is a time-frequency analysis method that is applicable to both nonlinear and non-stationary signals. Unlike traditional signal-processing methods, the HHT does not depend on any a priori assumptions before processing the signal and is free from the uncertainty principle. Many successful applications have been presented, such as fault detection in machines, electrocardiogram analysis, analysis of power quality and electromagnetic transients, and acoustic analysis. The HHT consists of empirical mode decomposition (EMD) and the Hilbert transform. As the foundation of the HHT, EMD can decompose a signal into scaled signals with different features. Each scaled signal is called an intrinsic mode function (IMF), which contains the essential features of the signal. However, extracting the IMFs requires many iterative calculations, which cannot be made in parallel; hence, realizing real-time EMD in software is difficult, and most HHT-based studies involve the analysis of non-real-time signals on personal computers. Many applications nevertheless require real-time EMD results. For example, the detection of faults in an operating machine must be carried out in real time: with a real-time HHT, a faulty machine can be identified and stopped as soon as a fault occurs. However, achieving fast EMD in hardware remains a major challenge in the development of a real-time HHT.


II. EMD THEORY

EMD decomposes a signal into differently scaled data sequences with distinct features. Each sequence is called an IMF, which must satisfy the following two conditions [1].

1) In the whole data set, the numbers of extreme points (local maxima and minima) and zero-crossing points must be equal or differ by at most one.

2) At any point of the signal, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima must be zero.

Traditional algorithmic steps for applying EMD to a signal x(t) (a function of time) are described as follows.

Step 0: let i and j denote the outer and inner iteration indices, respectively. Initially, i = 1, j = 1, and x_(i,j)(t) = x(t).

Step 1: identify all local maxima and minima of the signal x_(i,j)(t), and then interpolate these extreme points by cubic-spline curve fitting or by the sawtooth transform (ST) to generate the upper and lower envelopes.

Step 2: calculate the mean value of the upper and lower envelopes, which is defined as m_(i,j)(t).

Step 3: calculate the difference between x_(i,j)(t) and m_(i,j)(t): h_(i,j)(t) = x_(i,j)(t) − m_(i,j)(t).

Step 4: if a convergence criterion (e.g., the S-number criterion adopted in this paper: j < S) is satisfied, then h_(i,j)(t) is an IMF, defined as c_i(t): c_i(t) = h_(i,j)(t); otherwise, set x_(i,j+1)(t) = h_(i,j)(t), j = j + 1, and go to step 1.

Step 5: estimate the residual signal r_i(t) by r_i(t) = x_(i,1)(t) − c_i(t).

Step 6: if r_i(t) fulfills the termination criterion, stop; otherwise, let x_(i+1,1)(t) = r_i(t), i = i + 1, j = 1, and go to step 1.

The aforementioned algorithmic steps yield the following equation:

x(t) = Σ_{i=1}^{n} c_i(t) + r_n(t)

where n denotes the number of extracted IMFs and r_n(t) is the residue component. Fig. 1 shows the flowchart of EMD.


Fig. 1 Flow Chart for EMD 
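As a software cross-check of the flowchart, the following NumPy sketch walks through steps 0 to 6. It uses piecewise-linear (sawtooth-style) envelopes via np.interp and a simplified form of the S-number test; the function names, the default S = 3, and the iteration caps are illustrative assumptions, not the report's Verilog implementation.

import numpy as np

def counts(h):
    # number of interior extrema and of zero-crossings (used by the S-number test)
    d = np.diff(h)
    n_ext = int(np.sum((d[:-1] > 0) & (d[1:] < 0)) + np.sum((d[:-1] < 0) & (d[1:] > 0)))
    n_zc = int(np.sum(h[:-1] * h[1:] < 0))
    return n_ext, n_zc

def envelope_mean(h):
    # steps 1-2: interpolate maxima/minima with straight lines (ST-style envelopes)
    # and return the mean of the upper and lower envelopes
    n = np.arange(len(h))
    imax = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
    imin = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
    if len(imax) < 2 or len(imin) < 2:
        return None                                # residue is (near) monotonic
    upper = np.interp(n, imax, h[imax])
    lower = np.interp(n, imin, h[imin])
    return (upper + lower) / 2.0

def emd(x, S=3, max_imfs=10, max_sift=100):
    x = np.asarray(x, dtype=float)
    residue, imfs = x.copy(), []
    for _ in range(max_imfs):                      # outer index i
        h, streak = residue.copy(), 0
        for _ in range(max_sift):                  # inner (sifting) index j
            m = envelope_mean(h)                   # steps 1-2
            if m is None:
                return imfs, residue               # step 6: stop on monotonic residue
            h = h - m                              # step 3
            n_ext, n_zc = counts(h)
            streak = streak + 1 if abs(n_ext - n_zc) <= 1 else 0
            if streak >= S:                        # step 4: simplified S-number test
                break
        imfs.append(h)                             # c_i(t)
        residue = residue - h                      # step 5: r_i(t)
    return imfs, residue

t = np.linspace(0.0, 1.0, 1000)
imfs, r = emd(np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 5 * t))
print("IMFs extracted:", len(imfs))                # the 40 Hz oscillation should appear in imfs[0]

The inner loop mirrors the sifting iteration of Fig. 1; only the envelope-construction step differs between the cubic-spline and ST variants.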

Convergence Criteria

The convergence criterion determines the number of sifting steps needed to produce an IMF. The four existing stoppage criteria are the following:

• Standard Deviation

This criterion was proposed by Huang et al. (1998). It is similar to the Cauchy convergence test; a normalized sum of differences between two consecutive sifting results, SD, is defined as

SD = Σ_t |h_(j−1)(t) − h_j(t)|² / h²_(j−1)(t)

The sifting process then stops when SD is smaller than a pre-given value.
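A direct transcription of this test might look as follows; the threshold value of 0.25 is only an illustrative choice, since the text merely requires a pre-given value.

import numpy as np

def sd_stop(h_prev, h_curr, threshold=0.25):
    # SD = sum over t of |h_(j-1)(t) - h_j(t)|^2 / h_(j-1)(t)^2
    eps = np.finfo(float).eps                     # guard against division by zero
    sd = np.sum((h_prev - h_curr) ** 2 / (h_prev ** 2 + eps))
    return sd < threshold                         # stop sifting when SD is small enough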


• S-Number Criterion

This criterion is based on the so-called S-number, defined as the number of consecutive siftings for which the numbers of zero-crossings and extrema are equal or differ by at most one. Specifically, an S-number is pre-selected, and the sifting process stops only if, for S consecutive siftings, the numbers of zero-crossings and extrema stay the same and are equal or differ by at most one.
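In code, the criterion amounts to tracking the (zero-crossing, extrema) counts of successive sifting results; a minimal sketch with illustrative names:

def s_number_met(history, S=3):
    # history: list of (n_zero_crossings, n_extrema) tuples, one per sifting
    if len(history) < S:
        return False
    last = history[-S:]
    # counts unchanged for S consecutive siftings and differing by at most one
    return all(c == last[0] and abs(c[0] - c[1]) <= 1 for c in last)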

• Threshold Method

Proposed by Rilling, Flandrin, and Gonçalvés, the threshold method sets two threshold values so as to guarantee globally small fluctuations of the mean while taking into account locally large excursions.

• Energy Difference Tracking

Proposed by Cheng, Yu, and Yang, the energy-difference tracking method uses the assumption that the original signal is a composition of orthogonal signals and calculates the energy based on this assumption. If the result of the EMD is not an orthogonal basis of the original signal, the amount of energy will differ from the original energy.

Once a stoppage criterion is selected, the first IMF, c_1, can be obtained. Overall, c_1 should contain the finest-scale, or shortest-period, component of the signal. We can then separate c_1 from the rest of the data by x(t) − c_1 = r_1. Since the residue r_1 still contains longer-period variations of the data, it is treated as the new data and subjected to the same sifting process as described above.

This procedure can be repeated for all the subsequent r_j's, and the result is r_(n−1) − c_n = r_n.

The sifting process finally stops when the residue r_n becomes a monotonic function from which no more IMFs can be extracted. From the above equations, we can deduce that

x(t) = Σ_{i=1}^{n} c_i(t) + r_n(t)

Thus, a decomposition of the data into n empirical modes is achieved. The components of the EMD are usually physically meaningful, for the characteristic scales are defined by the physical data. Flandrin et al. (2003) and Wu and Huang (2004) have shown that the EMD is equivalent to a dyadic filter bank.


Sawtooth transform method

To overcome the difficulty of finding the upper and lower envelopes in the original data space, an alternative space is explored in which the envelopes are easy to find; the points of the original and the alternative spaces must have a one-to-one mapping. The sawtooth space described below meets these requirements [3].

Transform time series data to sawtooth function

The required sawtooth (triangle-wave) function can be constructed by connecting the successive extrema of the original data function with straight-line segments; those extrema are alternating maxima and minima of the original data. On each segment, the two extremities coincide with a maximum and a minimum of the original data; in between, the variation of the original data value is monotone, so it has a one-to-one mapping to the straight-line segment of the sawtooth function.

The original time series data f(t) has m extrema:

E(t_j), t_0 ≤ t_j ≤ t_(m−1)

There are k maxima among the m extrema; these lie on the upper envelope U(t) and satisfy

E(t_(j−1)) < U(t_j) = E(t_j) > E(t_(j+1))

and l minima among the m extrema, which lie on the lower envelope L(t) and satisfy

E(t_(j−1)) > L(t_j) = E(t_j) < E(t_(j+1))

where k either equals l or differs from it by one, and k + l = m.

The sawtooth function is obtained by linear interpolation between successive extrema:

s(t) = E(t_i) + (E(t_(i+1)) − E(t_i)) · (t − t_i) / (t_(i+1) − t_i),   t_i ≤ t ≤ t_(i+1), 0 ≤ i ≤ m−1

Besides the extrema themselves, all the points within each time interval are subjected to the same transform. In the data space a point is designated by (t, x), while in the sawtooth space a point has the coordinates (u, s). The sawtooth transform on each segment is defined as:

u(t) = t_i + (t_(i+1) − t_i) · (x(t) − E(t_i)) / (E(t_(i+1)) − E(t_i)),   t_i ≤ t ≤ t_(i+1), 0 ≤ i ≤ m−1

s(u) = x(t)
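A NumPy sketch of this mapping, assuming a uniformly sampled series whose interior extrema strictly alternate (samples before the first and after the last extremum are left unchanged here, and the function name is illustrative):

import numpy as np

def sawtooth_transform(t, x):
    # map data-space points (t, x) to sawtooth-space points (u, s) with s(u) = x(t)
    t, x = np.asarray(t, float), np.asarray(x, float)
    ext = np.where(((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) |
                   ((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:])))[0] + 1
    u, s = t.copy(), x.copy()                      # value is unchanged, time is warped
    for i in range(len(ext) - 1):
        lo, hi = ext[i], ext[i + 1]                # consecutive extrema t_i, t_(i+1)
        seg = slice(lo, hi + 1)
        # u(t) = t_i + (t_(i+1) - t_i) * (x(t) - E(t_i)) / (E(t_(i+1)) - E(t_i))
        u[seg] = t[lo] + (t[hi] - t[lo]) * (x[seg] - x[lo]) / (x[hi] - x[lo])
    return u, s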


Fig. 2: Original data and corresponding sawtooth (triangle wave) function 

This transform does not change values in the vertical direction (the signal value); it compresses or expands the horizontal direction (time) so that the data function becomes a piecewise-linear sawtooth function.

Finding envelopes

The sawtooth function varies linearly between alternating maxima U(u_i) and minima L(u_i). The upper envelope can be constructed by connecting the successive maxima:

U(u) = U(u_i) + (U(u_(i+1)) − U(u_i)) · (u − u_i) / (u_(i+1) − u_i),   0 ≤ i < k−1

and, similarly, the lower envelope:

L(u) = L(u_i) + (L(u_(i+1)) − L(u_i)) · (u − u_i) / (u_(i+1) − u_i),   0 ≤ i < l−1
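In the sawtooth space both envelopes therefore reduce to linear interpolation through the maxima and through the minima, which a few lines of NumPy can express; this is a minimal sketch assuming (u, s) come from a sawtooth transform such as the one sketched earlier:

import numpy as np

def sawtooth_envelopes(u, s):
    # connect successive maxima for U(u) and successive minima for L(u)
    u, s = np.asarray(u, float), np.asarray(s, float)
    imax = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    imin = np.where((s[1:-1] < s[:-2]) & (s[1:-1] < s[2:]))[0] + 1
    upper = np.interp(u, u[imax], s[imax])         # U(u), piecewise linear
    lower = np.interp(u, u[imin], s[imin])         # L(u), piecewise linear
    return upper, lower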


III. LITERATURE REVIEW

• An FPGA-based method for use as a hardware accelerator to achieve real-time EMD was proposed in [5]. The FPGA-based hardware accelerator can significantly enhance the computational performance of obtaining the upper and lower envelopes. However, in this method the FPGA serves only as an ancillary accelerator; the core function of the system studied in [5] is still carried out in software.

• The merits and limitations of software-based and hardware-based EMD implementations have been discussed in relation to a theoretical analysis of EMD in [6]. However, relevant experimental results were not provided in [6].

• A technique to implement real-time EMD using a DSP chip and an FPGA chip was presented in [7]. The FPGA chip was adopted as a controller, in which the sampled data were placed into a ping-pong buffer of length 1000. One thousand sampled data were simultaneously fed into the DSP chip, and the iteration loops of EMD were computed by the DSP chip. However, this method can only implement real-time EMD for signals whose frequencies are below 1 kHz. Moreover, it suffers from the end effect.

• A technique to implement real-time EMD using only an FPGA was presented in [8]. However, this method suffers from the end effect at both the start and the end of the data set, i.e., at both sides. Also, experimental results related to power, area, and delay are not provided.


IV. DESIGN IMPLEMENTATION

A. Review of Existing System

In [8], separate modules are used for computing the upper envelope and the lower envelope: the local maxima are detected by one module and the local minima by another. Also, in the module used for the calculation of the upper and lower envelopes, the data are popped from the circular FIFO once every b_m clock cycles. If the calculation of the points of the upper envelope is completed within fewer than b_m clock cycles, the system has to wait for the remaining time, because the next datum can be popped from the circular FIFO only after b_m clock cycles.

Fig.3. Flow chart of computing the upper envelope

Fig.4. Block Diagram of computing the upper envelope 


B. Proposed Method

In any signal, at a given instant there can be either a maximum or a minimum; both cannot occur at the same instant. So instead of using separate modules for finding the maxima and the minima, our system uses a single module that can detect both.

Fig.5. Block diagram of proposed system for computing the upper & lower envelope

According to the system proposed in this work, the input x(t_p) is first sent to a buffer, which comprises three registers: Register A, Register B, and Register C. These three registers constitute a register queue. During each clock cycle, a datum is moved from the current register to the next one. The datum in Register B is compared with those in Register A and Register C by two comparators, whose results pass through an AND gate. If the datum in Register B is found to be larger than those in both Register A and Register C, then the datum in Register B is a maximum. The maximum is placed at the end of the first Circular Buffer, and the rear pointer of that Circular Buffer moves to the next node. If the datum in Register B is found to be smaller than those in both Register A and Register C, then the datum in Register B is a minimum. The minimum is placed at the end of the third Circular Buffer, and the rear pointer of that Circular Buffer moves to the next node. Simultaneously, the Counter starts counting (in every clock cycle, the value of the Counter increases by 1). When the next maximum or minimum arrives, the Counter is cleared to zero, and a new round of counting starts. The last counting result is placed at the rear of the second Circular Buffer if a maximum occurs; otherwise, it is stored in the fourth Circular Buffer. The values of the maximum/minimum points are stored temporarily in the first and third Circular Buffers, and the time intervals between every two maximum points/two minimum points are stored temporarily in the second and fourth Circular Buffers, respectively. The upper envelope can then be obtained from the data in the front nodes of the two circular buffers as a1 + (a2 − a1)*i/b1, where a1 and a2 refer to the first and second data at the front of the first Circular Buffer, and b1 is the first datum at the front of the second Circular Buffer.
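The following Python fragment is a clock-by-clock behavioral model of this datapath (register queue, comparators, interval counter, and the a1 + (a2 − a1)*i/b1 interpolation), written only to illustrate the logic; the actual design is in Verilog-HDL, and the way a counter value is attached to each detected extremum here is one possible reading of the description above.

def extrema_stream(samples):
    # 3-register queue (A, B, C) with two comparators: B > A and B > C flags a
    # maximum, B < A and B < C flags a minimum; a counter records the spacing
    # (in samples) to the previous extremum of the same kind
    maxima, minima = [], []          # lists of (value, gap) pairs
    last_max = last_min = None
    a = b = None                     # registers A and B; c below is the incoming sample
    for n, c in enumerate(samples):
        if a is not None:
            if b > a and b > c:
                maxima.append((b, 0 if last_max is None else (n - 1) - last_max))
                last_max = n - 1
            elif b < a and b < c:
                minima.append((b, 0 if last_min is None else (n - 1) - last_min))
                last_min = n - 1
        a, b = b, c                  # shift the register queue on each clock
    return maxima, minima

def upper_envelope(maxima):
    # between consecutive maxima a1 and a2 separated by b1 samples, the envelope
    # point i is a1 + (a2 - a1) * i / b1 (integer division, as in the FPGA datapath)
    env = []
    for (a1, _), (a2, b1) in zip(maxima, maxima[1:]):
        env.extend(a1 + (a2 - a1) * i // b1 for i in range(b1))
    return env

maxima, minima = extrema_stream([0, 3, 7, 5, 2, 4, 9, 6, 1, 2, 5, 3])
print(maxima)                        # [(7, 0), (9, 4), (5, 4)]: values and spacings
print(upper_envelope(maxima))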

The proposed method is implemented using a bottom-up strategy. The proposed system consists of the following modules:

• PIPO Registers

• Comparator

• Circular Buffer

• Counter

• RAM

• Module to compute “a1 + (a2 − a1)*i/b1”

These basic modules are implemented in Verilog-HDL and are combined to build the entire system.


V. EXPERIMENTAL SETUP

A. Software Development

The Xilinx ISE Design Suite 14.2 software is used herein to develop the EMD code, in which EMD is implemented to decompose a signal into a number of IMFs and the residue function r. The S-number criterion is adopted as the convergence criterion, with S set to 3 [6]. The studied data may be embedded sequentially and consecutively into the FPGA chip, and the FPGA outputs the results consecutively to achieve real-time computation. Therefore, the proposed method can deal with any amount of data. In the experiments of this work, the FPGA deals only with integers; hence, the input and output data are integers. After the Verilog code implementing EMD is synthesized and configured, a simulation-based experiment is carried out. The simulation experiment was performed using the ISim simulator. The input waveform was generated using MATLAB, the waveform simulation was then studied in ISim, and finally the simulation result was saved and imported into MATLAB to plot the resulting waveforms.

B. Characteristics of the case studies

Two case studies are carried out in this work. The first involves simulation of signal decomposition, and the second involves high-frequency interference filtering.


VI. ANALYSIS OF SIMULATION AND EXPERIMENTAL RESULTS

A. Application to Signal Decomposition

X1(t) = 4000*sin(2*pi*50000*t) + 1000*sin(2*pi*500000*t)

X2(t) = 500*sin(2*pi*2500000*t)

X(t) = 4000*sin(2*pi*50000*t) + 1000*sin(2*pi*500000*t) + 500*sin(2*pi*2500000*t)
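One way the corresponding integer stimulus for the simulation might be generated (the 12.5 MHz rate and the 1000-sample length are taken from the comparison table later in this report; the file name and format are illustrative):

import numpy as np

fs = 12.5e6                                   # sampling rate quoted in the comparison table
t = np.arange(1000) / fs                      # 1000 samples, as used in the timing figures
x = (4000 * np.sin(2 * np.pi * 50e3 * t)
     + 1000 * np.sin(2 * np.pi * 500e3 * t)
     + 500 * np.sin(2 * np.pi * 2.5e6 * t))
samples = np.round(x).astype(int)             # the FPGA datapath handles integers only
np.savetxt('stimulus.txt', samples, fmt='%d') # stimulus file for the testbench (format illustrative)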


Signal 1 (IMF1)

Signal 2 (Residue 1)

B. Signal Filtering

Original Signal


Unfiltered Signal

Reconstructed Signal

C. Results

• FPGA Design Result

Design Summary:


Device Utilization Summary: 

Top Module:

Simulation: 


• ASIC Design Result

Power:

Instance    Cells     Leakage Power (nW)    Dynamic Power (nW)    Total Power (nW)
EMD         301365    22785.708             83794170.249          83816955.957

Area:

Instance    Cells     Cell Area    Net Area    Total Area
EMD         301365    1447902      0           1447902

D. Comparison with previous works

                                  FPGA implementation for real-time        Our Proposed Method
                                  empirical mode decomposition [8]
Time Required                     0.0001 s for processing 1000 data        0.152 s for processing 1000 data
Sampling rate                     12.5 MHz                                 12.5 MHz
End Effect                        Occurs near both ends of the data set    Occurs near one end of the data set
ASIC Implementation Compatible    No                                       Yes
Obtaining Envelope                Sawtooth Transform                       Sawtooth Transform


VII. CONCLUSION

This work has presented a real-time EMD method using an FPGA. The developed FPGA-based method is applicable to high-frequency signals, and the end effect occurs only at the initial stage.

The proposed method is characterized by the following:

• the design of modules for continuous calculation of the upper and lower envelopes, and

• the design of a serial pipeline (space-based implementation) to replace the complicated iteration loops (time-based implementation). The calculation of the envelopes is simplified by using the ST rather than the cubic spline. Moreover, the proposed method is validated using the Xilinx ISE Design Suite software.


VIII. FUTURE SCOPE

• Future work can improve the proposed method by using the cubic spline to calculate the upper/lower envelopes, thereby increasing accuracy.

• Floating-point arithmetic can be used instead of fixed-point arithmetic.

• Speed can be enhanced by using a high-speed divider.


REFERENCES

1. N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, and H. H. Liu, “The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis,” Proc. R. Soc. Lond. A, Math. Phys. Sci., vol. 454, no. 1971, pp. 903–995, Mar. 1998.

2. N. E. Huang, M. C. Wu, S. R. Long, S. S. P. Shen, W. Qu, P. Gloersen, and K. L. Fan, “A confidence limit for the empirical mode decomposition and Hilbert spectrum analysis,” Proc. R. Soc. Lond. A, Math. Phys. Sci., vol. 459, no. 2037, pp. 2317–2345, Sep. 2003.

3. L. Y. Lu, “Fast intrinsic mode decomposition of time series data with sawtooth transform,” ORACLE, Redwood Shores, CA, Tech. Rep., pp. 1–13, Nov. 2007.

4. B. Xuan, Q. Xie, and S. Peng, “EMD sifting based on bandwidth,” IEEE Signal Process. Lett., vol. 14, no. 8, pp. 537–540, Aug. 2007.

5. L. Wang, M. I. Vai, P. U. Mak, and C. I. Ieon, “Hardware-accelerated implementation of EMD,” in Proc. 3rd Int. Conf. Biomed. Eng. Inf., Oct. 2010, vol. 2, pp. 912–915.

6. J. D. Jones, J. S. Pei, P. J. Wright, and M. P. Tull, “Embedded EMD algorithm within an FPGA-based design to classify nonlinear SDOF systems,” in Proc. SPIE, Mar. 2010, vol. 7647, pp. 76470E-1–76470E-6.

7. M. Lee, K. Shyu, P. Lee, C. Huang, and Y. Chiu, “Hardware implementation of EMD using DSP and FPGA for on-line signal processing,” IEEE Trans. Ind. Electron., vol. 58, no. 6, pp. 2473–2481, Jun. 2011.

8. Y.-Y. Hong and Y.-Q. Bao, “FPGA implementation for real-time empirical mode decomposition,” IEEE Trans. Instrum. Meas., vol. 61, no. 12, Dec. 2012.