
[IEEE 2010 7th International Symposium on Wireless Communication Systems (ISWCS 2010) - York, United Kingdom (2010.09.19-2010.09.22)]


New Adaptive Algorithms for Identification of Sparse Impulse Responses - Analysis and Comparisons

Mariane R. Petraglia #1, Diego B. Haddad ∗2

# Program of Electrical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil 21945-970
1 [email protected]

∗ Telecommunications Department, CEFET/RJ - Unidade Nova Iguaçu, Rio de Janeiro, Brazil 26041-271
2 [email protected]

Abstract—The convergence of the classical adaptive filtering algorithms becomes slow when the number of coefficients is very large. However, in many applications, such as digital network and acoustical echo cancelers, the system being modeled presents a sparse impulse response, that is, most of its coefficients have small magnitudes. In order to improve the convergence for these applications, several algorithms have been proposed recently, which employ individual step-sizes for the updating of the different coefficients. The adaptation step-sizes are made larger for the coefficients with larger magnitudes, resulting in a faster convergence for the most significant coefficients. In this paper, we give an overview of the most important adaptive algorithms developed for the fast identification of systems with sparse impulse responses. Their convergence rates are compared through computer simulations for the identification of the channel impulse responses in a digital network echo cancellation application. A theoretical analysis of an improved version of the PNLMS algorithm is presented.

I. INTRODUCTION

It is well known that the convergence of the adaptive filtering algorithms becomes slow when the number of coefficients is very large. However, in many applications, such as digital network and acoustical echo cancelers, the system being modeled presents a sparse impulse response, that is, most of its coefficients have small magnitudes. The classical adaptation approaches, such as the least-mean square (LMS) and recursive least squares (RLS) algorithms, do not take into account the sparseness characteristics of such systems.

In order to improve the convergence for these applications, several algorithms have been proposed recently, which employ individual step-sizes for the updating of the different coefficients. The adaptation step-sizes are made larger for the coefficients with larger magnitudes, resulting in a faster convergence for the most significant coefficients. Such an idea was first introduced in [1], resulting in the so-called proportionate normalized least mean square (PNLMS) algorithm. Improved versions of the PNLMS, which employ extra parameters or non-linear functions, were proposed in [2], [3], [4] in order to improve the performance of the PNLMS algorithm for the identification of not very sparse impulse responses.

The well-known slow convergence of the gradient algorithms for colored input signals is also observed in the proportionate-type NLMS algorithms. Implementations that combine the ideas of the PNLMS and transform-domain adaptive algorithms were proposed in [5] and [6] for accelerating the convergence for colored input signals.

In this paper, we first give an overview of the most important adaptive algorithms developed for the fast identification of systems with sparse impulse responses. Their convergence properties are compared through computer simulations for the identification of the channel impulse responses in a digital network echo cancellation application. Then, a theoretical analysis of an improved PNLMS algorithm is developed in order to predict its behavior.

II. PROPORTIONATE-TYPE NLMS ALGORITHMS

Adaptive algorithms that take into account the sparseness of the unknown system impulse response have been recently developed. The convergence behavior of such algorithms depends on how sparse the modeled impulse response is. A sparseness measure of an N-length impulse response w was proposed in [7] as

\xi_w = \frac{N}{N-\sqrt{N}} \left( 1 - \frac{\|w\|_1}{\sqrt{N}\,\|w\|_2} \right)   (1)

where \|w\|_l is the l-norm of the vector w. It should be observed that 0 \le \xi_w \le 1, that \xi_w = 0 when all elements of w are equal in magnitude (non-sparse impulse response), and that \xi_w = 1 when only one element of w is non-zero (the sparsest impulse response).

The proportionate-type NLMS algorithms employ a different step-size for each coefficient, such that larger adjustments are applied to the larger coefficients (or active coefficients), resulting in a faster convergence rate when modeling systems

978-1-4244-6317-6/10/$26.00 © 2010 IEEE. ISWCS 2010.


with sparse impulse responses. The main algorithms of such a family are described next.

In the proportionate normalized least mean-square (PNLMS) algorithm, a time-varying step-size control matrix, whose elements are roughly proportional to the absolute values of the corresponding coefficients, is included in the update equation [1]. As a result, the large coefficients at a given iteration get significantly more update energy than the small ones.

In the improved proportionate normalized least mean-square (IPNLMS) algorithm, the individual step-sizes are a compromise between the NLMS and the PNLMS step-sizes, resulting in a better convergence for different degrees of sparseness of the impulse response [2]. The amount of proportionality in the step-size normalization is controlled by the introduction of an extra parameter.

In the μ-law improved proportionate normalized least mean-square (MPNLMS) algorithm, the step-sizes are optimal in the sense of minimizing the convergence time (considering a white noise input signal) [3]. The resulting algorithm employs a non-linear (logarithmic) function of the coefficients in the step-size control. A simplified version of the MPNLMS, referred to as the segmented PNLMS (SPNLMS) algorithm, also proposed in [3], employs a segmented linear function in order to reduce the computational complexity. The MPNLMS algorithm presents significantly faster convergence, when compared to the NLMS, PNLMS and IPNLMS algorithms, for sparse channels. However, for dispersive channels, its convergence is severely degraded, being much slower than that of the NLMS algorithm.

The variable-parameter improved μ-law PNLMS (IMPNLMS) algorithm was proposed in [4]. In this algorithm, a channel sparseness measure was incorporated into the μ-law PNLMS algorithm in order to improve the adaptation convergence for dispersive channels. Since the real channel coefficients are not available, the corresponding sparseness measure is estimated recursively using the current adaptive filter coefficients.

In the simulations presented throughout this paper, the identification of the digital network channels of ITU-T Recommendation G.168 [8], by an adaptive filter with N = 512 coefficients, is considered. Figure 1 displays the experimental MSE evolutions of the PNLMS, MPNLMS, IPNLMS, IMPNLMS and NLMS algorithms with white Gaussian noise input for the most and least sparse digital network channel models (gm1 and gm4, respectively) described in [8] and for the gm4 channel with an additive white noise (uniformly distributed in [-0.05, 0.05]), used to simulate a non-sparse system. The corresponding sparseness measures are \xi_w = 0.8970 for the gm1 channel, \xi_w = 0.7253 for the gm4 channel and \xi_w = 0.2153 for the gm4 plus noise channel. A white Gaussian measurement noise of variance \sigma_\nu^2 = 10^{-6} was added to the desired signal. It can be observed in Fig. 1 that the PNLMS-type algorithms converge much faster than the NLMS algorithm for the sparse channel gm1. However, for the dispersive channel gm4+noise the PNLMS behaves much worse than the

Fig. 2. Adaptive subband structure composed of a wavelet transform and sparse subfilters. (The diagram shows the input x(n) split by analysis filters H_0(z), ..., H_{M-1}(z) into subband signals x_0(n), ..., x_{M-1}(n), which are delayed by z^{-\Delta_k} and filtered by sparse subfilters G_k(z^{L_k}); the subband outputs \hat{y}_k(n-\Delta_D) are summed into y(n), which is subtracted from d(n-\Delta_D) to form the error e(n).)

NLMS. For channel gm4 the PNLMS algorithm presents a fast initial convergence, which is significantly reduced after 2000 iterations, becoming slower than that of the NLMS algorithm. For the sparse channel gm1, the IPNLMS algorithm produces a performance similar to that of the PNLMS algorithm, that is, significantly better than that of the NLMS algorithm. For the dispersive channel gm4+noise, the IPNLMS performance is similar to that of the NLMS algorithm. For channel gm4, the IPNLMS algorithm does not present the performance degradation (after the initial convergence period) observed in the PNLMS algorithm; however, there is almost no gain in the initial convergence speed when compared to the NLMS algorithm. The MPNLMS algorithm presents significantly faster convergence, when compared to the NLMS, PNLMS and IPNLMS algorithms, mainly for the sparse channels gm1 and gm4. However, for the dispersive channel gm4+noise, its convergence is severely degraded, being much slower than that of the NLMS algorithm. The good convergence behavior of the IMPNLMS algorithm for both the sparse and dispersive channels can be observed in this experiment.

Although the proportionate-type NLMS algorithms produce better convergence than the NLMS algorithm when modeling sparse impulse responses with white noise inputs, they suffer from the same performance degradation as the NLMS when the excitation signal is colored. In order to improve the adaptation speed of these algorithms in dispersive channels with colored input signals, the use of the wavelet transform was proposed independently in [5] and [6]. The wavelet-based proportionate NLMS algorithm proposed in [6] employs a wavelet transform and sparse adaptive filters. As illustrated in Fig. 2, the wavelet transform is represented by a non-uniform filter bank with analysis filters H_k(z), and sparse adaptive subfilters G_k(z^{L_k}) [9]. The delays \Delta_k in Fig. 2, introduced for the purpose of matching the delays of the different-length analysis filters, are given by \Delta_k = N_{H_0} - N_{H_k}, where N_{H_k} is the order of the k-th analysis filter. This structure yields an additional system delay (compared to a direct-form FIR structure) equal to \Delta_D = N_{H_0}. For the modeling of a length-N FIR system, the number of adaptive coefficients of the subfilters G_k(z) (non-zero coefficients of G_k(z^{L_k})) should be at least N_k = \lfloor (N + N_{F_k})/L_k \rfloor + 1, where N_{F_k} are the orders of the


corresponding synthesis filters which, when associated with the analysis filters H_k(z), lead to perfect reconstruction.

Figure 3 illustrates the performance of the NLMS, IMPNLMS and wavelet-IMPNLMS (WIMPNLMS) algorithms for a colored input signal, generated by passing a white Gaussian noise with zero mean and unit variance through the filter with transfer function

H(z) = \frac{0.25\sqrt{3}}{1 - 1.5z^{-1} - 0.25z^{-2}}.   (2)

Such an input signal has a power spectrum similar to that of a speech signal. The WIMPNLMS algorithm used the Biorthogonal 4.4 (Bior4.4) wavelet in a two-level decomposition (M = 3 subbands). From this figure, we observe that, for colored input signals, the wavelet-domain IMPNLMS algorithm presents faster convergence than the time-domain IMPNLMS algorithm, since its step-size normalization strategy employs the input power at the different frequency bands.
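In code, the sparseness measure of Eq. (1) is a few lines; the following minimal Python sketch (the example vectors are illustrative, not taken from the paper) confirms its two extreme values:

```python
import math

def sparseness(w):
    """Sparseness measure of Eq. (1):
    xi_w = N/(N - sqrt(N)) * (1 - ||w||_1 / (sqrt(N) * ||w||_2))."""
    N = len(w)
    l1 = sum(abs(c) for c in w)
    l2 = math.sqrt(sum(c * c for c in w))
    return N / (N - math.sqrt(N)) * (1.0 - l1 / (math.sqrt(N) * l2))

# One non-zero element (the sparsest response) gives xi_w = 1;
# all elements equal in magnitude (non-sparse) gives xi_w = 0.
xi_sparse = sparseness([1.0] + [0.0] * 511)
xi_flat = sparseness([0.5] * 512)
```

For intermediate vectors the measure falls strictly between 0 and 1, which is what the IMPNLMS algorithm of Section III exploits to adjust its proportionality on the fly.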

III. CONVERGENCE ANALYSIS OF IMPNLMS-TYPE ALGORITHMS

Let the output d(n) of a sparse system with impulse response w_{opt} be given by

d(n) = w_{opt}^T x(n) + \nu(n)   (3)

where

x(n) = [\, x(n)\;\; x(n-1)\;\cdots\; x(n-N+1) \,]^T   (4)

is the input vector and \nu(n) is a zero-mean white noise of variance \sigma_\nu^2. In the PNLMS-type algorithms, the system impulse response is estimated by

w(n+1) = w(n) + \frac{\beta\,\Gamma(n)\,x(n)\,e(n)}{x^T(n)\,\Gamma(n)\,x(n) + \delta}   (5)

where

y(n) = x^T(n)\,w(n)   (6)

e(n) = d(n) - y(n)   (7)

\Gamma(n) = \mathrm{diag}\{g_0(n), g_1(n), \ldots, g_{N-1}(n)\}.   (8)

In this paper, we extend the analysis of the simplified PNLMS algorithm presented in [10] to the IMPNLMS algorithm, where the elements of the step-size control matrix are given by

g_i(n) = \frac{1-\alpha(n)}{2N} + \frac{(1+\alpha(n))\,F(|w_i(n)|)}{2\sum_{j=0}^{N-1} F(|w_j(n)|) + \varepsilon}   (9)

where

\xi_w(n) = \frac{N}{N-\sqrt{N}} \left( 1 - \frac{\sum_{j=0}^{N-1} |w_j(n)|}{\sqrt{N}\sqrt{\sum_{j=0}^{N-1} |w_j(n)|^2}} \right)   (10)

\xi(n) = (1-\lambda)\,\xi(n-1) + \lambda\,\xi_w(n)   (11)

\alpha(n) = 2\xi(n) - 1   (12)

and

F(x) = \begin{cases} 400\,x, & 0 \le x < 0.005 \\ 8.51\,x + 1.96, & x \ge 0.005 \end{cases}

where the argument of F(\cdot) is a coefficient magnitude.
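To make Eqs. (5)-(12) concrete, the sketch below runs an IMPNLMS-style identification of a short synthetic sparse channel with white Gaussian input. The channel, lengths, step sizes and the initialization of ξ are illustrative choices, not values from the paper:

```python
import math
import random

def F(x):
    # Segmented step-size control function (argument is a magnitude, x >= 0).
    return 400.0 * x if x < 0.005 else 8.51 * x + 1.96

def sparseness(w):
    # Eq. (10), computed from the current adaptive coefficients.
    N = len(w)
    l2 = math.sqrt(sum(c * c for c in w))
    if l2 == 0.0:
        return 0.0  # treat the all-zero start as non-sparse (assumption)
    l1 = sum(abs(c) for c in w)
    return N / (N - math.sqrt(N)) * (1.0 - l1 / (math.sqrt(N) * l2))

def gains(w, xi, eps=1e-2):
    # Eqs. (9) and (12): individual step-size factors g_i(n).
    N = len(w)
    alpha = 2.0 * xi - 1.0
    Fw = [F(abs(c)) for c in w]
    den = 2.0 * sum(Fw) + eps
    return [(1.0 - alpha) / (2.0 * N) + (1.0 + alpha) * f / den for f in Fw]

random.seed(1)
N = 64
w_opt = [0.0] * N
w_opt[5], w_opt[20] = 1.0, -0.5          # synthetic sparse impulse response
w = [0.0] * N
x = [0.0] * N                            # input delay line
beta, delta, lam, xi = 0.5, 1e-2, 0.1, 0.96

for n in range(4000):
    x = [random.gauss(0.0, 1.0)] + x[:-1]
    d = sum(a * b for a, b in zip(w_opt, x)) + random.gauss(0.0, 1e-3)
    e = d - sum(a * b for a, b in zip(w, x))        # Eqs. (6)-(7)
    xi = (1.0 - lam) * xi + lam * sparseness(w)     # Eq. (11)
    g = gains(w, xi)
    norm = sum(gi * xk * xk for gi, xk in zip(g, x)) + delta
    w = [wi + beta * gi * xk * e / norm             # Eq. (5), per coefficient
         for wi, gi, xk in zip(w, g, x)]

misalignment = sum((a - b) ** 2 for a, b in zip(w, w_opt))
```

With a sparse target and white input, the active coefficients w[5] and w[20] receive most of the update energy, and the squared misalignment should fall well below its initial value of 1.25; note also that the gains g_i(n) remain positive and sum to approximately one.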

From Eq. (10), considering that the step-size parameter \beta is sufficiently small and that the recursion of Eq. (5) acts as a lowpass filter, so that we can assume that the expectation of the ratio of random variables is the ratio of their expectations (hypothesis I), we obtain

E[\xi_w(n)] \approx \frac{N}{N-\sqrt{N}} \left( 1 - \frac{\sum_{j=0}^{N-1} E[|w_j(n)|]}{\sqrt{N}\sqrt{\sum_{j=0}^{N-1} E[|w_j(n)|^2]}} \right)

whereas from Eqs. (11) and (12) we obtain

E[\xi(n)] = (1-\lambda)\,E[\xi(n-1)] + \lambda\,E[\xi_w(n)]

and

E[\alpha(n)] = 2E[\xi(n)] - 1.

In order to proceed with the analysis, besides using hypothesis I, we also assume that the expectation of the product of two random variables is the product of their expectations (hypothesis II), thus obtaining, from Eq. (9),

E[g_i(n)] \approx \frac{1 - E[\alpha(n)]}{2N} + \frac{(1 + E[\alpha(n)])\,E[F(|w_i(n)|)]}{2\sum_{j=0}^{N-1} E[F(|w_j(n)|)] + \varepsilon}.

From Eq. (5), we can write the update equation for the i-th component of w(n) as

w_i(n+1) = w_i(n) + \frac{\beta\,g_i(n)\,x(n-i)\,\nu(n)}{\sum_{j=0}^{N-1} g_j(n)\,x^2(n-j) + \delta} + \frac{\beta\,g_i(n)\,x(n-i) \sum_{j=0}^{N-1} (w_{opt,j} - w_j(n))\,x(n-j)}{\sum_{j=0}^{N-1} g_j(n)\,x^2(n-j) + \delta}

where w_{opt,i} is the i-th component of w_{opt}. Using hypotheses I and II, considering a white-noise input of zero mean and variance \sigma_x^2, and assuming uncorrelated input signal and observation noise, we obtain

E[w_i(n+1)] = E[w_i(n)] + \frac{\beta\,E[g_i(n)]\,(w_{opt,i} - E[w_i(n)])\,\sigma_x^2}{\sigma_x^2 \sum_{j=0}^{N-1} E[g_j(n)] + \delta}.   (13)

For Gaussian input signals, the fourth-order moments are E[x^4] = 3\sigma_x^4, and therefore

E[w_i^2(n+1)] \approx E[w_i^2(n)] + \frac{2\beta\,E[g_i(n)]\,(w_{opt,i}\,E[w_i(n)] - E[w_i^2(n)])\,\sigma_x^2}{\sigma_x^2 \sum_{j=0}^{N-1} E[g_j(n)] + \delta}
+ \frac{3\beta^2 E[g_i^2(n)]\,\sigma_x^4\,(w_{opt,i}^2 - 2E[w_i(n)]\,w_{opt,i} + E[w_i^2(n)])}{\left(\sigma_x^2 \sum_{j=0}^{N-1} E[g_j(n)] + \delta\right)^2}
+ \frac{\beta^2 E[g_i^2(n)]\,\sigma_x^4 \sum_{j=0,\,j\neq i}^{N-1} (w_{opt,j}^2 - 2E[w_j(n)]\,w_{opt,j} + E[w_j^2(n)])}{\left(\sigma_x^2 \sum_{j=0}^{N-1} E[g_j(n)] + \delta\right)^2}
+ \frac{\beta^2 \sigma_x^2 \sigma_\nu^2 E[g_i^2(n)]}{\left(\sigma_x^2 \sum_{j=0}^{N-1} E[g_j(n)] + \delta\right)^2}

where we assumed that E[g_i^2(n)] \approx (E[g_i(n)])^2 and that

E\left[\left(\sigma_x^2 \sum_{j=0}^{N-1} g_j(n) + \delta\right)^2\right] \approx \left(\sigma_x^2 \sum_{j=0}^{N-1} E[g_j(n)] + \delta\right)^2.
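As a sanity check, the first-moment recursion of Eq. (13) can be iterated directly; with fixed gains it is a per-coefficient contraction toward w_opt. The gains and dimensions below are illustrative only:

```python
# Iterate Eq. (13):
# E[w_i(n+1)] = E[w_i(n)] + beta*E[g_i]*(w_opt_i - E[w_i])*sx2 / (sx2*sum_j E[g_j] + delta)
N = 8
w_opt = [1.0, -0.5] + [0.0] * (N - 2)
g = [0.5, 0.3] + [0.2 / (N - 2)] * (N - 2)   # fixed illustrative gains, summing to 1
beta, sx2, delta = 0.5, 1.0, 1e-3

Ew = [0.0] * N
D = sx2 * sum(g) + delta                      # common denominator of Eq. (13)
for n in range(500):
    Ew = [ew + beta * gi * (wo - ew) * sx2 / D
          for ew, gi, wo in zip(Ew, g, w_opt)]

max_err = max(abs(ew - wo) for ew, wo in zip(Ew, w_opt))
```

Each component decays as (1 - beta*g_i*sx2/D)^n, so coefficients with larger gains converge faster, which is the proportionate idea in a nutshell.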

Supposing now that the weights w_i(n) have Gaussian distributions, with mean \mu and variance \sigma^2, the probability density function (pdf) of |w_i(n)| has the following form:

f(|w_i(n)|) = \frac{1}{\sqrt{2\pi}\,\sigma} \left[ e^{-\frac{(|w|-\mu)^2}{2\sigma^2}} + e^{-\frac{(|w|+\mu)^2}{2\sigma^2}} \right] U(w_i(n))


where U(\cdot) corresponds to the step function. It then follows that

E[F(|w_i(n)|)] = \int_0^{0.005} \frac{400\,w}{\sqrt{2\pi}\,\sigma} \left[ e^{-\frac{(w-\mu)^2}{2\sigma^2}} + e^{-\frac{(w+\mu)^2}{2\sigma^2}} \right] dw + \int_{0.005}^{\infty} \frac{8.51\,w + 1.96}{\sqrt{2\pi}\,\sigma} \left[ e^{-\frac{(w-\mu)^2}{2\sigma^2}} + e^{-\frac{(w+\mu)^2}{2\sigma^2}} \right] dw.   (14)

Defining the functions

\Phi_0(a, b, \mu, \sigma) = \int_a^b e^{-\frac{(w-\mu)^2}{2\sigma^2}}\,dw = \sigma\sqrt{\frac{\pi}{2}} \left[ \mathrm{erfc}\left(\frac{a-\mu}{\sqrt{2}\,\sigma}\right) - \mathrm{erfc}\left(\frac{b-\mu}{\sqrt{2}\,\sigma}\right) \right]   (15)

and

\Phi_1(a, b, \mu, \sigma) = \int_a^b w\,e^{-\frac{(w-\mu)^2}{2\sigma^2}}\,dw = \sigma^2 \left[ e^{-\frac{(a-\mu)^2}{2\sigma^2}} - e^{-\frac{(b-\mu)^2}{2\sigma^2}} \right] + \mu\sigma\sqrt{\frac{\pi}{2}} \left[ \mathrm{erfc}\left(\frac{a-\mu}{\sqrt{2}\,\sigma}\right) - \mathrm{erfc}\left(\frac{b-\mu}{\sqrt{2}\,\sigma}\right) \right]   (16)

we can write

E[F(|w_i(n)|)] = \frac{1}{\sqrt{2\pi}\,\sigma} \big[ 400\,\Phi_1(0, 0.005, \mu, \sigma) + 400\,\Phi_1(0, 0.005, -\mu, \sigma) + 8.51\,\Phi_1(0.005, \infty, \mu, \sigma) + 8.51\,\Phi_1(0.005, \infty, -\mu, \sigma) + 1.96\,\Phi_0(0.005, \infty, \mu, \sigma) + 1.96\,\Phi_0(0.005, \infty, -\mu, \sigma) \big].   (17)

Finally, the MSE evolution can be estimated from

J(n) = \sigma_\nu^2 + \sigma_x^2 \sum_{i=0}^{N-1} E[z_i^2(n)]   (18)

where z_i(n) = w_{opt,i} - w_i(n) and

E[z_i^2(n)] = w_{opt,i}^2 - 2 w_{opt,i} E[w_i(n)] + E[w_i^2(n)].   (19)

Figure 4 shows the theoretical and experimental MSE evolutions of the IMPNLMS algorithm for white input and for the three channels. Such results show that the analysis presented in this paper can be used to predict the performance of the IMPNLMS algorithm for white input signals. In future work, we plan to extend such theoretical analysis to colored input signals and to the wavelet IMPNLMS algorithm.
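The closed forms of Eqs. (15) and (16) can be verified numerically. The sketch below compares them with a simple midpoint-rule quadrature; the test values of a, b, μ and σ are arbitrary, and the infinite upper limit is truncated where the integrand is negligible:

```python
import math

def phi0(a, b, mu, sigma):
    # Eq. (15): integral of exp(-(w-mu)^2 / (2 sigma^2)) over [a, b].
    s = math.sqrt(2.0) * sigma
    return sigma * math.sqrt(math.pi / 2.0) * (
        math.erfc((a - mu) / s) - math.erfc((b - mu) / s))

def phi1(a, b, mu, sigma):
    # Eq. (16): integral of w * exp(-(w-mu)^2 / (2 sigma^2)) over [a, b].
    s = math.sqrt(2.0) * sigma
    g = lambda t: math.exp(-(t - mu) ** 2 / (2.0 * sigma ** 2))
    return sigma ** 2 * (g(a) - g(b)) + mu * sigma * math.sqrt(math.pi / 2.0) * (
        math.erfc((a - mu) / s) - math.erfc((b - mu) / s))

def midpoint(f, a, b, n=100000):
    # Crude midpoint-rule quadrature used only for the comparison.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

mu, sigma = 0.3, 0.2
a, b = 0.005, 5.0   # the Gaussian tail beyond w = 5 is negligible here
f = lambda w: math.exp(-(w - mu) ** 2 / (2.0 * sigma ** 2))
err0 = abs(phi0(a, b, mu, sigma) - midpoint(f, a, b))
err1 = abs(phi1(a, b, mu, sigma) - midpoint(lambda w: w * f(w), a, b))
```

Both errors should be at the quadrature's accuracy level, confirming the erfc-based expressions used in Eq. (17).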

IV. CONCLUSION

In this paper we described some of the most important algorithms developed in recent years for improving the convergence of adaptive filters when modeling sparse impulse responses. The performances of the described techniques, known as proportionate-type NLMS algorithms, were illustrated through computer simulations in the identification of the digital network channels of ITU-T Recommendation G.168. It was shown that the fast convergence of the proportionate-type algorithms is limited to white input signals. In order to extend their performance advantages to colored input signals, a wavelet-domain algorithm, whose step-size normalization takes into account the value of each coefficient, as well as the input signal power in the corresponding frequency band, was described. Finally, a theoretical convergence analysis of the improved μ-law PNLMS algorithm was presented. Computer simulations showed that the theoretical results are in good agreement with the experimental ones.

ACKNOWLEDGMENT

The authors would like to thank CNPq/Brazil for partially supporting this work.

REFERENCES

[1] D. L. Duttweiler, "Proportionate normalized least mean square adaptation in echo cancelers," IEEE Trans. Speech Audio Process., vol. 8, no. 5, pp. 508-518, September 2000.
[2] J. Benesty and S. L. Gay, "An improved PNLMS algorithm," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2002, pp. 1881-1884.
[3] H. Deng and M. Doroslovacki, "Proportionate adaptive algorithms for network echo cancellation," IEEE Trans. Signal Process., vol. 54, no. 5, pp. 1794-1803, May 2006.
[4] L. Liu, M. Fukumoto, and S. Saiki, "An improved mu-law proportionate NLMS algorithm," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2008, pp. 3797-3800.
[5] H. Deng and M. Doroslovacki, "Wavelet-based MPNLMS adaptive algorithm for network echo cancellation," EURASIP Journal on Audio, Speech, and Music Processing, 2007.
[6] M. Petraglia and G. Barboza, "Improved PNLMS algorithm employing wavelet transform and sparse filters," in Proc. 16th European Signal Process. Conf., 2008.
[7] P. O. Hoyer, "Non-negative matrix factorization with sparseness constraints," Journal of Machine Learning Res., vol. 5, pp. 1457-1469, November 2004.
[8] Digital Network Echo Cancellers, ITU-T Recommendation G.168, 2004.
[9] M. R. Petraglia and J. C. B. Torres, "Performance analysis of adaptive filter structure employing wavelet and sparse subfilters," IEE Proc. - Vis. Image Signal Process., vol. 149, no. 2, pp. 115-119, April 2002.
[10] K. T. Wagner and M. I. Doroslovacki, "Towards analytical convergence analysis of proportionate-type NLMS algorithms," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2008, pp. 3825-3827.


Fig. 1. MSE evolution of the IMPNLMS, PNLMS, MPNLMS, IPNLMS and NLMS algorithms for white noise input and channels (a) gm1, (b) gm4 and (c) gm4+noise.

Fig. 3. MSE evolution for the WIMPNLMS algorithm with Bior4.4 wavelet and M = 3, and for the IMPNLMS and NLMS algorithms, for colored noise input and channels (a) gm1, (b) gm4 and (c) gm4+noise.

Fig. 4. Theoretical and experimental MSE evolutions for the IMPNLMS algorithm for white noise input and channels (a) gm1, (b) gm4 and (c) gm4+noise.
