
Nonlinear systems identification using dynamic multi-time scale neural networks


Neurocomputing 74 (2011) 3428–3439

Contents lists available at ScienceDirect

Neurocomputing

0925-2312/$ - see front matter © 2011 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2011.06.007
Corresponding author. E-mail addresses: [email protected], [email protected] (W.-F. Xie).

journal homepage: www.elsevier.com/locate/neucom

Nonlinear systems identification using dynamic multi-time scaleneural networks

Xuan Han, Wen-Fang Xie n, Zhijun Fu, Weidong Luo

Concordia University, Mechanical & Industrial Engineering, 1455 De Maisonneuve W., Montreal, QC, Canada H3G 1M8

Article info

Article history:

Received 22 November 2010

Received in revised form

5 May 2011

Accepted 5 June 2011

Communicated by J. Zhang

Available online 25 June 2011

Keywords:

Dynamic multi-time scale neural networks

Nonlinear systems

On-line identification

Neural network identifiers


Abstract

In this paper, two Neural Network (NN) identifiers are proposed for nonlinear systems identification via dynamic neural networks with different time scales including both fast and slow phenomena. The first NN identifier uses the output signals from the actual system for the system identification. The on-line update laws for the dynamic neural networks have been developed using the Lyapunov function and singularly perturbed techniques. In the second NN identifier, all the output signals from the nonlinear system are replaced with the state variables of the neural networks. An on-line identification algorithm with a dead-zone function is proposed to improve the nonlinear system identification performance. Compared with other dynamic neural network identification methods, the proposed identification methods exhibit improved identification performance. Three examples are given to demonstrate the effectiveness of the theoretical results.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Numerous systems in industrial fields exhibit nonlinearities and uncertainties and can be considered partially or totally black-box. Dynamic neural networks have been applied to system identification and control of such systems for many years; owing to their fast adaptation and superb learning capability, they have clear advantages over traditional neural network methods [2,4–6,21].

A wide class of nonlinear physical systems contains slow and fast dynamic processes that occur on different time scales. Recent research results show that neural networks are very effective for modeling complex nonlinear systems with different time scales when the model information is incomplete, or even when the plant is treated as a black-box [2,3,25]. Dynamic neural networks with different time scales can model the dynamics of the short-term memory of neural activity levels and the long-term memory of unsupervised synaptic modifications [1]. Different methods have been applied in this domain. The stability of equilibria of a competitive neural network with short- and long-term memory was analyzed in [10] by a quadratic-type Lyapunov function. Refs. [9,11,12] presented new methods for analyzing the dynamics of a system with different time scales based on the theory of flow invariance. The K-monotone system theory was used to analyze the dynamics of a competitive neural system with different time scales in [13].


The past decade has witnessed great activity in stability analysis, identification and control with continuous-time dynamic multi-time scale neural networks [8,14,15,17,18,20]. In [8], Sandoval et al. developed new stability conditions using the Lyapunov function and the singularly perturbed technique. In [7], a passivity-based approach was used to derive stability conditions for dynamic neural networks with different time scales. The passivity approach was used to prove that a gradient descent algorithm for weight adjustment is stable and robust to any bounded uncertainties, including the optimal network approximation error [14]. Several papers proposed adaptive nonlinear identification and trajectory [18] or velocity tracking [15] via dynamic neural networks without considering multiple time scales.

Although dynamic neural networks have been used for nonlinear system identification and control in [14,15,18,24], two issues are worth mentioning with regard to this research. First, in the dynamic neural network model [18], a strong assumption was imposed on the linear part matrices A and B, namely that they are known Hurwitz matrices, which is sometimes unrealistic for black-box nonlinear system identification. Second, adaptive control and identification for dynamic systems with different time scales via multi-time scale neural networks has not yet been established in the literature.

In this paper, we consider a nonlinear system with multiple time scales and model it via a continuous-time dynamic neural network with different time scales including both fast and slow phenomena. Two NN identifiers are proposed for nonlinear systems identification. In the first NN identifier, the structure of the neural identifier depends on the output signals from the actual system. This may risk the stability of the neural network because it is tied to that of the real system. In order to get rid of this flaw and also simplify the identification scheme, we replace all the


output signals from the nonlinear system with the state variables of the neural networks in the construction of the second NN identifier. For both NN identifiers, the Lyapunov function and singularly perturbed techniques are used to develop the on-line update laws for both the dynamic neural network weights and the linear part matrices A and B. We also determine new stability conditions for the identification error by means of a Lyapunov-like analysis, inspired by [14,15]. To the best of our knowledge, this is the first time that identification of the linear part matrices is proposed for a neural network model with multiple time scales, providing flexibility and accuracy in nonlinear system identification. In addition, new dead-zone indicators are introduced in the on-line update laws of the second NN identifier to prevent the weights of the neural network from drifting when modeling error is present. Compared with other dynamic neural network identification methods, the two proposed NN identifiers with on-line update laws exhibit improved identification performance. Three examples are also given to demonstrate the effectiveness of the theoretical results.

The paper is organized as follows. Section 2 discusses the first structure of the dynamic neural networks with different time scales and the identification algorithm. Section 3 introduces the improved NN identifier with updating laws. The simulation results and the conclusion of this paper are presented in Sections 4 and 5, respectively.

2. Identification algorithm

In this section we consider the problem of identifying a class of singularly perturbed [16] nonlinear systems with two different time scales described by

$$\dot{x} = f_x(x, y, u, t)$$
$$\varepsilon \dot{y} = f_y(x, y, u, t) \tag{1}$$

where $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$ are the slow and fast state variables, respectively, $u \in \mathbb{R}^p$ is the control input vector and $\varepsilon > 0$ is a small parameter.

In order to identify the nonlinear dynamic system (1), we employ the dynamic neural networks with two time scales:

$$\dot{x}_{nn} = A x_{nn} + W_1 \sigma_1(x,y) + W_2 \phi_1(x,y) U$$
$$\varepsilon \dot{y}_{nn} = B y_{nn} + W_3 \sigma_2(x,y) + W_4 \phi_2(x,y) U \tag{2}$$

where $x_{nn} \in \mathbb{R}^n$ and $y_{nn} \in \mathbb{R}^n$ are the slow and fast state variables of the neural networks, respectively, $W_{1,2} \in \mathbb{R}^{n \times 2n}$ and $W_{3,4} \in \mathbb{R}^{n \times 2n}$ are the weights in the output layers, $\sigma_k(x_{nn}, y_{nn}) = [\sigma_k(x_1) \ldots \sigma_k(x_n), \sigma_k(y_1) \ldots \sigma_k(y_n)]^T \in \mathbb{R}^{2n}$ $(k = 1, 2)$, $\phi_k(x_{nn}, y_{nn}) = [\phi_k(x_1) \ldots \phi_k(x_n), \phi_k(y_1) \ldots \phi_k(y_n)]^T \in \mathbb{R}^{2n}$ $(k = 1, 2)$, $U = [u_1, u_2, \ldots, u_p, 0, \ldots, 0]^T \in \mathbb{R}^{2n}$ is the control input vector, $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times n}$ are the unknown matrices of the linear part of the neural networks, and the parameter $\varepsilon$ is a small positive number, which is assumed to be known. When $\varepsilon = 1$, the dynamic neural network (2) becomes a standard one [16]. The structure of the NN identifier is shown in Fig. 1.

Fig. 1. Structure of the first NN identifier.

Generally speaking, when the dynamic neural network (2) does not match the given nonlinear system (1) exactly, the nonlinear system can be represented as

$$\dot{x} = A^* x + W_1^* \sigma_1(x,y) + W_2^* \phi_1(x,y) U - \Delta f_x$$
$$\varepsilon \dot{y} = B^* y + W_3^* \sigma_2(x,y) + W_4^* \phi_2(x,y) U - \Delta f_y \tag{3}$$

where $W_1^*$, $W_2^*$, $W_3^*$, $W_4^*$ are unknown nominal constant matrices, the vector functions $\Delta f_x$ and $\Delta f_y$ can be regarded as modeling errors and disturbances, and $A^*$ and $B^*$ are the unknown nominal constant Hurwitz matrices.

We assume the system states are completely measurable, and the number of state variables of the plant is equal to that of the neural networks (2). The identification errors are defined by

$$\Delta x = x_{nn} - x, \qquad \Delta y = y_{nn} - y$$

From (2) and (3), we can obtain the error dynamics equations

$$\Delta\dot{x} = A^*\Delta x + \tilde{A}x_{nn} + \tilde{W}_1\sigma_1(x,y) + \tilde{W}_2\phi_1(x,y)U + \Delta f_x$$
$$\varepsilon\Delta\dot{y} = B^*\Delta y + \tilde{B}y_{nn} + \tilde{W}_3\sigma_2(x,y) + \tilde{W}_4\phi_2(x,y)U + \Delta f_y \tag{4}$$

where $\tilde{W}_1 = W_1 - W_1^*$, $\tilde{W}_2 = W_2 - W_2^*$, $\tilde{W}_3 = W_3 - W_3^*$, $\tilde{W}_4 = W_4 - W_4^*$, $\tilde{A} = A - A^*$ and $\tilde{B} = B - B^*$.

The Lyapunov synthesis method is used to derive the stable adaptive laws. Consider the Lyapunov function candidate:

$$V_{1I} = V_{1x} + V_{1y}$$
$$V_{1x} = \Delta x^T P_x \Delta x + \mathrm{tr}\{\tilde{W}_1^T P_x \tilde{W}_1\} + \mathrm{tr}\{\tilde{W}_2^T P_x \tilde{W}_2\} + \mathrm{tr}\{\tilde{A}^T P_x \tilde{A}\}$$
$$V_{1y} = \Delta y^T P_y \Delta y + \mathrm{tr}\{\tilde{W}_3^T P_y \tilde{W}_3\} + \mathrm{tr}\{\tilde{W}_4^T P_y \tilde{W}_4\} + \mathrm{tr}\{\tilde{B}^T P_y \tilde{B}\} \tag{5}$$

where $P_x$ and $P_y$ are positive definite matrices.

Since $A^*$ and $B^*$ are nominal constant Hurwitz matrices, for any positive definite matrices $Q_x$ and $Q_y$ there exist matrices $P_x$ and $P_y$ which satisfy the following Lyapunov equations [27]:

$$A^{*T} P_x + P_x A^* = -Q_x$$
$$B^{*T} P_y + P_y B^* = -Q_y \tag{6}$$
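For a concrete sense of (6), the sketch below solves a small Lyapunov equation numerically with plain NumPy via Kronecker vectorization; the example matrices $A^*$ and $Q_x$ are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def solve_lyapunov(A_star, Q):
    """Solve A*^T P + P A* = -Q for P, using column-major vectorization:
    vec(A^T P) = (I kron A^T) vec(P) and vec(P A) = (A^T kron I) vec(P)."""
    n = A_star.shape[0]
    M = np.kron(np.eye(n), A_star.T) + np.kron(A_star.T, np.eye(n))
    P = np.linalg.solve(M, (-Q).flatten(order="F")).reshape(n, n, order="F")
    return 0.5 * (P + P.T)  # symmetrize against round-off

A_star = np.array([[-5.0, 0.0], [0.0, -10.0]])  # Hurwitz, as the theory assumes
Qx = np.eye(2)                                   # any positive definite Qx
Px = solve_lyapunov(A_star, Qx)
residual = A_star.T @ Px + Px @ A_star + Qx      # should vanish
```

Because $A^*$ is Hurwitz, the resulting $P_x$ is positive definite, which is what the Lyapunov candidate (5) requires.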

Assumption 1. For the above matrices $P_x$, $P_y$, $Q_x$ and $Q_y$, there exist positive definite matrices $\Lambda_x$ and $\Lambda_y$ such that
$$\lambda_{\min}(Q_x) \ge \lambda_{\max}(P_x \Lambda_x P_x), \qquad \lambda_{\min}(Q_y) \ge \lambda_{\max}(P_y \Lambda_y P_y)$$

Differentiating (5) and using (4) yield

$$\dot{V}_{1x} = -\Delta x^T Q_x \Delta x + 2\Delta x^T P_x \tilde{A} x_{nn} + 2\Delta x^T P_x \tilde{W}_1 \sigma_1(x,y) + 2\Delta x^T P_x \tilde{W}_2 \phi_1(x,y) U + 2\Delta x^T P_x \Delta f_x + 2\,\mathrm{tr}\{\dot{\tilde{A}}^T P_x \tilde{A}\} + 2\,\mathrm{tr}\{\dot{\tilde{W}}_1^T P_x \tilde{W}_1\} + 2\,\mathrm{tr}\{\dot{\tilde{W}}_2^T P_x \tilde{W}_2\}$$

$$\dot{V}_{1y} = -(1/\varepsilon)\Delta y^T Q_y \Delta y + (2/\varepsilon)\Delta y^T P_y \tilde{B} y_{nn} + (2/\varepsilon)\Delta y^T P_y \tilde{W}_3 \sigma_2(x,y) + (2/\varepsilon)\Delta y^T P_y \tilde{W}_4 \phi_2(x,y) U + (2/\varepsilon)\Delta y^T P_y \Delta f_y + 2\,\mathrm{tr}\{\dot{\tilde{B}}^T P_y \tilde{B}\} + 2\,\mathrm{tr}\{\dot{\tilde{W}}_3^T P_y \tilde{W}_3\} + 2\,\mathrm{tr}\{\dot{\tilde{W}}_4^T P_y \tilde{W}_4\} \tag{7}$$

Fig. 2. Identification scheme of the first NN identifier.

Theorem 1. Consider the identification model (3) for system (1). If the model errors and disturbances $\Delta f_x$ and $\Delta f_y$ are bounded and Assumption 1 holds, the updating laws

$$\dot{A} = -\Delta x\, x_{nn}^T$$
$$\dot{W}_1 = -\Delta x\, \sigma_1^T(x,y)$$
$$\dot{W}_2 = -\Delta x\, U^T \phi_1^T(x,y)$$
$$\dot{B} = -(1/\varepsilon)\Delta y\, y_{nn}^T$$
$$\dot{W}_3 = -(1/\varepsilon)\Delta y\, \sigma_2^T(x,y)$$
$$\dot{W}_4 = -(1/\varepsilon)\Delta y\, U^T \phi_2^T(x,y) \tag{8}$$

can guarantee the following stability properties:

$$\Delta x, \Delta y \in L_\infty, \qquad W_{1,2,3,4} \in L_\infty, \qquad A, B \in L_\infty$$

Proof. Since the neural network's weights are adjusted as in (8), the derivatives of the neural network weights and matrices satisfy $\dot{W}_1 = \dot{\tilde{W}}_1$, $\dot{W}_2 = \dot{\tilde{W}}_2$, $\dot{W}_3 = \dot{\tilde{W}}_3$, $\dot{W}_4 = \dot{\tilde{W}}_4$, $\dot{A} = \dot{\tilde{A}}$, $\dot{B} = \dot{\tilde{B}}$. From (7), $\dot{V}_{1x}$ and $\dot{V}_{1y}$ become

$$\dot{V}_{1x} = -\Delta x^T Q_x \Delta x + 2\Delta x^T P_x \Delta f_x$$
$$\dot{V}_{1y} = -(1/\varepsilon)\Delta y^T Q_y \Delta y + (2/\varepsilon)\Delta y^T P_y \Delta f_y \tag{9}$$

Since $\Delta x^T P_x \Delta f_x$ is a scalar, using the following matrix inequality [28]:
$$X^T Y + (X^T Y)^T \le X^T \Lambda^{-1} X + Y^T \Lambda Y \tag{10}$$
where $X, Y \in \mathbb{R}^{j \times k}$ are any matrices and $\Lambda = \Lambda^T \in \mathbb{R}^{j \times j}$ is any positive definite matrix, we obtain:

$$2\Delta x^T P_x \Delta f_x \le \Delta x^T P_x \Lambda_x P_x \Delta x + \Delta f_x^T \Lambda_x^{-1} \Delta f_x$$
$$(2/\varepsilon)\Delta y^T P_y \Delta f_y \le (1/\varepsilon)\Delta y^T P_y \Lambda_y P_y \Delta y + (1/\varepsilon)\Delta f_y^T \Lambda_y^{-1} \Delta f_y$$
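The bound (10) holds for arbitrary matrices and any positive definite $\Lambda$, since the difference of the two sides is the Gram matrix of $\Lambda^{-1/2}X - \Lambda^{1/2}Y$. A quick numerical spot-check, with arbitrary random matrices chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
j, k = 4, 3
X = rng.standard_normal((j, k))
Y = rng.standard_normal((j, k))
M = rng.standard_normal((j, j))
Lam = M @ M.T + np.eye(j)  # Lambda symmetric positive definite

lhs = X.T @ Y + (X.T @ Y).T
rhs = X.T @ np.linalg.inv(Lam) @ X + Y.T @ Lam @ Y
# rhs - lhs equals (Lam^{-1/2}X - Lam^{1/2}Y)^T (Lam^{-1/2}X - Lam^{1/2}Y),
# a Gram matrix, so all its eigenvalues are nonnegative.
gap_eigs = np.linalg.eigvalsh(rhs - lhs)
```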

Using Assumption 1, Eq. (9) can be represented as

$$\dot{V}_{1x} = -\Delta x^T Q_x \Delta x + 2\Delta x^T P_x \Delta f_x \le -\lambda_{\min}(Q_x)\|\Delta x\|^2 + \Delta x^T P_x \Lambda_x P_x \Delta x + \Delta f_x^T \Lambda_x^{-1} \Delta f_x \le -\alpha_x(\|\Delta x\|) + \beta_x(\|\Delta f_x\|)$$

$$\dot{V}_{1y} = -(1/\varepsilon)\Delta y^T Q_y \Delta y + (2/\varepsilon)\Delta y^T P_y \Delta f_y \le -(1/\varepsilon)\lambda_{\min}(Q_y)\|\Delta y\|^2 + (1/\varepsilon)\Delta y^T P_y \Lambda_y P_y \Delta y + (1/\varepsilon)\Delta f_y^T \Lambda_y^{-1} \Delta f_y \le -\alpha_y(\|\Delta y\|) + \beta_y(\|\Delta f_y\|)$$

where $\alpha_x(\|\Delta x\|) = (\lambda_{\min}(Q_x) - \lambda_{\max}(P_x \Lambda_x P_x))\|\Delta x\|^2$, $\beta_x(\|\Delta f_x\|) = \lambda_{\max}(\Lambda_x^{-1})\|\Delta f_x\|^2$, $\alpha_y(\|\Delta y\|) = (1/\varepsilon)(\lambda_{\min}(Q_y) - \lambda_{\max}(P_y \Lambda_y P_y))\|\Delta y\|^2$ and $\beta_y(\|\Delta f_y\|) = (1/\varepsilon)\lambda_{\max}(\Lambda_y^{-1})\|\Delta f_y\|^2$.

Since $\alpha_x$, $\beta_x$, $\alpha_y$, $\beta_y$ are $\mathcal{K}_\infty$ functions, $V_{1x}$ and $V_{1y}$ are ISS-Lyapunov functions. Using Theorem 1 in [20], the dynamics of the identification error (4) is input-to-state stable, which implies $\Delta x, \Delta y, W_{1,2,3,4}, A, B \in L_\infty$. Such input-to-state stability means that the behavior of the neural network identification remains bounded when its inputs are bounded [14].

Remark 1. When $\varepsilon$ is very close to zero, both $W_3$ and $W_4$ exhibit a high-gain behavior, which may destabilize the identification algorithm. The Lyapunov equation (6) can be multiplied by any positive constant $a$, i.e., $B^{*T}(aP_y) + (aP_y)B^* = -aQ_y$, so the learning gains of $W_3$ and $W_4$ become $(1/\varepsilon)aP_y$. We can therefore choose $a$ very small, close to zero, so that the learning gain $(1/\varepsilon)aP_y$ does not become too large.

The structure of the identification scheme is illustrated in Fig. 2.
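To make the scheme of Fig. 2 concrete, the sketch below runs the first identifier on a scalar two-time-scale toy plant with forward-Euler integration and the update laws (8). The plant, inputs, step size, horizon and initial guesses are all our assumptions for illustration, and $\phi$ is taken as the identity so that the term $W_2\phi_1 U$ in (2) reduces to $W_2 U$.

```python
import numpy as np

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

n, eps, dt, steps = 1, 0.2, 1e-3, 20000
x, y = np.array([-1.0]), np.array([1.0])      # plant states (slow, fast)
xnn, ynn = np.zeros(n), np.zeros(n)           # identifier states, eq. (2)
A, B = -np.eye(n), -np.eye(n)                 # rough Hurwitz initial guesses
W1, W2, W3, W4 = (np.zeros((n, 2 * n)) for _ in range(4))

for k in range(steps):
    t = k * dt
    u1, u2 = 8.0 * np.sin(0.05 * t), 4.0 * np.sin(0.5 * t)
    U = np.array([u1, u2])                    # input padded into R^{2n}
    s = sigmoid(np.concatenate([x, y]))       # sigma evaluated on plant signals
    # identifier dynamics (2) and a toy stand-in plant for (1)
    dxnn = A @ xnn + W1 @ s + W2 @ U
    dynn = (B @ ynn + W3 @ s + W4 @ U) / eps
    dx = -5.0 * x + 3.0 * np.sign(y) + u1
    dy = (-10.0 * y + 2.0 * np.sign(y) + u2) / eps
    ex, ey = xnn - x, ynn - y                 # identification errors
    # update laws (8), Euler-discretized
    A += dt * (-np.outer(ex, xnn)); B += dt * (-np.outer(ey, ynn) / eps)
    W1 += dt * (-np.outer(ex, s)); W3 += dt * (-np.outer(ey, s) / eps)
    W2 += dt * (-np.outer(ex, U)); W4 += dt * (-np.outer(ey, U) / eps)
    x, y = x + dt * dx, y + dt * dy
    xnn, ynn = xnn + dt * dxnn, ynn + dt * dynn
```

Consistent with the input-to-state stability argument of Theorem 1, the identification errors remain bounded along the run.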

3. Improved system identification

In the proposed identification scheme, we use the signals from the actual system in the neural networks to identify the nonlinear system (1). This may simplify the identification and control procedure, but the updating laws depend on the actual signals of the nonlinear system. This may also risk the stability of the neural network because it is tied to the output of the real system. In order to overcome this flaw and also simplify the identification scheme, we replace all the output signals from the nonlinear system with the state variables of the neural networks in the construction of the NN identifier. In the on-line update laws, we introduce new dead-zone indicators to prevent the weights of the neural network from drifting when modeling error is present for the dynamic multi-time scale neural network.

Consider the nonlinear system (1). In order to identify the system, we employ the dynamic neural networks with two time scales:

$$\dot{x}_{nn} = A x_{nn} + W_1 \sigma_1(V_1[x_{nn}, y_{nn}]^T) + W_2 \phi_1(V_3[x_{nn}, y_{nn}]^T) g(U)$$
$$\varepsilon \dot{y}_{nn} = B y_{nn} + W_3 \sigma_2(V_2[x_{nn}, y_{nn}]^T) + W_4 \phi_2(V_4[x_{nn}, y_{nn}]^T) g(U) \tag{11}$$

where $x_{nn} \in \mathbb{R}^n$ and $y_{nn} \in \mathbb{R}^n$ are the slow and fast state variables of the neural networks, $W_{1,2} \in \mathbb{R}^{n \times 2n}$ and $W_{3,4} \in \mathbb{R}^{n \times 2n}$ are the weights in the output layers, $V_{1,2} \in \mathbb{R}^{2n \times 2n}$ and $V_{3,4} \in \mathbb{R}^{2n \times 2n}$ are the weights in the hidden layer, $\sigma_k(x_{nn}, y_{nn}) = [\sigma_k(x_{nn\_1}) \ldots \sigma_k(x_{nn\_n}), \sigma_k(y_{nn\_1}), \ldots, \sigma_k(y_{nn\_n})]^T \in \mathbb{R}^{2n}$ $(k = 1, 2)$, $\phi_k(x_{nn}, y_{nn}) = \mathrm{diag}[\phi_k(x_{nn\_1}) \ldots \phi_k(x_{nn\_n}), \phi_k(y_{nn\_1}), \ldots, \phi_k(y_{nn\_n})] \in \mathbb{R}^{2n \times 2n}$ $(k = 1, 2)$ are diagonal matrices, $U = [u_1, u_2, \ldots, u_i, 0, \ldots, 0]^T \in \mathbb{R}^{2n}$ is the control input vector, $g(U): \mathbb{R}^n \to \mathbb{R}^n$ is a differentiable input–output mapping function, $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times n}$ are the unknown matrices of the linear part of the neural networks, and the parameter $\varepsilon$ is an unknown small positive number. The activation functions $\sigma_k(\cdot)$ and $\phi_k(\cdot)$ are still kept as sigmoid functions.

In order to simplify the analysis, we consider the simplest structure, which means
$$p = q = n, \qquad V_1 = V_2 = I, \qquad \phi_1(\cdot) = \phi_2(\cdot) = I$$
so that
$$\dot{x}_{nn} = A x_{nn} + W_1 \sigma_1(x_{nn}, y_{nn}) + W_2 g(U)$$
$$\varepsilon \dot{y}_{nn} = B y_{nn} + W_3 \sigma_2(x_{nn}, y_{nn}) + W_4 g(U) \tag{12}$$

where $x_{nn} \in \mathbb{R}^n$ and $y_{nn} \in \mathbb{R}^n$ are the slow and fast state variables of the neural networks, $W_{1,2} \in \mathbb{R}^{n \times 2n}$ and $W_{3,4} \in \mathbb{R}^{n \times 2n}$ are the weights in the output layers, $\sigma_k(x_{nn}, y_{nn}) = [\sigma_k(x_{nn\_1}) \ldots \sigma_k(x_{nn\_n}), \sigma_k(y_{nn\_1}) \ldots \sigma_k(y_{nn\_n})]^T \in \mathbb{R}^{2n}$ $(k = 1, 2)$, $g(U): \mathbb{R}^m \to \mathbb{R}^n$ is a differentiable input–output mapping function, $U = [u_1, u_2, \ldots, u_p, 0, \ldots, 0]^T \in \mathbb{R}^{2n}$ is the control input vector, and $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times n}$ are unknown matrices of the linear part of the neural networks. The parameter $\varepsilon$ is a known small positive number. When $\varepsilon = 1$, the dynamic neural network (12) becomes a standard one [16]. The structure of the improved NN identifier is shown in Fig. 3.

Fig. 3. Structure of the second NN identifier.

Generally speaking, when the dynamic neural network (12) does not match the given nonlinear system (1) exactly, the nonlinear system can be represented as

$$\dot{x} = A^* x + W_1^* \sigma_1(x,y) + W_2^* g(U) - \Delta f_x$$
$$\varepsilon \dot{y} = B^* y + W_3^* \sigma_2(x,y) + W_4^* g(U) - \Delta f_y \tag{13}$$

where $W_1^*$, $W_2^*$, $W_3^*$, $W_4^*$ are unknown nominal constant matrices, the vector functions $\Delta f_x$ and $\Delta f_y$ can be regarded as modeling errors and disturbances, which are assumed to be bounded as $\|\Delta f_x\| \le \bar{\Delta f}_x$, $\|\Delta f_y\| \le \bar{\Delta f}_y$, and $A^*$ and $B^*$ are unknown nominal constant Hurwitz matrices.

Assumption 2. The nominal values $W_1^*$, $W_2^*$, $W_3^*$, $W_4^*$ are bounded as
$$W_1^* \Lambda_1^{-1} W_1^{*T} \le \bar{W}_1, \qquad W_3^* \Lambda_3^{-1} W_3^{*T} \le \bar{W}_3$$
$$W_2^* \Lambda_2^{-1} W_2^{*T} \le \bar{W}_2, \qquad W_4^* \Lambda_4^{-1} W_4^{*T} \le \bar{W}_4 \tag{14}$$
where $\Lambda_1^{-1}$, $\Lambda_2^{-1}$, $\Lambda_3^{-1}$, $\Lambda_4^{-1}$ are any positive definite symmetric matrices and $\bar{W}_1$, $\bar{W}_2$, $\bar{W}_3$, $\bar{W}_4$ are a priori known matrix bounds.

As assumed in Section 2, the state variables of system (1) are completely measurable, and the number of state variables of the plant equals that of the neural networks (12). The identification errors are defined as
$$\Delta x = x - x_{nn}, \qquad \Delta y = y - y_{nn} \tag{15}$$

From (12) and (13), the error dynamic equations become

$$\Delta\dot{x} = A^*\Delta x + \tilde{A}x_{nn} + W_1^*\tilde{\sigma}_1 + \tilde{W}_1\sigma_1(x_{nn}, y_{nn}) + \tilde{W}_2 g(U) + \Delta f_x$$
$$\varepsilon\Delta\dot{y} = B^*\Delta y + \tilde{B}y_{nn} + W_3^*\tilde{\sigma}_2 + \tilde{W}_3\sigma_2(x_{nn}, y_{nn}) + \tilde{W}_4 g(U) + \Delta f_y \tag{16}$$

where $\tilde{W}_1 = W_1^* - W_1$, $\tilde{W}_2 = W_2^* - W_2$, $\tilde{W}_3 = W_3^* - W_3$, $\tilde{W}_4 = W_4^* - W_4$, $\tilde{A} = A^* - A$ and $\tilde{B} = B^* - B$.

Assumption 3. The difference of the activation function $\sigma_k(\cdot)$, namely $\tilde{\sigma}_k = \sigma_k(x,y) - \sigma_k(x_{nn}, y_{nn})$, satisfies the generalized Lipschitz condition

$$\tilde{\sigma}_1^T \Lambda_1 \tilde{\sigma}_1 < \begin{bmatrix}\Delta x \\ \Delta y\end{bmatrix}^T \begin{bmatrix} D_1 & 0 \\ 0 & D_1 \end{bmatrix} \begin{bmatrix}\Delta x \\ \Delta y\end{bmatrix} = \Delta x^T D_1 \Delta x + \Delta y^T D_1 \Delta y$$
$$\tilde{\sigma}_2^T \Lambda_3 \tilde{\sigma}_2 < \begin{bmatrix}\Delta x \\ \Delta y\end{bmatrix}^T \begin{bmatrix} D_2 & 0 \\ 0 & D_2 \end{bmatrix} \begin{bmatrix}\Delta x \\ \Delta y\end{bmatrix} = \Delta x^T D_2 \Delta x + \Delta y^T D_2 \Delta y \tag{17}$$

where $\sigma_k(x,y) = [\sigma_k(x_1), \ldots, \sigma_k(x_n), \sigma_k(y_1), \ldots, \sigma_k(y_n)]^T \in \mathbb{R}^{2n}$ $(k = 1, 2)$, and $D_1 = D_1^T > 0$ and $D_2 = D_2^T > 0$ are known normalizing matrices.

Lemma 1 ([12]). Let $A \in \mathbb{R}^{n \times n}$ be a Hurwitz matrix and $R, Q \in \mathbb{R}^{n \times n}$ with $R = R^T > 0$, $Q = Q^T > 0$. If $(A, R^{1/2})$ is controllable, $(A, Q^{1/2})$ is observable, and
$$A^T R^{-1} A - Q \ge \tfrac{1}{4}(A^T R^{-1} - R^{-1} A) R (A^T R^{-1} - R^{-1} A)^T$$
is satisfied, then the algebraic Riccati equation $A^T X + X A + X R X + Q = 0$ has a unique positive definite solution $X = X^T > 0$.

Define $R_x = \bar{W}_1$, $R_y = \bar{W}_3$, $Q_x = D_1 + (1/\varepsilon)D_2 + Q_{xo}$ and $Q_y = D_2 + \varepsilon D_1 + Q_{yo}$. If one can select proper $Q_{xo}$ and $Q_{yo}$ so that $Q_x$ and $Q_y$ satisfy the conditions in Lemma 1, there exist matrices $P_x$ and $P_y$ satisfying the following equations:
$$A^{*T} P_x + P_x A^* + P_x R_x P_x + Q_x = 0$$
$$B^{*T} P_y + P_y B^* + P_y R_y P_y + Q_y = 0 \tag{18}$$

Define
$$R_x = \bar{W}_1 + \Lambda_2^{-1}, \qquad Q_x = D_1 + (1/\varepsilon)(S_y/S_x)D_2 + Q_{xo}$$
$$R_y = \bar{W}_3 + \Lambda_4^{-1}, \qquad Q_y = D_2 + \varepsilon(S_x/S_y)D_1 + Q_{yo} \tag{21}$$

where the functions $S_x$ and $S_y$ are defined as
$$S_x = \left[1 - \frac{H_x}{\|P_x^{1/2}\Delta x\|}\right]_+, \qquad S_y = \left[1 - \frac{H_y}{\|P_y^{1/2}\Delta y\|}\right]_+ \tag{22}$$

where $[\cdot]_+ = \max\{\cdot, 0\}$, and

$$H_x = \sqrt{\frac{\lambda_{\max}(P_x)}{\lambda_{\min}(Q_{xo})}\left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right)}$$
$$H_y = \sqrt{\frac{\lambda_{\max}(P_y)}{\lambda_{\min}(Q_{yo})}\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right)} \tag{23}$$
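The dead-zone indicators (22) can be evaluated without forming $P^{1/2}$ explicitly, since $\|P^{1/2}\Delta\|^2 = \Delta^T P \Delta$. A minimal sketch (the function name is ours):

```python
import numpy as np

def dead_zone(P, delta, H):
    """S = [1 - H / ||P^{1/2} delta||]_+ as in (22); returns 0 inside the dead zone."""
    r = float(np.sqrt(delta @ P @ delta))  # equals ||P^{1/2} delta||
    return max(1.0 - H / r, 0.0) if r > 0.0 else 0.0

P = np.eye(2)
S_inside = dead_zone(P, np.array([3.0, 4.0]), 10.0)   # ||delta|| = 5 <= H, so S = 0
S_outside = dead_zone(P, np.array([3.0, 4.0]), 2.5)   # ||delta|| = 5 > H, so S = 0.5
```

Inside the dead zone the indicator vanishes, which is exactly what freezes adaptation in the update laws (24) below.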

Remark 2. If the positive function $V(x)$, $x \in \mathbb{R}^n$, is defined as $V(x) := [\|x - x^*\| - \mu]_+^2$, where $[\cdot]_+^2 := ([\cdot]_+)^2$ and $[\cdot]_+ = \max\{\cdot, 0\}$, then $V(x)$ is differentiable and its gradient is
$$\nabla V(x) = 2\left[\|x - x^*\| - \mu\right]_+ \frac{x - x^*}{\|x - x^*\|}$$
with Lipschitz constant equal to one [26].

Theorem 2. Consider the nonlinear system (1) and the identification model (12). The updating laws for the dynamic neural networks are developed as follows:
$$\dot{A} = -S_x k_A \Delta x\, x_{nn}^T$$
$$\dot{W}_1 = -S_x k_1 \Delta x\, \sigma_1^T(x_{nn}, y_{nn})$$
$$\dot{W}_2 = -S_x k_2 \Delta x\, g(U)^T$$
$$\dot{B} = -(1/\varepsilon) S_y k_B \Delta y\, y_{nn}^T$$
$$\dot{W}_3 = -(1/\varepsilon) S_y k_3 \Delta y\, \sigma_2^T(x_{nn}, y_{nn})$$
$$\dot{W}_4 = -(1/\varepsilon) S_y k_4 \Delta y\, g(U)^T \tag{24}$$

where $k_A$, $k_B$, $k_1$, $k_2$, $k_3$, $k_4$ are positive constants, and the following stability properties are guaranteed:

(1) $\Delta x, \Delta y, W_{1,2,3,4}, A, B \in L_\infty$ and $\lim_{t\to\infty}\dot{\tilde{W}}_{1,2} = 0$, $\lim_{t\to\infty}\dot{\tilde{W}}_{3,4} = 0$;

(2) the identification error satisfies the following tracking performance:

$$\int_0^T S_x\Delta x^T Q_{xo}\Delta x + (1/\varepsilon)\int_0^T S_y\Delta y^T Q_{yo}\Delta y \le V_{2I}^0 + T\left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right) + (1/\varepsilon)T\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right) \tag{25}$$


Proof. The Lyapunov synthesis method is used to derive the stable adaptive laws. Consider the Lyapunov function candidate:

$$V_{2I} = V_{2x} + V_{2y}$$
$$V_{2x} = \left[\|P_x^{1/2}\Delta x\| - H_x\right]_+^2 + \frac{1}{k_1}\mathrm{tr}\{\tilde{W}_1^T P_x \tilde{W}_1\} + \frac{1}{k_2}\mathrm{tr}\{\tilde{W}_2^T P_x \tilde{W}_2\} + \frac{1}{k_A}\mathrm{tr}\{\tilde{A}^T P_x \tilde{A}\}$$
$$V_{2y} = \left[\|P_y^{1/2}\Delta y\| - H_y\right]_+^2 + \frac{1}{k_3}\mathrm{tr}\{\tilde{W}_3^T P_y \tilde{W}_3\} + \frac{1}{k_4}\mathrm{tr}\{\tilde{W}_4^T P_y \tilde{W}_4\} + \frac{1}{k_B}\mathrm{tr}\{\tilde{B}^T P_y \tilde{B}\} \tag{26}$$

Differentiating (26) and following Remark 2 yield

$$\dot{V}_{2x} = 2\left[\|P_x^{1/2}\Delta x\| - H_x\right]_+ \frac{(P_x^{1/2}\Delta x)^T}{\|P_x^{1/2}\Delta x\|}P_x^{1/2}\Delta\dot{x} + \frac{2}{k_A}\mathrm{tr}\{\dot{\tilde{A}}^T P_x\tilde{A}\} + \frac{2}{k_1}\mathrm{tr}\{\dot{\tilde{W}}_1^T P_x\tilde{W}_1\} + \frac{2}{k_2}\mathrm{tr}\{\dot{\tilde{W}}_2^T P_x\tilde{W}_2\}$$
$$= 2S_x\Delta x^T P_x\Delta\dot{x} + \frac{2}{k_A}\mathrm{tr}\{\dot{\tilde{A}}^T P_x\tilde{A}\} + \frac{2}{k_1}\mathrm{tr}\{\dot{\tilde{W}}_1^T P_x\tilde{W}_1\} + \frac{2}{k_2}\mathrm{tr}\{\dot{\tilde{W}}_2^T P_x\tilde{W}_2\}$$

$$\dot{V}_{2y} = 2\left[\|P_y^{1/2}\Delta y\| - H_y\right]_+ \frac{(P_y^{1/2}\Delta y)^T}{\|P_y^{1/2}\Delta y\|}P_y^{1/2}\Delta\dot{y} + \frac{2}{k_B}\mathrm{tr}\{\dot{\tilde{B}}^T P_y\tilde{B}\} + \frac{2}{k_3}\mathrm{tr}\{\dot{\tilde{W}}_3^T P_y\tilde{W}_3\} + \frac{2}{k_4}\mathrm{tr}\{\dot{\tilde{W}}_4^T P_y\tilde{W}_4\}$$
$$= 2S_y\Delta y^T P_y\Delta\dot{y} + \frac{2}{k_B}\mathrm{tr}\{\dot{\tilde{B}}^T P_y\tilde{B}\} + \frac{2}{k_3}\mathrm{tr}\{\dot{\tilde{W}}_3^T P_y\tilde{W}_3\} + \frac{2}{k_4}\mathrm{tr}\{\dot{\tilde{W}}_4^T P_y\tilde{W}_4\} \tag{27}$$

Since the neural network's weights are adjusted as in (24), the derivatives of the neural network weights and matrices satisfy $\dot{W}_{1,2,3,4} = \dot{\tilde{W}}_{1,2,3,4}$, $\dot{A} = \dot{\tilde{A}}$, $\dot{B} = \dot{\tilde{B}}$.

Using error dynamics (16), Eq. (27) becomes

$$\dot{V}_{2x} = S_x\left[\Delta x^T (A^{*T} P_x + P_x A^*)\Delta x + 2\Delta x^T P_x W_1^* \tilde{\sigma}_1 + 2\Delta x^T P_x \Delta f_x\right]$$
$$\dot{V}_{2y} = (1/\varepsilon) S_y\left[\Delta y^T (B^{*T} P_y + P_y B^*)\Delta y + 2\Delta y^T P_y W_3^* \tilde{\sigma}_2 + 2\Delta y^T P_y \Delta f_y\right] \tag{28}$$

Using the matrix inequality (10) and Assumptions 2 and 3, oneobtains

$$2\Delta x^T P_x W_1^* \tilde{\sigma}_1 \le \Delta x^T P_x W_1^* \Lambda_1^{-1} W_1^{*T} P_x \Delta x + \tilde{\sigma}_1^T \Lambda_1 \tilde{\sigma}_1 \le \Delta x^T P_x \bar{W}_1 P_x \Delta x + \Delta x^T D_1 \Delta x + \Delta y^T D_1 \Delta y$$
$$2\Delta y^T P_y W_3^* \tilde{\sigma}_2 \le \Delta y^T P_y W_3^* \Lambda_3^{-1} W_3^{*T} P_y \Delta y + \tilde{\sigma}_2^T \Lambda_3 \tilde{\sigma}_2 \le \Delta y^T P_y \bar{W}_3 P_y \Delta y + \Delta x^T D_2 \Delta x + \Delta y^T D_2 \Delta y \tag{29}$$

and

$$2\Delta x^T P_x \Delta f_x \le \Delta x^T P_x \Lambda_2^{-1} P_x \Delta x + \Delta f_x^T \Lambda_2 \Delta f_x$$
$$(2/\varepsilon)\Delta y^T P_y \Delta f_y \le (1/\varepsilon)\left(\Delta y^T P_y \Lambda_4^{-1} P_y \Delta y + \Delta f_y^T \Lambda_4 \Delta f_y\right) \tag{30}$$

Hence, from (28) one has

$$\dot{V}_{2x} \le S_x\Delta x^T\left[A^{*T}P_x + P_xA^* + P_x(\bar{W}_1 + \Lambda_2^{-1})P_x + D_1\right]\Delta x + S_x\Delta y^T D_1\Delta y + S_x\Delta f_x^T\Lambda_2\Delta f_x$$
$$\dot{V}_{2y} \le (1/\varepsilon)S_y\Delta y^T\left[B^{*T}P_y + P_yB^* + P_y(\bar{W}_3 + \Lambda_4^{-1})P_y + D_2\right]\Delta y + (1/\varepsilon)S_y\Delta x^T D_2\Delta x + (1/\varepsilon)S_y\Delta f_y^T\Lambda_4\Delta f_y \tag{31}$$

Case (a): Both identification errors are larger than their thresholds (i.e. $S_x > 0$, $S_y > 0$). One has

$$\dot{V}_{2x} + \dot{V}_{2y} \le S_x\Delta x^T\left[A^{*T}P_x + P_xA^* + P_x(\bar{W}_1 + \Lambda_2^{-1})P_x + D_1 + (1/\varepsilon)(S_y/S_x)D_2 + Q_{xo}\right]\Delta x - S_x\Delta x^T Q_{xo}\Delta x + S_x\Delta f_x^T\Lambda_2\Delta f_x$$
$$+ (1/\varepsilon)S_y\Delta y^T\left[B^{*T}P_y + P_yB^* + P_y(\bar{W}_3 + \Lambda_4^{-1})P_y + D_2 + \varepsilon(S_x/S_y)D_1 + Q_{yo}\right]\Delta y - (1/\varepsilon)S_y\Delta y^T Q_{yo}\Delta y + (1/\varepsilon)S_y\Delta f_y^T\Lambda_4\Delta f_y \tag{32}$$

One can obtain

$$\dot{V}_{2I} = \dot{V}_{2x} + \dot{V}_{2y} \le -S_x\left(\Delta x^T Q_{xo}\Delta x - \Delta f_x^T\Lambda_2\Delta f_x\right) - (1/\varepsilon)S_y\left(\Delta y^T Q_{yo}\Delta y - \Delta f_y^T\Lambda_4\Delta f_y\right)$$
$$\le -S_x\left(\lambda_{\min}(Q_{xo})\|\Delta x\|^2 - \lambda_{\max}(\Lambda_2)\|\Delta f_x\|^2\right) - (1/\varepsilon)S_y\left(\lambda_{\min}(Q_{yo})\|\Delta y\|^2 - \lambda_{\max}(\Lambda_4)\|\Delta f_y\|^2\right)$$
$$\le -S_x\left(\lambda_{\min}(Q_{xo})\|\Delta x\|^2 - \lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2}\right) - (1/\varepsilon)S_y\left(\lambda_{\min}(Q_{yo})\|\Delta y\|^2 - \lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2}\right)$$
$$= -S_x\frac{\lambda_{\min}(Q_{xo})}{\lambda_{\max}(P_x)}\left(\lambda_{\max}(P_x)\|\Delta x\|^2 - \frac{\lambda_{\max}(P_x)}{\lambda_{\min}(Q_{xo})}\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2}\right) - (1/\varepsilon)S_y\frac{\lambda_{\min}(Q_{yo})}{\lambda_{\max}(P_y)}\left(\lambda_{\max}(P_y)\|\Delta y\|^2 - \frac{\lambda_{\max}(P_y)}{\lambda_{\min}(Q_{yo})}\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2}\right)$$
$$\le -S_x\frac{\lambda_{\min}(Q_{xo})}{\lambda_{\max}(P_x)}\left(\|P_x^{1/2}\Delta x\|^2 - H_x^2\right) - (1/\varepsilon)S_y\frac{\lambda_{\min}(Q_{yo})}{\lambda_{\max}(P_y)}\left(\|P_y^{1/2}\Delta y\|^2 - H_y^2\right) \le 0 \tag{33}$$

Case (b): The identification error of $y$ is smaller than its threshold ($S_x > 0$, $S_y = 0$).

From (31) one has $\|P_y^{1/2}\Delta y\| \le H_y$ and $\dot{V}_{2y} = 0$. One can obtain

$$\dot{V}_{2I} \le -S_x\left(\Delta x^T Q_{xo}\Delta x - \Delta f_x^T\Lambda_2\Delta f_x - \Delta y^T D_1\Delta y\right)$$
$$\le -S_x\left(\lambda_{\min}(Q_{xo})\|\Delta x\|^2 - \lambda_{\max}(\Lambda_2)\|\Delta f_x\|^2 - \lambda_{\max}(D_1)\|\Delta y\|^2\right)$$
$$\le -S_x\left(\lambda_{\min}(Q_{xo})\|\Delta x\|^2 - \lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} - \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right)$$
$$= -S_x\frac{\lambda_{\min}(Q_{xo})}{\lambda_{\max}(P_x)}\left(\lambda_{\max}(P_x)\|\Delta x\|^2 - \frac{\lambda_{\max}(P_x)}{\lambda_{\min}(Q_{xo})}\left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right)\right)$$
$$\le -S_x\frac{\lambda_{\min}(Q_{xo})}{\lambda_{\max}(P_x)}\left(\|P_x^{1/2}\Delta x\|^2 - H_x^2\right) \le 0 \tag{35}$$

Case (c): The identification error of $x$ is smaller than its threshold ($S_x = 0$, $S_y > 0$).

From (31) one has $\|P_x^{1/2}\Delta x\| \le H_x$ and $\dot{V}_{2x} = 0$:

$$\dot{V}_{2I} = \dot{V}_{2y} \le (1/\varepsilon)S_y\Delta y^T\left[B^{*T}P_y + P_yB^* + P_y(\bar{W}_3 + \Lambda_4^{-1})P_y + D_2 + \varepsilon(S_x/S_y)D_1 + Q_{yo}\right]\Delta y - (1/\varepsilon)S_y\Delta y^T Q_{yo}\Delta y + (1/\varepsilon)S_y\Delta f_y^T\Lambda_4\Delta f_y + (1/\varepsilon)S_y\Delta x^T D_2\Delta x \tag{36}$$

One can obtain

$$\dot{V}_{2I} \le -(1/\varepsilon)S_y\left(\Delta y^T Q_{yo}\Delta y - \Delta f_y^T\Lambda_4\Delta f_y - \Delta x^T D_2\Delta x\right)$$
$$\le -(1/\varepsilon)S_y\left(\lambda_{\min}(Q_{yo})\|\Delta y\|^2 - \lambda_{\max}(\Lambda_4)\|\Delta f_y\|^2 - \lambda_{\max}(D_2)\|\Delta x\|^2\right)$$
$$\le -(1/\varepsilon)S_y\left(\lambda_{\min}(Q_{yo})\|\Delta y\|^2 - \lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} - \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right)$$
$$= -(1/\varepsilon)S_y\frac{\lambda_{\min}(Q_{yo})}{\lambda_{\max}(P_y)}\left(\lambda_{\max}(P_y)\|\Delta y\|^2 - \frac{\lambda_{\max}(P_y)}{\lambda_{\min}(Q_{yo})}\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right)\right)$$
$$\le -(1/\varepsilon)S_y\frac{\lambda_{\min}(Q_{yo})}{\lambda_{\max}(P_y)}\left(\|P_y^{1/2}\Delta y\|^2 - H_y^2\right) \le 0 \tag{37}$$

Case (d): Both identification errors are smaller than their thresholds ($S_x = 0$, $S_y = 0$).

One has $\|P_x^{1/2}\Delta x\| \le H_x$, $\|P_y^{1/2}\Delta y\| \le H_y$ and $\dot{V}_{2I} = 0$.

In Case (a), one has
$$\dot{V}_{2I} \le -S_x\left(\Delta x^T Q_{xo}\Delta x - \Delta f_x^T\Lambda_2\Delta f_x\right) - (1/\varepsilon)S_y\left(\Delta y^T Q_{yo}\Delta y - \Delta f_y^T\Lambda_4\Delta f_y\right)$$
$$\le -S_x\Delta x^T Q_{xo}\Delta x + S_x\lambda_{\max}(\Lambda_2)\|\Delta f_x\|^2 - (1/\varepsilon)S_y\Delta y^T Q_{yo}\Delta y + (1/\varepsilon)S_y\lambda_{\max}(\Lambda_4)\|\Delta f_y\|^2$$
$$\le -S_x\Delta x^T Q_{xo}\Delta x + S_x\left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right) - (1/\varepsilon)S_y\Delta y^T Q_{yo}\Delta y + (1/\varepsilon)S_y\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right) \tag{38}$$

Fig. 4. Identification scheme of the second NN identifier.

In Case (b), one has

$$\dot{V}_{2I} \le -S_x\left(\Delta x^T Q_{xo}\Delta x - \Delta f_x^T\Lambda_2\Delta f_x - \Delta y^T D_1\Delta y\right)$$
$$\le -S_x\Delta x^T Q_{xo}\Delta x + S_x\left(\lambda_{\max}(\Lambda_2)\|\Delta f_x\|^2 + \lambda_{\max}(D_1)\|\Delta y\|^2\right)$$
$$\le -S_x\Delta x^T Q_{xo}\Delta x + S_x\left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right) \tag{39}$$

In Case (c), one has

$$\dot{V}_{2I} \le -(1/\varepsilon)S_y\left(\Delta y^T Q_{yo}\Delta y - \Delta f_y^T\Lambda_4\Delta f_y - \Delta x^T D_2\Delta x\right)$$
$$\le -(1/\varepsilon)S_y\Delta y^T Q_{yo}\Delta y + (1/\varepsilon)S_y\left(\lambda_{\max}(\Lambda_4)\|\Delta f_y\|^2 + \lambda_{\max}(D_2)\|\Delta x\|^2\right)$$
$$\le -(1/\varepsilon)S_y\Delta y^T Q_{yo}\Delta y + (1/\varepsilon)S_y\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right) \tag{40}$$

From the analysis above, we can draw the conclusion that (38) can be used to represent the derivative of the Lyapunov function for all Cases (a), (b), (c) and (d). Since $0 \le S_x \le 1$ and $0 \le S_y \le 1$, one infers

$$\dot{V}_{2I} \le -S_x\Delta x^T Q_{xo}\Delta x + \left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right) - (1/\varepsilon)S_y\Delta y^T Q_{yo}\Delta y + (1/\varepsilon)\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right) \tag{41}$$

Since $V_{2x}$ and $V_{2y}$ are non-increasing functions of time and bounded, $V_{2x}(0)$, $V_{2y}(0)$, $V_{2x}(t)$ and $V_{2y}(t)$ are bounded. Therefore, integrating $\dot{V}_{2I}$ on both sides from 0 to $T$, one obtains

$$V_{2I}^T - V_{2I}^0 \le -\int_0^T S_x\Delta x^T Q_{xo}\Delta x + T\left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right) - (1/\varepsilon)\int_0^T S_y\Delta y^T Q_{yo}\Delta y + (1/\varepsilon)T\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right) \tag{42}$$

Hence, the following inequality holds:
$$\int_0^T S_x\Delta x^T Q_{xo}\Delta x + (1/\varepsilon)\int_0^T S_y\Delta y^T Q_{yo}\Delta y \le V_{2I}^0 - V_{2I}^T + T\left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right) + (1/\varepsilon)T\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right)$$
$$\le V_{2I}^0 + T\left(\lambda_{\max}(\Lambda_2)\bar{\Delta f}_x^{\,2} + \lambda_{\max}(D_1)\frac{H_y^2}{\lambda_{\min}(P_y)}\right) + (1/\varepsilon)T\left(\lambda_{\max}(\Lambda_4)\bar{\Delta f}_y^{\,2} + \lambda_{\max}(D_2)\frac{H_x^2}{\lambda_{\min}(P_x)}\right) \tag{43}$$

Remark 3. $S_x$ and $S_y$ are the dead-zone functions, which prevent the weights from drifting to infinity when modeling error is present [22]; this is known as the "parameter drift" phenomenon [23]. Note that $H_x$ and $H_y$ are thresholds for the identification error. For Case (a), where $S_x > 0$ and $S_y > 0$, i.e. $\|P_x^{1/2}\Delta x\| > H_x$ and $\|P_y^{1/2}\Delta y\| > H_y$, smaller thresholds as in (33) could be used, but we extend those to $H_x$ and $H_y$ to unify the thresholds for all the possible Cases (a), (b), (c), (d) during the entire identification process.

In all four cases above, since $V_{2I} = V_{2x} + V_{2y}$ is positive definite, $\dot{V}_{2I} = \dot{V}_{2x} + \dot{V}_{2y} \le 0$ can be achieved using the update laws (24). This implies $\Delta x, \Delta y, W_{1,2,3,4}, A, B \in L_\infty$. Furthermore, $x_{nn} = x - \Delta x$ and $y_{nn} = y - \Delta y$ are also bounded. From the error equations (16), with the assumption that the errors and disturbances are bounded, we can draw the conclusion that $\Delta\dot{x}, \Delta\dot{y} \in L_\infty$. Since the control input $g(U)$ and $\sigma_{1,2}(\cdot)$ are bounded, it is concluded that $\lim_{t\to\infty}\dot{\tilde{W}}_{1,2} = 0$ and $\lim_{t\to\infty}\dot{\tilde{W}}_{3,4} = 0$.

The structure of the improved identification scheme is illustrated in Fig. 4.

4. Applications

To illustrate the theoretical results, we give the following three examples.

Example 1. Let us consider the nonlinear system in [18]

\[
\begin{aligned}
\dot x_1 &= a_1 x_1 + b_1\,\mathrm{sign}(x_2) + u_1 \\
\varepsilon \dot x_2 &= a_2 x_2 + b_2\,\mathrm{sign}(x_2) + u_2
\end{aligned}
\qquad (44)
\]

where we use the same parameters $a_1 = -5$, $a_2 = -10$, $b_1 = 3$, $b_2 = 2$, $x_1(0) = -5$, $x_2(0) = -5$, $\varepsilon = 0.2$ and the same identification algorithm for the NN weights as in [18], except for the additional linear matrix identification algorithm and different activation functions, which are sigmoid functions $\sigma_k(\cdot) = \phi_k(\cdot) = 1/(1+e^{-x})$, $k = 1, 2$. The same input signals are adopted, where $u_1$ is a sinusoidal wave ($u_1 = 8\sin(0.05t)$) and $u_2$ is a saw-tooth function with amplitude 8 and frequency 0.02 Hz.
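As a rough illustration (our own sketch, not the authors' code; the step size and the $\pm 8$ saw-tooth shape are our assumptions), system (44) with these parameters and inputs can be simulated by forward-Euler integration:

```python
import numpy as np

# Parameters taken from Example 1
a1, a2, b1, b2, eps = -5.0, -10.0, 3.0, 2.0, 0.2
dt, T = 0.01, 500.0          # integration step and horizon (our choice)
t = np.arange(0.0, T, dt)

def u1(tk):
    """Sinusoidal input u1 = 8 sin(0.05 t)."""
    return 8.0 * np.sin(0.05 * tk)

def u2(tk):
    """Saw-tooth input, amplitude 8, frequency 0.02 Hz (assumed -8..8)."""
    period = 1.0 / 0.02
    return 8.0 * (2.0 * ((tk / period) % 1.0) - 1.0)

x1, x2 = -5.0, -5.0          # initial conditions x1(0) = x2(0) = -5
traj = np.empty((len(t), 2))
for k, tk in enumerate(t):
    traj[k] = (x1, x2)
    dx1 = a1 * x1 + b1 * np.sign(x2) + u1(tk)          # slow state
    dx2 = (a2 * x2 + b2 * np.sign(x2) + u2(tk)) / eps  # fast state
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
```

Dividing the $x_2$ dynamics by $\varepsilon = 0.2$ makes its effective time constant five times shorter than that of $x_1$, which is exactly the two-time-scale behaviour the NN identifiers must capture.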

To show the identification performance of the proposed algorithm, the Root Mean Square (RMS) of the state errors has been adopted as the performance index for comparison:

\[
\mathrm{RMS} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} e^{2}(i)}
\]
where $n$ is the number of simulation steps and $e(i)$ is the difference between the state variables of the model and the system at the $i$th step.
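In code, this index is a one-liner (a straightforward sketch of the formula, not taken from the paper):

```python
import numpy as np

def rms(model_states, system_states):
    """Root-mean-square error between model and system trajectories,
    RMS = sqrt( (1/n) * sum_i e(i)^2 ), with e(i) the per-step error."""
    e = np.asarray(model_states) - np.asarray(system_states)
    return float(np.sqrt(np.mean(e ** 2)))

rms([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # identical trajectories give 0.0
```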

X. Han et al. / Neurocomputing 74 (2011) 3428–3439

In the first NN identifier, the RMS value for the state variable $x_1$ is 0.139102 and the RMS for the state variable $x_2$ is 0.116635. In the second NN identifier, the RMS value for $x_1$ is 0.047168 and the RMS for $x_2$ is 0.020158. The RMS values of both state variables demonstrate that the second NN identifier performs better than the first one.

The results in Figs. 5–9 demonstrate that the identification performance has been improved compared to that in [18]. It can be seen that the state variables of the dynamic multi-time scale NN follow those of the nonlinear system more accurately and quickly. The eigenvalues of the linear parameter matrices are shown in Fig. 9; the eigenvalues of both $A$ and $B$ remain smaller than zero throughout, which means they are kept stable during the identification.

Fig. 5. Identification result for $x_1$: (a) first NN identifier and (b) second NN identifier.

Fig. 6. Identification error for $x_1$: (a) first NN identifier and (b) second NN identifier.

Fig. 7. Identification result for $x_2$: (a) first NN identifier and (b) second NN identifier.

Example 2. In 1952, Hodgkin and Huxley proposed a system of differential equations describing the flow of electric current through the surface membrane of a giant nerve fiber. Later, this Hodgkin–Huxley (HH) model of the squid giant axon became one of the most important models in computational neuroscience and the prototype of a large family of mathematical models quantitatively describing

Fig. 8. Identification error for $x_2$: (a) first NN identifier and (b) second NN identifier.

Fig. 9. Eigenvalues of the linear parameter matrices: (a) first NN identifier and (b) second NN identifier.

Fig. 10. Identification results: (a) first NN identifier and (b) second NN identifier. Solid line: states $V$, $n$, $m$, $h$; dotted lines: identified states of the NN model.

electrophysiology of various living cells and tissues [16,19]:

\[
\begin{aligned}
\frac{dV}{dt} &= \frac{1}{C_M} \left( I_{ext} - \bar g_K n^{4} (V + E_w - E_K) - \bar g_{Na} m^{3} h (V + E_w - E_{Na}) - \bar g_l (V + E_w - E_l) \right) \\
\frac{dn}{dt} &= \frac{n_{\infty} - n}{\tau_n} \\
\varepsilon \frac{dm}{dt} &= \frac{m_{\infty} - m}{\tau_m} \\
\varepsilon \frac{dh}{dt} &= \frac{h_{\infty} - h}{\tau_h}
\end{aligned}
\qquad (45)
\]

where time $t$ is measured in ms, the variable $V$ is the membrane potential in mV, and $n$, $m$ and $h$ are dimensionless gating variables corresponding to the K$^+$, Na$^+$ and leakage current channels, respectively, which can vary in the range $[0,1]$:

\[
n_{\infty} = \frac{\alpha_n}{\alpha_n + \beta_n}, \qquad m_{\infty} = \frac{\alpha_m}{\alpha_m + \beta_m}, \qquad h_{\infty} = \frac{\alpha_h}{\alpha_h + \beta_h}
\]
\[
\tau_n = \frac{1}{\alpha_n + \beta_n}, \qquad \tau_m = \frac{1}{\alpha_m + \beta_m}, \qquad \tau_h = \frac{1}{\alpha_h + \beta_h}
\]
\[
\alpha_n = \frac{0.01(10 - V)}{e^{(10 - V)/10} - 1}, \qquad \alpha_m = \frac{0.1(25 - V)}{e^{(25 - V)/10} - 1}, \qquad \alpha_h = 0.07 e^{-V/20}
\]

Fig. 11. Eigenvalues of the linear matrices $A$ and $B$: (a) first NN identifier and (b) second NN identifier.

Fig. 12. Identification results: (a) first NN identifier and (b) second NN identifier. Solid line: states $V$, $n$, $m$, $h$; dotted lines: identified states of the NN model.

\[
\beta_n = 0.125 e^{-V/80}, \qquad \beta_m = 4 e^{-V/18}, \qquad \beta_h = \frac{1}{e^{(30 - V)/10} + 1}
\]
\[
\bar g_K = 36~\mathrm{mS/cm^2}, \qquad \bar g_{Na} = 120~\mathrm{mS/cm^2}, \qquad \bar g_l = 0.3~\mathrm{mS/cm^2}
\]
\[
E_K = -12~\mathrm{mV}, \qquad E_{Na} = 115~\mathrm{mV}, \qquad E_l = 10.599~\mathrm{mV}, \qquad C_M = 1~\mu\mathrm{F/cm^2}
\]
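For reference, the rate functions above translate directly into code (a sketch with $V$ in mV; the helper names are ours). At the resting potential $V = 0$ the resulting steady states reproduce the quiescent initial conditions quoted later in the text ($n_0 \approx 0.318$, $m_0 \approx 0.053$, $h_0 \approx 0.596$):

```python
import numpy as np

# HH rate functions (V in mV, rates in 1/ms)
def alpha_n(V): return 0.01 * (10.0 - V) / (np.exp((10.0 - V) / 10.0) - 1.0)
def alpha_m(V): return 0.1 * (25.0 - V) / (np.exp((25.0 - V) / 10.0) - 1.0)
def alpha_h(V): return 0.07 * np.exp(-V / 20.0)
def beta_n(V):  return 0.125 * np.exp(-V / 80.0)
def beta_m(V):  return 4.0 * np.exp(-V / 18.0)
def beta_h(V):  return 1.0 / (np.exp((30.0 - V) / 10.0) + 1.0)

def steady_state_and_tau(alpha, beta, V):
    """x_inf = a/(a+b) and tau_x = 1/(a+b) for one gating variable."""
    a, b = alpha(V), beta(V)
    return a / (a + b), 1.0 / (a + b)

# At rest (V = 0), n_inf comes out near 0.318
n_inf, tau_n = steady_state_and_tau(alpha_n, beta_n, 0.0)
```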

Remark 4. From the electrophysiology point of view, the most important state of the HH system is the membrane potential $V$, which exhibits multifarious electro-physical phenomena and has been the focus of numerous previous studies. Instead of using the original HH model, we use the (2, 2) asymptotic embedded system [16].

Remark 5. We take the modified HH model with the effect of an extremely low frequency (ELF) external electric field $E_w$, which serves as the other control input besides the externally applied stimulation current $I_{ext}$.


Numerous researchers have applied various stimulations to the HH model. In this research, we use input signals under which the states of the NN can still follow those of the HH system:

\[
I_{ext} = \tfrac{1}{2} A_I (\cos \omega_I t + 1), \qquad E_w = \tfrac{1}{2} A_E \cos \omega_E t
\]

where $\omega_{I,E} = 2\pi f_{I,E}$, and all the initial conditions for the HH system are at the equilibrium (quiescent): $V_0 = 0.00002$, $m_0 = 0.05293$, $h_0 = 0.59612$, $n_0 = 0.31768$.

We pick two typical stimulations, which can result in significant and classical neuron excitation:

(I) $E_w = 0$, $A_I = 30~\mu\mathrm{A/cm^2}$, $f_I = 10$ Hz, $\varepsilon = 0.2$

(II) $I_{ext} = 0$, $A_E = 10$ mV, $f_E = 115$ Hz, $\varepsilon = 0.2$

Fig. 13. Eigenvalues of the linear matrices $A$ and $B$: (a) first NN identifier and (b) second NN identifier.

Fig. 14. Identification result for $\omega_r$: (a) first NN identifier and (b) second NN identifier.

Fig. 15. Identification error for $\omega_r$: (a) first NN identifier and (b) second NN identifier.

The identification results are presented in Figs. 10–13 for the (2, 2) asymptotic embedded HH model. For the first NN identifier, in Case I, the system is in 8/1 phase-locked oscillation periodic bursting. The RMS values of the state variables are RMS$_n$ = 0.333511, RMS$_h$ = 0.335092, RMS$_V$ = 0.639899, RMS$_m$ = 0.322476. In Case II, the system is in the same-frequency periodic spiking; the RMS values are RMS$_n$ = 0.140449, RMS$_h$ = 0.147785, RMS$_V$ = 0.784985, RMS$_m$ = 0.07245. The time scale is considered by setting $\varepsilon = 0.2$. The

Fig. 16. Identification result for $i_r$: (a) first NN identifier and (b) second NN identifier.

Fig. 17. Identification error for $i_r$: (a) first NN identifier and (b) second NN identifier.

flexibility of the linear part matrices $A$ and $B$ enhances the identification ability of the NN identifier. Even the single-layer structure is powerful enough to successfully follow the complicated electro-physical phenomena of the HH model.

For the second NN identifier, in simulation Case I, the system is in 8/1 phase-locked oscillation periodic bursting. The RMS values of the state variables are RMS$_n$ = 0.074642, RMS$_h$ = 0.083497, RMS$_V$ = 0.438275, RMS$_m$ = 0.035473. In Case II, the system is in the same-frequency periodic spiking; the RMS values are RMS$_n$ = 0.05695, RMS$_h$ = 0.061458, RMS$_V$ = 0.86327, RMS$_m$ = 0.060288. The time scale is considered by setting $\varepsilon = 0.2$. From Figs. 10–13, we can see that the states of the NN model follow those of the HH model very closely. The identification performance of the proposed algorithm is better than that of the first NN identifier, especially for the membrane potential. The eigenvalues of $A$ and $B$ in (a) and (b) converge to the same steady values since the nominal linear matrices $A^*$ and $B^*$ do not change with different inputs.

Example 3. To further illustrate the theoretical results, we test the two NN identifiers on a DC servomotor, a typical example of a two-time-scale system. The DC motor model can be separated into electrical and mechanical subsystems. It is well known that the time constant of the electrical subsystem is much smaller than that of the mechanical subsystem; hence, the electrical subsystem is the fast subsystem and the mechanical subsystem is the slow one. The model of the DC motor [8] is as follows:

\[
\begin{aligned}
\dot \omega_r &= i_r \\
\varepsilon \dot i_r &= -\omega_r - i_r + u_r
\end{aligned}
\qquad (46)
\]

where $\omega_r$ is related to the speed of the motor, $i_r$ is related to the circuit current, $\varepsilon$ is the time scale and $u_r$ is related to the input voltage of the circuit.
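Under these definitions, the two-time-scale behaviour of (46) can be reproduced with a simple forward-Euler sketch (our own code, not the authors'; the step size and horizon are arbitrary choices):

```python
import numpy as np

eps, dt, T = 0.5, 0.001, 100.0   # time scale from the text; dt, T are ours
t = np.arange(0.0, T, dt)

w_r, i_r = 0.0, 0.0              # motor speed and circuit current states
hist = np.empty((len(t), 2))
for k, tk in enumerate(t):
    hist[k] = (w_r, i_r)
    u_r = 3.0 * np.sin(0.5 * tk)      # input voltage u_r = 3 sin(0.5 t)
    dw = i_r                          # slow (mechanical) subsystem
    di = (-w_r - i_r + u_r) / eps     # fast (electrical) subsystem
    w_r += dt * dw
    i_r += dt * di
```

The $1/\varepsilon$ factor makes the current loop settle much faster than the speed loop, which is exactly the electrical/mechanical time-constant separation described above.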

The time-scale parameter is $\varepsilon = 0.5$ and the input signal is $u_r = 3\sin 0.5t$. The sigmoid functions $\sigma_{1,2}(\cdot)$ and $\phi_{1,2}(\cdot)$ are chosen as $1/(1+e^{-x})$. In the first NN identifier, the RMS value for the state variable $\omega_r$ is 0.02742 and the RMS for the state variable $i_r$ is 0.2325. In the second NN identifier, the RMS value for $\omega_r$ is 0.02324 and the RMS for $i_r$ is 0.04108. The RMS values of both state variables demonstrate that the second NN identifier has better performance than the first one. The results in Figs. 14–17 demonstrate that the state variables of the dynamic multi-time scale NN follow those of the nonlinear system more accurately and quickly.

The simulation results for the three nonlinear systems demonstrate that the states of the dynamic multi-time scale neural networks can track the nonlinear system state variables on-line. The identification errors approach the thresholds, and the eigenvalues of $A$ and $B$ converge to steady values during the system identifications.

5. Conclusions

In this paper, we propose two new NN identifiers for nonlinear systems with multiple time scales. On-line identification algorithms for the dynamic neural networks have been developed, and the Lyapunov synthesis method is used to prove the stability of the identification algorithms. The proposed algorithms are applied to three multi-time scale systems, and the identification results show their effectiveness.

References

[1] A. Meyer-Base, F. Ohl, H. Scheich, Singular perturbation analysis of competitive neural networks with different time-scales, Neural Computation 8 (8) (1996) 545–563.

[2] K.J. Hunt, D. Sbarbaro, R. Zbikowski, P.J. Gawthrop, Neural networks for control systems – a survey, Automatica 28 (1992) 1083–1112.

[3] S. Mohanty, Artificial neural network based system identification and model predictive control of a flotation column, Journal of Process Control 19 (2009) 991–999.

[4] R.K. Al Seyab, Y. Cao, Nonlinear system identification for predictive control using continuous time recurrent neural networks and automatic differentiation, Journal of Process Control 18 (2008) 568–581.

[5] J.C. Atuonwu, Y. Cao, G.P. Rangaiah, M.O. Tade, Identification and predictive control of a multistage evaporator, Control Engineering Practice 18 (2010) 1418–1428.

[6] C. Wang, K.-U. Klatt, G. Dünnebier, S. Engell, F. Hanisch, Neural network-based identification of SMB chromatographic processes, Control Engineering Practice 11 (2003) 949–959.

[7] W. Yu, X. Li, Passivity analysis of dynamic neural networks with different time-scales, Neural Processing Letters (2007) 143–155.

[8] A. Cruz Sandoval, W. Yu, X. Li, Some stability properties of dynamic neural networks with different time-scales, in: Proceedings of the 2006 International Joint Conference on Neural Networks, Vancouver, BC, Canada, July 16–21, 2006, pp. 4218–4224.

[9] A. Meyer-Base, Flow invariance for competitive neural networks with different time-scales, in: Proceedings of the 2002 International Joint Conference on Neural Networks, vol. 1, 2002, pp. 858–861.

[10] A. Meyer-Base, F. Ohl, H. Scheich, Quadratic-type Lyapunov functions for competitive neural networks with different time-scales, IEEE Transactions on Automatic Control 29 (6) (1984) 542–550.

[11] A. Meyer-Base, S.S. Pilyugin, Y. Chen, Global exponential stability of competitive neural networks with different time scales, IEEE Transactions on Neural Networks 14 (3) (2003) 716–719.

[12] A. Meyer-Base, S.S. Pilyugin, A. Wismuller, Stability analysis of a self-organizing neural network with feedforward and feedback dynamics, in: Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, 25–29 July 2004, pp. 1505–1509.

[13] A. Meyer-Base, F. Ohl, H. Scheich, Stability analysis techniques for competitive neural networks with different time-scales, in: Proceedings of the IEEE International Conference on Neural Networks, vol. 6, 27 November–1 December 1995, pp. 3215–3219.

[14] W. Yu, X. Li, Some new results on system identification with dynamic neural networks, IEEE Transactions on Neural Networks 12 (2) (2001) 412–417.

[15] G.A. Rovithakis, M.A. Christodoulou, Adaptive control of unknown plants using dynamical neural networks, IEEE Transactions on Systems, Man, and Cybernetics 24 (3) (1994) 400–412.

[16] R. Suckley, V.N. Biktashev, The asymptotic structure of the Hodgkin–Huxley equations, International Journal of Bifurcation and Chaos 13 (12) (2003) 3805–3825.

[17] H.K. Khalil, Nonlinear Systems, 3rd ed., Prentice Hall, 2002, p. 323.

[18] A.S. Poznyak, W. Yu, E.N. Sanchez, J.P. Perez, Nonlinear adaptive trajectory tracking using dynamic neural networks, IEEE Transactions on Neural Networks 10 (6) (1999) 1402–1411.

[19] E.M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, The MIT Press, 2005.

[20] E.D. Sontag, Y. Wang, On characterizations of the input-to-state stability property, Systems & Control Letters 24 (1995) 351–359.

[21] D.H. Rao, M.M. Gupta, H.C. Wood, Neural networks in control systems, in: Proceedings of the IEEE Communications, Computers and Power in the Modern Environment Conference, 17–18 May 1993, pp. 282–290.

[22] W. Yu, A.S. Poznyak, Indirect adaptive control via parallel dynamic neural networks, IEE Proceedings – Control Theory and Applications 146 (1) (1999) 25–30.

[23] P.A. Ioannou, J. Sun, Robust Adaptive Control, Prentice-Hall, Upper Saddle River, NJ, 1996.

[24] X.M. Ren, A.B. Rad, P.T. Chan, W.L. Lo, Identification and control of continuous-time nonlinear systems via dynamic neural networks, IEEE Transactions on Industrial Electronics 50 (3) (2003) 478–486.

[25] W.F. Xie, Y.Q. Zhu, Z.Y. Zhao, Y.K. Wong, Nonlinear system identification using optimized dynamic neural network, Neurocomputing 72 (13–15) (2009) 3277–3287.

[26] B.T. Polyak, Introduction to Optimization, Optimization Software, Inc., New York, 1987.

[27] W.J. Terrell, Stability and Stabilization: An Introduction, Princeton University Press, 2009.

[28] A.S. Poznyak, E.N. Sanchez, W. Yu, Differential Neural Networks for Robust Nonlinear Control: Identification, State Estimation and Trajectory Tracking, World Scientific, Singapore, 2001.

Xuan Han received his B.S. degree from the Department of Electrical & Automation Engineering, Tianjin University, P.R. China, in 2006 and his master's degree from Concordia University in 2010. His research interests include neural networks, nonlinear system identification and control, and mechatronics.

Wen-Fang Xie is an associate professor with the Department of Mechanical and Industrial Engineering at Concordia University, Montreal, Canada. She was an Industrial Research Fellowship holder from the Natural Sciences and Engineering Research Council of Canada (NSERC) and served as a senior research engineer at InCoreTec, Inc., Canada before she joined Concordia University in 2003. She received her Ph.D. from the Hong Kong Polytechnic University in 1999 and her master's degree from Beijing University of Aeronautics and Astronautics in 1991. Her research interests include nonlinear control and identification in mechatronics, artificial intelligent control, and advanced process control and system identification.

Zhijun Fu received his B.S. degree in electrical information engineering from Henan Normal University, China, in 2005. He obtained his master's degree from the University of Science and Technology Beijing, China, where he is currently a Ph.D. student. He is carrying out his Ph.D. research as a visiting scholar at Concordia University (Canada). His research interests include control theory applications, electrical machine drives, power electronics, induction motor control and electric vehicle control.

Weidong Luo received his B.S. degree from the University of Science and Technology Beijing, China, in 1982 and his master's degree in mechanical engineering from China University of Petroleum in 1985. He is a professor with the Department of Mechanical and Industrial Engineering at the University of Science and Technology Beijing, China. His research interests include mechanics of complex systems, fault diagnosis and detection, vehicle intelligent control, and modern design theory and technology.