Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2012, Article ID 829594, 12 pages
doi:10.1155/2012/829594
Research Article
Robust Stochastic Stability Analysis for Uncertain Neutral-Type Delayed Neural Networks Driven by Wiener Process
Weiwei Zhang1 and Linshan Wang2
1 College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
2 Department of Mathematics, Ocean University of China, Qingdao 266100, China
Correspondence should be addressed to Linshan Wang, [email protected]
Received 9 July 2011; Revised 20 September 2011; Accepted 27 September 2011
Academic Editor: Shiping Lu
Copyright © 2012 W. Zhang and L. Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The robust stochastic stability for a class of uncertain neutral-type delayed neural networks driven by Wiener process is investigated. By utilizing the Lyapunov-Krasovskii functional and inequality technique, some sufficient criteria are presented in terms of linear matrix inequality (LMI) to ensure the stability of the system. A numerical example is given to illustrate the applicability of the result.
1. Introduction
In the past few years, neural networks and their various generalizations have drawn much research attention owing to their promising potential applications in a variety of areas, such as robotics, aerospace, telecommunications, pattern recognition, image processing, associative memory, signal processing, and combinatorial optimization [1–3]. In such applications, it is of prime importance to ensure the asymptotic stability of the designed neural networks. Because of this, the stability of neural networks has been deeply investigated in the literature [4–14].
It is known that time delays and stochastic perturbations are commonly encountered in the implementation of neural networks and may result in instability or oscillation, so it is essential to investigate the stability of delayed stochastic neural networks [15, 16]. Moreover, uncertainties are unavoidable in the practical implementation of neural networks due to modeling errors and parameter fluctuation, which also cause instability and poor performance [15, 17, 18]. Therefore, it is significant to introduce such uncertainties into delayed stochastic neural networks.
On the other hand, because of the complicated dynamic properties of neural cells in the real world, it is natural and important that systems contain some information about the derivative of the past state. Practically, such phenomena appear in the study of automatic control, circuit analysis, chemical process simulation, population dynamics, and so forth. Recently, there has been increasing interest in the study of delayed neural networks of neutral type; see [6–15, 18–24]. In [6, 8], the authors developed the global asymptotic stability of neutral-type neural networks with delays by utilizing the Lyapunov stability theory and the LMI technique. In [9, 10], the global exponential stability of neutral-type neural networks with distributed delays is studied. However, the stochastic perturbations were not taken into account in those delayed neural networks [6–10].
In [23, 24], the authors discussed the robust stability for uncertain stochastic neural networks of neutral type with time-varying delays. However, the distributed delays were not taken into account in the models. So far, there are only a few papers that not only deal with the stochastic stability analysis for delayed neural networks of neutral type but also consider the parameter uncertainties.
To the best of our knowledge, there are very few results on the stochastic stability analysis for uncertain neutral-type neural networks with both discrete and distributed delays driven by Wiener process. This motivates the research in this paper.
In this paper, a class of uncertain neutral-type delayed neural networks driven by Wiener process is considered. By constructing a suitable Lyapunov functional, some new stability criteria are given which guarantee that the system is stochastically asymptotically stable in the mean square; they are less conservative than some existing reports, and the structure of the addressed system is more general than in other papers. The criteria can be checked easily by the LMI Control Toolbox in MATLAB. Moreover, a numerical example is given to illustrate the effectiveness and the improvement over some existing results.
2. Preliminaries
Notations. A < 0 denotes that A is a negative definite matrix. The superscript "T" stands for the transpose of a matrix. (Ω, F, P) denotes a complete probability space, and E(·) stands for the mathematical expectation operator. ‖·‖ stands for the Euclidean norm. I is the identity matrix of appropriate dimension, and the symmetric terms in a symmetric matrix are denoted by ∗.
Consider the following class of uncertain neutral-type delayed neural networks driven by Wiener process:

d[x(t) - Cx(t - h(t))] = \Big[ -A(t)x(t) + B(t)f(x(t - \tau(t))) + D(t)\int_{t - r(t)}^{t} f(x(s))\,ds \Big]\,dt
    + [H_0(t)x(t) + H_1(t)x(t - \tau(t))]\,dw(t),
x(t_0 + s) = \phi(s), \quad s \in [t_0 - \rho, t_0],     (2.1)
where x = (x_1, x_2, \ldots, x_n)^T is the neuron state vector, A(t) = A + ΔA(t), B(t) = B + ΔB(t), D(t) = D + ΔD(t), H0(t) = H0 + ΔH0(t), H1(t) = H1 + ΔH1(t), A = diag(a_i)_{n×n} is a positive diagonal matrix, B, C, D ∈ R^{n×n} are the connection weight matrices, H0, H1 ∈ R^{n×n} are known real constant matrices, and ΔA(t), ΔB(t), ΔD(t), ΔH0(t), ΔH1(t) represent the time-varying parameter uncertain terms. f(x) = (f_1(x_1), f_2(x_2), \ldots, f_n(x_n))^T is the neuron activation function with f(0) = 0. w(t) = (w_1(t), w_2(t), \ldots, w_n(t))^T is an n-dimensional Wiener process defined on a complete probability space (Ω, F, P). r(t), τ(t), h(t) are nonnegative, bounded, and differentiable time-varying delays satisfying

0 < r(t) \le r < \infty,
0 < \tau(t) \le \tau < \infty, \quad \dot\tau(t) \le \eta_1 < \infty,
0 < h(t) \le h < \infty, \quad \dot h(t) \le \eta_2 < \infty.     (2.2)
The admissible parameter uncertain terms are assumed to be of the following form:

[\Delta A(t), \Delta B(t), \Delta D(t), \Delta H_0(t), \Delta H_1(t)] = UF(t)[M_1, M_2, M_3, M_4, M_5],     (2.3)

where U and M_i, i = 1, \ldots, 5, are known real constant matrices, and F(t) is the time-varying uncertain matrix satisfying

F^T(t)F(t) \le I.     (2.4)
Suppose that f(·) is bounded and satisfies the following condition:

\|f(x)\| \le \|Gx\|,     (2.5)
where G ∈ R^{n×n} is a known constant matrix.
Assume that the initial value φ : [−ρ, 0] → R^n is F_0-measurable and continuously differentiable. We introduce the following norm:

\|\phi\|_\rho^2 = \max\Big\{ \sup_{-\alpha \le s \le 0} E|\phi_i(s)|^2, \; \sup_{-h \le s \le 0} E|\phi_i'(s)|^2 \Big\} < \infty,     (2.6)

where ρ = max{τ, h, r} and α = max{τ, r}.
Under the above assumptions, it is easy to verify that there exists a unique equilibrium point of system (2.1) (see [25]).
Definition 2.1. The equilibrium point of (2.1) is said to be globally robustly stochastically asymptotically stable in the mean square if the following condition holds:

\lim_{t \to +\infty} E|x(t, t_0, \phi)|^2 = 0, \quad t \ge t_0,     (2.7)

where x(t, t_0, φ) is any solution of model (2.1) with initial value φ.
Lemma 2.2 (Schur complement [26]). Given constant matrices Ω1, Ω2, Ω3 with appropriate dimensions, where Ω1^T = Ω1 and Ω2^T = Ω2 > 0, then

\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0,     (2.8)

if and only if

\begin{pmatrix} \Omega_1 & \Omega_3^T \\ * & -\Omega_2 \end{pmatrix} < 0, \quad \text{or} \quad \begin{pmatrix} -\Omega_2 & \Omega_3 \\ * & \Omega_1 \end{pmatrix} < 0.     (2.9)
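Lemma 2.2 is easy to exercise numerically. The following toy check, with block sizes and values of our own choosing (not from the paper), confirms that the scalar condition (2.8) and the first block form in (2.9) give the same verdict:

```python
import numpy as np

# Toy illustration of Lemma 2.2 (Schur complement); Omega_1, Omega_2, Omega_3
# below are our own small test matrices, not from the paper.
rng = np.random.default_rng(0)
O2 = 2.0 * np.eye(2)                       # Omega_2 = Omega_2^T > 0
O3 = rng.normal(size=(2, 2))               # Omega_3, arbitrary
O1 = -5.0 * np.eye(2)                      # Omega_1 = Omega_1^T

small = O1 + O3.T @ np.linalg.inv(O2) @ O3      # left-hand side of (2.8)
big = np.block([[O1, O3.T], [O3, -O2]])         # first block form in (2.9)

small_nd = np.linalg.eigvalsh(small).max() < 0  # negative definite?
big_nd = np.linalg.eigvalsh(big).max() < 0
assert small_nd == big_nd                  # the two conditions agree
```

This is why (3.1) below, which contains the inverse-free blocks −ε1I and −ε2I, can replace a condition involving ε1⁻¹ and ε2⁻¹.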
Lemma 2.3 (see [26]). Given matrices D, E, and F with F^T F ≤ I and a scalar ε > 0, then

DFE + E^T F^T D^T \le \varepsilon DD^T + \varepsilon^{-1} E^T E.     (2.10)
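The bound of Lemma 2.3 can likewise be verified on random data; the test matrices and the scaled orthogonal F below (chosen so that F^T F ≤ I) are our own illustrative values:

```python
import numpy as np

# Numerical check of Lemma 2.3: eps*D*D^T + (1/eps)*E^T*E dominates
# D*F*E + E^T*F^T*D^T whenever F^T F <= I. Test data is our own.
rng = np.random.default_rng(0)
D = rng.normal(size=(3, 3))
E = rng.normal(size=(3, 3))
Qm, _ = np.linalg.qr(rng.normal(size=(3, 3)))
F = 0.9 * Qm                               # orthogonal scaled down: F^T F = 0.81 I
eps = 1.7
gap = eps * D @ D.T + E.T @ E / eps - (D @ F @ E + E.T @ F.T @ D.T)
assert np.linalg.eigvalsh(gap).min() >= -1e-9   # gap is positive semidefinite
```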
Lemma 2.4 (see [27]). For any constant matrix M ∈ R^{n×n} with M = M^T > 0, a scalar γ > 0, and a vector function x(t) : [0, γ] → R^n such that the integrations below are well defined,

\Big[\int_0^\gamma x(s)\,ds\Big]^T M \Big[\int_0^\gamma x(s)\,ds\Big] \le \gamma \int_0^\gamma x^T(s) M x(s)\,ds.     (2.11)
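A discrete-sum analogue of Lemma 2.4 (essentially the Cauchy-Schwarz inequality in the M-weighted inner product) can be checked by replacing the integrals with Riemann sums; the grid size, γ, M, and the samples below are our own choices:

```python
import numpy as np

# Discrete analogue of Lemma 2.4 with the integrals replaced by Riemann sums.
rng = np.random.default_rng(0)
n, N, gamma = 3, 400, 2.0
dt = gamma / N
M = np.eye(n) + 0.5 * np.ones((n, n))      # symmetric positive definite
xs = rng.normal(size=(N, n))               # samples of x(s) on [0, gamma]
ix = xs.sum(axis=0) * dt                   # Riemann sum for int_0^gamma x(s) ds
lhs = ix @ M @ ix
rhs = gamma * dt * np.einsum('ij,jk,ik->', xs, M, xs)  # gamma * sum x^T M x * dt
assert lhs <= rhs + 1e-9
```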
3. Main Results
Theorem 3.1. System (2.1) is globally robustly stochastically asymptotically stable in the mean square if there exist symmetric positive definite matrices P, Q, R, S, U1, U2 and positive scalars δ, ε1, ε2 > 0 such that the following LMI holds:

\Xi = \begin{pmatrix}
\Xi_1 & A^T PC & PB - \varepsilon_1 M_1^T M_2 & PD - \varepsilon_1 M_1^T M_3 & \varepsilon_2 M_4^T M_5 & H_0^T P & -PU & 0 \\
* & \Xi_2 & -C^T PB & -C^T PD & 0 & 0 & -C^T PU & 0 \\
* & * & \Xi_3 & \varepsilon_1 M_2^T M_3 & 0 & 0 & 0 & 0 \\
* & * & * & \Xi_4 & 0 & 0 & 0 & 0 \\
* & * & * & * & \Xi_5 & H_1^T P & 0 & 0 \\
* & * & * & * & * & -P & 0 & PU \\
* & * & * & * & * & * & -\varepsilon_1 I & 0 \\
* & * & * & * & * & * & * & -\varepsilon_2 I
\end{pmatrix} < 0,     (3.1)

where Ξ1 = −PA − A^T P + Q + R + rG^T SG + ε1 M1^T M1 + ε2 M4^T M4, Ξ2 = −U1 − (1 − η2)R, Ξ3 = −δI + ε1 M2^T M2, Ξ4 = −r^{−1}S + ε1 M3^T M3, and Ξ5 = −U2 − (1 − η1)Q + δG^T G + ε2 M5^T M5.
Proof. Using Lemma 2.2, the matrix inequality Ξ < 0 implies that

\begin{pmatrix}
\Xi_1 & A^T PC & PB - \varepsilon_1 M_1^T M_2 & PD - \varepsilon_1 M_1^T M_3 & \varepsilon_2 M_4^T M_5 & H_0^T P \\
* & \Xi_2 & -C^T PB & -C^T PD & 0 & 0 \\
* & * & \Xi_3 & \varepsilon_1 M_2^T M_3 & 0 & 0 \\
* & * & * & \Xi_4 & 0 & 0 \\
* & * & * & * & \Xi_5 & H_1^T P \\
* & * & * & * & * & -P
\end{pmatrix}
+
\begin{pmatrix}
PU & 0 \\ -C^T PU & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & PU
\end{pmatrix}
\begin{pmatrix}
\varepsilon_1^{-1} I & 0 \\ * & \varepsilon_2^{-1} I
\end{pmatrix}
\begin{pmatrix}
U^T P & -U^T PC & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & U^T P
\end{pmatrix}

=
\begin{pmatrix}
\Phi_1 & A^T PC & PB & PD & 0 & H_0^T P \\
* & \Xi_2 & -C^T PB & -C^T PD & 0 & 0 \\
* & * & -\delta I & 0 & 0 & 0 \\
* & * & * & -r^{-1} S & 0 & 0 \\
* & * & * & * & \Phi_2 & H_1^T P \\
* & * & * & * & * & -P
\end{pmatrix}
+ \varepsilon_1
\begin{pmatrix} -M_1^T \\ 0 \\ M_2^T \\ M_3^T \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} -M_1^T \\ 0 \\ M_2^T \\ M_3^T \\ 0 \\ 0 \end{pmatrix}^T
+ \varepsilon_2
\begin{pmatrix} M_4^T \\ 0 \\ 0 \\ 0 \\ M_5^T \\ 0 \end{pmatrix}
\begin{pmatrix} M_4^T \\ 0 \\ 0 \\ 0 \\ M_5^T \\ 0 \end{pmatrix}^T
+ \varepsilon_1^{-1}
\begin{pmatrix} PU \\ -C^T PU \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} PU \\ -C^T PU \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}^T
+ \varepsilon_2^{-1}
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ PU \end{pmatrix}
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ PU \end{pmatrix}^T
< 0,     (3.2)

where Φ1 = −PA − A^T P + Q + R + rG^T SG and Φ2 = −U2 − (1 − η1)Q + δG^T G.
From (2.3) and (2.4), using Lemma 2.3, we have

\begin{pmatrix}
-P\Delta A(t) - \Delta A^T(t)P & \Delta A^T(t)PC & P\Delta B(t) & P\Delta D(t) & 0 & \Delta H_0^T(t)P \\
* & 0 & -C^T P\Delta B(t) & -C^T P\Delta D(t) & 0 & 0 \\
* & * & 0 & 0 & 0 & 0 \\
* & * & * & 0 & 0 & 0 \\
* & * & * & * & 0 & \Delta H_1^T(t)P \\
* & * & * & * & * & 0
\end{pmatrix}

=
\begin{pmatrix} -M_1^T \\ 0 \\ M_2^T \\ M_3^T \\ 0 \\ 0 \end{pmatrix}
F^T(t)
\begin{pmatrix} PU \\ -C^T PU \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}^T
+
\begin{pmatrix} PU \\ -C^T PU \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
F(t)
\begin{pmatrix} -M_1^T \\ 0 \\ M_2^T \\ M_3^T \\ 0 \\ 0 \end{pmatrix}^T
+
\begin{pmatrix} M_4^T \\ 0 \\ 0 \\ 0 \\ M_5^T \\ 0 \end{pmatrix}
F^T(t)
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ PU \end{pmatrix}^T
+
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ PU \end{pmatrix}
F(t)
\begin{pmatrix} M_4^T \\ 0 \\ 0 \\ 0 \\ M_5^T \\ 0 \end{pmatrix}^T

\le \varepsilon_1
\begin{pmatrix} -M_1^T \\ 0 \\ M_2^T \\ M_3^T \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} -M_1^T \\ 0 \\ M_2^T \\ M_3^T \\ 0 \\ 0 \end{pmatrix}^T
+ \varepsilon_1^{-1}
\begin{pmatrix} PU \\ -C^T PU \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} PU \\ -C^T PU \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}^T
+ \varepsilon_2
\begin{pmatrix} M_4^T \\ 0 \\ 0 \\ 0 \\ M_5^T \\ 0 \end{pmatrix}
\begin{pmatrix} M_4^T \\ 0 \\ 0 \\ 0 \\ M_5^T \\ 0 \end{pmatrix}^T
+ \varepsilon_2^{-1}
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ PU \end{pmatrix}
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ PU \end{pmatrix}^T.     (3.3)
Together with (3.2), we get

\begin{pmatrix}
\Psi & A^T(t)PC & PB(t) & PD(t) & 0 & H_0^T(t)P \\
* & \Xi_2 & -C^T PB(t) & -C^T PD(t) & 0 & 0 \\
* & * & -\delta I & 0 & 0 & 0 \\
* & * & * & -r^{-1} S & 0 & 0 \\
* & * & * & * & \Phi_2 & H_1^T(t)P \\
* & * & * & * & * & -P
\end{pmatrix} < 0,     (3.4)

where Ψ = −PA(t) − A^T(t)P + Q + R + rG^T SG. Utilizing Lemma 2.2 again, we obtain

\Sigma =
\begin{pmatrix}
\Psi & A^T(t)PC & PB(t) & PD(t) & 0 \\
* & \Xi_2 & -C^T PB(t) & -C^T PD(t) & 0 \\
* & * & -\delta I & 0 & 0 \\
* & * & * & -r^{-1} S & 0 \\
* & * & * & * & \Phi_2
\end{pmatrix}
+
\begin{pmatrix} H_0^T(t) \\ 0 \\ 0 \\ 0 \\ H_1^T(t) \end{pmatrix}
P
\begin{pmatrix} H_0^T(t) \\ 0 \\ 0 \\ 0 \\ H_1^T(t) \end{pmatrix}^T
< 0.     (3.5)
Construct a positive definite Lyapunov-Krasovskii functional as follows:

V(t, x(t)) = y^T(t)Py(t) + \int_{t-\tau(t)}^{t} x^T(s)Qx(s)\,ds + \int_{t-h(t)}^{t} x^T(s)Rx(s)\,ds
    + \int_{-r(t)}^{0}\int_{t+\theta}^{t} f^T(x(s))Sf(x(s))\,ds\,d\theta
    + \int_{t}^{T} x^T(s - h(s))U_1 x(s - h(s))\,ds
    + \int_{t}^{T} x^T(s - \tau(s))U_2 x(s - \tau(s))\,ds,     (3.6)
where y(t) = x(t) − Cx(t − h(t)) and T > t is a constant. By Itô's differential formula, we get

dV(t, x(t)) \le \Big\{ 2y^T(t)P\Big[ -A(t)x(t) + B(t)f(x(t - \tau(t))) + D(t)\int_{t-r(t)}^{t} f(x(s))\,ds \Big]
    + x^T(t)Qx(t) - (1 - \dot\tau(t))x^T(t - \tau(t))Qx(t - \tau(t)) + x^T(t)Rx(t)
    - (1 - \dot h(t))x^T(t - h(t))Rx(t - h(t)) + rf^T(x(t))Sf(x(t))
    - \int_{t-r(t)}^{t} f^T(x(s))Sf(x(s))\,ds
    - x^T(t - h(t))U_1 x(t - h(t)) - x^T(t - \tau(t))U_2 x(t - \tau(t))
    + [H_0(t)x(t) + H_1(t)x(t - \tau(t))]^T P[H_0(t)x(t) + H_1(t)x(t - \tau(t))] \Big\}\,dt
    + 2y^T(t)P[H_0(t)x(t) + H_1(t)x(t - \tau(t))]\,dw(t)

\le \Big\{ 2[x(t) - Cx(t - h(t))]^T P\Big[ -A(t)x(t) + B(t)f(x(t - \tau(t))) + D(t)\int_{t-r(t)}^{t} f(x(s))\,ds \Big]
    + x^T(t)Qx(t) - (1 - \eta_1)x^T(t - \tau(t))Qx(t - \tau(t)) + x^T(t)Rx(t)
    - (1 - \eta_2)x^T(t - h(t))Rx(t - h(t)) + rf^T(x(t))Sf(x(t))
    - \int_{t-r(t)}^{t} f^T(x(s))Sf(x(s))\,ds
    - x^T(t - h(t))U_1 x(t - h(t)) - x^T(t - \tau(t))U_2 x(t - \tau(t))
    + [H_0(t)x(t) + H_1(t)x(t - \tau(t))]^T P[H_0(t)x(t) + H_1(t)x(t - \tau(t))] \Big\}\,dt
    + 2y^T(t)P[H_0(t)x(t) + H_1(t)x(t - \tau(t))]\,dw(t).     (3.7)
From (2.5), for a scalar δ > 0, we have

-\delta\big[ f^T(x(t - \tau(t)))f(x(t - \tau(t))) - x^T(t - \tau(t))G^T Gx(t - \tau(t)) \big] \ge 0.     (3.8)
Using Lemma 2.4, we have

\Big(\int_{t-r(t)}^{t} f(x(s))\,ds\Big)^T r^{-1}S \Big(\int_{t-r(t)}^{t} f(x(s))\,ds\Big) \le \int_{t-r(t)}^{t} f^T(x(s))Sf(x(s))\,ds.     (3.9)
Combining (3.8) and (3.9) with dV(t, x(t)), we obtain

dV(t, x(t)) \le \Big\{ x^T(t)\big[ -PA(t) - A^T(t)P + Q + R + rG^T SG \big]x(t) + x^T(t)A^T(t)PCx(t - h(t))
    + x^T(t - h(t))C^T PA(t)x(t) + x^T(t)PB(t)f(x(t - \tau(t)))
    + f^T(x(t - \tau(t)))B^T(t)Px(t)
    + x^T(t)PD(t)\int_{t-r(t)}^{t} f(x(s))\,ds + \Big(\int_{t-r(t)}^{t} f(x(s))\,ds\Big)^T D^T(t)Px(t)
    + x^T(t - h(t))\big[ -U_1 - (1 - \eta_2)R \big]x(t - h(t)) - x^T(t - h(t))C^T PB(t)f(x(t - \tau(t)))
    - f^T(x(t - \tau(t)))B^T(t)PCx(t - h(t)) - x^T(t - h(t))C^T PD(t)\int_{t-r(t)}^{t} f(x(s))\,ds
    - \Big(\int_{t-r(t)}^{t} f(x(s))\,ds\Big)^T D^T(t)PCx(t - h(t))
    + x^T(t - \tau(t))\big[ -U_2 - (1 - \eta_1)Q \big]x(t - \tau(t))
    - \Big(\int_{t-r(t)}^{t} f(x(s))\,ds\Big)^T r^{-1}S\Big(\int_{t-r(t)}^{t} f(x(s))\,ds\Big)
    + [H_0(t)x(t) + H_1(t)x(t - \tau(t))]^T P[H_0(t)x(t) + H_1(t)x(t - \tau(t))] \Big\}\,dt
    + 2y^T(t)P[H_0(t)x(t) + H_1(t)x(t - \tau(t))]\,dw(t).     (3.10)
That is,

dV(t, x(t)) \le \xi^T(t)\Sigma\xi(t)\,dt + 2y^T(t)P[H_0(t)x(t) + H_1(t)x(t - \tau(t))]\,dw(t),     (3.11)

where \xi^T(t) = \big( x^T(t), \, x^T(t - h(t)), \, f^T(x(t - \tau(t))), \, \big(\int_{t-r(t)}^{t} f(x(s))\,ds\big)^T, \, x^T(t - \tau(t)) \big) and the matrix Σ is given in (3.5).
Taking the mathematical expectation, we get

E\Big(\frac{dV(t, x(t))}{dt}\Big) \le E\big(\xi^T(t)\Sigma\xi(t)\big) \le \lambda_{\max}(\Sigma)E\|x(t)\|^2.     (3.12)

From (3.5), we know that Σ < 0, that is, λmax(Σ) < 0. By the Lyapunov-Krasovskii stability theorems, system (2.1) is therefore globally robustly stochastically asymptotically stable in the mean square. The proof is completed.
Remark 3.2. To the best of our knowledge, few authors have considered the stochastically asymptotic stability of uncertain neutral-type neural networks driven by Wiener process; see the recent papers [18, 22–24]. However, it is assumed in [18] that the system is a linear model and that all delays are constants. In [22], it is assumed that the time-varying delays satisfy τ̇(t) ≤ ρτ < 1 and ḣ(t) ≤ ρh < 1; in this paper, we relax this to τ̇(t) ≤ ρτ < ∞ and ḣ(t) ≤ ρh < ∞. In [23, 24], the authors discussed the robust stability for uncertain stochastic neural networks of neutral type with time-varying delays, but the distributed delays were not taken into account in the models. Hence, our results in this paper have a wider adaptive range.
Remark 3.3. Suppose that C = 0 and D(t) = 0 (i.e., without the neutral term and distributed delays); then system (2.1) becomes the one investigated in [15].
Remark 3.4. In [17], the authors studied the global stability for uncertain stochastic neural networks with time-varying delay by the Lyapunov functional method and LMI technique. However, the neutral term and distributed delays were not taken into account in the models. Therefore, our developed results in this paper are more general than those reported in [17].
Remark 3.5. It should be noted that the condition is given as linear matrix inequalities (LMIs); therefore, by using the MATLAB LMI Toolbox, it is straightforward to check the feasibility of the condition.
4. Numerical Example
Consider the following uncertain neutral-type delayed neural networks:

d[x(t) - Cx(t - h(t))] = \Big[ -(A + UF(t)M_1)x(t) + (B + UF(t)M_2)f(x(t - \tau(t)))
    + (D + UF(t)M_3)\int_{t-r(t)}^{t} f(x(s))\,ds \Big]\,dt
    + [UF(t)M_4 x(t) + UF(t)M_5 x(t - \tau(t))]\,dw(t),     (4.1)
where n = 2, f_i(x_i) = sin x_i, i = 1, 2, η1 = 0.7, η2 = 0.5, 0 < r(t) ≤ r = 3, and F^T(t)F(t) ≤ I. The constant matrices are
A = \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}, \quad
B = \begin{pmatrix} 0.2 & 0.16 \\ 0.04 & 0.08 \end{pmatrix}, \quad
C = \begin{pmatrix} 0.2 & 0 \\ 0 & 0.2 \end{pmatrix},
D = \begin{pmatrix} 0.04 & 0.03 \\ -0.02 & 0.05 \end{pmatrix}, \quad
U = \begin{pmatrix} 0.1 & 0.5 \\ 0.5 & 0.3 \end{pmatrix}, \quad
M_1 = \begin{pmatrix} 0.6 & 0 \\ 0 & 0.6 \end{pmatrix},
M_2 = M_3 = M_4 = \begin{pmatrix} 0.2 & 0 \\ 0 & 0.2 \end{pmatrix}, \quad
M_5 = \begin{pmatrix} 0.4 & 0 \\ 0 & 0.4 \end{pmatrix}, \quad
G = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.     (4.2)
By using the MATLAB LMI Control Toolbox, we obtain the feasible solution as follows: δ = 2.0876, ε1 = 5.0486, ε2 = 8.0446,

P = \begin{pmatrix} 6.8465 & -0.7257 \\ -0.7257 & 6.6012 \end{pmatrix}, \quad
Q = \begin{pmatrix} 9.9371 & -0.2792 \\ -0.2792 & 10.4388 \end{pmatrix}, \quad
R = \begin{pmatrix} 8.0104 & -2.3936 \\ -2.3936 & 5.4991 \end{pmatrix},
S = \begin{pmatrix} 3.4984 & -0.8143 \\ -0.8143 & 1.9588 \end{pmatrix}, \quad
U_1 = \begin{pmatrix} 1.1111 & 0.3889 \\ 0.3889 & 1.1060 \end{pmatrix}, \quad
U_2 = \begin{pmatrix} 2.4193 & -0.6561 \\ -0.6561 & 2.6270 \end{pmatrix}.     (4.3)
That is, system (4.1) is globally robustly stochastically asymptotically stable in the mean square.
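The feasibility claim can be re-examined numerically. The sketch below assembles the matrix Ξ of Theorem 3.1 from the Section 4 data (in (4.1) the constant parts H0 and H1 are zero) together with the rounded solution (4.3) and reports the largest eigenvalue; because (4.3) is printed to only four decimals, we inspect the value rather than asserting strict negativity.

```python
import numpy as np

# Re-check sketch for the LMI (3.1) of Theorem 3.1 using the Section 4 data
# and the rounded feasible solution (4.3). H0 = H1 = 0 in example (4.1).
A = 3.0 * np.eye(2)
B = np.array([[0.2, 0.16], [0.04, 0.08]])
C = 0.2 * np.eye(2)
D = np.array([[0.04, 0.03], [-0.02, 0.05]])
U = np.array([[0.1, 0.5], [0.5, 0.3]])
H0, H1 = np.zeros((2, 2)), np.zeros((2, 2))
M1, M5, G = 0.6 * np.eye(2), 0.4 * np.eye(2), np.eye(2)
M2 = M3 = M4 = 0.2 * np.eye(2)
r, eta1, eta2 = 3.0, 0.7, 0.5
delta, e1, e2 = 2.0876, 5.0486, 8.0446
P = np.array([[6.8465, -0.7257], [-0.7257, 6.6012]])
Q = np.array([[9.9371, -0.2792], [-0.2792, 10.4388]])
R = np.array([[8.0104, -2.3936], [-2.3936, 5.4991]])
S = np.array([[3.4984, -0.8143], [-0.8143, 1.9588]])
U1 = np.array([[1.1111, 0.3889], [0.3889, 1.1060]])
U2 = np.array([[2.4193, -0.6561], [-0.6561, 2.6270]])

I2, Z = np.eye(2), np.zeros((2, 2))
X1 = -P @ A - A.T @ P + Q + R + r * G.T @ S @ G + e1 * M1.T @ M1 + e2 * M4.T @ M4
X2 = -U1 - (1 - eta2) * R
X3 = -delta * I2 + e1 * M2.T @ M2
X4 = -S / r + e1 * M3.T @ M3
X5 = -U2 - (1 - eta1) * Q + delta * G.T @ G + e2 * M5.T @ M5
upper = np.block([
    [X1, A.T @ P @ C, P @ B - e1 * M1.T @ M2, P @ D - e1 * M1.T @ M3,
     e2 * M4.T @ M5, H0.T @ P, -P @ U, Z],
    [Z, X2, -C.T @ P @ B, -C.T @ P @ D, Z, Z, -C.T @ P @ U, Z],
    [Z, Z, X3, e1 * M2.T @ M3, Z, Z, Z, Z],
    [Z, Z, Z, X4, Z, Z, Z, Z],
    [Z, Z, Z, Z, X5, H1.T @ P, Z, Z],
    [Z, Z, Z, Z, Z, -P, Z, P @ U],
    [Z, Z, Z, Z, Z, Z, -e1 * I2, Z],
    [Z, Z, Z, Z, Z, Z, Z, -e2 * I2],
])
Xi = np.triu(upper) + np.triu(upper, 1).T   # fill the starred lower blocks
print("largest eigenvalue of Xi:", np.linalg.eigvalsh(Xi).max())
```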
5. Conclusion
In this paper, the stochastic asymptotic stability problem has been studied for a class of uncertain neutral-type delayed neural networks driven by Wiener process by utilizing the Lyapunov-Krasovskii functional and the linear matrix inequality (LMI) approach. A numerical example is given to illustrate the applicability of the result.
Acknowledgment
This work was fully supported by the National Natural Science Foundation of China (nos. 10771199 and 10871117).
References
[1] A. Bouzerdoum and T. R. Pattison, "Neural network for quadratic optimization with bound constraints," IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 293–304, 1993.
[2] M. Forti and A. Tesi, "New conditions for global stability of neural networks with application to linear and quadratic programming problems," IEEE Transactions on Circuits and Systems I, vol. 42, no. 7, pp. 354–366, 1995.
[3] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554–562, 1988.
[4] S. Xu, J. Lam, D. W. C. Ho, and Y. Zou, "Delay-dependent exponential stability for a class of neural networks with time delays," Journal of Computational and Applied Mathematics, vol. 183, no. 1, pp. 16–28, 2005.
[5] R. Zhang and L. Wang, "Global exponential robust stability of interval cellular neural networks with S-type distributed delays," Mathematical and Computer Modelling, vol. 50, no. 3-4, pp. 380–385, 2009.
[6] J. H. Park, O. M. Kwon, and S. M. Lee, "LMI optimization approach on stability for delayed neural networks of neutral-type," Applied Mathematics and Computation, vol. 196, no. 1, pp. 236–244, 2008.
[7] R. Samli and S. Arik, "New results for global stability of a class of neutral-type neural systems with time delays," Applied Mathematics and Computation, vol. 210, no. 2, pp. 564–570, 2009.
[8] R. Rakkiyappan and P. Balasubramaniam, "LMI conditions for global asymptotic stability results for neutral-type neural networks with distributed time delays," Applied Mathematics and Computation, vol. 204, no. 1, pp. 317–324, 2008.
[9] R. Rakkiyappan and P. Balasubramaniam, "New global exponential stability results for neutral type neural networks with distributed time delays," Neurocomputing, vol. 71, no. 4–6, pp. 1039–1045, 2008.
[10] L. Liu, Z. Han, and W. Li, "Global stability analysis of interval neural networks with discrete and distributed delays of neutral type," Expert Systems with Applications, vol. 36, no. 3, pp. 7328–7331, 2009.
[11] R. Samidurai, S. M. Anthoni, and K. Balachandran, "Global exponential stability of neutral-type impulsive neural networks with discrete and distributed delays," Nonlinear Analysis: Hybrid Systems, vol. 4, no. 1, pp. 103–112, 2010.
[12] R. Rakkiyappan, P. Balasubramaniam, and J. Cao, "Global exponential stability results for neutral-type impulsive neural networks," Nonlinear Analysis: Real World Applications, vol. 11, no. 1, pp. 122–130, 2010.
[13] J. H. Park and O. M. Kwon, "Further results on state estimation for neural networks of neutral-type with time-varying delay," Applied Mathematics and Computation, vol. 208, no. 1, pp. 69–75, 2009.
[14] J. Qiu and J. Cao, "Delay-dependent robust stability of neutral-type neural networks with time delays," Journal of Mathematical Control Science and Applications, vol. 1, pp. 179–188, 2007.
[15] J. Zhang, P. Shi, and J. Qiu, "Novel robust stability criteria for uncertain stochastic Hopfield neural networks with time-varying delays," Nonlinear Analysis: Real World Applications, vol. 8, no. 4, pp. 1349–1357, 2007.
[16] L. Wang, Z. Zhang, and Y. Wang, "Stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters," Physics Letters A, vol. 372, no. 18, pp. 3201–3209, 2008.
[17] Y. Wu, Y. Wu, and Y. Chen, "Mean square exponential stability of uncertain stochastic neural networks with time-varying delay," Neurocomputing, vol. 72, no. 10–12, pp. 2379–2384, 2009.
[18] M. H. Jiang, Y. Shen, and X. X. Liao, "Robust stability of uncertain neutral linear stochastic differential delay system," Applied Mathematics and Mechanics, vol. 28, no. 6, pp. 741–748, 2007.
[19] H. Zhang, M. Dong, Y. Wang, and N. Sun, "Stochastic stability analysis of neutral-type impulsive neural networks with mixed time-varying delays and Markovian jumping," Neurocomputing, vol. 73, no. 13–15, pp. 2689–2695, 2010.
[20] Q. Zhu and J. Cao, "Stability analysis for stochastic neural networks of neutral type with both Markovian jump parameters and mixed time delays," Neurocomputing, vol. 73, no. 13–15, pp. 2671–2680, 2010.
[21] H. Bao and J. Cao, "Stochastic global exponential stability for neutral-type impulsive neural networks with mixed time-delays and Markovian jumping parameters," Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 9, pp. 3786–3791, 2011.
[22] X. Li, "Global robust stability for stochastic interval neural networks with continuously distributed delays of neutral type," Applied Mathematics and Computation, vol. 215, no. 12, pp. 4370–4384, 2010.
[23] H. Chen, Y. Zhang, and P. Hu, "Novel delay-dependent robust stability criteria for neutral stochastic delayed neural networks," Neurocomputing, vol. 73, no. 13–15, pp. 2554–2561, 2010.
[24] G. Liu, S. X. Yang, Y. Chai, W. Feng, and W. Fu, "Robust stability criteria for uncertain stochastic neural networks of neutral-type with interval time-varying delays," Neural Computing and Applications, in press.
[25] X. Mao, Stochastic Differential Equations and Their Applications, Horwood Publishing Series in Mathematics & Applications, Horwood Publishing, Chichester, UK, 1997.
[26] Y. Y. Wang, L. Xie, and C. E. de Souza, "Robust control of a class of uncertain nonlinear systems," Systems & Control Letters, vol. 19, no. 2, pp. 139–149, 1992.
[27] K. Gu, "An integral inequality in the stability problem of time-delay systems," in Proceedings of the 39th IEEE Conference on Decision and Control, pp. 2805–2810, Sydney, Australia, December 2000.