IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 25, NO. 4, APRIL 2014 751

A Stochastic Mean Field Model for an Excitatory and Inhibitory Synaptic Drive Cortical Neuronal Network

Qing Hui, Member, IEEE, Wassim M. Haddad, Fellow, IEEE, James M. Bailey, and Tomohisa Hayakawa, Member, IEEE

Abstract— With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we develop a mean field synaptic drive firing rate cortical neuronal model and demonstrate how the induction of general anesthesia can be explained using multistability; the property whereby the solutions of a dynamical system exhibit multiple attracting equilibria under asymptotically slowly changing inputs or system parameters. In particular, we demonstrate multistability in the mean when the system initial conditions or the system coefficients of the neuronal connectivity matrix are random variables. Uncertainty in the system coefficients is captured by representing system uncertain parameters by a multiplicative white noise model wherein stochastic integration is interpreted in the sense of Itô. Modeling a priori system parameter uncertainty using a multiplicative white noise model is motivated by means of the maximum entropy principle of Jaynes and statistical analysis.

Index Terms— Brownian motion, excitatory and inhibitory neurons, general anesthesia, mean field model, multiplicative white noise, spiking neuron models, stochastic multistability, uncertainty modeling, Wiener process.

I. INTRODUCTION

THE neurosciences have relied on mathematical modeling throughout the relatively short history of this discipline [1]–[4]. Mathematical models facilitate the interpretation of experimental results, and lead to new hypotheses about neurological function, ranging from the simplest reflex arc to the question of what is consciousness. Nonlinear dynamical system theory, in particular, provides a framework for exploring the behavior of large-scale networks of neurons.

Manuscript received October 15, 2012; revised August 29, 2013; accepted September 2, 2013. Date of publication September 30, 2013; date of current version March 10, 2014. This work was supported in part by the Defense Threat Reduction Agency under Grant HDTRA1-10-1-0090 and Grant HDTRA1-13-1-0048, in part by the QNRF under NPRP Grant 4-187-2-060, and in part by the Air Force Office of Scientific Research under Grant FA9550-12-1-0192.

Q. Hui is with the Department of Mechanical Engineering, Texas Tech University, Lubbock, TX 79409-1021 USA (e-mail: [email protected]).

W. M. Haddad is with the School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA (e-mail: [email protected]).

J. M. Bailey is with the Department of Anesthesiology, Northeast Georgia Medical Center, Gainesville, GA 30503 USA (e-mail: [email protected]).

T. Hayakawa is with the Department of Mechanical and Environmental Informatics, Tokyo Institute of Technology, Tokyo 152-8552, Japan (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNNLS.2013.2281065

An important application of nonlinear dynamical systems theory to neuroscience is the study of phenomena that exhibit nearly discontinuous transitions between macroscopic states [5]. Understanding these phenomena is immediately clinically relevant as anesthetic agents exhibit this behavior [6]–[11].

In both animal and human studies, it has been observed that with increasing doses of anesthetic agents the transition from consciousness to unconsciousness or from responsiveness to nonresponsiveness in the individual subject is very sharp, almost an all-or-none transition [12], confirming the clinical observations of generations of clinicians. The neural network of the brain consists of approximately $10^{11}$ neurons (nerve cells), each having $10^4$–$10^5$ connections. Our research is postulated on the belief that the explanation for the mechanism of action of anesthetics lies in the network properties of the brain. The challenge is how to account for such a transition in network properties in terms of the known molecular properties of the anesthetic agent.

While the earliest theories of the mechanism of action of general anesthesia postulated a perturbation of the lipid bilayer membrane of neurons, more recent focus is on the interaction of the anesthetic agent with specific protein receptors [13]–[17]. The experimental evidence indicates that general anesthetics alter postsynaptic potentials [18], [19]. However, it is not immediately clear how the effect on postsynaptic potentials translates to the observed very sharp transition between consciousness and unconsciousness induced by general anesthetic agents.

The dynamics of even a single neuron are quite complex [20], and when this complexity is coupled with the large scale and high connectivity of the neural network of the brain, theoretical analysis becomes intractable without major simplifying assumptions. Typically, the description of single neuron dynamics is simplified and assumed to be described either by an integrate-and-fire model or by a spike response model [21]. In addition, the scale and connectivity of the network are simplified using mean field theories. Earlier mean field theories assumed that the brain is organized into a limited number of pools of identical spiking neurons [22]. However, more commonly mean field theories assume that the strength of connection between neurons is normally distributed around some mean value.



Mean field theories impose self-consistency on field variables; for example, if postsynaptic potentials are assumed to be a function of some mean firing rate, then those postsynaptic potentials should lead to a consistent predicted mean firing rate. The idea of applying mean field theories, drawn from the study of condensed matter, originated with [23]. Subsequently, Sompolinsky et al. [24] developed a mean field theory for neural networks analogous to the equations developed for spin glasses with randomly symmetric bonds [25]. Amit and Brunel [26] investigated the stability of system states for a network of integrate-and-fire neurons, while Brunel and Hakim [27] extended this theoretical model to the analysis of oscillations. Gerstner et al. [21], [28] subsequently developed a mean field theory using a spike response model and also demonstrated that the integrate-and-fire model was a special case of the spike response model.

In [5], we used deterministic multistability theory to explain the underlying mechanism of action for anesthesia and consciousness using a synaptic drive firing model framework [20]. Specifically, we applied our results to the induction of general anesthesia with a mean field assumption leading to a two-state (mean excitatory and mean inhibitory) model, and within this assumption, we demonstrated multistability as a consequence of changes in the inhibitory postsynaptic potential induced by anesthetic agents.

In this paper, we extend these results further by demonstrating multistability in the mean when the coefficients of the neuronal connectivity matrix are random variables or when the synaptic drive firing model initial conditions are random. Specifically, we use a stochastic multiplicative uncertainty model to include modeling of a priori uncertainty in the coefficients of the neuronal connectivity matrix by means of state-dependent noise. A stochastic multiplicative uncertainty model uses state-dependent Gaussian white noise to represent parameter uncertainty by defining a measure of ignorance, in terms of an information-theoretic entropy, and then determining the probability distribution which maximizes this measure subject to agreement with a given model.

II. BIOLOGICAL NEURAL NETWORKS

The fundamental building block of the central nervous system, the neuron, can be divided into three functionally distinct parts, namely, the dendrites, soma (or cell body), and axon. The dendrites play the role of input devices that collect signals from other neurons and transmit them to the soma, whereas the soma generates a signal that is transmitted to other neurons by the axon. The axons of other neurons connect to the dendrites and soma surfaces by means of connectors called synapses. The behavior of the neuron is best described in terms of the electrochemical potential gradient across the cell membrane. If the voltage gradient across the membrane increases to a critical threshold value, then there is a subsequent abrupt steplike increase in the potential gradient, the action potential. This action potential is transmitted from the soma along the axon to a dendrite of a receiving neuron, where it elicits the release of neurotransmitter molecules that diffuse to the dendrite of the receiving neuron and alter the voltage gradient across that neuron.

The electrochemical potential for a neuron can be described by a nonlinear four-state system [20]. Coupling these system equations for each neuron in a large neural population is computationally prohibitive. To simplify the mathematical modeling, it has been common to use phenomenological firing rate models for studying neural coding, memory, and network dynamics [20]. Firing rate models involve the averaged behavior of the spiking rates of groups of neurons rather than tracking the spike rate of each individual neuron cell. In such population models, the activity of a neuron, that is, the rate at which the neuron generates an action potential (fires), is modeled as a function of the voltage (across the membrane).

The firing of a neuron evokes voltage changes, postsynaptic potentials, on receiving neurons, that is, neurons electrically connected to the firing neurons via axon–dendrite connections. In general, neurons are either excitatory or inhibitory depending on whether the postsynaptic potential increases or decreases the potential of the receiving neuron. In particular, excitatory neurotransmitters depolarize postsynaptic membranes by increasing membrane potentials and can collectively generate an action potential. Inhibitory neurotransmitters hyperpolarize the postsynaptic membrane by decreasing membrane potentials, thereby nullifying the actions of excitatory neurotransmitters and, in certain cases, preventing the generation of action potentials.

Biological neural network models predict a voltage in the receiving or postsynaptic neuron given by

$$V(t) = \sum_{i=1}^{n_\mathrm{E}} \sum_{j} \alpha^\mathrm{E}_i(t - t_j) + \sum_{i'=1}^{n_\mathrm{I}} \sum_{j'} \alpha^\mathrm{I}_{i'}(t - t_{j'})$$

where $i \in \{1, \ldots, n_\mathrm{E}\}$ and $i' \in \{1, \ldots, n_\mathrm{I}\}$ enumerate the action potential or firings of the excitatory and inhibitory transmitting (presynaptic) neurons at firing times $t_j$ and $t_{j'}$, respectively, and $\alpha^\mathrm{E}_i(\cdot)$ and $\alpha^\mathrm{I}_{i'}(\cdot)$ are functions (in volts) describing the evolution of the excitatory and inhibitory postsynaptic potentials, respectively.

Using a (possibly discontinuous) function $f_i(\cdot)$ to represent the firing rate (in hertz) of the $i$th neuron and assuming that the firing rate is a function of the voltage $v^\mathrm{E}_i(\cdot)$ (respectively, $v^\mathrm{I}_i(\cdot)$) across the membrane of the $i$th neuron given by $f_i(v^\mathrm{E}_i)$ (respectively, $f_i(v^\mathrm{I}_i)$), it follows that

$$v^\mathrm{E}_i(t) = \sum_{j=1,\, j \neq i}^{n_\mathrm{E}} A^\mathrm{EE}_{ij} \int_{-\infty}^{t} \alpha^\mathrm{E}_j(t-\tau) f_j\big(v^\mathrm{E}_j(\tau)\big)\, d\tau + \sum_{j'=1}^{n_\mathrm{I}} A^\mathrm{EI}_{ij'} \int_{-\infty}^{t} \alpha^\mathrm{I}_{j'}(t-\tau) f_{j'}\big(v^\mathrm{I}_{j'}(\tau)\big)\, d\tau + v^\mathrm{E}_{\mathrm{th}\,i}(t), \quad i = 1, \ldots, n_\mathrm{E} \tag{1}$$

$$v^\mathrm{I}_i(t) = \sum_{j=1}^{n_\mathrm{E}} A^\mathrm{IE}_{ij} \int_{-\infty}^{t} \alpha^\mathrm{E}_j(t-\tau) f_j\big(v^\mathrm{E}_j(\tau)\big)\, d\tau + \sum_{j'=1,\, j' \neq i}^{n_\mathrm{I}} A^\mathrm{II}_{ij'} \int_{-\infty}^{t} \alpha^\mathrm{I}_{j'}(t-\tau) f_{j'}\big(v^\mathrm{I}_{j'}(\tau)\big)\, d\tau + v^\mathrm{I}_{\mathrm{th}\,i}(t), \quad i = 1, \ldots, n_\mathrm{I} \tag{2}$$


where the neuronal connectivity matrix $A^{XY}$, with units of volts $\times$ synapse, contains entries $A^{XY}_{ij}$, $X, Y \in \{\mathrm{E}, \mathrm{I}\}$, representing the coupling strength of the $j$th neuron on the $i$th neuron such that either $A^{X\mathrm{E}}_{ij} > 0$ or $A^{X\mathrm{I}}_{ij} < 0$, $X \in \{\mathrm{E}, \mathrm{I}\}$, if the $j$th neuron is connected (i.e., contributes a postsynaptic potential) to the $i$th neuron, and $A^{XY}_{ij} = 0$ otherwise. Furthermore, $v^\mathrm{E}_{\mathrm{th}\,i}(\cdot)$ and $v^\mathrm{I}_{\mathrm{th}\,i}(\cdot)$ are continuous threshold input voltages. Note that $A^\mathrm{EE}_{ii} \triangleq A^\mathrm{II}_{ii} \triangleq 0$ by definition.

Next, defining the synaptic drive (a dimensionless quantity per synapse) of each (excitatory or inhibitory) neuron by

$$S^{(\mathrm{E},\mathrm{I})}_i(t) \triangleq \int_{-\infty}^{t} \alpha^{(\mathrm{E},\mathrm{I})}_i(t-\tau) f_i\big(v^{(\mathrm{E},\mathrm{I})}_i(\tau)\big)\, d\tau \tag{3}$$

and assuming

$$\alpha^{(\mathrm{E},\mathrm{I})}_i(t) = B^{(\mathrm{E},\mathrm{I})} e^{-t/\lambda^{(\mathrm{E},\mathrm{I})}_i} \tag{4}$$

where the dimensionless gain $B^{(\mathrm{E},\mathrm{I})}$ is equal to $B^\mathrm{E}$ if the $i$th neuron is excitatory and $B^\mathrm{I}$ if the $i$th neuron is inhibitory, and similarly for $S^{(\mathrm{E},\mathrm{I})}_i$, $v^{(\mathrm{E},\mathrm{I})}_i$, $\alpha^{(\mathrm{E},\mathrm{I})}_i$, and $\lambda^{(\mathrm{E},\mathrm{I})}_i$, it follows from (3) and (4) that

$$\frac{dS^{(\mathrm{E},\mathrm{I})}_i(t)}{dt} = -\frac{1}{\lambda^{(\mathrm{E},\mathrm{I})}_i} S^{(\mathrm{E},\mathrm{I})}_i(t) + B^{(\mathrm{E},\mathrm{I})} f_i\big(v^{(\mathrm{E},\mathrm{I})}_i(t)\big).$$

Now, using the expressions for the excitatory and inhibitory voltages given by (1) and (2), respectively, it follows that

$$\frac{dS^\mathrm{E}_i(t)}{dt} = -\frac{1}{\lambda^\mathrm{E}_i} S^\mathrm{E}_i(t) + B^\mathrm{E} f_i\bigg( \sum_{j=1,\, j \neq i}^{n_\mathrm{E}} A^\mathrm{EE}_{ij} S^\mathrm{E}_j(t) + \sum_{j'=1}^{n_\mathrm{I}} A^\mathrm{EI}_{ij'} S^\mathrm{I}_{j'}(t) + v^\mathrm{E}_{\mathrm{th}\,i}(t) \bigg), \quad i = 1, \ldots, n_\mathrm{E} \tag{5}$$

$$\frac{dS^\mathrm{I}_i(t)}{dt} = -\frac{1}{\lambda^\mathrm{I}_i} S^\mathrm{I}_i(t) + B^\mathrm{I} f_i\bigg( \sum_{j=1}^{n_\mathrm{E}} A^\mathrm{IE}_{ij} S^\mathrm{E}_j(t) + \sum_{j'=1,\, j' \neq i}^{n_\mathrm{I}} A^\mathrm{II}_{ij'} S^\mathrm{I}_{j'}(t) + v^\mathrm{I}_{\mathrm{th}\,i}(t) \bigg), \quad i = 1, \ldots, n_\mathrm{I}. \tag{6}$$

The above analysis reveals that a form for capturing the neuroelectronic behavior of biological excitatory and inhibitory neuronal networks can be written as

$$\frac{dS_i(t)}{dt} = -\tau_i S_i(t) + B_i f_i\bigg( \sum_{j=1}^{n} A_{ij} S_j(t) + v_{\mathrm{th}\,i}(t) \bigg), \quad S_i(0) = S_{i0}, \quad t \geq 0, \quad i = 1, \ldots, n \tag{7}$$

where $S_i(t) \in \mathcal{D} \subseteq \mathbb{R}$, $t \geq 0$, is the $i$th synaptic drive; $v_{\mathrm{th}\,i}(t) \in \mathbb{R}$, $t \geq 0$, denotes the threshold input voltage of the $i$th neuron; $A_{ij}$ is a constant representing the coupling strength of the $j$th neuron on the $i$th neuron; $\tau_i \triangleq 1/\lambda_i$ is a time constant; $B_i$ is a constant gain for the firing rate of the $i$th neuron; and $f_i(\cdot)$ is a nonlinear activation function describing the relationship between the synaptic drive and the firing rate of the $i$th neuron.

In this paper, we assume that $f_i(\cdot)$ is a continuous function such as a half-wave rectification function. Specifically, for a typical neuron [3]

$$f_i(x) = [x]^+ \tag{8}$$

where $i \in \{1, \ldots, n\}$ and $[x]^+ = x$ if $x \geq 0$, and $[x]^+ = 0$ otherwise. Alternatively, we can approximate $f_i(x)$ by the smooth (i.e., infinitely differentiable) half-wave rectification function

$$f_i(x) = \frac{x e^{\gamma x}}{1 + e^{\gamma x}} \tag{9}$$

where $i \in \{1, \ldots, n\}$ and $\gamma \gg 0$. Note that $f_i'(x) \approx 1$ for $x > 0$ and $f_i''(x) \approx 0$, $x \neq 0$. In addition, note that (8) and (9) reflect the fact that as the voltage increases across the membrane of the $i$th neuron, the firing rate increases as well. Often, the membrane potential–firing rate curve exhibits a linear characteristic for a given range of voltages. At higher voltages, however, a saturation phenomenon appears, showing that the full effect of the firing rate has been reached. To capture this effect, $f_i(\cdot)$ can be modeled as

$$f_i(x) = \frac{f_\mathrm{max}\, e^{\gamma x}}{1 + e^{\gamma x}} \tag{10}$$

where $i \in \{1, \ldots, n\}$, $\gamma \gg 0$, and $f_\mathrm{max} = \lim_{x \to \infty} f_i(x)$ is the maximum firing rate.
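For concreteness, the firing rate functions (8)–(10) can be implemented directly; the following Python sketch is illustrative only, and the values of $\gamma$ and $f_\mathrm{max}$ are assumed examples:

```python
import numpy as np

def f_halfwave(x):
    # Half-wave rectification (8): [x]^+ = x for x >= 0 and 0 otherwise.
    return np.maximum(x, 0.0)

def f_smooth(x, gamma=50.0):
    # Smooth half-wave rectification (9): x e^{gamma x} / (1 + e^{gamma x}),
    # rewritten as x * sigmoid(gamma x) for numerical stability.
    return x / (1.0 + np.exp(-gamma * x))

def f_saturating(x, f_max=1.0, gamma=50.0):
    # Saturating firing rate (10): f_max e^{gamma x} / (1 + e^{gamma x}).
    return f_max / (1.0 + np.exp(-gamma * x))

x = np.linspace(-0.2, 0.2, 5)
print(f_halfwave(x), f_smooth(x), f_saturating(x))
```

As $\gamma$ grows, (9) approaches the hard rectification (8), while (10) saturates at $f_\mathrm{max}$ for large positive arguments.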

III. TWO-CLASS MEAN EXCITATORY AND INHIBITORY SYNAPTIC DRIVE MODEL

As shown in [5], the excitatory and inhibitory neural network model given by (5) and (6) can possess multiple equilibria. For certain values of the model parameters it can be shown that as the inhibitory time constants $\lambda^\mathrm{I}_i$ get larger, the equilibrium states can flip their stabilities. Since molecular studies suggest that one possible mechanism of action of anesthetics is the prolongation of the time constants of inhibitory neurons [18], [19], this suggests that general anesthesia is a phenomenon in which different equilibria can be attained with changing anesthetic agent concentrations. In this section, we develop a simplified model involving mean excitatory and inhibitory synaptic drives to explore this multistability phenomenon.

Consider the excitatory and inhibitory synaptic drive model given by (5) and (6) with $f_i(\cdot) = f(\cdot)$, $B^\mathrm{E} = B^\mathrm{I} = 1$, $\lambda^\mathrm{E}_i = \lambda^\mathrm{E}$, and $\lambda^\mathrm{I}_i = \lambda^\mathrm{I}$. In this case, (5) and (6) become

$$\frac{dS^\mathrm{E}_i(t)}{dt} = f\bigg( \sum_{j=1}^{n_\mathrm{E}} A^\mathrm{EE}_{ij} S^\mathrm{E}_j(t) + \sum_{k=1}^{n_\mathrm{I}} A^\mathrm{EI}_{ik} S^\mathrm{I}_k(t) + v^\mathrm{E}_{\mathrm{th}\,i}(t) \bigg) - \frac{1}{\lambda^\mathrm{E}} S^\mathrm{E}_i(t), \quad i = 1, \ldots, n_\mathrm{E} \tag{11}$$

$$\frac{dS^\mathrm{I}_i(t)}{dt} = f\bigg( \sum_{j=1}^{n_\mathrm{E}} A^\mathrm{IE}_{ij} S^\mathrm{E}_j(t) + \sum_{k=1}^{n_\mathrm{I}} A^\mathrm{II}_{ik} S^\mathrm{I}_k(t) + v^\mathrm{I}_{\mathrm{th}\,i}(t) \bigg) - \frac{1}{\lambda^\mathrm{I}} S^\mathrm{I}_i(t), \quad i = 1, \ldots, n_\mathrm{I} \tag{12}$$

where $f(\cdot)$ is given by either (9) or (10) and $A^\mathrm{EE}_{ii} = A^\mathrm{II}_{ii} = 0$.
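A forward Euler simulation of the network (11) and (12) is straightforward; the following Python sketch draws the connectivity randomly around prescribed means, with all sizes, scales, and the step size being illustrative assumptions rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

nE, nI = 20, 10                     # assumed network sizes
lamE, lamI = 10.0, 0.9              # assumed time constants

# Random connectivity with means chosen so that, e.g., n_E*Abar^EE is order 1.
AEE = np.abs(rng.normal(1.0 / nE, 0.1 / nE, (nE, nE)))
AEI = -np.abs(rng.normal(1.0 / nI, 0.1 / nI, (nE, nI)))
AIE = np.abs(rng.normal(1.0 / nE, 0.1 / nE, (nI, nE)))
AII = np.zeros((nI, nI))
np.fill_diagonal(AEE, 0.0)          # A^EE_ii = 0 by definition
np.fill_diagonal(AII, 0.0)          # A^II_ii = 0 by definition

def f(x, gamma=50.0):
    # Smooth half-wave rectification (9).
    return x / (1.0 + np.exp(-gamma * x))

SE, SI = 0.1 * rng.random(nE), 0.1 * rng.random(nI)
dt = 1e-3
for _ in range(int(50.0 / dt)):
    dSE = f(AEE @ SE + AEI @ SI) - SE / lamE   # (11), zero threshold input
    dSI = f(AIE @ SE + AII @ SI) - SI / lamI   # (12), zero threshold input
    SE, SI = SE + dt * dSE, SI + dt * dSI
print(SE.mean(), SI.mean())   # population means approximate the mean field drives
```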

Next, let $A^\mathrm{EE}_{ij} = \bar{A}^\mathrm{EE} + \Delta^\mathrm{EE}_{ij}$, $A^\mathrm{EI}_{ij} = \bar{A}^\mathrm{EI} + \Delta^\mathrm{EI}_{ij}$, $A^\mathrm{IE}_{ij} = \bar{A}^\mathrm{IE} + \Delta^\mathrm{IE}_{ij}$, and $A^\mathrm{II}_{ij} = \bar{A}^\mathrm{II} + \Delta^\mathrm{II}_{ij}$, where $\bar{A}^{XY} \triangleq (1/(n_X n_Y)) \sum_{i=1}^{n_X} \sum_{j=1}^{n_Y} A^{XY}_{ij}$, $X, Y \in \{\mathrm{E}, \mathrm{I}\}$, denote mean values and $\Delta^{XY}_{ij}$, $X, Y \in \{\mathrm{E}, \mathrm{I}\}$, are deviations from the mean. In this case, it follows that

$$\sum_{i=1}^{n_\mathrm{E}} \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{EE}_{ij} = \sum_{i=1}^{n_\mathrm{E}} \sum_{j=1}^{n_\mathrm{I}} \Delta^\mathrm{EI}_{ij} = \sum_{i=1}^{n_\mathrm{I}} \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{IE}_{ij} = \sum_{i=1}^{n_\mathrm{I}} \sum_{j=1}^{n_\mathrm{I}} \Delta^\mathrm{II}_{ij} = 0. \tag{13}$$

Now, using the average and perturbed expressions for $A^{XY}_{ij}$, $X, Y \in \{\mathrm{E}, \mathrm{I}\}$, (11) and (12) can be rewritten as

$$\frac{dS^\mathrm{E}_i(t)}{dt} = f\bigg( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{EE}_{ij} S^\mathrm{E}_j(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) + \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{EI}_{ik} S^\mathrm{I}_k(t) + v^\mathrm{E}_{\mathrm{th}\,i}(t) \bigg) - \frac{1}{\lambda^\mathrm{E}} S^\mathrm{E}_i(t), \quad i = 1, \ldots, n_\mathrm{E} \tag{14}$$

$$\frac{dS^\mathrm{I}_i(t)}{dt} = f\bigg( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{IE}_{ij} S^\mathrm{E}_j(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) + \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{II}_{ik} S^\mathrm{I}_k(t) + v^\mathrm{I}_{\mathrm{th}\,i}(t) \bigg) - \frac{1}{\lambda^\mathrm{I}} S^\mathrm{I}_i(t), \quad i = 1, \ldots, n_\mathrm{I} \tag{15}$$

where $\bar{S}^\mathrm{E}(t) \triangleq (1/n_\mathrm{E}) \sum_{j=1}^{n_\mathrm{E}} S^\mathrm{E}_j(t)$ and $\bar{S}^\mathrm{I}(t) \triangleq (1/n_\mathrm{I}) \sum_{j=1}^{n_\mathrm{I}} S^\mathrm{I}_j(t)$ denote the mean excitatory synaptic drive and mean inhibitory synaptic drive in dimensionless units of 1/synapse$^2$, respectively. Now, defining $\delta^\mathrm{E}_i(t) \triangleq S^\mathrm{E}_i(t) - \bar{S}^\mathrm{E}(t)$ and $\delta^\mathrm{I}_i(t) \triangleq S^\mathrm{I}_i(t) - \bar{S}^\mathrm{I}(t)$, where $\delta^\mathrm{E}_i(t)$ and $\delta^\mathrm{I}_i(t)$ are deviations from the mean, it follows that (14) and (15) become

$$\frac{dS^\mathrm{E}_i(t)}{dt} = f\bigg( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + \bar{S}^\mathrm{E}(t) \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{EE}_{ij} + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) + \bar{S}^\mathrm{I}(t) \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{EI}_{ik} + \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{EE}_{ij} \delta^\mathrm{E}_j(t) + \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{EI}_{ik} \delta^\mathrm{I}_k(t) + v^\mathrm{E}_{\mathrm{th}\,i}(t) \bigg) - \frac{1}{\lambda^\mathrm{E}} S^\mathrm{E}_i(t), \quad i = 1, \ldots, n_\mathrm{E} \tag{16}$$

$$\frac{dS^\mathrm{I}_i(t)}{dt} = f\bigg( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + \bar{S}^\mathrm{E}(t) \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{IE}_{ij} + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) + \bar{S}^\mathrm{I}(t) \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{II}_{ik} + \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{IE}_{ij} \delta^\mathrm{E}_j(t) + \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{II}_{ik} \delta^\mathrm{I}_k(t) + v^\mathrm{I}_{\mathrm{th}\,i}(t) \bigg) - \frac{1}{\lambda^\mathrm{I}} S^\mathrm{I}_i(t), \quad i = 1, \ldots, n_\mathrm{I}. \tag{17}$$

Next, assume that all terms with a factor $\Delta^{XY}_{ij}$, $X, Y \in \{\mathrm{E}, \mathrm{I}\}$, $i = 1, \ldots, n_X$, $j = 1, \ldots, n_Y$, in (16) and (17) are small relative to the remaining terms in $f(\cdot)$. Then a first-order expansion of (16) and (17) gives

$$\frac{dS^\mathrm{E}_i(t)}{dt} = f\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) + v^\mathrm{E}_{\mathrm{th}\,i}(t) \big) + f'\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) + v^\mathrm{E}_{\mathrm{th}\,i}(t) \big) \times \bigg[ \bar{S}^\mathrm{E}(t) \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{EE}_{ij} + \bar{S}^\mathrm{I}(t) \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{EI}_{ik} + \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{EE}_{ij} \delta^\mathrm{E}_j(t) + \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{EI}_{ik} \delta^\mathrm{I}_k(t) \bigg] - \frac{1}{\lambda^\mathrm{E}} S^\mathrm{E}_i(t), \quad i = 1, \ldots, n_\mathrm{E} \tag{18}$$

$$\frac{dS^\mathrm{I}_i(t)}{dt} = f\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) + v^\mathrm{I}_{\mathrm{th}\,i}(t) \big) + f'\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) + v^\mathrm{I}_{\mathrm{th}\,i}(t) \big) \times \bigg[ \bar{S}^\mathrm{E}(t) \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{IE}_{ij} + \bar{S}^\mathrm{I}(t) \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{II}_{ik} + \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{IE}_{ij} \delta^\mathrm{E}_j(t) + \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{II}_{ik} \delta^\mathrm{I}_k(t) \bigg] - \frac{1}{\lambda^\mathrm{I}} S^\mathrm{I}_i(t), \quad i = 1, \ldots, n_\mathrm{I}. \tag{19}$$

Now, assuming that the higher order terms can be ignored, (18) and (19) become

$$\frac{dS^\mathrm{E}_i(t)}{dt} = f\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) + v^\mathrm{E}_{\mathrm{th}\,i}(t) \big) + f'\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) + v^\mathrm{E}_{\mathrm{th}\,i}(t) \big) \times \bigg[ \bar{S}^\mathrm{E}(t) \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{EE}_{ij} + \bar{S}^\mathrm{I}(t) \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{EI}_{ik} \bigg] - \frac{1}{\lambda^\mathrm{E}} S^\mathrm{E}_i(t), \quad i = 1, \ldots, n_\mathrm{E} \tag{20}$$

$$\frac{dS^\mathrm{I}_i(t)}{dt} = f\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) + v^\mathrm{I}_{\mathrm{th}\,i}(t) \big) + f'\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) + v^\mathrm{I}_{\mathrm{th}\,i}(t) \big) \times \bigg[ \bar{S}^\mathrm{E}(t) \sum_{j=1}^{n_\mathrm{E}} \Delta^\mathrm{IE}_{ij} + \bar{S}^\mathrm{I}(t) \sum_{k=1}^{n_\mathrm{I}} \Delta^\mathrm{II}_{ik} \bigg] - \frac{1}{\lambda^\mathrm{I}} S^\mathrm{I}_i(t), \quad i = 1, \ldots, n_\mathrm{I}. \tag{21}$$

Finally, summing (20) and (21) over $i = 1, \ldots, n_\mathrm{E}$ and $i = 1, \ldots, n_\mathrm{I}$, dividing by $n_\mathrm{E}$ and $n_\mathrm{I}$, respectively, assuming that $v^\mathrm{E}_{\mathrm{th}\,1}(t) = v^\mathrm{E}_{\mathrm{th}\,2}(t) = \cdots = v^\mathrm{E}_{\mathrm{th}\,n_\mathrm{E}}(t) = v^\mathrm{E}_\mathrm{th}$ and $v^\mathrm{I}_{\mathrm{th}\,1}(t) = v^\mathrm{I}_{\mathrm{th}\,2}(t) = \cdots = v^\mathrm{I}_{\mathrm{th}\,n_\mathrm{I}}(t) = v^\mathrm{I}_\mathrm{th}$, $t \geq 0$, and using (13), it follows that the average excitatory synaptic drive and the average inhibitory synaptic drive are given by

$$\frac{d\bar{S}^\mathrm{E}(t)}{dt} = f\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) + v^\mathrm{E}_\mathrm{th} \big) - \frac{1}{\lambda^\mathrm{E}} \bar{S}^\mathrm{E}(t), \quad t \geq 0 \tag{22}$$

$$\frac{d\bar{S}^\mathrm{I}(t)}{dt} = f\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) + v^\mathrm{I}_\mathrm{th} \big) - \frac{1}{\lambda^\mathrm{I}} \bar{S}^\mathrm{I}(t). \tag{23}$$


IV. MULTISTABILITY ANALYSIS OF THE MEAN FIELD SYNAPTIC DRIVE MODEL WITH RANDOM INITIAL CONDITIONS

In this section, we consider the unforced version (i.e., $v^\mathrm{E}_{\mathrm{th}\,i}(t) \equiv 0$ and $v^\mathrm{I}_{\mathrm{th}\,i}(t) \equiv 0$) of (11) and (12) given by

$$\frac{dS^\mathrm{E}_i(t)}{dt} = f\bigg( \sum_{j=1}^{n_\mathrm{E}} A^\mathrm{EE}_{ij} S^\mathrm{E}_j(t) + \sum_{k=1}^{n_\mathrm{I}} A^\mathrm{EI}_{ik} S^\mathrm{I}_k(t) \bigg) - \frac{1}{\lambda^\mathrm{E}} S^\mathrm{E}_i(t), \quad i = 1, \ldots, n_\mathrm{E}, \quad t \geq 0 \tag{24}$$

$$\frac{dS^\mathrm{I}_i(t)}{dt} = f\bigg( \sum_{j=1}^{n_\mathrm{E}} A^\mathrm{IE}_{ij} S^\mathrm{E}_j(t) + \sum_{k=1}^{n_\mathrm{I}} A^\mathrm{II}_{ik} S^\mathrm{I}_k(t) \bigg) - \frac{1}{\lambda^\mathrm{I}} S^\mathrm{I}_i(t), \quad i = 1, \ldots, n_\mathrm{I}. \tag{25}$$

In this case, (22) and (23) can be further simplified as

$$\frac{d\bar{S}^\mathrm{E}(t)}{dt} = f\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) \big) - \frac{1}{\lambda^\mathrm{E}} \bar{S}^\mathrm{E}(t), \quad t \geq 0 \tag{26}$$

$$\frac{d\bar{S}^\mathrm{I}(t)}{dt} = f\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) \big) - \frac{1}{\lambda^\mathrm{I}} \bar{S}^\mathrm{I}(t). \tag{27}$$

Hence, (26) and (27) represent the spatial average (mean) dynamics of the system (24) and (25). It is important to note that the system (26) and (27) is not the first moment equation of (24) and (25).

Next, we develop several key results on boundedness of solutions of the average synaptic drives.

Proposition 1: Let $\mathbb{R}^2_+$ denote the nonnegative orthant of $\mathbb{R}^2$. Consider the dynamical system (26) and (27), and assume $\bar{S}^\mathrm{E}(0) \geq 0$ and $\bar{S}^\mathrm{I}(0) \geq 0$. Then $\mathbb{R}^2_+$ is an invariant set with respect to (26) and (27).

Proof: The result is a direct consequence of Proposition 2.1 of [29] by noting that $f(x) = x e^{\gamma x}/(1 + e^{\gamma x}) \geq 0$ (respectively, $f(x) = f_\mathrm{max} e^{\gamma x}/(1 + e^{\gamma x}) \geq 0$) for all $x \geq 0$.

For the statement of the next result, recall that a matrix $A \in \mathbb{R}^{n \times n}$ is Lyapunov stable if and only if $\|e^{At}\|$ is bounded for all $t \geq 0$.

Proposition 2: Consider the dynamical system (26) and (27), and assume that $\bar{S}^\mathrm{E}(0) \geq 0$ and $\bar{S}^\mathrm{I}(0) \geq 0$. If $A - L$ is Lyapunov stable, where

$$A \triangleq \begin{bmatrix} n_\mathrm{E} \bar{A}^\mathrm{EE} & n_\mathrm{I} \bar{A}^\mathrm{EI} \\ n_\mathrm{E} \bar{A}^\mathrm{IE} & n_\mathrm{I} \bar{A}^\mathrm{II} \end{bmatrix}, \qquad L \triangleq \begin{bmatrix} \frac{1}{\lambda^\mathrm{E}} & 0 \\ 0 & \frac{1}{\lambda^\mathrm{I}} \end{bmatrix} \tag{28}$$

then the solutions to (26) and (27) are bounded for all $t \geq 0$.

Proof: It follows from Proposition 1 that $\bar{S}^\mathrm{E}(t) \geq 0$ and $\bar{S}^\mathrm{I}(t) \geq 0$ for all $t \geq 0$. Now, since $e^{\gamma x}/(1 + e^{\gamma x}) < 1$ for every $x \in \mathbb{R}$, it follows that $f(x) \leq x$ for all $x \geq 0$. Thus, denoting by "$\leq\leq$" a componentwise inequality, it follows that

$$\begin{bmatrix} f\big(n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t)\big) - \frac{1}{\lambda^\mathrm{E}} \bar{S}^\mathrm{E}(t) \\ f\big(n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t)\big) - \frac{1}{\lambda^\mathrm{I}} \bar{S}^\mathrm{I}(t) \end{bmatrix} \leq\leq \begin{bmatrix} n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) - \frac{1}{\lambda^\mathrm{E}} \bar{S}^\mathrm{E}(t) \\ n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) - \frac{1}{\lambda^\mathrm{I}} \bar{S}^\mathrm{I}(t) \end{bmatrix} = (A - L) \begin{bmatrix} \bar{S}^\mathrm{E}(t) \\ \bar{S}^\mathrm{I}(t) \end{bmatrix}, \quad t \geq 0$$

which implies that the solutions to (26) and (27) are bounded for all $t \geq 0$.

The following corollary to Proposition 2 holds for the case where $f_i(\cdot) = f(\cdot)$ is given by (9).

Corollary 1: Consider the dynamical system (26) and (27), and assume that $\bar{S}^\mathrm{E}(0) \geq 0$ and $\bar{S}^\mathrm{I}(0) \geq 0$. If $\frac{1}{2}A - L$ is unstable, then the solutions to (26) and (27) are unbounded for all $t \geq 0$.

Proof: The proof is similar to the proof of Proposition 2 by noting that $f(x) \geq \frac{1}{2}x$ for all $x \geq 0$.

Alternatively, in light of Proposition 1, we can use a linear function to give a sufficient condition guaranteeing boundedness of solutions for (26) and (27).

Proposition 3: Consider the dynamical system (26) and (27), and assume that $\bar{S}^\mathrm{E}(0) \geq 0$ and $\bar{S}^\mathrm{I}(0) \geq 0$. Furthermore, assume that there exist positive scalars $L_{11}$ and $L_{12}$ such that $Q_{11} \leq 0$ and $Q_{12} \leq 0$, where $Q_{11} = -L_{11}/\lambda^\mathrm{E} + L_{11} n_\mathrm{E} \bar{A}^\mathrm{EE} + L_{12} n_\mathrm{E} \bar{A}^\mathrm{IE}$ and $Q_{12} = -L_{12}/\lambda^\mathrm{I} + L_{12} n_\mathrm{I} \bar{A}^\mathrm{II} + L_{11} n_\mathrm{I} \bar{A}^\mathrm{EI}$. Then the solutions to (26) and (27) are bounded for all $t \geq 0$.

Proof: Consider the function $V(S) = L^\mathrm{T} S$, $S \in \mathbb{R}^2_+$, where $L = [L_{11}, L_{12}]^\mathrm{T}$ and $S = [\bar{S}^\mathrm{E}, \bar{S}^\mathrm{I}]^\mathrm{T}$, and note that

$$\dot{V}(S(t)) = L_{11} \dot{\bar{S}}^\mathrm{E}(t) + L_{12} \dot{\bar{S}}^\mathrm{I}(t) = -\frac{L_{11}}{\lambda^\mathrm{E}} \bar{S}^\mathrm{E}(t) + L_{11} f\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) \big) - \frac{L_{12}}{\lambda^\mathrm{I}} \bar{S}^\mathrm{I}(t) + L_{12} f\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) \big), \quad t \geq 0.$$

Now, using similar arguments as in the proof of Theorem 1 given below, it follows that

$$\dot{V}(S(t)) \leq -\frac{L_{11}}{\lambda^\mathrm{E}} \bar{S}^\mathrm{E}(t) + L_{11} \big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) \big) - \frac{L_{12}}{\lambda^\mathrm{I}} \bar{S}^\mathrm{I}(t) + L_{12} \big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) \big) = \Big( -\frac{L_{11}}{\lambda^\mathrm{E}} + L_{11} n_\mathrm{E} \bar{A}^\mathrm{EE} + L_{12} n_\mathrm{E} \bar{A}^\mathrm{IE} \Big) \bar{S}^\mathrm{E}(t) + \Big( -\frac{L_{12}}{\lambda^\mathrm{I}} + L_{12} n_\mathrm{I} \bar{A}^\mathrm{II} + L_{11} n_\mathrm{I} \bar{A}^\mathrm{EI} \Big) \bar{S}^\mathrm{I}(t) = Q_{11} \bar{S}^\mathrm{E}(t) + Q_{12} \bar{S}^\mathrm{I}(t), \quad t \geq 0.$$

Choosing $L_{11}$ and $L_{12}$ such that $Q_{11} \leq 0$ and $Q_{12} \leq 0$, and noting that $\bar{S}^\mathrm{E}(t) \geq 0$ and $\bar{S}^\mathrm{I}(t) \geq 0$, $t \geq 0$, it follows that $\dot{V}(S(t)) \leq 0$, $t \geq 0$, which proves that all the solutions to (26) and (27) are bounded for all $t \geq 0$.
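Proposition 3 thus reduces boundedness to a sign check on two scalars; a minimal Python sketch of the check (all parameter values below are purely illustrative assumptions chosen so that a feasible pair exists):

```python
# Proposition 3 feasibility check: positive L11, L12 with Q11 <= 0 and Q12 <= 0.
nE_AEE, nI_AEI = 0.05, -1.0   # assumed mean connectivity products
nE_AIE, nI_AII = 0.02,  0.0
lamE, lamI = 10.0, 0.9

L11, L12 = 1.0, 1.0           # candidate positive scalars
Q11 = -L11 / lamE + L11 * nE_AEE + L12 * nE_AIE
Q12 = -L12 / lamI + L12 * nI_AII + L11 * nI_AEI
print(Q11, Q12, Q11 <= 0 and Q12 <= 0)   # both nonpositive => bounded solutions
```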

Next, we study the stability of the moments of (26) and (27) to obtain the average asymptotic motion of (24) and (25). Let $\mathcal{E} \subset \mathbb{R}^2$ denote the equilibrium set of (26) and (27). Clearly, if $(x, y) \in \mathcal{E}$, then $f(n_\mathrm{E} \bar{A}^\mathrm{EE} x + n_\mathrm{I} \bar{A}^\mathrm{EI} y) - (1/\lambda^\mathrm{E})x = 0$ and $f(n_\mathrm{E} \bar{A}^\mathrm{IE} x + n_\mathrm{I} \bar{A}^\mathrm{II} y) - (1/\lambda^\mathrm{I})y = 0$. For the next set of definitions, $\mathbb{E}[\,\cdot\,]$ denotes expectation and $S(t) \triangleq [\bar{S}^\mathrm{E}(t), \bar{S}^\mathrm{I}(t)]^\mathrm{T}$, $t \geq 0$. Inspired by [5] and [30], we have the following definitions.

Definition 1: Let $p > 0$ and let $\mathcal{D} \subseteq \mathbb{R}^2$ be a positively invariant set with respect to (26) and (27). An equilibrium solution $S(t) \equiv \alpha \in \mathcal{E} \cap \mathcal{D}$ of (26) and (27) is semistable in the $p$th mean with respect to $\mathcal{D}$ if the following statements hold.

1) For every $\varepsilon > 0$, there exists $\delta = \delta(\varepsilon) > 0$ such that $S(t) \in \mathcal{D}$ and $\mathbb{E}[\|S(t) - \alpha\|^p] < \varepsilon$ for every $t \geq 0$ and $\mathbb{E}[\|S(0) - \alpha\|^p] < \delta$.
2) There exist $\varepsilon > 0$ and $\alpha^* \in \mathcal{E} \cap \mathcal{D}$ such that, for every $\mathbb{E}[\|S(0) - \alpha\|^p] < \varepsilon$ and $S(0) \in \mathcal{D}$, $\lim_{t \to \infty} \mathbb{E}[\|S(t) - \alpha^*\|^p] = 0$.

The system (26) and (27) is semistable in the $p$th mean with respect to $\mathcal{D}$ if every equilibrium solution in $\mathcal{D}$ of (26) and (27) is semistable in the $p$th mean with respect to $\mathcal{D}$. If, alternatively, $S(t) \equiv \alpha \in \mathcal{E}$ only satisfies 1), then the equilibrium solution $S(t) \equiv \alpha \in \mathcal{E}$ of (26) and (27) is Lyapunov stable in the $p$th mean with respect to $\mathcal{D}$.

Definition 2: An equilibrium solution $S(t) \equiv \alpha \in \mathcal{E} \cap \mathcal{D}$ of (26) and (27) has a semistable expectation with respect to $\mathcal{D}$ if the following statements hold.

1) For every $\varepsilon > 0$, there exists $\delta = \delta(\varepsilon) > 0$ such that $S(t) \in \mathcal{D}$ and $\|\mathbb{E}[S(t)] - \alpha\| < \varepsilon$ for every $t \geq 0$ and $\|\mathbb{E}[S(0)] - \alpha\| < \delta$.
2) There exist $\varepsilon > 0$ and $\alpha^* \in \mathcal{E} \cap \mathcal{D}$ such that, for every $\|\mathbb{E}[S(0)] - \alpha\| < \varepsilon$ and $S(0) \in \mathcal{D}$, $\lim_{t \to \infty} \|\mathbb{E}[S(t)] - \alpha^*\| = 0$.

The system (26) and (27) has a semistable expectation with respect to $\mathcal{D}$ if every equilibrium solution in $\mathcal{D}$ of (26) and (27) has a semistable expectation with respect to $\mathcal{D}$. If, alternatively, $S(t) \equiv \alpha \in \mathcal{E}$ only satisfies 1), then the equilibrium solution $S(t) \equiv \alpha \in \mathcal{E}$ of (26) and (27) has a Lyapunov stable expectation with respect to $\mathcal{D}$.

Definition 3: Consider the dynamical system (26) and (27). Let $\mu(\cdot)$ denote the Lebesgue measure in $\mathbb{R}^2$ and let $\mathcal{D} \subseteq \mathbb{R}^2$ be a positively invariant set with respect to (26) and (27). The dynamical system (26) and (27) is multistable in the $p$th mean with respect to $\mathcal{D}$ if the following statements hold.

1) $\mathcal{E} \cap \mathcal{D} \setminus \{(0, 0)\} \neq \emptyset$.
2) For every $S(0) \in \mathcal{D}$, there exists $\alpha \in \mathcal{E} \cap \mathcal{D}$ such that $\lim_{t \to \infty} \mathbb{E}[\|S(t) - \alpha\|^p] = 0$.
3) There exists a subset $\mathcal{M} \subset \mathcal{D}$ satisfying $\mu(\mathcal{M}) = 0$ and an equilibrium point $\alpha \in \mathcal{E} \cap \mathcal{D}$ that is Lyapunov stable in the $p$th mean with respect to $\mathcal{D}$ such that, for every $S(0) \in \mathcal{D} \setminus \mathcal{M}$, $\lim_{t \to \infty} \mathbb{E}[\|S(t) - \alpha\|^p] = 0$.

We say that the system (26) and (27) has a multistable expectation with respect to $\mathcal{D}$ if $\lim_{t \to \infty} \mathbb{E}[\|S(t) - \alpha\|^p] = 0$ is replaced by $\lim_{t \to \infty} \|\mathbb{E}[S(t)] - \alpha\| = 0$ and $\alpha$ in 3) has a Lyapunov stable expectation with respect to $\mathcal{D}$.

Next, we use [5, Th. 5.1] to prove multistability in the $p$th mean for (26) and (27). The key idea behind [5, Th. 5.1] is to characterize multistability of dynamical systems via Lyapunov functions that do not make assumptions on sign definiteness.

Theorem 1: Consider the dynamical system (26) and (27). Assume that $\bar{S}^\mathrm{E}(0) \geq 0$ and $\bar{S}^\mathrm{I}(0) \geq 0$ are random variables and $\mathcal{E} \cap \mathbb{R}^2_+ \setminus \{(0, 0)\} \neq \emptyset$. Furthermore, assume that the hypotheses of Proposition 2 or Proposition 3 hold and there exist scalars $K_{ij} \in \mathbb{R}$, $i, j = 1, 2$, such that

$$M_{11} < 0 \tag{29}$$

$$M_{11} M_{22} - \frac{1}{4}(M_{12} + M_{21})^2 \geq 0 \tag{30}$$

where

$$M_{ij} = -\frac{K_{ij} + K_{ji}}{2 \lambda^X} + K_{jj} \Big[ \frac{3}{4} + \frac{1}{4}\, \mathrm{sign}(K_{jj}) \Big] n_X \bar{A}^{YX} + \frac{1}{2}(K_{12} + K_{21}) \Big[ \frac{3}{4} + \frac{1}{4}\, \mathrm{sign}\Big( \frac{1}{2}(K_{12} + K_{21}) \Big) \Big] n_X \bar{A}^{ZX}, \quad i, j = 1, 2$$

$X, Y, Z \in \{\mathrm{E}, \mathrm{I}\}$, $X = \mathrm{E}$ if $i = 1$ and $X = \mathrm{I}$ otherwise, $Y = \mathrm{E}$ if $j = 1$ and $Y = \mathrm{I}$ otherwise, $Z = \{\mathrm{E}, \mathrm{I}\} \setminus Y$, and $\mathrm{sign}(\sigma) \triangleq \sigma/|\sigma|$, $\sigma \neq 0$, with $\mathrm{sign}(0) \triangleq 0$. In addition, assume that every point in the largest invariant set of $\mathcal{N}(M + M^\mathrm{T})$, where $M = \big[\begin{smallmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{smallmatrix}\big]$ and $\mathcal{N}(X)$ denotes the null space of $X$, is a Lyapunov stable equilibrium point in the $p$th mean for (26) and (27) with respect to $\mathbb{R}^2_+$. Then (26) and (27) is multistable in the $p$th mean and has a multistable expectation with respect to $\mathbb{R}^2_+$.

Proof: Consider the function $V(S) = \frac{1}{2} S^\mathrm{T} K S$, $S \in \mathbb{R}^2_+$, where $K \in \mathbb{R}^{2 \times 2}$ is to be determined, and note that

$$\dot{V}(S(t)) = K_{11} \bar{S}^\mathrm{E}(t) \dot{\bar{S}}^\mathrm{E}(t) + K_{22} \bar{S}^\mathrm{I}(t) \dot{\bar{S}}^\mathrm{I}(t) + \frac{1}{2}(K_{12} + K_{21}) \dot{\bar{S}}^\mathrm{E}(t) \bar{S}^\mathrm{I}(t) + \frac{1}{2}(K_{12} + K_{21}) \bar{S}^\mathrm{E}(t) \dot{\bar{S}}^\mathrm{I}(t)$$

$$= -\frac{K_{11}}{\lambda^\mathrm{E}} \big(\bar{S}^\mathrm{E}(t)\big)^2 + K_{11} f\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) \big) \bar{S}^\mathrm{E}(t) - \frac{K_{22}}{\lambda^\mathrm{I}} \big(\bar{S}^\mathrm{I}(t)\big)^2 + K_{22} f\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) \big) \bar{S}^\mathrm{I}(t) - \frac{K_{12} + K_{21}}{2\lambda^\mathrm{E}} \bar{S}^\mathrm{E}(t) \bar{S}^\mathrm{I}(t) + \frac{1}{2}(K_{12} + K_{21}) f\big( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I}(t) \big) \bar{S}^\mathrm{I}(t) - \frac{K_{12} + K_{21}}{2\lambda^\mathrm{I}} \bar{S}^\mathrm{I}(t) \bar{S}^\mathrm{E}(t) + \frac{1}{2}(K_{12} + K_{21}) f\big( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E}(t) + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I}(t) \big) \bar{S}^\mathrm{E}(t).$$

Now, since, by Proposition 1, $n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E} + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I} \geq 0$ and $n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E} + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I} \geq 0$, it follows that

$$\frac{1}{2} \leq \frac{e^{\gamma( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E} + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I} )}}{1 + e^{\gamma( n_\mathrm{E} \bar{A}^\mathrm{EE} \bar{S}^\mathrm{E} + n_\mathrm{I} \bar{A}^\mathrm{EI} \bar{S}^\mathrm{I} )}} < 1 \quad \text{and} \quad \frac{1}{2} \leq \frac{e^{\gamma( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E} + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I} )}}{1 + e^{\gamma( n_\mathrm{E} \bar{A}^\mathrm{IE} \bar{S}^\mathrm{E} + n_\mathrm{I} \bar{A}^\mathrm{II} \bar{S}^\mathrm{I} )}} < 1.$$


Hence,

$$\dot{V}(S(t)) \leq \sum_{i=1}^{2} \sum_{j=1}^{2} M_{ij}\, \bar{S}^X(t) \bar{S}^Y(t) = \big[\bar{S}^\mathrm{E}(t), \bar{S}^\mathrm{I}(t)\big]\, M\, \big[\bar{S}^\mathrm{E}(t), \bar{S}^\mathrm{I}(t)\big]^\mathrm{T} = \big[\bar{S}^\mathrm{E}(t), \bar{S}^\mathrm{I}(t)\big]\, \frac{M + M^\mathrm{T}}{2}\, \big[\bar{S}^\mathrm{E}(t), \bar{S}^\mathrm{I}(t)\big]^\mathrm{T}, \quad t \geq 0.$$

Next, choosing $K_{ij}$, $i, j = 1, 2$, such that $M_{11} < 0$ and $M_{11}M_{22} - \frac{1}{4}(M_{12} + M_{21})^2 \geq 0$, it follows that $(M + M^\mathrm{T})/2 \leq 0$, which implies that $\dot{V}(S(t)) \leq 0$, $t \geq 0$. Thus, it follows from [5, Th. 5.1] that (26) and (27) is multistable, that is, for every $p \geq 1$ and every $S(0) \in \mathbb{R}^2_+$, there exists $\alpha \in \mathcal{E} \cap \mathbb{R}^2_+$ such that $\lim_{t \to \infty} \|S(t) - \alpha\|^p = 0$; and there exists a subset $\mathcal{M}$ satisfying $\mu(\mathcal{M}) = 0$ such that, for every $S(0) \in \mathbb{R}^2_+ \setminus \mathcal{M}$, there exists a Lyapunov stable (with respect to $\mathbb{R}^2_+$) $\alpha \in \mathcal{E} \cap \mathbb{R}^2_+$ such that $\lim_{t \to \infty} \|S(t) - \alpha\|^p = 0$. Now, taking the expectation operation on both limit results, it follows from Definition 3 that (26) and (27) is multistable in the $p$th mean. Finally, since $\|\mathbb{E}[X]\| \leq \mathbb{E}[\|X\|]$ for a stochastic variable $X$, stability in the mean implies stability of the expectation, and hence, the system (26) and (27) has a multistable expectation with respect to $\mathbb{R}^2_+$.

The following corollary to Theorem 1 is immediate.

Corollary 2: Consider the dynamical system (26) and (27) with random initial conditions $\bar{S}^\mathrm{E}(0) \geq 0$ and $\bar{S}^\mathrm{I}(0) \geq 0$. Assume that $\mathcal{E}$ is a connected set, the matrix $A$ given by (28) is Lyapunov stable, and there exist $K_{ij} \in \mathbb{R}$, $i, j = 1, 2$, such that (29) and (30) hold. Furthermore, assume that every point in the largest invariant set of $\mathcal{N}(M + M^\mathrm{T})$ is a Lyapunov stable equilibrium point in the $p$th mean for (26) and (27) with respect to $\mathbb{R}^2_+$. Then (26) and (27) is semistable in the $p$th mean and has a semistable expectation with respect to $\mathbb{R}^2_+$.

V. STOCHASTIC MULTISTABILITY OF THE MEAN FIELD SYNAPTIC DRIVE MODEL

In this section, we consider the mean field synaptic drive model where the coefficients of (22) and (23) are randomly disturbed. Specifically, assuming that the initial value $S(0) \triangleq [\bar{S}^\mathrm{E}(0), \bar{S}^\mathrm{I}(0)]^\mathrm{T}$ is deterministic and contained in the nonnegative orthant of the state space, we consider the stochastic differential mean field synaptic drive model given by

$$dS(t) = -LS(t)\big(dt + \nu\, dw(t)\big) + \big[AS(t)\big(dt + \nu\, dw(t)\big)\big]^+, \quad S(0) = S_0, \quad t \geq 0 \tag{31}$$

where $L$ and $A$ are given by (28), $w(t)$ represents Brownian motion, that is, a Wiener process, $\nu \in \mathbb{R}$ indicates the intensity of the Gaussian white noise $dw(t)$, and $[x]^+ \triangleq [[x_1]^+, [x_2]^+]^\mathrm{T}$ for $x = [x_1, x_2]^\mathrm{T} \in \mathbb{R}^2$. Here, we assume that every entry of the matrices $A$ and $L$ of the mean dynamics (22) and (23) (with $v^\mathrm{E}_\mathrm{th} = v^\mathrm{I}_\mathrm{th} = 0$) is synchronously perturbed.

For the statement of the results in this section, we require some additional notation and definitions. Specifically, let $(\Omega, \mathcal{F}, \mathbb{P})$ be the probability space associated with (31), where $\Omega$ denotes the sample space, $\mathcal{F}$ denotes a $\sigma$-algebra, and $\mathbb{P}$ defines a probability measure on the $\sigma$-algebra $\mathcal{F}$, that is, $\mathbb{P}$ is a nonnegative countably additive set function on $\mathcal{F}$ such that $\mathbb{P}(\Omega) = 1$ [30]. Note that (31) is a Markov process, and hence, there exists a filtration $\{\mathcal{F}_t\}$ satisfying $\mathcal{F}_\tau \subset \mathcal{F}_t \subset \mathcal{F}$, $0 \leq \tau < t$, such that $\{\omega \in \Omega : S(t) \in \mathcal{B}\} \in \mathcal{F}_t$, $t \geq 0$, for all Borel sets $\mathcal{B} \subset \mathbb{R}^2$ contained in the Borel $\sigma$-algebra $\mathfrak{B}$. Finally, $\mathrm{spec}(X)$ denotes the spectrum of the square matrix $X$ including multiplicity, and $\mathcal{R}(Y)$ denotes the range space of the matrix $Y$.

When every component of the vector $AS(t)(dt + \nu\, dw(t))$, $t \geq 0$, is nonnegative, the stochastic dynamical system (31) can be written as

$$dS(t) = \bar{A}S(t)\,dt + \bar{A}_\mathrm{s} S(t)\,dw(t), \quad S(0) = S_0, \quad t \geq 0 \tag{32}$$

where $\bar{A} \triangleq A - L$ and $\bar{A}_\mathrm{s} \triangleq \nu(A - L)$. The multiplicative white noise model (32) can be regarded as a parameter uncertainty model where $dw(t)$ corresponds to an uncertain parameter whose pattern and magnitude are given by $\bar{A}_\mathrm{s}/\|\bar{A}\|$ and $\|\bar{A}_\mathrm{s}\|$, respectively. Note that if $\mathrm{rank}(A - L) < 2$, then every point $\alpha \in \mathcal{N}(\bar{A})$ is an equilibrium point of (32). With a slight abuse of notation, we use $\mathcal{E}$ to denote the equilibrium set of (31) or (32). First, motivated by the definition of stochastic Lyapunov stability in [30], we have the following definition of stochastic semistability. For the statement of the next result, define $\mathrm{dist}(x, \mathcal{E}) \triangleq \inf_{y \in \mathcal{E}} \|x - y\|$. For a similar definition of stochastic semistability, see [31].
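On the region where (31) reduces to the linear model (32), sample paths can be generated with a standard Euler–Maruyama discretization; a minimal Python sketch, with matrices taken from the Section VI example and an assumed step size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama for dS = Abar*S dt + Abar_s*S dw, Abar = A - L, Abar_s = nu*(A - L).
A = np.array([[1.0, -1.0], [1.0, 0.0]])
L = np.array([[0.1, 0.0], [0.0, 1.0 / 0.9]])     # lambda^I = 0.9 (semistable case)
Abar, nu = A - L, 0.2
Abar_s = nu * Abar

dt, n_steps = 1e-3, 50_000
S = np.array([0.1, 0.5])
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal()      # scalar Wiener increment
    S = S + (Abar @ S) * dt + (Abar_s @ S) * dw
print(S)   # settles near the equilibrium manifold N(Abar) = span{[1, 0.9]^T}
```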

Definition 4: An equilibrium solution $S(t) \equiv \alpha \in \mathcal{E}$ of (31) is stochastically semistable if the following statements hold.

1) For every $\varepsilon > 0$, $\lim_{S(0) \to \alpha} \mathbb{P}[\sup_{0 \leq t < \infty} \|S(t) - \alpha\| \geq \varepsilon] = 0$.
2) $\lim_{S(0) \to \mathcal{E}} \mathbb{P}[\lim_{t \to \infty} \mathrm{dist}(S(t), \mathcal{E}) = 0] = 1$.

The dynamical system (31) is stochastically semistable if every equilibrium solution of (31) is stochastically semistable. Finally, the system (31) is globally stochastically semistable if it is stochastically semistable and $\mathbb{P}[\lim_{t \to \infty} \mathrm{dist}(S(t), \mathcal{E}) = 0] = 1$ for every initial condition $S(0) \in \mathbb{R}^2$. If, alternatively, $S(t) \equiv \alpha \in \mathcal{E}$ only satisfies 1), then the equilibrium solution $S(t) \equiv \alpha \in \mathcal{E}$ of (31) is stochastically Lyapunov stable.

Definition 4 is a stability notion for the stochastic dynamical system (31) having a continuum of equilibria and is a generalization of the notion of semistability from deterministic dynamical systems [32], [33] to stochastic dynamical systems. It is noted in [32] that existing methods for analyzing the stability of deterministic dynamical systems with isolated equilibria cannot be used for deterministic dynamical systems with nonisolated equilibria due to the connectedness property of equilibrium sets. Hence, Definition 4 is essential for analyzing the stability of stochastic dynamical systems with nonisolated equilibria. Note that 1) of Definition 4 implies stochastic Lyapunov stability of an equilibrium, whereas 2) implies almost sure convergence of trajectories to the equilibrium manifold.

Next, we extend the notion of multistability for deterministic dynamical systems defined in [5] to that of stochastic multistability for stochastic dynamical systems.


Definition 5: Consider the dynamical system (31) and let $\mu(\cdot)$ denote the Lebesgue measure in $\mathbb{R}^2$. We say that the system (31) is stochastically multistable if the following statements hold.

1) $\mathcal{E} \setminus \{(0, 0)\} \neq \emptyset$.
2) For every $S(0) \in \mathbb{R}^2$, there exists $\alpha(\omega) \in \mathcal{E}$, $\omega \in \Omega$, such that $\mathbb{P}[\lim_{t \to \infty} S(t) = \alpha(\omega)] = 1$.
3) There exists a subset $\mathcal{M} \subset \mathbb{R}^2$ satisfying $\mu(\mathcal{M}) = 0$ such that, for every $S(0) \in \mathbb{R}^2 \setminus \mathcal{M}$, $\mathbb{P}[\lim_{t \to \infty} \mathrm{dist}(S(t), \mathcal{E}) = 0] = 1$.

Stochastic multistability is a global stability notion for the stochastic dynamical system (31) having isolated equilibria and/or a continuum of equilibria, whereas stochastic semistability is a local stability notion for the stochastic dynamical system (31) having a continuum of equilibria. Hence, stochastic multistability is a stronger notion than stochastic semistability. The next result states a relationship between stochastic multistability and global stochastic semistability.

Proposition 4: Consider the dynamical system (31). If (31) is globally stochastically semistable, then (31) is stochastically multistable.

Proof: Suppose that the dynamical system (31) is globally stochastically semistable. Then, by definition, $\mathbb{P}[\lim_{t \to \infty} \mathrm{dist}(S(t), \mathcal{E}) = 0] = 1$ for every initial condition $S(0) \in \mathbb{R}^2$. Next, we show that for every $S(0) \in \mathbb{R}^2$, there exists $\alpha = \alpha(\omega) \in \mathcal{E}$, $\omega \in \Omega$, such that $\mathbb{P}[\lim_{t \to \infty} S(t) = \alpha(\omega)] = 1$. Let

$$\Omega(S) \triangleq \Big\{ x \in \mathbb{R}^2 : \text{there exists a divergent sequence } \{t_i\}_{i=1}^{\infty} \text{ such that } \mathbb{P}\big[\lim_{i \to \infty} S(t_i) = x\big] = 1 \Big\}$$

and $\mathcal{B}_\delta(z) \triangleq \{x \in \mathbb{R}^2 : \|x - z\| < \delta\}$.

Suppose $z \in \Omega(S)$ is stochastically Lyapunov stable and let $\varepsilon_1, \varepsilon_2 > 0$. Since $z$ is stochastically Lyapunov stable, there exists an open neighborhood $\mathcal{B}_\delta(z)$, where $\delta = \delta(\varepsilon_1, \varepsilon_2) > 0$, such that, for every $S(0) \in \mathcal{B}_\delta(z)$, $\mathbb{P}[\sup_{t \geq 0} \|S(t) - z\| \geq \varepsilon_1] < \varepsilon_2$, and hence, $\mathbb{P}[\sup_{t \geq 0} \|S(t) - z\| < \varepsilon_1] \geq 1 - \varepsilon_2$. Now, since $z \in \Omega(S)$, it follows that there exists a divergent sequence $\{t_i\}_{i=1}^{\infty}$ in $[0, \infty)$ such that $\mathbb{P}[\lim_{i \to \infty} S(t_i) = z] = 1$, and hence, for every $\varepsilon_3, \varepsilon_4 > 0$, there exists $k = k(\varepsilon_3) \geq 1$ such that $\mathbb{P}[\sup_{i \geq k} \|S(t_i) - z\| > \varepsilon_3] < \varepsilon_4$ or, equivalently, $\mathbb{P}[\sup_{i \geq k} \|S(t_i) - z\| < \varepsilon_3] \geq 1 - \varepsilon_4$.

Next, note that $\mathbb{P}[\sup_{t \geq t_k} \|S(t) - z\| < \varepsilon_1] \geq \mathbb{P}[\sup_{t \geq 0} \|S(t) - z\| < \varepsilon_1]$. It now follows that

$$\mathbb{P}\Big[\sup_{t \geq t_k} \|S(t) - z\| < \varepsilon_1\Big] \geq \mathbb{P}\Big[\sup_{t \geq t_k} \|S(t) - z\| < \varepsilon_1 \,\Big|\, \sup_{i \geq k} \|S(t_i) - z\| < \varepsilon_3\Big] \cdot \mathbb{P}\Big[\sup_{i \geq k} \|S(t_i) - z\| < \varepsilon_3\Big] \geq (1 - \varepsilon_2)(1 - \varepsilon_4)$$

where $\mathbb{P}[\cdot|\cdot]$ denotes conditional probability. Since $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_4$ were chosen arbitrarily, it follows that $\mathbb{P}[z = \lim_{t \to \infty} S(t)] = 1$. Thus, $\mathbb{P}[\lim_{n \to \infty} S(t_n) = z] = 1$ for every divergent sequence $\{t_n\}_{n=1}^{\infty}$, and hence, $\Omega(S) = \{z\}$; that is, for every $S(0) \in \mathbb{R}^2$, there exists $\alpha = \alpha(\omega) \in \mathcal{E}$, $\omega \in \Omega$, such that $\mathbb{P}[\lim_{t \to \infty} S(t) = \alpha(\omega)] = 1$.

Next, recall from [34] that a matrix $\bar{A} \in \mathbb{R}^{n \times n}$ is semistable if and only if $\lim_{t \to \infty} e^{\bar{A}t}$ exists. In other words, $\bar{A}$ is semistable if and only if for every $\lambda \in \mathrm{spec}(\bar{A})$, $\lambda = 0$ or $\mathrm{Re}\,\lambda < 0$, and if $\lambda = 0$, then $0$ is semisimple. Furthermore, if $\bar{A}$ is semistable, then the index of $\bar{A}$ is zero or one, and hence, $\bar{A}$ is group invertible. The group inverse $\bar{A}^{\#}$ of $\bar{A}$ is a special case of the Drazin inverse $\bar{A}^{\mathrm{D}}$ in the case where $\bar{A}$ has index zero or one [34]. In this case, $\lim_{t \to \infty} e^{\bar{A}t} = I_n - \bar{A}\bar{A}^{\#}$ [34].

n×n is semi-stable if and only if limt→∞ eAt exists. In other words,A is semistable if and only if for every λ ∈ spec( A),λ = 0 or Re λ < 0 and if λ = 0, then 0 is semisimple.Furthermore, if A is semistable, then the index of A is zeroor one, and hence, A is group invertible. The group inverseA# of A is a special case of the Drazin inverse AD inthe case where A has index zero or one [34]. In this case,limt→∞ eAt = In − A A# [34].

Proposition 5: If $\bar{A}$ is semistable, then, for sufficiently small $|\nu|$, the dynamical system (32) is globally stochastically semistable.

Proof: First, note that the solution to (32) is given by

$$S(t) = e^{\bar{A}t} S(0) + \int_0^t e^{\bar{A}(t-s)} \bar{A}_\mathrm{s} S(s)\, dw(s), \quad t \geq 0. \tag{33}$$

Since $\bar{A}$ is semistable, it follows that $\lim_{t \to \infty} e^{\bar{A}t} S(0)$ exists. In this case, let $S_\infty = \lim_{t \to \infty} e^{\bar{A}t} S(0)$. Furthermore, note that $S_\infty = (I_2 - \bar{A}\bar{A}^{\#}) S(0) \in \mathcal{N}(\bar{A})$ [34], where $\bar{A}^{\#}$ denotes the group inverse of $\bar{A}$. Next, note that $\int_0^t e^{\bar{A}(t-s)} \bar{A}_\mathrm{s} S(s)\, dw(s)$ is an Itô integral and let $\|\cdot\|$ denote the Euclidean norm on $\mathbb{R}^2$. Then, it follows from Property e) of Theorem 4.4.14 of [30, p. 73] that

$$\mathbb{E}\bigg[ \Big\| \int_0^t e^{\bar{A}(t-s)} \bar{A}_\mathrm{s} S(s)\, dw(s) \Big\|^2 \bigg] = \int_0^t \mathbb{E}\Big[ \big\| e^{\bar{A}(t-s)} \bar{A}_\mathrm{s} S(s) \big\|^2 \Big]\, ds = \nu^2 \int_0^t \mathbb{E}\Big[ \big\| e^{\bar{A}(t-s)} \bar{A} S(s) \big\|^2 \Big]\, ds = \nu^2 \int_0^t \mathbb{E}\Big[ \big\| \big( e^{\bar{A}(t-s)} - (I_2 - \bar{A}\bar{A}^{\#}) \big) \bar{A} S(s) \big\|^2 \Big]\, ds = \nu^2 \int_0^t \mathbb{E}\Big[ \big\| \big( e^{\bar{A}(t-s)} - (I_2 - \bar{A}\bar{A}^{\#}) \big) \bar{A} \big(S(s) - S_\infty\big) \big\|^2 \Big]\, ds \tag{34}$$

where $\mathbb{E}[\,\cdot\,]$ denotes expectation with respect to the probability space $(\Omega, \mathcal{F}, \mathbb{P})$.

Next, define $e(t) \triangleq e^{\bar{A}t} S(0) - (I_2 - \bar{A}\bar{A}^{\#}) S(0) = e^{\bar{A}t} S(0) - S_\infty$. Then it follows from the semistability of $\bar{A}$ that $\lim_{t \to \infty} e(t) = 0$. Since $\dot{e}(t) = \bar{A} e(t)$ for every $t \geq 0$, it follows from the equivalence of (uniform) asymptotic stability and (uniform) exponential stability for linear time-invariant systems [35] that there exist real scalars $\sigma, r > 0$ such that $\|e(t)\| \leq \sigma e^{-rt} \|e(0)\|$, $t \geq 0$, or, equivalently, $\|[e^{\bar{A}t} - (I_2 - \bar{A}\bar{A}^{\#})] S(0)\| \leq \sigma e^{-rt} \|\bar{A}\bar{A}^{\#} S(0)\|$, $t \geq 0$. Hence,

$$\| e^{\bar{A}t} - (I_2 - \bar{A}\bar{A}^{\#}) \|' = \max_{S(0) \in \mathbb{R}^2 \setminus \{0\}} \frac{\| [e^{\bar{A}t} - (I_2 - \bar{A}\bar{A}^{\#})] S(0) \|}{\|S(0)\|} \leq \sigma e^{-rt} \max_{S(0) \in \mathbb{R}^2 \setminus \{0\}} \frac{\| \bar{A}\bar{A}^{\#} S(0) \|}{\|S(0)\|} = \sigma e^{-rt} \| \bar{A}\bar{A}^{\#} \|', \quad t \geq 0 \tag{35}$$


where $\|\cdot\|' = \sigma_\mathrm{max}(\cdot)$ and $\sigma_\mathrm{max}(\cdot)$ denotes the maximum singular value. Thus, (35) implies

$$\| e^{\bar{A}t} - (I_2 - \bar{A}\bar{A}^{\#}) \|' \leq \rho e^{-rt}, \quad t \geq 0 \tag{36}$$

where $\rho \triangleq \sigma \|\bar{A}\bar{A}^{\#}\|'$.

Next, it follows from (34) and (36) that

$$\int_0^t \mathbb{E}\Big[ \big\| e^{\bar{A}(t-s)} \bar{A}_\mathrm{s} S(s) \big\|^2 \Big]\, ds \leq \nu^2 \rho^2 \|\bar{A}\|'^2 \int_0^t e^{-2r(t-s)}\, \mathbb{E}\big[ \|S(s) - S_\infty\|^2 \big]\, ds, \quad t \geq 0. \tag{37}$$

Now, it follows from (33) and (37), and the triangle inequality, that

$$\mathbb{E}\big[ \|S(t) - S_\infty\|^2 \big] \leq \| e^{\bar{A}t} S(0) - S_\infty \|^2 + \nu^2 \rho^2 \|\bar{A}\|'^2 \int_0^t e^{-2r(t-s)}\, \mathbb{E}\big[ \|S(s) - S_\infty\|^2 \big]\, ds \leq \| e^{\bar{A}t} S(0) - S_\infty \|^2 + \nu^2 \rho^2 \|\bar{A}\|'^2\, e^{-2rt} \int_0^t e^{2rs}\, \mathbb{E}\big[ \|S(s) - S_\infty\|^2 \big]\, ds, \quad t \geq 0$$

and hence,

$$e^{2rt}\, \mathbb{E}\big[ \|S(t) - S_\infty\|^2 \big] \leq e^{2rt} \| e^{\bar{A}t} S(0) - S_\infty \|^2 + \nu^2 \rho^2 \|\bar{A}\|'^2 \int_0^t e^{2rs}\, \mathbb{E}\big[ \|S(s) - S_\infty\|^2 \big]\, ds, \quad t \geq 0.$$

Hence, it follows from the Gronwall–Bellman lemma [36, p. 125] that

$$e^{2rt}\, \mathbb{E}\big[ \|S(t) - S_\infty\|^2 \big] \leq e^{2rt} \| e^{\bar{A}t} S(0) - S_\infty \|^2 + \nu^2 \rho^2 \|\bar{A}\|'^2 \int_0^t e^{2rs} \| e^{\bar{A}s} S(0) - S_\infty \|^2\, e^{\nu^2 \rho^2 \|\bar{A}\|'^2 (t-s)}\, ds, \quad t \geq 0$$

or, equivalently, for $\nu \neq 0$,

$$\mathbb{E}\big[ \|S(t) - S_\infty\|^2 \big] \leq \| e^{\bar{A}t} S(0) - S_\infty \|^2 + \nu^2 \rho^2 \|\bar{A}\|'^2 \int_0^t e^{-2r(t-s)} \| e^{\bar{A}s} S(0) - S_\infty \|^2\, e^{\nu^2 \rho^2 \|\bar{A}\|'^2 (t-s)}\, ds \leq \| e^{\bar{A}t} S(0) - S_\infty \|^2 + \nu^2 \rho^4 \|\bar{A}\|'^2 \|S(0)\|^2 \int_0^t e^{-2rt}\, e^{\nu^2 \rho^2 \|\bar{A}\|'^2 (t-s)}\, ds = \| e^{\bar{A}t} S(0) - S_\infty \|^2 + \nu^2 \rho^4 \|\bar{A}\|'^2 \|S(0)\|^2\, e^{-(2r - \nu^2 \rho^2 \|\bar{A}\|'^2) t} \int_0^t e^{-\nu^2 \rho^2 \|\bar{A}\|'^2 s}\, ds = \| e^{\bar{A}t} S(0) - S_\infty \|^2 + \rho^2 \|S(0)\|^2\, e^{-(2r - \nu^2 \rho^2 \|\bar{A}\|'^2) t} \big( 1 - e^{-\nu^2 \rho^2 \|\bar{A}\|'^2 t} \big), \quad t \geq 0.$$

Taking $|\nu|$ to be such that

$$\nu^2 \rho^2 \|\bar{A}\|'^2 < 2r \tag{38}$$

it follows that $\lim_{t \to \infty} e^{-(2r - \nu^2 \rho^2 \|\bar{A}\|'^2) t} = 0$. In this case, $\lim_{t \to \infty} \mathbb{E}[\|S(t) - S_\infty\|^2] = 0$, that is, $S(t)$, $t \geq 0$, converges to $S_\infty$ in the mean square.

Finally, by Theorem 7.6.10 of [37] or [30, p. 187] (Khasminskiy's theorem), for every initial condition $S(0) \in \mathbb{R}^2$ and every $\varepsilon > 0$, we have

$$\mathbb{P}\Big[ \sup_{0 \leq t < \infty} \|S(t) - S_\infty\| \geq \varepsilon \Big] \leq \frac{1}{\varepsilon^2}\, \mathbb{E}\big[ \|S(0) - S_\infty\|^2 \big]$$

and $\mathbb{P}[\lim_{t \to \infty} S(t) \text{ exists}] = 1$. Thus, the dynamical system (32) is globally stochastically semistable.

Remark 1: If $\bar{A}$ is semistable, then there exists an invertible transformation matrix $T \in \mathbb{R}^{2 \times 2}$ such that $T\bar{A}T^{-1} = \mathrm{diag}[-\lambda, 0]$, where $-\lambda \in \mathrm{spec}(\bar{A})$ and $\lambda > 0$. In this case, defining the new coordinates $[\hat{S}_1(t), \hat{S}_2(t)]^\mathrm{T} \triangleq T S(t)$, (32) yields the two decoupled stochastic differential equations given by

$$d\hat{S}_1(t) = -\lambda \hat{S}_1(t)\, dt - \nu \lambda \hat{S}_1(t)\, dw(t), \quad \hat{S}_1(0) = \hat{S}_{10} \tag{39}$$

$$d\hat{S}_2(t) = 0, \quad \hat{S}_2(0) = \hat{S}_{20}, \quad t \geq 0. \tag{40}$$

Since the analytical solution to (39) is given by $\hat{S}_1(t) = \hat{S}_1(0)\, e^{-\lambda(1 + \frac{1}{2}\lambda\nu^2)t - \nu\lambda w(t)}$, it follows that

$$S(t) = T^{-1} \hat{S}(t) = T^{-1} \begin{bmatrix} \hat{S}_1(0)\, e^{-\lambda(1 + \frac{1}{2}\lambda\nu^2)t - \nu\lambda w(t)} \\ \hat{S}_2(0) \end{bmatrix}.$$

Finally, we provide a sufficient condition for stochastic multistability for the dynamical system (32). For this result, the following lemma is first needed.

Lemma 1: Let $\bar{A} \in \mathbb{R}^{n \times n}$. If there exist $n \times n$ matrices $P = P^\mathrm{T} \geq 0$ and $R = R^\mathrm{T} \geq 0$, and a nonnegative integer $k$ such that

$$(\bar{A}^k)^\mathrm{T} (\bar{A}^\mathrm{T} P + P \bar{A} + R) \bar{A}^k = 0 \tag{41}$$

$$k = \min \bigg\{ l \in \mathbb{Z}_+ : \bigcap_{i=1}^{n} \mathcal{N}(R \bar{A}^{i+l-1}) = \mathcal{N}(\bar{A}) \bigg\} \tag{42}$$

then 1) $\mathcal{N}(P\bar{A}^k) \subseteq \mathcal{N}(\bar{A}) \subseteq \mathcal{N}(R\bar{A}^k)$ and 2) $\mathcal{N}(\bar{A}) \cap \mathcal{R}(\bar{A}) = \{0\}$.

Proof: The proof is similar to that of Lemma 4.5 of [38] and, hence, is omitted.

Theorem 2: Consider the dynamical system (32). Suppose there exist $2 \times 2$ matrices $P = P^\mathrm{T} \geq 0$ and $R = R^\mathrm{T} \geq 0$, and a nonnegative integer $k$ such that (41) and (42) hold with $n = 2$. If $\mathcal{N}(A - L) \setminus \{(0, 0)\} \neq \emptyset$ and $|\nu|$ is sufficiently small, then the dynamical system (32) is stochastically multistable.

Proof: By Proposition 5 it suffices to show that $\bar{A} \triangleq A - L$ is semistable. Consider the deterministic dynamical system given by

$$\dot{x}(t) = \bar{A} x(t), \quad x(0) = x_0, \quad t \geq 0 \tag{43}$$

where $x(t) \in \mathbb{R}^2$. Note that $\bar{A}$ is semistable if and only if (43) is semistable [34], and hence, it suffices to show that (43) is semistable. Since, by Lemma 1, $\mathcal{N}(\bar{A}) \cap \mathcal{R}(\bar{A}) = \{0\}$, it follows from [39, p. 119] that $\bar{A}$ is group invertible. Thus, let $\mathcal{L} \triangleq I_2 - \bar{A}\bar{A}^{\#}$ and note that $\mathcal{L}^2 = \mathcal{L}$. Hence, $\mathcal{L}$ is the unique $2 \times 2$ matrix satisfying $\mathcal{N}(\mathcal{L}) = \mathcal{R}(\bar{A})$, $\mathcal{R}(\mathcal{L}) = \mathcal{N}(\bar{A})$, and $\mathcal{L}x = x$ for all $x \in \mathcal{N}(\bar{A})$.


Fig. 1. Eigenvalues of $A - L$ as a function of $\lambda^\mathrm{I}$. Arrows: increasing values of $\lambda^\mathrm{I}$.

Fig. 2. State trajectories of the sample trajectories of (31) for $\lambda^\mathrm{I} = 0.9$ with $\nu = 0.2$.

Next, consider the nonnegative function

$$\mathcal{V}(x) = x^\mathrm{T} (\bar{A}^k)^\mathrm{T} P \bar{A}^k x + x^\mathrm{T} \mathcal{L}^\mathrm{T} \mathcal{L} x.$$

If $\mathcal{V}(x) = 0$ for some $x \in \mathbb{R}^2$, then $P\bar{A}^k x = 0$ and $\mathcal{L}x = 0$. Now, it follows from Lemma 1 that $x \in \mathcal{N}(\bar{A})$, whereas $\mathcal{L}x = 0$ implies $x \in \mathcal{R}(\bar{A})$, and hence, $\mathcal{V}(x) = 0$ only if $x = 0$. Hence, $\mathcal{V}(\cdot)$ is positive definite. Next, since $\mathcal{L}\bar{A} = \bar{A} - \bar{A}\bar{A}^{\#}\bar{A} = 0$, it follows that the time derivative along the trajectories of (43) is given by

$$\dot{\mathcal{V}}(x(t)) = -x^\mathrm{T}(t) (\bar{A}^k)^\mathrm{T} R \bar{A}^k x(t) + x^\mathrm{T}(t) \bar{A}^\mathrm{T} \mathcal{L}^\mathrm{T} \mathcal{L} x(t) + x^\mathrm{T}(t) \mathcal{L}^\mathrm{T} \mathcal{L} \bar{A} x(t) = -x^\mathrm{T}(t) (\bar{A}^k)^\mathrm{T} R \bar{A}^k x(t) \leq 0, \quad t \geq 0.$$

Note that $\dot{\mathcal{V}}^{-1}(0) = \mathcal{N}(R\bar{A}^k)$.

To find the largest invariant set $\mathcal{M}$ contained in $\mathcal{N}(R\bar{A}^k)$, consider a solution $x(\cdot)$ of (43) such that $R\bar{A}^k x(t) = 0$ for all $t \geq 0$. Then $R\bar{A}^k (d^{i-1}/dt^{i-1}) x(t) = 0$ for every $i \in \{1, 2, \ldots\}$ and $t \geq 0$, that is, $R\bar{A}^k \bar{A}^{i-1} x(t) = R\bar{A}^{k+i-1} x(t) = 0$ for every $i \in \{1, 2, \ldots\}$ and $t \geq 0$. Equation (42) now implies that $x(t) \in \mathcal{N}(\bar{A})$ for all $t \geq 0$. Thus, $\mathcal{M} \subseteq \mathcal{N}(\bar{A})$. However, $\mathcal{N}(\bar{A})$ consists of only equilibrium points, and hence, is invariant. Hence, $\mathcal{M} = \mathcal{N}(\bar{A})$.

Finally, let $x_\mathrm{e} \in \mathcal{N}(\bar{A})$ be an equilibrium point of (43) and consider the Lyapunov function candidate $\mathcal{U}(x) = \mathcal{V}(x - x_\mathrm{e})$, which is positive definite with respect to $x_\mathrm{e}$. Then it follows that the Lyapunov derivative along the trajectories of (43) is given by

$$\dot{\mathcal{U}}(x(t)) = -(x(t) - x_\mathrm{e})^\mathrm{T} (\bar{A}^k)^\mathrm{T} R \bar{A}^k (x(t) - x_\mathrm{e}) \leq 0, \quad t \geq 0.$$

Fig. 3. Phase portrait of the sample trajectories of (31) for $\lambda^\mathrm{I} = 0.9$ with $\nu = 0.2$.

Fig. 4. Histogram showing the limit points of $\log_e \bar{S}^\mathrm{E}$ over 10 000 samples.

Thus, it follows that $x_\mathrm{e}$ is Lyapunov stable. Now, it follows from Theorem 3.1 of [33] that (43) is semistable, that is, $\bar{A}$ is semistable. Finally, it follows from Proposition 5 that (32) is stochastically multistable.

VI. ILLUSTRATIVE NUMERICAL EXAMPLE FOR THE MEAN FIELD SYNAPTIC MODEL

In this section, we present a numerical example to illustrate the stochastic multistability properties of the two-state nonlinear synaptic drive neuronal firing model (31). Specifically, consider the mean field synaptic drive model given by (31) with $n_\mathrm{E} \bar{A}^\mathrm{EE} = 1\ \mathrm{V \cdot synapse}$, $n_\mathrm{I} \bar{A}^\mathrm{EI} = -1\ \mathrm{V \cdot synapse}$, $n_\mathrm{E} \bar{A}^\mathrm{IE} = 1\ \mathrm{V \cdot synapse}$, $n_\mathrm{I} \bar{A}^\mathrm{II} = 0\ \mathrm{V \cdot synapse}$, and $\lambda^\mathrm{E} = 10$ ms, and let $\lambda^\mathrm{I}$ vary. In this case, the system matrices in (32) are given by

$$A = \begin{bmatrix} 1 & -1 \\ 1 & 0 \end{bmatrix}, \quad L = \begin{bmatrix} 0.1 & 0 \\ 0 & \frac{1}{\lambda^\mathrm{I}} \end{bmatrix}, \quad A - L = \begin{bmatrix} 0.9 & -1 \\ 1 & -\frac{1}{\lambda^\mathrm{I}} \end{bmatrix}.$$

Fig. 1 shows the eigenvalues of $A - L$ as a function of $\lambda^\mathrm{I}$. Note that for $\lambda^\mathrm{I} < 0.9$ ms or $\lambda^\mathrm{I} > 0.93$ ms, $A - L$ is unstable, whereas for $0.9\ \mathrm{ms} < \lambda^\mathrm{I} < 0.93\ \mathrm{ms}$, $A - L$ is asymptotically stable. Clearly, $\mathrm{rank}(A - L) < 2$ for $\lambda^\mathrm{I} = 0.9$ ms. Hence, it follows from Theorem 2 that the stochastic dynamical system (32) exhibits multistability for $\lambda^\mathrm{I} = 0.9$ ms. In this case, $A - L$ is semistable and $\mathcal{N}(A - L)$ is characterized by the direction vector $[1, 0.9]^\mathrm{T}$.
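The eigenvalue behavior summarized in Fig. 1 can be reproduced directly; a short Python sketch over a few illustrative values of $\lambda^\mathrm{I}$:

```python
import numpy as np

# Eigenvalues of A - L as lambda^I varies (cf. Fig. 1).
for lamI in (0.78, 0.90, 0.92, 1.20):
    AmL = np.array([[0.9, -1.0], [1.0, -1.0 / lamI]])
    print(lamI, np.sort_complex(np.linalg.eigvals(AmL)))
# At lambda^I = 0.9 one eigenvalue is exactly 0 and the other is negative
# (rank(A - L) < 2), consistent with semistability; for lambda^I = 0.78 and
# 1.20 an eigenvalue has positive real part, while at 0.92 both are stable.
```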

For our simulation, we use the initial condition $S(0) = [0.1, 0.5]^\mathrm{T}$. Figs. 2 and 3 show the time response of the average excitatory and inhibitory synaptic drives, and the phase portrait, for $\lambda^\mathrm{I} = 0.9$ ms with $\nu = 0.2$.


Fig. 5. State trajectories of the sample trajectories of (31) for $\lambda^\mathrm{I} = 0.78$ with $\nu = 0.2$.

Fig. 6. Phase portrait of the sample trajectories of (31) for $\lambda^\mathrm{I} = 0.78$ with $\nu = 0.2$.

Fig. 7. State trajectories of the sample trajectories of (31) for $\lambda^\mathrm{I} = 1.20$ with $\nu = 0.2$.

Fig. 8. Phase portrait of the sample trajectories of (31) for $\lambda^\mathrm{I} = 1.20$ with $\nu = 0.2$.

Furthermore, for $\lambda^\mathrm{I} = 0.9$ ms with $\nu = 0.2$, Fig. 4 shows a histogram of the limit points of $\log_e \bar{S}^\mathrm{E}$ (or, equivalently, $\log_e 0.9\bar{S}^\mathrm{I}$) over 10 000 samples. Note that the mean and variance of $\log_e \bar{S}^\mathrm{E}$ are $-1.6135$ and $0.3152$, respectively. Plots similar to Figs. 2 and 3 are shown for $\lambda^\mathrm{I} = 0.78$ ms and $\lambda^\mathrm{I} = 1.20$ ms with $\nu = 0.2$ in Figs. 5–8. Finally, Figs. 9 and 10 show similar simulations for the case where $\lambda^\mathrm{I} = 0.9$ ms (i.e., $A - L$ is semistable) and $\nu = 1$. However, note that in this case condition (38) of Proposition 5 is not satisfied, and hence, the model exhibits instability.
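The sample-path statistics reported in Figs. 2–4 can be approximated with an Euler–Maruyama scheme applied to (31) itself, retaining the rectification $[\cdot]^+$; a vectorized Python sketch (step size, horizon, and the reduced sample count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[1.0, -1.0], [1.0, 0.0]])
L = np.array([[0.1, 0.0], [0.0, 1.0 / 0.9]])      # lambda^I = 0.9
nu, dt, n_steps, n_paths = 0.2, 1e-3, 50_000, 200

# Euler-Maruyama for dS = -L S (dt + nu dw) + [A S (dt + nu dw)]^+  (31),
# run over n_paths independent sample paths simultaneously.
S = np.tile([0.1, 0.5], (n_paths, 1))
for _ in range(n_steps):
    d = dt + nu * np.sqrt(dt) * rng.standard_normal((n_paths, 1))
    S = S - (S @ L.T) * d + np.maximum((S @ A.T) * d, 0.0)

# Guard against nonpositive values caused by the discretization.
logSE = np.log(np.clip(S[:, 0], 1e-12, None))
print(logSE.mean(), logSE.var())   # cf. the reported mean/variance over 10 000 samples
```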

Fig. 9. State trajectories of the sample trajectories of (31) for $\lambda^\mathrm{I} = 0.9$ with $\nu = 1$.

Fig. 10. Phase portrait of the sample trajectories of (31) for $\lambda^\mathrm{I} = 0.9$ with $\nu = 1$.

The trajectories of (26) and (27) can exhibit unstable and multistable behaviors for different values of the parameters, similar to the simulation results for (31). Moreover, the averaged dynamics are analogous to the results of the deterministic model given in [5].

VII. CONCLUSION

There has been remarkable progress in understanding the molecular properties of anesthetic agents [6]–[11], [15]–[19]. However, we have lagged behind in understanding how these molecular properties lead to the behavior of the intact organism. The concentration–response relationship for anesthetic agents is still described by empiric equations [40] rather than by relationships that reflect the underlying molecular properties of the drug. Clinicians have observed for generations that the transition from consciousness to the anesthetized state seems to be a sharp, almost all-or-none change. One fascinating possibility is that the anesthetic bifurcation to unconsciousness, or the nearly all-or-none characteristic induction of anesthesia, is a type of phase transition of the neural network.

This possibility was first considered by Steyn-Ross et al. (see [41] and the references therein). Their focus was on the mean voltage of the cell body of the neuron. Specifically, the authors in [41] show that the biological change of state to anesthetic unconsciousness is analogous to a thermodynamic phase change involving a liquid to solid phase transition. For certain ranges of anesthetic concentrations, their first-order model predicts the existence of multiple steady states for brain activity, leading to a transition from normal levels of cerebral cortical activity to a quiescent state.

In earlier research [5], we demonstrated how the induction of anesthesia can be viewed as an example of multistability, the property whereby the solutions of a dynamical system exhibit multiple attracting equilibria under asymptotically slowly changing inputs or system parameters. In this paper, we demonstrate multistability in the mean when the synaptic drive firing model initial conditions are random variables, and we also demonstrate stochastic multistability when the mean rate system coefficients of the neuronal connectivity matrix are perturbed by a Wiener process. We postulate that the induction of anesthesia may be an example of multistability with two attracting equilibria: consciousness and hypnosis.

The philosophy of representing uncertain parameters by multiplicative white noise is motivated by the maximum entropy principle of Jaynes [42], [43] and statistical analysis [44]. Maximum entropy modeling is a form of stochastic modeling wherein stochastic integration is interpreted in the sense of Itô to provide a model for system parameter uncertainty. Stochastic theory has been used to model system parameter uncertainty within a modern information-theoretic interpretation of probability theory [42], [43], [45]. In particular, rather than regarding the probability of an event as an objective quantity, such as the limiting frequency of outcomes of numerous repetitions, maximum entropy modeling adopts the view that the probability of an event is a subjective quantity reflecting the observer's certainty that a particular event will occur. This quantity corresponds to a measure of information. The validity of a stochastic model for a biological neural network thus rests not on the existence of an ensemble but rather on the interpretation that it expresses modeling certainty or uncertainty regarding the coefficients of the neuronal connectivity matrix.

The stability of neural network models of the brain has been the subject of multiple investigations, and we refer the reader to selected references [24]–[28], [46] and especially Chapter 8 of [3] (which provides multiple other references). More recently, Cessac [47] has reviewed neural network models for the brain as dynamical systems. In particular, Cessac [48] has derived rigorous results, with minimal simplifying assumptions, for a discrete-time neural network model and has offered strong arguments for the use of a discrete-time model. In contrast to both prior investigations and the results in [47] and [48], in this paper we present Lyapunov-based tests for multistability in stochastic systems having a continuum of equilibria. The results in this paper are predicated on a mean field assumption that reduces the complex (approximately 10^11 × 10^11) neuronal connectivity matrix to a 2 × 2 excitatory/inhibitory system. Although this is a drastic assumption, it has been commonly used in theoretical neuroscience going back to the pioneering work of Wilson and Cowan [22].

ACKNOWLEDGMENT

The authors would like to thank H. Li for carrying out some of the numerical calculations in Section VI. Tomohisa Hayakawa is grateful to the School of Aerospace Engineering at Georgia Tech for its hospitality during the month of March 2012.

REFERENCES

[1] L. Lapicque, "Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation," J. Physiol. Pathol. Gen., vol. 9, pp. 620–635, Jan. 1907.

[2] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," J. Physiol., vol. 117, no. 4, pp. 500–544, 1952.

[3] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA, USA: MIT Press, 2005.

[4] B. Ermentrout and D. H. Terman, Mathematical Foundations of Neuroscience. New York, NY, USA: Springer-Verlag, 2010.

[5] Q. Hui, W. M. Haddad, and J. M. Bailey, "Multistability, bifurcations, and biological neural networks: A synaptic drive firing model for cerebral cortex transition in the induction of general anesthesia," Nonlinear Anal., Hybrid Syst., vol. 5, no. 3, pp. 554–573, Dec. 2011.

[6] G. A. Mashour, "Consciousness unbound: Toward a paradigm of general anesthesia," Anesthesiology, vol. 100, no. 2, pp. 428–433, 2004.

[7] A. Y. Zecharia and N. P. Franks, "General anesthesia and ascending arousal pathways," Anesthesiology, vol. 111, no. 4, pp. 695–696, 2009.

[8] J. M. Sonner, J. F. Antognini, R. C. Dutton, P. Flood, A. T. Gray, R. A. Harris, G. E. Homanics, J. Kendig, B. Orser, D. E. Raines, J. Trudell, B. Vissel, and E. I. Eger, "Inhaled anesthetics and immobility: Mechanisms, mysteries, and minimum alveolar anesthetic concentration," Anesth. Analgesia, vol. 97, no. 3, pp. 718–740, 2003.

[9] J. A. Campagna, K. W. Miller, and S. A. Forman, "Mechanisms of actions of inhaled anesthetics," New England J. Med., vol. 348, no. 21, pp. 2110–2124, 2003.

[10] E. R. John and L. S. Prichep, "The anesthetic cascade: A theory of how anesthesia suppresses consciousness," Anesthesiology, vol. 102, no. 2, pp. 447–471, 2005.

[11] S. Hameroff, "The entwined mysteries of anesthesia and consciousness: Is there a common underlying mechanism?" Anesthesiology, vol. 105, no. 2, pp. 400–412, 2006.

[12] J. Vuyk, T. Lim, F. H. M. Engbers, A. G. L. Burm, A. A. Vletter, and J. G. Bovill, "Pharmacodynamics of alfentanil as a supplement to propofol or nitrous oxide for lower abdominal surgery in female patients," Anesthesiology, vol. 78, no. 6, pp. 1036–1045, 1993.

[13] E. Overton, Studien über die Narkose: Zugleich ein Beitrag zur allgemeinen Pharmakologie. Jena, Germany: Gustav Fischer, 1901.

[14] H. Meyer, "Welche Eigenschaft der Anästhetica bedingt ihre narkotische Wirkung?" Arch. Exp. Pathol. Pharmakol., vol. 42, no. 1, pp. 109–118, 1899.

[15] I. Ueda, "Molecular mechanisms of anesthesia," Anesth. Analgesia, vol. 63, no. 10, pp. 929–945, 1984.

[16] N. P. Franks and W. R. Lieb, "Molecular and cellular mechanisms of general anesthesia," Nature, vol. 367, no. 6464, pp. 607–614, 1994.

[17] C. North and D. S. Cafiso, "Contrasting membrane localization and behavior of halogenated cyclobutanes that follow or violate the Meyer-Overton hypothesis of general anesthetic potency," Biophys. J., vol. 72, no. 4, pp. 1754–1761, 1997.

[18] A. Kitamura, W. Marszalec, J. Z. Yeh, and T. Narahashi, "Effects of halothane and propofol on excitatory and inhibitory synaptic transmission in rat cortical neurons," J. Pharmacol. Exp. Ther., vol. 304, no. 1, pp. 162–171, 2002.

[19] A. Hutt and A. Longtin, "Effects of the anesthetic agent propofol on neural populations," Cognit. Neurodyn., vol. 4, no. 1, pp. 37–59, 2009.

[20] B. Gutkin, D. Pinto, and B. Ermentrout, "Mathematical neuroscience: From neurons to circuits to systems," J. Physiol., Paris, vol. 97, nos. 2–3, pp. 209–219, 2003.

[21] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, U.K.: Cambridge Univ. Press, 2002.

[22] H. R. Wilson and J. D. Cowan, "Excitatory and inhibitory interactions in localized populations of model neurons," Biophys. J., vol. 12, no. 1, pp. 1–24, 1972.

[23] S. Amari, K. Yoshida, and K. Kanatani, "A mathematical foundation for statistical neurodynamics," SIAM J. Appl. Math., vol. 33, no. 1, pp. 95–126, 1977.

[24] H. Sompolinsky, A. Crisanti, and H. Sommers, "Chaos in random neural networks," Phys. Rev. Lett., vol. 61, no. 3, pp. 259–262, 1988.

[25] D. J. Amit, H. Gutfreund, and H. Sompolinsky, "Spin-glass models of neural networks," Phys. Rev. A, vol. 32, no. 2, pp. 1007–1018, 1985.

[26] D. J. Amit and N. Brunel, "Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex," Cerebral Cortex, vol. 7, no. 3, pp. 237–252, 1997.

[27] N. Brunel and V. Hakim, "Fast global oscillations in networks of integrate-and-fire neurons with low firing rates," Neural Comput., vol. 11, no. 7, pp. 1621–1671, 1999.


[28] W. Gerstner, "Time structure of the activity in neural network models," Phys. Rev. E, vol. 51, no. 1, pp. 738–758, 1995.

[29] W. M. Haddad, V. Chellaboina, and Q. Hui, Nonnegative and Compartmental Dynamical Systems. Princeton, NJ, USA: Princeton Univ. Press, 2010.

[30] L. Arnold, Stochastic Differential Equations: Theory and Applications. New York, NY, USA: Wiley, 1974.

[31] J. Zhou and Q. Wang, "Stochastic semistability with application to agreement problems over random networks," in Proc. Amer. Control Conf., 2010, pp. 568–573.

[32] Q. Hui, W. M. Haddad, and S. P. Bhat, "Finite-time semistability and consensus for nonlinear dynamical networks," IEEE Trans. Autom. Control, vol. 53, no. 8, pp. 1887–1900, Sep. 2008.

[33] Q. Hui, W. M. Haddad, and S. P. Bhat, "Semistability, finite-time stability, differential inclusions, and discontinuous dynamical systems having a continuum of equilibria," IEEE Trans. Autom. Control, vol. 54, no. 10, pp. 2465–2470, Oct. 2009.

[34] D. S. Bernstein, Matrix Mathematics, 2nd ed. Princeton, NJ, USA: Princeton Univ. Press, 2009.

[35] J. Shen, J. Hu, and Q. Hui, "Semistability of switched linear systems with applications to sensor networks: A generating function approach," in Proc. IEEE Conf. Decision Control, Dec. 2011, pp. 8044–8049.

[36] W. M. Haddad and V. Chellaboina, Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach. Princeton, NJ, USA: Princeton Univ. Press, 2008.

[37] R. B. Ash, Real Analysis and Probability. New York, NY, USA: Academic, 1972.

[38] Q. Hui, "Optimal semistable control for continuous-time linear systems," Syst. Control Lett., vol. 60, no. 4, pp. 278–284, 2011.

[39] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences. New York, NY, USA: Academic, 1979.

[40] J. M. Bailey and W. M. Haddad, "Drug dosing control in clinical pharmacology: Paradigms, benefits, and challenges," IEEE Control Syst. Mag., vol. 25, no. 1, pp. 35–51, Jun. 2005.

[41] M. L. Steyn-Ross, D. A. Steyn-Ross, and J. W. Sleigh, "Modelling general anesthesia as a first-order phase transition in the cortex," Progr. Biophys. Molecular Biol., vol. 85, nos. 2–3, pp. 369–385, 2004.

[42] E. T. Jaynes, "Information theory and statistical mechanics," Phys. Rev., vol. 106, no. 4, pp. 620–630, 1957.

[43] E. T. Jaynes, "Prior probabilities," IEEE Trans. Syst. Sci. Cybern., vol. SSC-4, no. 3, pp. 227–241, Sep. 1968.

[44] R. H. Lyon, Statistical Energy Analysis of Dynamical Systems: Theory and Applications. Cambridge, MA, USA: MIT Press, 1975.

[45] R. D. Rosenkrantz and E. T. Jaynes, Papers on Probability, Statistics and Statistical Physics. Boston, MA, USA: Reidel, 1983.

[46] C. A. van Vreeswijk and H. Sompolinsky, "Chaos in neuronal networks with balanced excitatory and inhibitory activity," Science, vol. 274, no. 5293, pp. 1724–1726, 1996.

[47] B. Cessac, "A view of neural networks as dynamical systems," Int. J. Bifurcation Chaos, vol. 20, no. 6, pp. 1585–1629, 2010.

[48] B. Cessac, "A discrete-time neural network model with spiking neurons: Rigorous results on the spontaneous dynamics," J. Math. Biol., vol. 56, no. 3, pp. 311–345, 2008.

Qing Hui (S'01–M'08) received the B.Eng. degree in aerospace engineering from the National University of Defense Technology, Changsha, China, in 1999, the M.Eng. degree in automotive engineering from Tsinghua University, Beijing, China, in 2002, and the M.S. degree in applied mathematics and the Ph.D. degree in aerospace engineering from the Georgia Institute of Technology, Atlanta, GA, USA, in 2005 and 2008, respectively.

He joined the Department of Mechanical Engineering, Texas Tech University, Lubbock, TX, USA, in 2008, where he is currently an Assistant Professor. He is co-author of the book "Nonnegative and Compartmental Dynamical Systems" (Princeton, NJ: Princeton University Press, 2010). His current research interests include network robustness and vulnerability analysis, consensus, synchronization and control of network systems, network optimization, network interdependency and cascading failures, swarm optimization, hybrid systems, biomedical systems, and scientific computing.

Wassim M. Haddad (S'87–M'87–SM'01–F'09) received the B.S., M.S., and Ph.D. degrees in mechanical engineering from Florida Tech, Melbourne, FL, USA, in 1983, 1984, and 1987, respectively.

In 1988, he joined the faculty of the Mechanical and Aerospace Engineering Department at Florida Tech. Since 1994, he has been with the School of Aerospace Engineering, Georgia Tech, where he holds the rank of Professor, the Andrew and David Lewis Chair in Dynamical Systems and Control, and serves as Chair of the Flight Mechanics and Control Discipline. He also holds joint Professor appointments with the School of Biomedical Engineering and Electrical and Computer Engineering, Georgia Tech, Atlanta, GA, USA. His interdisciplinary research contributions in systems and control are documented in more than 540 archival journal and conference publications and seven books in the areas of science, mathematics, medicine, and engineering. His current research interests include nonlinear robust and adaptive control, nonlinear systems, large-scale systems, hierarchical control, impulsive and hybrid systems, system thermodynamics, network systems, system biology, mathematical neuroscience, the history of science and mathematics, and natural philosophy.

Dr. Haddad is an NSF Presidential Faculty Fellow in recognition of his demonstrated excellence and continued promise in scientific and engineering research; a member of the Academy of Nonlinear Sciences for contributions to nonlinear stability theory, dynamical systems, and control; and an IEEE Fellow for contributions to robust, nonlinear, and hybrid control systems. He has received numerous outstanding research scholar awards, including an Outstanding Alumni Achievement Award for his contributions to nonlinear dynamical systems and control, and recognition for outstanding contributions to joint university and industry programs.

James M. Bailey received the B.S. degree from Davidson College, Davidson, NC, USA, in 1969, the Ph.D. degree in chemistry from the University of North Carolina, Chapel Hill, NC, USA, in 1973, and the M.D. degree from the Southern Illinois University School of Medicine, Springfield, IL, USA, in 1982.

He was a Helen Hay Whitney Fellow with the California Institute of Technology, Pasadena, CA, USA, from 1973 to 1975, and an Assistant Professor of chemistry and biochemistry with Southern Illinois University, USA, from 1975 to 1979. He completed a residency in anesthesiology and then a fellowship in cardiac anesthesiology with the Emory University School of Medicine, Atlanta, GA, USA. From 1986 to 2002, he was an Assistant Professor of anesthesiology and then Associate Professor of anesthesiology with Emory, where he also served as the Director of the critical care service. In September 2002, he moved his clinical practice to Northeast Georgia Medical Center, Gainesville, GA, USA, as the Director of cardiac anesthesia and Consultant in critical care medicine. He has served as Chief Medical Officer of Northeast Georgia Health Systems, Gainesville, GA, USA, since 2008. He is board certified in anesthesiology, critical care medicine, and transesophageal echocardiography. His current research interests include pharmacokinetic and pharmacodynamic modeling of anesthetic and vasoactive drugs and, more recently, applications of dynamical system theory in medicine. He is the author or co-author of more than 100 journal articles, conference publications, or book chapters.

Tomohisa Hayakawa (S'00–M'04) received the B.Eng. degree in aeronautical engineering from Kyoto University, Kyoto, Japan, in 1997, the M.S. degree in aerospace engineering from the State University of New York, Buffalo, NY, USA, in 1999, and the M.S. degree in applied mathematics and the Ph.D. degree in aerospace engineering, both from the Georgia Institute of Technology, Atlanta, GA, USA, in 2001 and 2003, respectively.

He has served as a Research Fellow with the Department of Aeronautics and Astronautics, Kyoto University, and with the Japan Science and Technology Agency. He joined the Department of Mechanical and Environmental Informatics, Tokyo Institute of Technology, Tokyo, Japan, in 2006, where he is an Associate Professor. His current research interests include the stability of nonlinear systems, nonnegative and compartmental systems, hybrid systems, nonlinear adaptive control, neural networks and intelligent control, stochastic dynamical systems, and applications to aerospace vehicles, multiagent systems, robotic systems, financial dynamics, and biological/biomedical systems.