
Fokker-Planck Description of Conductance-Based Integrate-and-Fire Neuronal Networks

Gregor Kovačič,1 Louis Tao,2 Aaditya V. Rangan,3 and David Cai3,4

1 Mathematical Sciences Department, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180
2 Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetics Engineering, College of Life Sciences, Peking University, 5 Summer Palace Road, Beijing, People's Republic of China 100871
3 Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012-1185
4 Department of Mathematics, Shanghai Jiao Tong University, Dong Chuan Road 800, Shanghai 200240, China

Steady dynamics of coupled conductance-based, integrate-and-fire neuronal networks in the limit of small fluctuations is studied via the equilibrium states of a Fokker-Planck equation. An asymptotic approximation for the membrane potential probability density function is derived and the corresponding gain curves are found. Validity conditions are discussed for the Fokker-Planck description and verified via direct numerical simulations.

PACS numbers: 87.19.L–, 05.20.Dd, 84.35.+i

Keywords: kinetic theory, neuronal networks, Fokker-Planck equation, gain curve

I. INTRODUCTION

The use of kinetic theory for describing many-body interactions has a long tradition reaching back at least to the work of Boltzmann [1, 2]. Before the advent of direct numerical simulations made possible by powerful computers, this and related coarse-grained, statistical theories generally provided the only practical means for addressing large-scale problems. Only recently have direct numerical simulations of large assemblies of individual particles, waves, or neurons advanced to the degree that they provide qualitatively if not quantitatively accurate descriptions of the problems under study. Two examples of the use of kinetic theory for objects other than particles are wave turbulence [3–9] and neuroscience [10, 11]. Subsequent simulations of turbulent wave dynamics [12–17] and of neuronal processing in parts of the cerebral cortex [18–26] largely confirmed the predictions obtained via the kinetic theory. Even now, important roles remain for kinetic and similar theories, both in finding solutions to problems that are still too large to be amenable to simulation and in identifying and providing a more fundamental understanding of the mechanisms underlying the physical phenomena under study. It is therefore important to study analytically the mathematical properties of these coarse-grained theories.

In this work, we study the basic properties of the kinetic theory for the simplest version of conductance-driven, integrate-and-fire (IF) models of all-to-all coupled, excitatory neuronal networks. In this version, we use the approximation that, upon receiving a spike, the excitatory conductances both rise and decay infinitely fast. After the introduction of the diffusion approximation in the limit of small synaptic input fluctuations, the kinetic theory reduces to a Fokker-Planck equation for the probability density function of the neuronal membrane potential in the network [27, 28]. Extensive studies of this equation for a feed-forward neuron driven by white and colored noise were performed in [29–34]. Here, we investigate several important theoretical consequences of the Fokker-Planck equation for an all-to-all coupled network of neurons. In particular, we analyze the asymptotic properties of its steady-state solutions in the diffusion limit and derive from these solutions the network gain curves, i.e., curves depicting the dependence of the network firing rate on the intensity of the external driving. We find these gain curves to exhibit bistability and hysteresis for sufficiently small synaptic input fluctuations. In addition, in the two regimes separated by the instability region, we derive explicit asymptotic solutions of the Fokker-Planck equation describing the voltage probability density function, as well as segments of the gain curves, in terms of elementary functions. We emphasize that the problem involving the Fokker-Planck equation studied here is not of the usual linear type. Instead, it is highly nonlinear due to the presence of the average neuronal firing rate as a multiplicative self-consistency parameter in both the equation and its boundary conditions. This is because the Fokker-Planck equation in this case describes statistically the dynamics of an entire neuronal network, not just a single neuron.

Typically, two types of coarse-grained descriptions are considered for neuronal networks: mean-driven [10, 35–39] and fluctuation-driven [11, 28, 40–56]. The former accurately describes the network activity only when the mean of the external drive is by itself strong enough to induce neuronal spiking, while the latter is also valid in the regime in which the mean of the external drive alone would predict no spiking at all. In the limit of vanishing synaptic input fluctuations, the kinetic theory studied in this work reproduces a version of the mean-driven model of [38], and we show how this model can be solved exactly by an explicit parametrization. We provide a complete description of this mean-driven model's gain curves, including their bistability intervals and asymptotic behavior for large values of external drive. We also investigate how the solutions and gain curves obtained from the Fokker-Planck equation approach the corresponding mean-driven solutions and gain curves in the limit as the synaptic input fluctuations decrease and eventually vanish.

Finally, we study the validity of the Fokker-Planck description by comparison with direct numerical simulations of a conductance-driven IF neuronal network with very short conductance rise and decay times. In the regime of small synaptic input fluctuations, in which the Fokker-Planck equation is expected to be valid, we indeed find excellent agreement between the simulations and the theory. When the ratio of the synaptic input fluctuations to their mean exceeds the values for which the Fokker-Planck description is appropriate, we find that the theoretical gain curves still have a functional form very similar to that of the curves obtained through simulations, except that their slopes at high drive values become too large.

The remainder of this paper is organized as follows. In Sec. II, we describe the IF model under investigation. In Sec. III, we present the kinetic equation derived from this IF model and discuss how to obtain the network firing rate from it. In Sec. IV, we introduce the diffusion approximation and derive the corresponding Fokker-Planck equation from the kinetic equation in the diffusion limit. We also describe the validity conditions for the diffusion limit in terms of the original IF network properties. In Sec. V, we give a complete analytical solution of the mean-driven limit and find its bistability intervals. The exact solution for the voltage probability density function in terms of integrals is described in Sec. VI, and an exact equation for the corresponding gain curve is derived in terms of generalized hypergeometric functions. In Sec. VII, we introduce a uniform asymptotic approximation for the voltage probability density function valid in the regime of small synaptic input fluctuations and use it to find the functional form of the gain curves. In Sec. VIII, we find two simpler asymptotic approximations for this density function and the gain curves away from the bistability region. The validity of the Fokker-Planck description is discussed via comparison with direct numerical simulations of an IF network with short conductance rise and decay times in Sec. IX. Implications and ramifications of the results are discussed in Sec. X. Finally, in Appendix A, a detailed derivation of the uniform asymptotic approximation for the voltage probability density function for small synaptic input fluctuations is described.

II. INTEGRATE-AND-FIRE NETWORK

We consider a conductance-based, IF neuronal network composed of N excitatory neurons with instantaneous conductance rise and decay rates [57]. The dynamics of the membrane potential Vi of the ith neuron in this network is described by the linear differential equation

τ dVi/dt = −(Vi − εr) − [ f ∑_j δ(t − τij) + (S/pN) ∑_{k=1, k≠i}^{N} ∑_l pikl δ(t − tkl) ] (Vi − εE).   (1)

Here, the expression in the square brackets describes the ith neuron's conductance arising from its synaptic input. The first term of this expression represents the contribution from the afferent external input spikes, and the second from the network spikes. Among the parameters in Eq. (1), τ is the leakage time scale, τij is the jth spike time of the external input to the ith neuron, tkl is the lth spike time of the kth neuron in the network, f is the external input spike strength, S/pN is the coupling strength, εr is the reset potential, and εE is the excitatory reversal potential. We assume the non-dimensionalized values

εr = 0,   VT = 1,   εE = 14/3,   (2)

with VT being the firing threshold. For the leakage time scale, we assume τ = 20 ms, which corresponds to τ = 1 in our dimensionless units. The coefficients pikl model the stochastic nature of synaptic release [58–63]. Each of them is an independent Bernoulli-distributed random variable such that pikl = 1 with probability p and 0 with probability 1 − p at the time tkl when a network spike arrives at the ith neuron. We scale the coupling strength by the "effective size" of the network, pN, so that the total network input to each neuron remains finite in the limit of large network size N and does not vanish for small values of the synaptic release probability p.

Except at the spike times τij or tkl, the membrane potential Vi decays exponentially towards the reset potential εr, as indicated by the first term on the right-hand side of Eq. (1). When the ith neuron receives a spike, its membrane potential jumps. The relation between the values of the ith neuron's membrane potential Vi immediately before and after the times of its incoming spikes is derived to be

Vi(τij⁺) = (1 − Γ) Vi(τij⁻) + Γ εE,   Γ = 1 − e^{−f/τ}   (3a)

for an external input spike, and

Vi(tkl⁺) = (1 − Σ) Vi(tkl⁻) + Σ εE,   Σ = 1 − e^{−S/pNτ}   (3b)

for a network neuron spike, provided Vi(τij⁺), Vi(tkl⁺) < VT, respectively [28, 55]. Here, the superscripts + and − denote the right-hand and left-hand limits, respectively.
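As a quick numerical check of the jump condition (3a), one can replace the delta function in Eq. (1) by a narrow rectangular conductance pulse of area f and integrate across it; the resulting voltage change should approach the affine map above as the pulse narrows. A minimal sketch, with illustrative values of f and of the pre-spike voltage (our choices, not the paper's):

```python
import math

tau, eps_r, eps_E = 1.0, 0.0, 14.0 / 3.0   # dimensionless values from Eq. (2)
f = 0.1                                     # illustrative spike strength
V_minus = 0.5                               # voltage just before the spike

def jump_numeric(width, steps=20_000):
    """RK4-integrate tau dV/dt = -(V - eps_r) - (f/width)(V - eps_E)
    across a rectangular pulse of height f/width and duration `width`."""
    g = f / width
    h = width / steps
    V = V_minus
    def rhs(V):
        return (-(V - eps_r) - g * (V - eps_E)) / tau
    for _ in range(steps):
        k1 = rhs(V)
        k2 = rhs(V + 0.5 * h * k1)
        k3 = rhs(V + 0.5 * h * k2)
        k4 = rhs(V + h * k3)
        V += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return V

Gamma = 1.0 - math.exp(-f / tau)
V_plus = (1.0 - Gamma) * V_minus + Gamma * eps_E   # jump map, Eq. (3a)
# the discrepancy shrinks as the pulse width decreases
print(abs(jump_numeric(1e-3) - V_plus), abs(jump_numeric(1e-5) - V_plus))
```

The same calculation with Σ in place of Γ verifies Eq. (3b).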

Since in the absence of incoming spikes its voltage can only decrease, any neuron in the network (1) can only spike when it receives a spike itself. In particular, the kth neuron can only spike when its membrane potential exceeds the firing threshold VT as a result of a jump in (3a) or one or more simultaneous jumps in (3b). At this time, say tkl, a spike is distributed to each of the other neurons in the network, i.e., the term (S/pN) δ(t − tkl)(Vi − εE) is added with probability pikl to every equation for the membrane potential (1) with i = 1, . . . , N, i ≠ k. The kth neuron's membrane potential Vk is reset to the value εr. Note that every membrane potential Vi remains confined in the interval εr ≤ Vi ≤ VT for all times.

We assume that the external input spike train {τij} arriving at the ith neuron in the network is an independent Poisson process for every i = 1, . . . , N, with rate ν(t). The spike times of any given network neuron do not, in general, constitute a Poisson process. Moreover, as described in the previous paragraph, each of the ith neuron's spike times must coincide either with a spike time of its external drive or with a spike time of one or more other network neurons, which in turn coincides with a spike time of some network neuron's external drive. However, we are mainly interested in the case of a high external-drive Poisson rate, ν(t) ≫ 1/τ, and a moderate population-averaged firing rate per neuron, m(t). (This implies that the external drive spike strength f should be small.) Thus, if the number of neurons N in the network is sufficiently large and the synaptic release probability p sufficiently small, we can assume that each neuron spikes approximately independently of the other network neurons and of its own external drive, and also relatively infrequently. Therefore, we are justified in assuming that the network spike times, {tkl | k = 1, . . . , N, l = 1, 2, . . .}, can be approximated by a Poisson process [64], and that this process is approximately independent of the external drive spike trains to the network neurons. Note that the network firing rate is Nm(t), where m(t) is the population-averaged firing rate per neuron referred to above.
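The dynamics described above can be probed with a direct simulation of Eq. (1). The following minimal sketch (all parameter values are our illustrative choices, consistent with fν = O(1); it is not the authors' code) evolves the network with exponential leak between spikes, the jump maps (3a)/(3b), Bernoulli synaptic release with probability p, threshold-and-reset, and within-step cascades of network spikes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (our choices, not taken from the paper's figures):
tau, eps_r, eps_E, VT = 1.0, 0.0, 14.0 / 3.0, 1.0
N, p = 100, 0.5
f, nu, S = 0.02, 100.0, 0.1
Gamma = 1.0 - np.exp(-f / tau)              # external jump factor, Eq. (3a)
Sigma = 1.0 - np.exp(-S / (p * N * tau))    # network jump factor, Eq. (3b)

def simulate(T=20.0, dt=0.01):
    """Direct simulation of the network (1): exponential leak between spikes,
    multiplicative voltage jumps at spike times, threshold-and-reset, and
    Bernoulli synaptic release; network spikes cascade within a time step."""
    V = rng.uniform(eps_r, VT, N)
    decay = np.exp(-dt / tau)
    n_spikes = 0
    for _ in range(int(T / dt)):
        V = eps_r + (V - eps_r) * decay                 # leak toward eps_r
        k = rng.poisson(nu * dt, N)                     # external Poisson spikes
        shrink = (1.0 - Gamma) ** k                     # k jumps of Eq. (3a)
        V = shrink * V + (1.0 - shrink) * eps_E
        while True:                                     # resolve network cascade
            fired = V >= VT
            nf = int(fired.sum())
            if nf == 0:
                break
            n_spikes += nf
            V[fired] = eps_r                            # reset to eps_r
            k_net = rng.binomial(nf, p, N)              # Bernoulli release, prob p
            k_net[fired] = 0                            # no input to just-fired cells
            shrink = (1.0 - Sigma) ** k_net             # jumps of Eq. (3b)
            V = shrink * V + (1.0 - shrink) * eps_E
    return n_spikes / (N * T)                           # firing rate m per neuron

m = simulate()
print("firing rate per neuron m =", m)
```

Applying all k jumps arriving in a step at once is exact for the affine maps (3a)/(3b), since their composition is again affine with the same fixed point εE; the leak within a step is treated separately, which is adequate for dt ≪ τ.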

III. KINETIC EQUATION

A probabilistic description of the network dynamics for Eq. (1) is obtained by considering the probability density

ρ(v, t) = (1/N) ∑_{i=1}^{N} E[δ(v − Vi(t))],   (4)

where E(·) is the expectation over all realizations of the external input spike trains and all initial conditions. The kinetic equation for ρ(v, t),

∂t ρ(v, t) = ∂v [ ((v − εr)/τ) ρ(v, t) ]
  + ν(t) [ (1/(1 − Γ)) ρ((v − Γ εE)/(1 − Γ), t) − ρ(v, t) ]
  + Npm(t) [ (1/(1 − Σ)) ρ((v − Σ εE)/(1 − Σ), t) − ρ(v, t) ]
  + m(t) δ(v − εr),   (5)

is derived in a way similar to [28, 55], where the coefficients Γ and Σ are as in Eqs. (3a) and (3b). The last term in Eq. (5) represents the probability source due to the neuronal membrane potentials being reset to v = εr after crossing the firing threshold at v = VT.

Equation (5) can be written in the conservation form

∂t ρ(v, t) + ∂v J(v, t) = m(t) δ(v − εr),   (6)

with the probability flux

J(v, t) = JS(v, t) + JI(v, t). (7)

Here,

JS(v, t) = −((v − εr)/τ) ρ(v, t)   (8a)

is the flux due to the smooth streaming of phase points under the relaxation dynamics in Eq. (1), while

JI(v, t) = ν(t) ∫_{(v − ΓεE)/(1 − Γ)}^{v} ρ(u, t) du + Npm(t) ∫_{(v − ΣεE)/(1 − Σ)}^{v} ρ(u, t) du   (8b)

is the flux due to the jumps induced by both the external input and neuronal spiking.

A boundary condition for Eq. (5) can be derived by recalling that a neuron in the network (1) can only fire upon receiving a spike and that its membrane potential cannot stream up to move across the firing threshold VT. Thus, on the one hand,

JS(VT, t) = −((VT − εr)/τ) ρ(VT, t) ≤ 0,

but on the other hand, a negative relaxation flux at v = VT is impossible, since no neuron's voltage can ever stream back down from VT via relaxation: once it reaches the threshold VT, it can only return to the reset value εr. Therefore,

JS(VT, t) ≡ 0,   (9)

and so

ρ(VT, t) ≡ 0   (10)

is a boundary condition for Eq. (5) [28, 55].

The firing rate m(t) is given by the probability flux J(VT, t) across the threshold VT, which, in light of the definitions (7) and (8b) and the boundary condition (9), reads

m(t) = J(VT, t) = JI(VT, t) = ν(t) ∫_{(VT − ΓεE)/(1 − Γ)}^{VT} ρ(u, t) du + Npm(t) ∫_{(VT − ΣεE)/(1 − Σ)}^{VT} ρ(u, t) du.   (11)

Moreover, since ρ(v, t) is a probability density function defined on the v-interval εr < v < VT, it must satisfy the non-negativity condition

ρ(v, t) ≥ 0, (12)


and the normalization condition

∫_{εr}^{VT} ρ(v, t) dv = 1.   (13)

The simultaneous presence of the firing rate m(t) as a self-consistency parameter in the kinetic equation (5) and the boundary condition (11), together with the normalization condition (13), makes the problem at hand highly nonlinear.

IV. FOKKER-PLANCK EQUATION

For small values of the external-drive and network spike strengths, f and S/pN, respectively, we can obtain the diffusion approximation to the kinetic equation (5) by Taylor-expanding the differences in the terms arising from the voltage jumps [55]. Here, the smallness of the spike strengths f and S/pN is interpreted in terms of the minimal number of spike-induced voltage jumps a neuron with its initial membrane potential at the reset value εr must undergo in order to fire. In particular, this number must be large for the diffusion approximation to hold. Using the jump conditions (3a) and (3b), this requirement gives rise to the jump-size estimates

f, S/pN ≪ τ (B − 1)/B,   (14)

where

B = (εE − εr)/(εE − VT).   (15)

The number of jumps can be estimated by the smaller of the two ratios, [τ(B − 1)/B]/f or [τ(B − 1)/B]/(S/pN). Moreover, in this regime, in order for the network neurons to maintain nonvanishing firing rates, the input Poisson rate must be sufficiently large. In particular, the external input spike strength f and the Poisson rate ν must satisfy the conditions

f/τ ≪ 1,   ντ ≫ 1,   fν = O(1).   (16)

Note that these conditions are consistent with the requirement, discussed at the end of Sec. II, that the spike train produced jointly by the network neurons be (approximately) independent of any individual neuron's external driving train.
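For concreteness, the validity estimates (14)–(16) can be evaluated for a sample parameter set (the values of f, ν, S, and pN below are our illustrative choices, not values from the paper):

```python
import math

# Illustrative parameters (our choices, not the paper's):
tau, eps_r, eps_E, VT = 1.0, 0.0, 14.0 / 3.0, 1.0
f, nu, S, pN = 0.02, 100.0, 0.1, 50.0

B = (eps_E - eps_r) / (eps_E - VT)     # Eq. (15); here B = 14/11
rise = tau * (B - 1.0) / B             # right-hand side of Eq. (14)
n_ext = rise / f                       # external jumps needed to reach threshold
n_net = rise / (S / pN)                # network jumps needed to reach threshold
print("jumps to fire:", n_ext, n_net)  # both should be much greater than 1
print("f/tau =", f / tau, " nu*tau =", nu * tau, " f*nu =", f * nu)  # Eq. (16)
```

For these values both jump counts are large and the three conditions in (16) hold, so the diffusion approximation below is expected to apply.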

An equivalent approach to Taylor-expanding the difference terms in Eq. (5) is to approximate Eq. (6) by expanding the probability flux J(v, t) in Eq. (7), using the trapezoidal rule to evaluate the respective integrals in Eq. (8b). Thus, we obtain for (7) the expression

J(v, t) = −(1/τ) { [(a + q²)v − (a + q² − 1)εE − εr] ρ(v, t) + q² (εE − v)² ∂v ρ(v, t) },   (17)

with the coefficients

a = 1 + fν + Sm,   q² = (1/2τ) (f²ν + S²m/pN).   (18)

The flux (7) satisfies the equation

J(εr⁺) − J(εr⁻) = m(t),

as can be gleaned from Eq. (6). Moreover, since ρ(v < εr, t) ≡ 0, we have J(v < εr, t) ≡ 0, and we find that for εr < v < VT the probability density function ρ(v, t) obeys the partial differential equation

∂t ρ(v, t) + ∂v J(v, t) = 0,   (19)

that is, the Fokker-Planck equation

τ ∂t ρ(v, t) = ∂v { [(a + q²)v − (a + q² − 1)εE − εr] ρ(v, t) + q² (εE − v)² ∂v ρ(v, t) },   (20)

with the flux boundary condition

J(εr⁺, t) = J(VT, t) = m(t).   (21)

Here, the flux J(v, t) is given by Eq. (17) [28, 54, 55]. We retain the boundary condition ρ(VT, t) = 0 in Eq. (10) as the absorbing boundary condition for the Fokker-Planck equation (20) in the diffusion limit [28, 55]. Likewise, we retain the non-negativity condition (12) and the normalization condition (13). In this approximation, again, the Fokker-Planck equation (20), together with the boundary conditions (21) and (10) and the normalization condition (13), gives rise to a highly nonlinear problem due to the simultaneous occurrence of the firing rate m(t) as a multiplicative self-consistency parameter in both Eqs. (20) and (21).

Note that the ratio q²/a of the two coefficients a and q², as defined in Eq. (18), is controlled by the two jump sizes, f/τ and S/τpN, which also control the validity of the diffusion approximation according to Eq. (14). Since the ratio q²/a controls the size of the fluctuation term in Eq. (20), the inequalities in (14) and (16) imply that the diffusion approximation leading to the Fokker-Planck equation (20) also places the corresponding neuronal dynamics in the small-fluctuation limit,

q²/a ≪ 1.   (22)

We will be primarily interested in the steady solutions of Eq. (20). In view of the conservation form (19), with J(v, t) as in Eq. (17), and the boundary conditions (21), we find that Eq. (20) can in this case be replaced by the first-order, linear differential equation

[(a + q²)v − (a + q² − 1)εE − εr] ρ(v) + q² (εE − v)² ∂v ρ(v) = −mτ,   (23)


with a and q defined in Eq. (18). We require the steady probability density function ρ(v) to satisfy the boundary condition ρ(VT) = 0 in Eq. (10), as well as the conditions (12) and (13). From these equations, we can derive the gain curve, i.e., the firing rate m as a function of the external driving strength fν. We carry out this derivation and its analysis below.

V. MEAN-DRIVEN AND FLUCTUATION-DRIVEN LIMITS

We begin by analyzing the limiting case of the regime in Eq. (16) in which f → 0 and pN, ν → ∞ such that fν = O(1). There are no input conductance fluctuations in this limit, and so q²/a → 0 in Eq. (22). Note that, in this limit, the effect of the external-drive Poisson train on the network dynamics becomes identical to that of a time-independent excitatory conductance with the mean value of this train's strength, fν.

The stationary equation (23) in this limit reduces to

ρ(v) = τm / [a (VS − v)],   (24)

where

VS = [εr + (a − 1) εE] / a   (25)

is the effective reversal potential [65], and a is defined in Eq. (18). Since the v-derivative ∂v ρ(v) in Eq. (23) is lost in the limit as q → 0, it is consistent to simply drop the boundary condition (10).

The density ρ(v) in Eq. (24) must satisfy the condition ρ(v) ≥ 0 in (12) for εr ≤ v ≤ VT. Since a > 0, it is easy to show that VS > εr, and so the condition (12) is satisfied when

VT < VS,   i.e.,   a > B,   (26)

where B is defined as in Eq. (15). The condition a > B corresponds to the fact that the neuronal voltages must stream upwards at the threshold VT for the neurons to fire; in other words, m ≠ 0 under the mean drive fν + Sm. The normalization condition (13) then yields

m = a / { τ ln[(VS − εr) / (VS − VT)] }.   (27)

From Eqs. (18) and (25), we observe that Eq. (27) gives a nonlinear equation for the firing rate m in terms of the driving strength fν. In view of the comment made in the first paragraph of this section, that the neuronal dynamics in the present limit are driven by the mean of the external input spike train, we refer to Eq. (27) as describing the mean-driven limit of our neuronal network.
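Since a in Eq. (18) itself contains Sm, Eq. (27) is a scalar self-consistency problem, and a damped fixed-point iteration suffices to solve it numerically. A minimal sketch with illustrative values of S and fν (our choices, taken above the driving threshold so that a > B):

```python
import math

tau, eps_r, eps_E, VT = 1.0, 0.0, 14.0 / 3.0, 1.0
S, fnu = 0.1, 2.0   # illustrative coupling and drive (well above threshold)

def m_of_a(a):
    """Right-hand side of Eq. (27), with VS from Eq. (25)."""
    VS = (eps_r + (a - 1.0) * eps_E) / a
    return a / (tau * math.log((VS - eps_r) / (VS - VT)))

# Damped fixed-point iteration on m, with a = 1 + f*nu + S*m from Eq. (18).
m = 0.0
for _ in range(200):
    a = 1.0 + fnu + S * m
    m = 0.5 * m + 0.5 * m_of_a(a)

a = 1.0 + fnu + S * m
print("m =", m, " residual =", abs(m - m_of_a(a)))
```

The damping factor 1/2 is a pragmatic choice that keeps the iteration contractive for these parameter values; on the bistable branches discussed below, a root bracket would be needed instead.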

The case VS < VT, i.e., a < B, corresponds to the case m = 0. Replacing the external drive in Eq. (1) by its mean fν, we find that all the neuronal membrane potentials settle at the value v = VS after a time period of O(τ), which is why no neurons can spike. In other words, the steady-state voltage probability density ρ(v) becomes ρ(v) = δ(v − VS), where δ(·) is again the Dirac delta function. This result can also be deduced from the appropriate solution of the full equation (23), as will be done in Section VIII A. We will refer to the corresponding operating regime of the network, depicted by the interval 0 < fν < B − 1 in the fν–m plane shown in Fig. 1, as the fluctuation-driven regime. In particular, in the present limit of vanishing fluctuations, the network firing rate also vanishes in this regime.

We now return to analyzing the solution of Eq. (27). From (15), (18), (25), and (27), we arrive at the exact parametric representation of the m–fν gain curve in terms of the parameter a:

m = a / { τ ln[B (a − 1) / (a − B)] },   (28a)

fν = a − 1 − S a / { τ ln[B (a − 1) / (a − B)] },   (28b)

which holds in the mean-driven regime, a > B. In particular, for every a, we can find the exact value of the firing rate m that corresponds to its external drive fν.
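The parametrization (28a)–(28b) makes the gain curve, and the saddle-node bifurcation of Figs. 1 and 2, easy to trace numerically: sweep a > B and locate the minimum of fν along the curve. A sketch using the Fig. 1 values B = 14/11 and τ = 1, and one of its coupling strengths, S = 0.2:

```python
import numpy as np

tau, B, S = 1.0, 14.0 / 11.0, 0.2   # S < tau*ln(B), so a saddle-node exists

# Sweep the parameter a > B through Eqs. (28a)-(28b).
a = np.linspace(B * (1.0 + 1e-6), 40.0, 100_000)
m = a / (tau * np.log(B * (a - 1.0) / (a - B)))   # Eq. (28a)
fnu = a - 1.0 - S * m                              # Eq. (28b)

# The saddle-node bifurcation sits at the minimum of f*nu along the curve.
i = int(np.argmin(fnu))
print("saddle-node at fnu =", fnu[i], ", m =", m[i])
print("driving threshold B - 1 =", B - 1.0)
```

The computed minimum lies below the driving threshold fν = B − 1, and the interval between the two delimits the bistable range discussed below.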

FIG. 1: (Color online) Black: gain curves m as a function of fν in the mean-driven limit. Gray (red online) dashed line: the locations of the saddle-node bifurcation points on the gain curves; the point (B − 1, 0) marks the driving threshold. The parameter values are B = 14/11 = 1.2727 and τ = 1. The reversal and threshold potentials satisfy Eqs. (2). The coupling strengths along the gain curves are (left to right): S = 0.3, S = τ ln B = 0.2412, S = 0.2, S = 0.1, and S = 0.

The graphs of the gain curves, m versus fν, in the mean-driven limit are presented in Fig. 1 for several different values of the parameter S. From simple analysis, it is clear that the graph of m versus fν begins at the "driving threshold" value, fν = B − 1 = (VT − εr)/(εE − VT), and m = 0, going backwards in fν with the initial derivative −1/S. The threshold value fν = B − 1 corresponds to the external input Poisson spike rate ν being sufficiently large that, on average, the rate of the neuronal membrane-potential increase induced in the IF network (1) by the external input spikes exceeds its downward streaming rate due to the leakage.

FIG. 2: The fν-coordinate of the firing-rate saddle-node bifurcation as a function of S/τ. The parameter values are B = 14/11 = 1.2727 and τ = 1. The reversal and threshold potentials satisfy Eqs. (2).

For large values of the parameter a, every gain curve asymptotes towards its own straight line

m = [fν + 1 − (B − 1)/ln B] / (τ ln B − S).   (29)

If S > τ ln B, the gain curve has a negative slope everywhere and terminates with a nonzero firing rate m at fν = 0, such as the leftmost curve in Fig. 1. If S < τ ln B, the gain curve turns around in a saddle-node bifurcation, such as the second and third gain curves from the right in Fig. 1. The location of this saddle-node bifurcation is at the intersection of the gain curve and the gray (red online) dashed curve in Fig. 1. The gain curve passes through a bistability interval that begins at the saddle-node bifurcation point and ends at the driving threshold fν = B − 1. The gain curve in the limit of vanishing coupling strength, S → 0, has an infinite derivative at the point fν = B − 1, m = 0, and then increases monotonically as a graph of m versus fν with a monotonically decreasing derivative, reflecting the fact that an uncoupled neuron exhibits no bistability.

The dependence on the parameter S/τ of the driving strength fν at the saddle-node bifurcation point, which exists for 0 < S < τ ln B, is displayed in Fig. 2. It is obtained as the minimum value of fν in Eq. (28b), from which we also see that it is clearly a function of the ratio S/τ alone. As S/τ → 0 along this bifurcation curve, the bifurcation value of fν tends to B − 1, again indicating the absence of a bistability region for an uncoupled neuron. The curve of the saddle-node bifurcation point locations in the fν–m plane, parametrized by the coupling constant S, is shown in gray (red online) in Fig. 1.

We remark that, for the current-based version of the IF model (1), the analog of Eq. (27) can be solved explicitly for the external driving strength fν in terms of the firing rate m [66, 67]. The gain-curve shapes in the mean-driven limit for both types of models are similar.

VI. EXACT IMPLICIT SOLUTION

To address the gain curves with finite synaptic input fluctuations as obtained from the full Eq. (23), it will be convenient to introduce the new independent variable

x = (εE − εr)/(εE − v).   (30)

Note that the reset potential v = εr here corresponds to x = 1, and the firing threshold v = VT to x = B, with B as in Eq. (15).

It will also be convenient to introduce the new dependent variable ρ̃(x), which must satisfy the requirement ρ̃(x) dx = ρ(v) dv. This requirement gives

ρ(v) = x²ρ̃(x)/(εE − εr),   (31)

and Eqs. (23), (30), and (31) imply that the equation for the density ρ̃(x) becomes

q²x²ρ̃′ + x(x + q² − a)ρ̃ = −mτ,   (32)

where the prime denotes differentiation with respect to x. The boundary condition (10) becomes

ρ̃(B) = 0,   (33)

and the normalization condition (13) becomes

∫_1^B ρ̃(x) dx = 1,   (34)

with B as in Eq. (15).

The exact solution of Eq. (32), satisfying the conditions (33) and (34), is given by

ρ̃(x) = (mτ/q²) x^(a/q² − 1) e^(−x/q²) ∫_x^B s^(−a/q² − 1) e^(s/q²) ds.   (35)

Integrating the density (35) over the interval 1 < x < B, we find that the normalization condition (34) becomes

(mτ/q²) ∫_1^B s^(−a/q² − 1) e^(s/q²) [ ∫_1^s x^(a/q² − 1) e^(−x/q²) dx ] ds = 1,   (36)


which can be rewritten in terms of the confluent hypergeometric function ₁F₁ [68] as

(mτ/a) { ∫_1^B (1/s) ₁F₁(1, a/q² + 1, s/q²) ds
   + (q²/a) ₁F₁(a/q², a/q² + 1, −1/q²) [ B^(−a/q²) ₁F₁(−a/q², −a/q² + 1, B/q²) − ₁F₁(−a/q², −a/q² + 1, 1/q²) ] } = 1.   (37)

After performing the last integration and transforming the result using known identities for (generalized) hypergeometric functions [69], we finally arrive at

(mτ/a) { ln B + [1/(a + q²)] [ B ₂F₂([1, 1], [2, a/q² + 2], B/q²) − ₂F₂([1, 1], [2, a/q² + 2], 1/q²) ]
   + (q²/a) ₁F₁(1, a/q² + 1, 1/q²) [ B^(−a/q²) e^((B−1)/q²) ₁F₁(1, −a/q² + 1, −B/q²) − ₁F₁(1, −a/q² + 1, −1/q²) ] } = 1.   (38)

Here, ₖFₗ is the generalized hypergeometric function

ₖFₗ([α₁, . . . , αₖ], [β₁, . . . , βₗ], z) = Σ_{n=0}^∞ [ (α₁)ₙ · · · (αₖ)ₙ / ((β₁)ₙ · · · (βₗ)ₙ) ] zⁿ/n!,

with (γ)ⱼ = γ(γ + 1) · · · (γ + j − 1) being the Pochhammer symbol [68].
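The series definition above can be sanity-checked against a standard special-function library; a sketch, assuming SciPy's conventions (`poch` is the Pochhammer symbol, `hyp1f1` the confluent hypergeometric function), with arbitrary sample arguments:

```python
import math
import numpy as np
from scipy.special import hyp1f1, poch

def pFq(alphas, betas, z, n_terms=40):
    # Truncated series for kFl from the definition above
    total = 0.0
    for n in range(n_terms):
        num = np.prod([poch(al, n) for al in alphas])
        den = np.prod([poch(be, n) for be in betas])
        total += (num / den) * z**n / math.factorial(n)
    return total

# Check the k = l = 1 case against SciPy's built-in 1F1
val = pFq([1.0], [2.5], 0.7)
print(val, hyp1f1(1.0, 2.5, 0.7))
```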

In light of the definitions of the coefficients a and q² in (18), we observe that (38) gives a nonlinear equation connecting the driving fν and the firing rate m.
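For concreteness, Eq. (36) can be solved for m at fixed a and q² by direct nested quadrature; the values of a and q² below are illustrative stand-ins (the functional dependence (18) of a and q² on fν and m is not used here), with B as in Eqs. (2) and (15):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative values, chosen so the integrands stay well scaled
B, a, q2, tau = 14.0 / 11.0, 1.5, 0.5, 1.0
c = a / q2

def inner(s):
    # \int_1^s x^{a/q^2 - 1} e^{-x/q^2} dx
    return quad(lambda x: x**(c - 1.0) * np.exp(-x / q2), 1.0, s)[0]

# Double integral in Eq. (36); then (m*tau/q^2) * I = 1 fixes m
I = quad(lambda s: s**(-c - 1.0) * np.exp(s / q2) * inner(s), 1.0, B)[0]
m = q2 / (tau * I)

# Consistency check: the exact density (35) with this m integrates to one
def rho_tilde(x):
    tail = quad(lambda s: s**(-c - 1.0) * np.exp(s / q2), x, B)[0]
    return (m * tau / q2) * x**(c - 1.0) * np.exp(-x / q2) * tail

norm = quad(rho_tilde, 1.0, B)[0]
print(m, norm)
```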

VII. GAIN CURVES

Equation (38) is of little practical use in computing the gain curves due to the difficulties in evaluating hypergeometric functions of large arguments [70]. Instead, we can rewrite Eq. (36) as

(mτ/q²) ∫_{1/q²}^{B/q²} y^(−a/q² − 1) e^y [ γ(a/q², y) − γ(a/q², 1/q²) ] dy = 1,   (39)

where γ(α, z) = ∫_0^z t^(α−1) e^(−t) dt is the lower incomplete Gamma function, and then perform the integration and solve the equation in (39) numerically, using the functional dependence (18) of a and q² on fν and m.
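The single-integral form (39) is straightforward to evaluate with standard special-function routines; a sketch with the same illustrative a, q², and B as above (note that SciPy's `gammainc` is the *regularized* lower incomplete Gamma function, so it is multiplied back by Γ(a/q²)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammainc, gamma

B, a, q2, tau = 14.0 / 11.0, 1.5, 0.5, 1.0   # illustrative values
c = a / q2

def lower_gamma(alpha, z):
    # Unnormalized lower incomplete Gamma function gamma(alpha, z)
    return gammainc(alpha, z) * gamma(alpha)

g1 = lower_gamma(c, 1.0 / q2)
J = quad(lambda y: y**(-c - 1.0) * np.exp(y) * (lower_gamma(c, y) - g1),
         1.0 / q2, B / q2)[0]
m_gamma = q2 / (tau * J)     # from (m*tau/q^2) * J = 1, Eq. (39)

# Cross-check against the nested-quadrature form (36)
inner = lambda s: quad(lambda x: x**(c - 1.0) * np.exp(-x / q2), 1.0, s)[0]
I = quad(lambda s: s**(-c - 1.0) * np.exp(s / q2) * inner(s), 1.0, B)[0]
m_double = q2 / (tau * I)
print(m_gamma, m_double)   # the two representations agree
```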

FIG. 3: (Color Online) Gain curve: the firing rate m as a function of the driving strength fν. The reversal and threshold potentials satisfy Eqs. (2). The parameter values are τ = 1, S = 0.1, λ = 1, where pN = λ/f, and δs = 0.0002. The strength of the external input spikes f varies from curve to curve. From the left, the values of f are 0.1, 0.05, 0.025, 0.005, 0.001, 0.0001, and 0, respectively. Note that, along the left three gain curves (dashed; red on-line), f does not satisfy the small-jumps condition (14), and the Fokker-Planck equation (20) therefore becomes progressively less valid with increasing f and with proportionally decreasing pN. The corresponding minimal numbers of jumps needed for a neuron to reach VT beginning at εr, computed from Eq. (14), are ∼ 2, 4, 8, 40, 200, and 2000, respectively. The rightmost gain curve (gray; green on-line) was plotted using the parametrization (28) in the mean-driven limit (f = 0, pN → ∞, ν → ∞, fν finite). Inset: Convergence of the gain curves to the mean-driven-limit gain curve near the upper turning point as f decreases and pN increases proportionally.

Alternatively, we can recall that our investigation of the gain curves is only valid in the small-fluctuation limit (22), q²/a ≪ 1, of the Fokker-Planck equation (20), due to the conditions (14) and (16) imposed by the diffusion approximation. In this limit, as described in Appendix A, it follows from the result of [71] that the probability density ρ̃(x) in (35) has an asymptotic expansion in terms of q²/a, uniform in x, a, and/or q, whose first two terms are given by

ρ̃(x) ∼ (mτ/xq) { exp( a[η²(B) − η²(x)]/(2q²) ) [ √(2/a) D( (η(B)/q)√(a/2) ) − q/(aη(B)) + q/(B − a) ]
   − [ √(2/a) D( (η(x)/q)√(a/2) ) − q/(aη(x)) + q/(x − a) ] },   (40)

where

η(x) = sign(x − a) √( 2[x/a − 1 − ln(x/a)] ),   (41)


and D(·) denotes the Dawson integral

D(z) = e^(−z²) ∫_0^z e^(y²) dy.   (42)
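As a sanity check on the definition (42), the Dawson integral can be evaluated by direct quadrature and compared against SciPy's built-in routine (the sample points are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn

# D(z) = e^{-z^2} \int_0^z e^{y^2} dy, Eq. (42)
def dawson_quad(z):
    return np.exp(-z * z) * quad(lambda y: np.exp(y * y), 0.0, z)[0]

for z in (0.3, 1.0, 2.5):
    assert abs(dawson_quad(z) - dawsn(z)) < 1e-10
print("Dawson integral matches scipy.special.dawsn")
```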

Using Eq. (34), we obtain the gain curves by numerically integrating the approximation (40) over the interval 1 < x < B and using the functional dependence (18) of a and q² on fν and m.

FIG. 4: (Color Online) Probability density functions ρ(v) at specific points on the m–fν gain curve. The reversal and threshold potentials satisfy Eqs. (2). The parameter values are τ = 1, f = 0.001, S = 0.1, pN = 1000, and δs = 0.0002. Top: specific points on the gain curve. Bottom: ρ(v) at these points. The functions ρ(v) were computed via the asymptotic formula (40). (Note that the ρ(v) computed numerically via Eqs. (30), (31), and (35) instead of (40) appear identical.) Inset: The boundary layer at v = VT of the probability density function ρ(v) corresponding to the location “D” on the high-firing part of the gain curve, shown as a black dashed line. (Note the scale on the v-axis.) In gray (green on-line) is shown ρ(v) in the mean-driven limit, described by Eq. (24).

To compute the gain curves numerically, such as those presented in Fig. 3, in practice we combine the above two numerical methods, using either Eq. (39) or Eq. (40). We switch from the former to the latter when the value of q²/a drops below a prescribed tolerance, δs. (This is to ensure that the error of the asymptotic formula (40) at larger values of q²/a does not affect the gain-curve shapes. However, formula (40) is so accurate that in all the figures presented in this work, the difference between the probability density functions and gain curves obtained by the two methods is imperceptible.) We see that for small external driving spike strength f and proportionally large effective network size pN, the gain curve closely follows its no-fluctuations counterpart from Fig. 1, except for a smoothed-out corner near the point fν = B − 1, m = 0. In particular, the small-f gain curves still exhibit bistability. As the value of the external driving spike strength f increases and the effective network size pN decreases at the same rate, i.e., as the amount of fluctuations in the input to a single neuron increases, the backwards-sloping portion of the gain curve inside the bistability interval steepens and the bistability interval narrows until it disappears. As fν decreases, a gain curve eventually approaches the fν-axis exponentially fast in fν (and f/τ), as we will further discuss in Sec. VIII A below.

For large values of the driving fν, the gain curves in Fig. 3 increase almost linearly. In this regime, q² ≫ 1, the exact probability density function (35) behaves asymptotically as

ρ̃(x) ∼ (mτ/(ax)) [ 1 − (x/B)^(a/q²) ].   (43)

The normalization condition (34) yields the equation

(mτ/a) [ ln B − (q²/a)(1 − B^(−a/q²)) ] ∼ 1   (44)

for the asymptotic behavior of the gain curves, from which the slopes of the straight-line asymptotes can easily be computed. In particular, for q²/a ≪ 1, the limiting slope agrees with that of the straight-line asymptote in the mean-driven limit, given by Eq. (29).
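The large-q² behavior (43) is easy to verify against the exact density (35) by quadrature; the values of a and q² below are illustrative (the common factor mτ cancels in the comparison, so it is set to one):

```python
import numpy as np
from scipy.integrate import quad

B, a, q2 = 14.0 / 11.0, 2.0, 50.0   # illustrative, with q^2 >> 1
c, mtau = a / q2, 1.0

def rho_exact(x):
    # Exact solution (35), up to the common factor m*tau
    tail = quad(lambda s: s**(-c - 1.0) * np.exp(s / q2), x, B)[0]
    return (mtau / q2) * x**(c - 1.0) * np.exp(-x / q2) * tail

def rho_asym(x):
    # Large-q^2 asymptotics (43)
    return (mtau / (a * x)) * (1.0 - (x / B)**c)

x = 1.1
rel_err = abs(rho_exact(x) - rho_asym(x)) / rho_asym(x)
print(rel_err)   # small for q^2 >> 1
```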

On the bottom of Fig. 4, we present the probability density functions ρ(v) corresponding to four specific points on the gain curve for f = 0.001 in Fig. 3, which is replotted on the top of Fig. 4. The function ρ(v) in the fν-regime below the bistability interval appears very similar to a Gaussian curve, a point we will return to in Sec. VIII A. Progressing up the gain curve, the density functions undergo a shape change until the one in the fν-regime well above the bistability interval appears indistinguishable from the mean-driven-limit density ρ(v) in Eq. (24), except in a thin boundary layer near v = VT. In that layer, the density ρ(v) in Fig. 4 decreases rapidly and vanishes at v = VT to satisfy the boundary condition (10). We will further elaborate on the gain-curve shapes in the fν-regimes above and below the bistability interval in Sec. VIII.


VIII. ASYMPTOTIC BEHAVIOR OF GAIN CURVES IN THE SMALL-FLUCTUATION LIMIT

In the small-fluctuation limit (22), i.e., for q²/a ≪ 1, we can derive exact asymptotic expressions for the probability density function ρ(v) and the pieces of the gain curves that lie in the fluctuation-driven (a < B) and mean-driven (a > B) regimes, respectively. Roughly speaking, in Fig. 3, the former regime corresponds to the portion of the gain curve near the fν-axis and the latter to the portion near the limiting mean-driven gain curve (28), with both staying O(q/√a) away from the threshold value fν = B − 1, m = 0. The asymptotic expansions are derived using Laplace's method [72] on the integral in Eq. (35).

As we recall from Sec. V, in the fluctuation-driven regime, a < B, the effective reversal potential VS in (25) lies below the firing threshold, VS < VT. The few neuronal firings that occur in this regime are driven by the membrane-potential fluctuations of individual neurons. The smallness of these fluctuations when q²/a ≪ 1, i.e., when f/τ, S/τpN ≪ 1 in accordance with Eq. (14), leads to low firing rates. As we show in Sec. VIII A below, the neuronal firing rate m is in fact an exponentially small function of the voltage jump size f/τ.

In the mean-driven regime, a > B, the effective reversal potential VS in (25) lies above the firing threshold, VS > VT. As we recall from Sec. V, due to the conditions (14) and (16), the effect of the external drive is nearly constant in time, and the neuronal network firing rate is largely controlled by the mean of this drive.

We now explore the asymptotic expressions for the firing rates in these two regimes in detail.

A. Fluctuation-Driven Regime

In the fluctuation-driven regime, a < B (VS < VT), the leading order in q²/a of the density ρ̃(x) is given by

ρ̃(x) ∼ [mτ/(a(B − a))] exp( aη²(B)/(2q²) ) exp( −(x − a)²/(2q²a) ),   (45)

with η(x) as in (41), which has a relative error of O(q²/a) near x = a and is exponentially accurate everywhere else.

The normalization condition (34) used on Eq. (45) gives

[mτq/(B − a)] exp( aη²(B)/(2q²) ) √(π/(2a)) [ 1 + erf( (a − 1)/(√(2a) q) ) ] ∼ 1,   (46)

where erf(·) denotes the error function erf(z) = (2/√π) ∫_0^z e^(−t²) dt.

Formula (46) implies that the firing rate m is exponentially small in q²/a. Therefore, in this case, from Eq. (18), we find asymptotically

a ∼ 1 + fν,   q² ∼ f²ν/(2τ).   (47)

FIG. 5: (Color Online) Comparison between a gain curve (black dashed) computed from the steady-state Fokker-Planck equation as described in Sec. VII (displayed on top of Fig. 4) and its approximations in the fluctuation-driven (light gray; green on-line) and mean-driven (dark gray dash-dotted; red on-line) regimes, computed via Eqs. (48) and (28), respectively. The parameter values along the computed gain curve are τ = 1, f = 0.001, S = 0.1, pN = 1000, and δs = 0.0002. The reversal and threshold potentials satisfy Eqs. (2).

In other words, to within an exponentially small error, the firing rate gain in this case is induced by the fluctuations in the feed-forward input to the neuron, while the feed-back synaptic input from the other neurons in the population is negligible. This phenomenon of “asymptotic decoupling” (i.e., under a small-fluctuation subthreshold drive, the network is asymptotically decoupled) is important for understanding small-fluctuation network dynamics.

Except when a − 1 ≲ O(q/√a), the leading order of the bracketed term containing the error function in formula (46) equals 2. From this, the assumption f ≪ 1, and the fact that the firing rate m is exponentially small in q²/a (and so in f) in this regime, so that (47) holds, we find for m the explicit leading-order approximation

m ∼ [B − (1 + fν)] √( (1 + fν)/(πτf²ν) )
   × exp{ (2τ/(f²ν)) [ (1 + fν)(1 + ln( B/(1 + fν) )) − B ] }.   (48)

We remark that Eq. (48) is not valid uniformly in the fν-interval between 0 and B − 1. In particular, near fν = B − 1, the right-hand side of (48) becomes of size O(1), contradicting the exponential smallness of the firing rate m, and so Eq. (48) ceases to be valid there, as can be seen from Fig. 5. This right-hand side has a maximum at

fν = B − 1 − √( fB(B − 1)/(2τ) ) + O(f),


and then vanishes at fν = B − 1. The location of this maximum indicates that the size of the excluded region is O(√f).
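The explicit formula (48) is elementary to evaluate; a sketch with τ and f as in Fig. 5 and B = 14/11 as in Fig. 2 (the specific fν sample points are arbitrary), illustrating the exponential smallness of m below the bistability interval:

```python
import numpy as np

B, tau, f = 14.0 / 11.0, 1.0, 0.001

def m_fluctuation(f_nu):
    # Leading-order fluctuation-driven firing rate, Eq. (48);
    # here f^2*nu = f * f_nu, with f_nu the driving strength f*nu
    a = 1.0 + f_nu
    prefactor = (B - a) * np.sqrt(a / (np.pi * tau * f * f_nu))
    exponent = (2.0 * tau / (f * f_nu)) * (a * (1.0 + np.log(B / a)) - B)
    return prefactor * np.exp(exponent)

# Exponentially small well below the threshold f*nu = B - 1 ~ 0.273,
# rising steeply as the threshold is approached from below
print(m_fluctuation(0.20), m_fluctuation(0.25))
```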

From Eqs. (30) and (47), it is clear that the probability density function ρ(v) corresponding to the density ρ̃(x) in Eq. (45) is also Gaussian, given by the expression

ρ(v) ∼ √( τ(1 + fν)³/(πf²ν) ) exp( −τ(1 + fν)³(v − VS)²/(f²ν) ),   (49)

with

VS ∼ (εr + fν εE)/(1 + fν)   (50)

by Eq. (25).

FIG. 6: (Color Online) Comparison between the probability density function ρ(v) computed using the uniform asymptotic approximation (40) of the steady Fokker-Planck solution and the Gaussian approximation (49), which is valid in the fluctuation-driven regime. The parameter values are τ = 1, f = 0.001, S = 0.1, and pN = 1000. The reversal and threshold potentials satisfy Eqs. (2). Inset: The value of the external driving strength fν is the same as at the location “B” in the top panel of Fig. 4.

In Fig. 5, we present a comparison between the approximate gain curve computed from Eq. (48) and the gain curve displayed on top of Fig. 4. The approximate gain curve follows the uniformly valid gain curve (black dashed line) very closely in the entire fluctuation-driven regime, to the point of indicating the location of the latter curve's bottom turning point shortly after the validity of Eq. (48) breaks down. Moreover, Eq. (49) provides an excellent approximation for the probability density function ρ(v), as calculated via Eq. (40), throughout the fluctuation-driven regime. In particular, the curve ρ(v) corresponding to the location “A” in Fig. 4 cannot be distinguished from its approximation by Eq. (49). (Data not shown.) For the curve in Fig. 4 corresponding to the location “B,” which is very close to the bottom turning point of the gain curve, this Gaussian approximation still remains satisfactory, as shown in Fig. 6, even this close to the turning point.

We now also remark that Eq. (48), with an additional factor of 1/2 on the right-hand side, describes the gain-curve shape for fν ≪ 1, regardless of the value of the external driving spike strength f. This is because the argument of the error function in Eq. (46) becomes O(√(fν)), and so the leading order of the bracketed term containing the error function in Eq. (46) now equals 1 rather than 2 as in Eq. (48).

Finally, we again comment on the f → 0 limit in the fluctuation-driven regime, a < B or VS < VT. This argument complements the argument made in Section V. In particular, Eq. (48) and the discussion in the paragraph following it imply that m → 0 as f → 0 in the entire interval 0 < fν < B − 1. Moreover, Eq. (45), together with the normalization condition (34), implies that ρ̃(x) → δ(x − a), and so indeed ρ(v) → δ(v − VS), where δ(·) again represents the Dirac delta function, as was mentioned in Section V. This argument reaffirms the observation that, in the absence of input conductance fluctuations, if VS < VT, then in the steady state the external input makes all the neuronal membrane potentials equilibrate at v = VS, and so no neurons can fire, consistent with intuition.

B. Mean-Driven Regime

In the mean-driven regime, when a > B (VS > VT), again using a Laplace-type asymptotic expansion [72] of the integral in Eq. (35), we obtain the leading-order term in q²/a of the density ρ̃(x) as

ρ̃(x) ∼ [mτ/(x(a − x))] { 1 − exp[ −(1/q²)(a/B − 1)(B − x) ] }.

This equation corresponds to the probability density function

ρ(v) ∼ [mτ/(a(VS − v))] [ 1 − exp( −(a − B)(VT − v)/(q²(εE − VT)) ) ]   (51)

of the original voltage variable v. This expression is equivalent to Eq. (24), except in a thin, O(q²/a) boundary layer near the firing threshold v = VT. This layer ensures that the boundary condition (33) is satisfied. We recall that such behavior was already observed in Fig. 4. At O(1), the normalization condition (34) again yields for the firing rate m the logarithmic equation (27), whose solution via the parametrization (28) was discussed in Section V.

FIG. 7: (Color Online) Comparison between the probability density function ρ(v) computed using the uniform asymptotic approximation (40) of the steady Fokker-Planck solution and the approximation (51), which is valid in the mean-driven regime. The reversal and threshold potentials satisfy Eqs. (2). The parameter values are τ = 1, f = 0.001, S = 0.1, and pN = 1000. Note that the mean-driven approximation curve is only normalized to O(1) and is missing an O(q²/a) amount, as explained in the text. Inset: The value of the external driving fν is the same as at the location “C” in the top panel of Fig. 4.

In Fig. 5, a comparison is shown between the gain curve presented on top of Fig. 4 and its approximation in the mean-driven limit obtained via Eqs. (28) for the external driving spike strength f = 0.001. We see that the approximation is excellent deep into the mean-driven regime, but that it gradually breaks down upon approaching the bistability region, in which it is invalid. For the external driving value fν corresponding to the location “D” on the gain curve in Fig. 4, the probability density function ρ(v) computed via Eqs. (51) and (28) is indistinguishable from that computed via the uniform asymptotic approximation (40). (Data not shown.) The latter is the curve “D” on the bottom of Fig. 4, whose boundary-layer detail is presented in the inset of Fig. 4. In Fig. 7, we display the comparison between the probability density function ρ(v) corresponding to the location “C” on the top of Fig. 4 and its approximation given by Eq. (51), computed at the same value of the driving fν. The agreement still remains satisfactory, but notice that ρ(v) computed from Eq. (51) has a smaller area. This is an artifact of the exponential boundary-layer term in Eq. (51) subtracting an O(q²/a) amount from the area, which cannot be included in the area evaluation at O(1).

As can be seen from the inset in Fig. 3, the mean-driven approximation via Eqs. (28) improves with decreasing f near the upper turning point of the gain curve, and eventually also along its unstable branch, but is always the least accurate in an O(√f) neighborhood of the point (fν, m) = (B − 1, 0). In particular, recall that in the zero-fluctuations limit discussed in Section V, this point separates two very distinct regimes of the probability density function ρ(v): ρ(v) = δ(v − VS) along the fν-axis and ρ(v) given by Eq. (24) along the gain curve parametrized by Eqs. (28), with a discontinuous transition between the two at the threshold point (fν, m) = (B − 1, 0).

IX. VALIDITY OF THE FOKKER-PLANCK DESCRIPTION

In this section, we discuss the accuracy of the gain curves and voltage probability density functions obtained using the Fokker-Planck equation (20) as an approximation to their counterparts in a more realistic IF network with finite conductance rise and decay times, obtained via direct numerical simulations. In these simulations, we used an IF model of the same form as (1), but with the instantaneous, δ-function conductance time course replaced by the α-function time course

G(t) = [1/(σd − σr)] [ exp(−t/σd) − exp(−t/σr) ] θ(t),   (52)

where θ(t) is the Heaviside function, i.e., θ(t) = 1 for t > 0 and 0 otherwise. For the δ-function conductance time course assumed in the model (1) to be a good approximation of the more realistic conductance time course (52), short conductance rise and decay times, σr = 0.1 ms and σd = 0.2 ms, were used in the simulations.
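The α-function time course (52) integrates to one, so its total area matches that of the δ-function it replaces; a quick check with the rise and decay times used in the simulations:

```python
import numpy as np
from scipy.integrate import quad

def G(t, sigma_r=0.1, sigma_d=0.2):
    # Alpha-function conductance time course, Eq. (52); times in ms
    return (np.exp(-t / sigma_d) - np.exp(-t / sigma_r)) / (sigma_d - sigma_r)

# Total area \int_0^inf G(t) dt = 1 for any sigma_r != sigma_d, so the
# delta-function time course of model (1) is recovered as both shrink
area = quad(G, 0.0, 50.0)[0]
print(area)   # ~ 1
```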

FIG. 8: Comparison of gain curves evaluated theoretically (black, dashed) as in Section VII, and shown in Fig. 3, and gain curves computed via numerical simulations (gray; green on-line). The reversal and threshold potentials satisfy Eqs. (2). The parameter values are τ = 1, S = 0.1, λ = 1, where pN = λ/f. The strength of the external input f assumes the values 0.1, 0.025, 0.005, and 0.002, respectively, on the gain curves from left to right. In the simulations, N = 10⁴, except on the bottom half of the gain curve corresponding to f = 0.002, where N = 2 × 10⁴. The synaptic release probability p is calculated to yield the appropriate “effective network size” pN. Poisson-distributed random delays were used in computing the gain curve for f = 0.002; the average delay was 6 ms on the bottom half, and 4 ms on the top half of the gain curve. Note that, for large values of the external driving strength fν, the gain curves computed numerically appear to all asymptote towards the straight line given by Eq. (29), which is the asymptote of the gain curve (28) in the mean-driven limit for the same values of τ and S.

From Secs. II and IV, we recall that the quantity controlling the validity of both the Poisson approximation of the network spike train and the diffusion approximation leading to the Fokker-Planck equation (20) from the kinetic equation (5) is the number of spikes that a neuron at the reset value εr must receive in order to spike itself. The requirement that this number must be sufficiently large results in the estimates in Eq. (14), which imply that the strength of the external driving spikes, f, must be small and the effective size of the network, pN, large. To examine how the validity of Eq. (20) deteriorates with f increasing and pN decreasing at the same rate, we compare a set of theoretical gain curves (cf. Fig. 3) with gain curves computed via direct numerical simulations at the same parameter values. The results are presented in Fig. 8.

We remark that, as the strength of the external driving spikes, f, decreases and the effective size of the network, pN, increases, the simulated IF network develops a strong tendency to oscillate in a nearly synchronized, periodic manner [67, 73, 74]. Because of this tendency, the basin of attraction of the steady asynchronous solution studied here, described by the probability density function ρ(v), becomes exceedingly small for small f and large pN. This makes these solutions difficult to observe in simulations, even for very small synaptic release probability. The tendency of the network to oscillate can be broken by incorporating into the model additional physiological effects, such as random transmission delays, which are due to randomness in axonal propagation velocities and lengths [75–77]. These delays do not alter the steady-state equation (23) or its solution ρ(v) [67]. We incorporated Poisson-distributed delays in the IF neuronal network simulation to obtain the gain curve for f = 0.002 in Fig. 8 and the corresponding probability density functions ρ(v), as shown in Fig. 9.

For the external driving spike strength f = 0.002 (and the corresponding effective network size pN = 500), we can see that the agreement between the theoretically and numerically obtained gain curves in Fig. 8 is excellent. For these values of f and N, the minimal number of spike-induced jumps that a neuron with the reset membrane-potential value must receive in order to spike itself equals ∼ 100. In Fig. 9, we show a comparison between the theoretical voltage probability density functions and those obtained via numerical simulation for the same set of values of the external drive strength fν. We see that the agreement is generally very good, except, for the probability density functions in the mean-driven regime, near the reset value v = εr and the threshold value v = VT. These two regions of discrepancy arise for a common reason: the vanishing boundary condition ρ(VT) = 0 in Eq. (10) is not satisfied. This is a consequence of the finite conductance rise and decay times, as explicitly demonstrated in Fig. 10. We will elaborate more on this point in the discussion in Sec. X.

FIG. 9: (Color Online) Voltage probability density functions obtained via the Fokker-Planck equation (20) and numerical simulations for the external driving spike strength f = 0.002 and the effective network size pN = 500, for which (20) gives a valid approximation. Starting with the graph with the highest maximum and ending with the graph with the lowest maximum, the corresponding values of the external driving fν are (top) 0.2, 0.22, 0.24, (bottom) 0.26, 0.32, 0.38, 0.44, and 0.5. Other parameter values are τ = 1, S = 0.1, and δs = 0.0002. The reversal and threshold potentials satisfy Eqs. (2). The minimal number of spike-induced voltage jumps needed for a neuron to reach VT beginning at εr, computed from Eq. (14), is ∼ 100. In the simulations, N = 10⁴ and p = 0.05. Poisson-distributed random delays were used in the simulation; the average delay was 4 ms. Inset (top): The locations of the displayed voltage probability density functions along the gain curve. The value of the probability density function's maximum decreases with increasing value of fν along the gain curve. Inset (bottom): The probability density functions obtained via simulations do not satisfy the boundary condition (10) due to finite conductance rise and decay times.

FIG. 10: (Color Online) Detail near v = VT of the voltage probability density computed via numerical simulations for a feed-forward neuron with the external driving spike strength f = 0.0025 and the driving strength fν = 40. Other parameter values are τ = 1 (= 20 ms), S = 0, and N = 1. The reversal and threshold potentials satisfy Eqs. (2). The conductance rise time in Eq. (52) is instantaneous, σr = 0. The conductance decay time σ = σd is varied as shown in the legend, with δ-PSP denoting the instantaneous conductance time course used in Eq. (1). Note how the value of ρ(VT) decreases with decreasing σd and vanishes for the δ-PSP.

The two leftmost pairs of gain curves in Fig. 8 correspond to the larger values of the external driving spike strength, f = 0.1 and 0.025, and the proportionally smaller values of the effective network sizes, pN = 10 and 40. For these values of f and pN, the minimal numbers of spike-induced voltage jumps needed to reach threshold are 2 and 8, respectively. From Fig. 8, we see that the theoretically and numerically obtained gain curves still agree quite well for low values of the driving fν, when the firing rates are also low. In particular, the theoretical gain curves can be used to predict quite well when the firing rates will start increasing and when their sizes will become O(1). However, the theoretical curves cannot be used to predict correctly the straight-line asymptotes of the numerically obtained gain curves, because the theoretically predicted slopes are too large. In particular, the straight-line asymptotes of the numerically obtained gain curves all seem to converge to the straight-line asymptote of the mean-driven gain curve corresponding to the same parameter values, in contrast to those of the theoretically predicted gain curves. This discrepancy clearly signifies the breakdown of the validity of the Fokker-Planck equation (20) in the large-f, small-pN regime.

FIG. 11: (Color Online) Voltage probability densities obtained theoretically from the Fokker-Planck equation (20) (top) and computed via numerical simulations (bottom) for the fixed external driving spike strength f = 0.1 and the effective network size pN = 10, for which (20) is not a valid approximation. Other parameter values are τ = 1, S = 0.1, and δs = 0.0002. The reversal and threshold potentials satisfy Eqs. (2). The values of the external driving strength fν increase from the graphs with the highest to the graphs with the lowest maxima. In the simulations, N = 10⁴ and p = 0.001. The minimal number of spike-induced voltage jumps needed for a neuron to reach VT beginning at εr, computed from Eq. (14), is ∼ 2.

To shed further light on how the validity of the Fokker-Planck equation (20) breaks down with increasing external driving spike strength f and decreasing effective network size pN, we display in Fig. 11 the contrast between the voltage probability density functions ρ(v) computed via the Fokker-Planck equation (20) and numerical simulations, for the external driving spike strength f = 0.1 and effective network size pN = 10. For this spike strength and network size, the diffusion approximation leading to the Fokker-Planck equation (20) is clearly invalid, as only two consecutive external driving spikes suffice to propel a network neuron to fire. One can see from Fig. 11 that the two sets of functions ρ(v) still share some amount of similarity, e.g., their means progressively move towards higher membrane-potential values with increasing external driving strength fν. On the other hand, the details of the two density sets are very different, with the numerically obtained densities being much rougher.

Finally, we discuss the implications of thetheoretically-predicted bistability. This bistabilityunderlies the hysteresis that can be observed in thesimulations: By slowly ramping the strength fν of theexternal drive up and down through the bistabilityinterval [26], the resulting network firing rate tracesout the bottom and top stable branches of the gaincurve, respectively, and undergoes sharp jumps to theremaining stable branch near the saddle-node bifurcationpoints where one stable branch disappears. The size ofthe hysteresis loops depends on the ramping speed. The

Page 14: Fokker-Planck Description of Conductance-Based Integrate ...homepages.rpi.edu/~kovacg/pdfpapers/gain08.pdf · II. INTEGRATE-AND-FIRE NETWORK We considera conductance-based, IF neuronal


[Figure 12: two panels plotting the firing rate m against fν. Top-panel legend: Fokker-Planck, ramp up, ramp down, mean-driven; the point (B − 1, 0) is marked. Bottom-panel legend: Fokker-Planck, slow ramping, fast ramping.]

FIG. 12: (Color Online) Top: Bistability and hysteresis in gain curves obtained theoretically from the Fokker-Planck equation (20) and the mean-driven approximation (28), and computed via numerical simulations for the external driving spike strength f = 0.002 and the effective network size pN = 500. Note that the mean-driven theory would predict the jump on the up-ramp branch of the gain curve obtained by the simulations to be initiated near the point fν = B − 1, m = 0. Other parameter values are τ = 1, S = 0.2, and δs = 0.0002. The reversal and threshold potentials satisfy Eqs. (2). In the simulations, N = 10^4 and p = 0.05. For each value of fν, τfν = 10 s of the network evolution was simulated. The difference between two consecutive fν-values was ∆(fν) = 0.001, giving the speed 10^{-4} s^{-1}. Poisson-distributed random transmission delays averaging 4 ms were used. Bottom: Hysteresis obtained in the simulations using fast and slow ramping speeds. The hysteretic gain-curve branches obtained for the slow ramping speed are the same as on the top. For the gain curves obtained by fast ramping, an ensemble of 50 simulation results is shown, with N = 10^3, p = 0.5, τfν = 1 s, ∆(fν) = 0.0004, speed = 4 × 10^{-4} s^{-1}.

slower the ramping speed, the closer are the loops to the theoretically-predicted bistability curves. Examples of such hysteresis loops for the small-fluctuation regime are presented in Fig. 12, and show good agreement between the theoretical gain-curve prediction and the hysteretic gain-curve branches computed via numerical simulations. On the top, the result of a simulation with a very slow ramping speed is shown. On the bottom, the results of 50 simulations with a faster ramping speed show that, at faster speeds, jumps are initiated at random fν values in the vicinity of the bifurcation values. Moreover, the resulting hysteresis loops are smaller than for the slower ramping speed. Note that, in the mean-driven limit (f → 0, N → ∞, ν → ∞, fν finite), the jump on the up-ramp branch would be initiated near the point fν = B − 1, m = 0. We should remark that the probability of fluctuation-induced random transitions between the two stable branches is very small in our small-fluctuation regime, and we have not observed them in our simulations. Moreover, due to the random transmission delays incorporated in our system, the asynchronous regime becomes more stable. We have thus not yet observed switching between the asynchronous and fast oscillatory regimes, which were reported for a discrete model without leakage in [74].
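The up-and-down ramping protocol used to expose the hysteresis can be mimicked on a much simpler system. The following sketch uses a hypothetical scalar rate model, dm/dt = −m + tanh(4m) + d, chosen only because it is bistable in a window of the drive d; it is not taken from the paper. Ramping d up and then down tracks different stable branches through the bistable window:

```python
import math

def steady_response(drive, m0, dt=0.01, steps=2000):
    # Relax dm/dt = -m + tanh(4 m) + drive from the initial condition m0
    # by forward Euler; the system is bistable for a window of drives.
    m = m0
    for _ in range(steps):
        m += dt * (-m + math.tanh(4.0 * m) + drive)
    return m

# Ramp the drive up and then down through the bistable window [-1, 1],
# always continuing from the previously tracked state.
drives = [i * 0.01 - 1.0 for i in range(201)]
m = -2.0
up = []
for d in drives:
    m = steady_response(d, m)
    up.append(m)
m = 2.0
down = []
for d in reversed(drives):
    m = steady_response(d, m)
    down.append(m)
down.reverse()

i0 = len(drives) // 2          # index of drive ~ 0, inside the bistable window
print(up[i0], down[i0])        # up-ramp and down-ramp disagree: hysteresis
```

At drive ≈ 0 the up-ramp sits on the lower branch and the down-ramp on the upper branch; the two traces only merge again past the saddle-node points, which is the hysteresis-loop structure described above.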

X. DISCUSSION

In this work, we have addressed the basic properties of the Fokker-Planck equation describing the conductance-based IF neuronal network model. In addition to the assumption that the dynamics of a neuronal network can be adequately captured by one-point statistics, three main assumptions lead to the description using the Fokker-Planck equation. The first is the vanishing of the conductance rise and decay time-scales. The second is that the network dynamics is asynchronous, and the spike times of the entire network can be approximated by a Poisson train. The third is the diffusion approximation, justified when each neuron receives large numbers of spikes, each inducing only a very small change of this neuron's membrane potential, which appears to be the case in some cortical neurons in vivo [78].
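The setting of the second and third assumptions can be illustrated with a minimal event-driven simulation of a single passive membrane receiving Poisson-timed conductance kicks. The sketch below is an assumption-laden toy, not the paper's network: the normalized constants (τ = 1, εr = 0, εE = 14/3), the drive values f and ν, and the jump rule v → v + f(εE − v) are all illustrative. Because the jump is linear in v, the steady-state mean ⟨v⟩ = (εr/τ + νfεE)/(1/τ + νf) is exact and serves as a check:

```python
import math
import random

random.seed(1)

tau, eps_r, eps_E = 1.0, 0.0, 14.0 / 3.0   # assumed normalized constants
f, nu = 0.01, 10.0                          # illustrative spike strength and rate
T = 2000.0                                  # total simulated time

v, t, integral = eps_r, 0.0, 0.0
while t < T:
    dt = random.expovariate(nu)             # waiting time to the next Poisson spike
    dt = min(dt, T - t)
    # Exact leak between spikes: v(s) = eps_r + (v - eps_r) exp(-s/tau),
    # so the time integral over the interval is available in closed form.
    integral += eps_r * dt + (v - eps_r) * tau * (1.0 - math.exp(-dt / tau))
    v = eps_r + (v - eps_r) * math.exp(-dt / tau)
    t += dt
    if t < T:
        v += f * (eps_E - v)                # small conductance kick toward eps_E

mean_sim = integral / T
mean_exact = (eps_r / tau + nu * f * eps_E) / (1.0 / tau + nu * f)
print(mean_sim, mean_exact)
```

With many small kicks per membrane time constant (νf small but ν large), the voltage path is well described by drift plus small fluctuations, which is the regime in which the diffusion approximation is justified.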

We have addressed the consequences of the above second and third assumptions in Sec. IX. In particular, we showed that the gain curves and probability density functions obtained via the Fokker-Planck equation (20) closely match those obtained using numerical simulations of an IF network with short but finite conductance rise and decay times in the regime of small synaptic input fluctuations in which the two assumptions are valid. We also described how this comparison deteriorates with increasing amounts of synaptic input fluctuations.

We should remark that we avoided numerical simulations with the instantaneous conductance time scale and the related question of whether and how the solutions of the Fokker-Planck equation (20) and the corresponding solutions of the kinetic equation (5) converge to one another in the limit as the strength of the external drive spikes f vanishes, and the Poisson rate ν of the external drive and the network size N increase without a bound. This question is non-trivial in particular because Eq. (20) is not the corresponding asymptotic limit of Eq. (5); instead, it is the second-order truncation, for f small and ν and N large but finite, in the Kramers-Moyal expansion of Eq. (5) [27]. In the time-independent case, the true asymptotic limit is the mean-driven equation (24), which is equivalent to the first-order truncation of Eq. (5), and is thus a highly singular limit. The likely complicated nature of the approach to this limit can be gleaned from the highly oscillatory behavior of the voltage probability density function, described by an equation analogous to (5), even for a feed-forward, current-based IF neuron [79]. Therefore, studying this limit will be relegated to a future publication.
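For orientation, the generic Kramers-Moyal structure referred to here can be written, in standard form as in [27], for a one-dimensional density ρ(v, t) with jump moments D^{(n)} (this is generic notation, not the paper's):

```latex
\frac{\partial \rho(v,t)}{\partial t}
  = \sum_{n=1}^{\infty} \frac{(-1)^n}{n!}
    \frac{\partial^n}{\partial v^n}\left[ D^{(n)}(v)\,\rho(v,t) \right],
\qquad
D^{(n)}(v) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\,
    \bigl\langle [\Delta v]^n \bigr\rangle .
```

Truncating after n = 1 gives the advective, mean-driven description, while truncating after n = 2 gives a Fokker-Planck equation. For Poisson-driven voltage jumps of amplitude O(f) arriving at rate O(ν), one has D^{(n)} = O(νf^n); in the strict limit f → 0 with νf fixed, every term with n ≥ 2 vanishes, which is why the true asymptotic limit is the mean-driven equation rather than Eq. (20).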

The assumption of asynchronous dynamics becomes violated for small values of the input fluctuations, when the network exhibits synchronous oscillations. As was seen in Sec. IX, in this case, decreasing the synaptic release rate and/or including random transmission delays [75–77] helps to stabilize the asynchronous solutions, thus giving rise to the corresponding gain curves described here. Oscillations in a current-driven IF model analogous to (1), with a time-periodically modulated Poisson rate of the external drive, were studied in [46, 53, 80]. In our recent study [67] of synchrony in the current-driven version of the IF model (1), the asynchronous solutions of the current-driven IF model and the bistability of its gain curves were similarly addressed using random transmission delays.

Incorporation of finite conductance rise and decay times changes the kinetic theory approach dramatically. In particular, Eqs. (5) and (20) in this case become partial differential(-difference) equations with time, voltage, and conductance(s) as independent variables, and an additional closure needs to be employed to address the voltage statistics alone [54, 55]. After the diffusion approximation leading to the analog of Eq. (20) is made, this closure is obtained via the maximal entropy principle [56]. The resulting kinetic theory consists of a system of equations, which in the simplest scenario describe the voltage probability density and the first conductance moment conditioned on the voltage value. The Fokker-Planck equation (20) can be recovered from this system in the limit of vanishing conductance time scales, when the conditional conductance moment becomes slaved to the voltage probability density ρ(v). However, while the periodic flux boundary condition (21) is retained in this description, the absorbing boundary condition (10) is replaced by a periodic boundary condition for the conditional conductance moment, i.e., the condition (VT − εE)ρ(VT) = (εr − εE)ρ(εr). The difference in the boundary conditions stems from the non-commutation of two different limits: vanishing fluctuations and vanishing conductance time-scales. The Fokker-Planck descriptions with the two differing boundary conditions produce very similar results [55], except for the values of ρ(v) very close to the firing threshold v = VT. Note that the probability density functions in Fig. 9, in particular, in its inset, further demonstrate this point. (See also the corresponding discussion in Sec. IX.) Therefore, both sets of boundary conditions can be useful, but the conditions for the use of either should depend on the physiological modeling requirements in each specific case under investigation.

Finally, we remark that while this paper addresses asynchronous dynamics of an idealized neuronal network model, the underlying bifurcation scenario of the gain curves, which are bistable in the regime of small synaptic-input fluctuations and become single-valued as the amount of synaptic-input fluctuations increases, remains valid in much more realistic neuronal models. In particular, similar bistability is present in the gain curves of excitatory complex cells in models of orientation and spatial-frequency tuning in the primary visual cortex, which cover four orientation pinwheels with ∼ 10^4 neurons, contain both excitatory and inhibitory as well as simple and complex cells, and incorporate realistic cortical architecture [26, 81]. In these models, the amount of fluctuations is controlled by the sparsity of synaptic connections and/or the ratio between the amounts of fast and slow excitatory conductances. The knowledge of the underlying bifurcation structure makes it possible to constrain the model parameters, in particular, the amount of sparsity, so that the models operate in physiologically plausible regimes and reproduce the correct properties of the tuning curves [26, 81]. This bifurcation scenario was therefore a crucial element in these modeling works for understanding the mechanisms that produce the correct tuning curves in the computational models of the primary visual cortex.

Acknowledgments

A.V.R. and D.C. were partly supported by NSF grants DMS-0506396 and DMS-0507901, and a grant from the Swartz Foundation. L.T. was supported by the NSF grant DMS-0506257. G.K. was supported by NSF grant DMS-0506287, and gratefully acknowledges the hospitality of New York University's Courant Institute of Mathematical Sciences and Center for Neural Science.

Appendix A: Uniform Asymptotics for the Voltage Probability Density

In this appendix, we present a derivation of the asymptotic formula (40) for the probability density function ρ(x), which was used in Sec. VII.

For this purpose, we use the result of [71] concerning the asymptotic behavior of the complementary incomplete Gamma function, defined by

\Gamma(\alpha, z) = \int_z^\infty t^{\alpha - 1} e^{-t}\, dt    (A1)

for α, z > 0, and by analytic continuation elsewhere. Defining

\eta = \operatorname{sign}\!\left(\frac{z}{\alpha} - 1\right) \sqrt{2\left(\frac{z}{\alpha} - 1 - \ln\frac{z}{\alpha}\right)}    (A2)


for z/α > 0, and by analytic continuation elsewhere, [71] states that the asymptotic expansion

e^{-i\pi\alpha}\, \Gamma\!\left(-\alpha, z e^{-\pi i}\right) \sim \frac{1}{\Gamma(\alpha + 1)} \left[ \pi i\, \operatorname{erfc}\!\left(-i\eta \sqrt{\frac{\alpha}{2}}\right) + \sqrt{\frac{2\pi}{\alpha}}\, e^{\alpha\eta^2/2} \left(\frac{1}{\eta} - \frac{1}{z/\alpha - 1}\right) \right] + O\!\left(e^{\alpha\eta^2/2}\, \alpha^{-3/2}\right)    (A3)

holds for the complementary incomplete Gamma function (A1) uniformly as α → ∞ and |arg z| < π − δ, for any small δ. Here, erfc(·) is the complementary error function, \operatorname{erfc}(z) = (2/\sqrt{\pi}) \int_z^\infty e^{-t^2}\, dt. We remark that, as z → α, the apparent singularity in (A3) cancels out because 1/η − 1/(z/α − 1) → 1/3.
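This cancellation is easy to confirm numerically; a small stdlib-only sketch, with λ = z/α and η as in (A2):

```python
import math

def eta(lam):
    # eta(lambda) = sign(lambda - 1) * sqrt(2 (lambda - 1 - ln lambda)), as in (A2)
    return math.copysign(math.sqrt(2.0 * (lam - 1.0 - math.log(lam))), lam - 1.0)

# 1/eta - 1/(lambda - 1) stays finite and tends to 1/3 as lambda -> 1,
# from either side of the transition point.
for lam in (1.5, 1.1, 1.01, 1.001, 0.999):
    print(lam, 1.0 / eta(lam) - 1.0 / (lam - 1.0))
```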

We are interested in the asymptotic expansion of the probability density ρ(x) in Eq. (35) for small values of q^2/a, which is uniform in x, a, and q, in particular around the transition point a = B. The key to this expansion is the corresponding asymptotic expansion of the integral in Eq. (35),

F(x) = \int_x^B s^{-a/q^2 - 1}\, e^{s/q^2}\, ds.    (A4)

Using the substitution s = q^2 v e^{i\pi}, Eq. (A4) can be rewritten in the form

F(x) = q^{-2a/q^2}\, e^{-i\pi a/q^2} \left[ \Gamma\!\left(-\frac{a}{q^2}, \frac{x}{q^2}\, e^{-i\pi}\right) - \Gamma\!\left(-\frac{a}{q^2}, \frac{B}{q^2}\, e^{-i\pi}\right) \right],    (A5)

where Γ(·, ·) is again the complementary incomplete Gamma function (A1). Thus, in our problem, z = x/q^2 and α = a/q^2, and we can again put η = η(x) as in Eq. (A2). Equations (A3) and (A5), and Stirling's formula for Γ(a/q^2 + 1), then give the two-term expansion

F(x) \sim \frac{q}{\sqrt{2\pi}}\, a^{-a/q^2 - 1/2}\, e^{a/q^2} \left[ i\pi\, \operatorname{erfc}\!\left(-\frac{i\eta(x)}{q}\sqrt{\frac{a}{2}}\right) - \sqrt{\frac{2\pi}{a}}\, q\, e^{a\eta^2(x)/2q^2} \left(\frac{1}{x/a - 1} - \frac{1}{\eta(x)}\right) - i\pi\, \operatorname{erfc}\!\left(-\frac{i\eta(B)}{q}\sqrt{\frac{a}{2}}\right) + \sqrt{\frac{2\pi}{a}}\, q\, e^{a\eta^2(B)/2q^2} \left(\frac{1}{B/a - 1} - \frac{1}{\eta(B)}\right) \right].    (A6)
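As a consistency check on the expansion (A6), one can compare it against direct quadrature of (A4). In the stdlib-only sketch below (all parameter values illustrative), the erfc terms are rewritten through the Dawson integral D via erfc(−iz) = 1 + 2i e^{z²} D(z)/√π, so the iπ constants cancel between the x and B groups and only real quantities remain:

```python
import math

def eta(lam):
    # eta(lambda) as in (A2)
    return math.copysign(math.sqrt(2.0 * (lam - 1.0 - math.log(lam))), lam - 1.0)

def simpson(g, lo, hi, n=20000):
    # Composite Simpson rule; n must be even.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * g(lo + k * h)
    return s * h / 3.0

def dawson(y):
    # Dawson integral D(y) = exp(-y^2) * Integral_0^y exp(t^2) dt (odd function).
    if y < 0.0:
        return -dawson(-y)
    if y == 0.0:
        return 0.0
    return math.exp(-y * y) * simpson(lambda t: math.exp(t * t), 0.0, y, 2000)

# Illustrative parameters with q^2/a small; roles as in (A4), with x < a < B.
a, q, B, x = 1.0, 0.2, 1.2, 0.5

# Direct quadrature of (A4).
F_exact = simpson(lambda s: s ** (-a / q**2 - 1.0) * math.exp(s / q**2), x, B)

def endpoint(u):
    # One endpoint group of (A6), in real (Dawson) form after the i*pi terms cancel.
    lam = u / a
    e = eta(lam)
    y = e * math.sqrt(a / 2.0) / q
    return math.exp(y * y) * (2.0 * math.sqrt(math.pi) * dawson(y)
        + math.sqrt(2.0 * math.pi / a) * q * (1.0 / (lam - 1.0) - 1.0 / e))

pref = q / math.sqrt(2.0 * math.pi) * a ** (-a / q**2 - 0.5) * math.exp(a / q**2)
F_asym = pref * (endpoint(B) - endpoint(x))

rel_err = abs(F_asym - F_exact) / F_exact
print(F_exact, F_asym, rel_err)
```

The printed relative difference should be small for small q^2/a, consistent with the O(α^{−3/2}) error term in (A3).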

From (35), (A4), (A6), and the formula \operatorname{erfc}(-iz) = 1 + 2i e^{z^2} D(z)/\sqrt{\pi}, with D(z) being the Dawson integral (42), we finally obtain the expansion (40).

We remark that the simplified expansions (45) and (51) can easily be obtained from (40) using the large-z asymptotic behavior of the Dawson integral, D(z) ∼ 1/(2z) + O(1/z^3).

Numerically, the expansion (40) was evaluated as follows. The Dawson integral (42) was computed using its piecewise-smooth approximation by rational functions and continued fractions with 8 terms, developed in [82]. The difference 1/η(x) − 1/(x/a − 1) was computed exactly for |x − a| ≥ δ_0, and using a 10-term Taylor series expansion around x = a for |x − a| ≤ δ_0, with δ_0 = 10^{-2}.

[1] L. Boltzmann, Wien. Ber. 58, 517 (1868); reprinted in Boltzmann Abhandlungen, Vol. 1 (Barth, Leipzig, 1909), p. 49.
[2] L. Boltzmann, Wien. Ber. 66, 275 (1872); reprinted in Boltzmann Abhandlungen, Vol. 1 (Barth, Leipzig, 1909), p. 316.
[3] K. Hasselmann, J. Fluid Mech. 12, 481 (1962).
[4] K. Hasselmann, J. Fluid Mech. 15, 273 (1962).
[5] D. Benney and P. Saffman, Proc. R. Soc. London Ser. A 289, 301 (1965).
[6] V. E. Zakharov, Sov. Phys. JETP 24, 740 (1967).
[7] V. E. Zakharov and N. Filonenko, Sov. Phys. Dokl. 11, 881 (1967).
[8] V. E. Zakharov, J. Appl. Mech. Tech. Phys. 2, 190 (1968).
[9] D. J. Benney and A. C. Newell, Stud. Appl. Math. 48, 29 (1969).
[10] H. R. Wilson and J. D. Cowan, Biophys. J. 12, 1 (1972).
[11] B. Knight, J. Gen. Physiol. 59, 734 (1972).
[12] A. N. Pushkarev and V. E. Zakharov, Phys. Rev. Lett. 76, 3320 (1996).
[13] A. Majda, D. McLaughlin, and E. Tabak, J. Nonlinear Sci. 7, 9 (1997).
[14] D. Cai, A. J. Majda, D. W. McLaughlin, and E. G. Tabak, Proc. Natl. Acad. Sci. USA 96, 14216 (1999).
[15] D. Cai, A. J. Majda, D. W. McLaughlin, and E. G. Tabak, Physica D 152-153, 551 (2001).
[16] A. I. Dyachenko, A. O. Korotkevich, and V. E. Zakharov, Phys. Rev. Lett. 92, 134501 (2004).
[17] V. E. Zakharov, A. O. Korotkevich, A. Pushkarev, and D. Resio, Phys. Rev. Lett. 99, 164501 (2007).
[18] D. Somers, S. Nelson, and M. Sur, J. Neurosci. 15, 5448 (1995).
[19] T. Troyer, A. Krukowski, N. Priebe, and K. Miller, J. Neurosci. 18, 5908 (1998).
[20] D. McLaughlin, R. Shapley, M. Shelley, and J. Wielaard, Proc. Natl. Acad. Sci. USA 97, 8087 (2000).
[21] L. Tao, M. Shelley, D. McLaughlin, and R. Shapley, Proc. Natl. Acad. Sci. USA 101, 366 (2004).
[22] D. Cai, A. Rangan, and D. McLaughlin, Proc. Natl. Acad. Sci. USA 102, 5868 (2005).
[23] A. V. Rangan, D. Cai, and D. W. McLaughlin, Proc. Natl. Acad. Sci. USA 102, 18793 (2005).
[24] R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower, M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris Jr., et al., J. Comput. Neurosci. 23, 349 (2007).
[25] N. Carnevale and M. Hines, The NEURON Book (Cambridge University Press, Cambridge, 2006).
[26] L. Tao, D. Cai, D. McLaughlin, M. Shelley, and R. Shapley, Proc. Natl. Acad. Sci. USA 103, 12911 (2006).
[27] H. Risken, The Fokker-Planck Equation: Methods of Solution and Applications (Springer-Verlag, New York, 1989).
[28] D. Nykamp and D. Tranchina, J. Comput. Neurosci. 8, 19 (2000).
[29] M. Rudolph and A. Destexhe, Neural Comput. 15, 2577 (2003).
[30] M. Rudolph and A. Destexhe, Neural Comput. 17, 2301 (2005).
[31] M. Rudolph and A. Destexhe, Neural Comput. 18, 2917 (2006).
[32] M. J. E. Richardson, Phys. Rev. E 69, 051918 (2004).
[33] M. J. E. Richardson and W. Gerstner, Neural Comput. 17, 923 (2005).
[34] B. Lindner and A. Longtin, Neural Comput. 18, 1896 (2006).
[35] H. Wilson and J. Cowan, Kybernetik 13, 55 (1973).
[36] A. Treves, Network 4, 259 (1993).
[37] R. Ben-Yishai, R. Bar-Or, and H. Sompolinsky, Proc. Natl. Acad. Sci. USA 92, 3844 (1995).
[38] M. Shelley and D. McLaughlin, J. Comput. Neurosci. 12, 97 (2002).
[39] P. Bressloff, J. Cowan, M. Golubitsky, P. Thomas, and M. Wiener, Phil. Trans. R. Soc. Lond. B 356, 299 (2001).
[40] W. Wilbur and J. Rinzel, J. Theor. Biol. 105, 345 (1983).
[41] L. Abbott and C. van Vreeswijk, Phys. Rev. E 48, 1483 (1993).
[42] T. Chawanya, A. Aoyagi, T. Nishikawa, K. Okuda, and Y. Kuramoto, Biol. Cybern. 68, 483 (1993).
[43] D. J. Amit and N. Brunel, Cereb. Cortex 7, 237 (1997).
[44] G. Barna, T. Grobler, and P. Erdi, Biol. Cybern. 79, 309 (1998).
[45] J. Pham, K. Pakdaman, J. Champagnat, and J. Vibert, Neural Networks 11, 415 (1998).
[46] N. Brunel and V. Hakim, Neural Comput. 11, 1621 (1999).
[47] W. Gerstner, Neural Comput. 12, 43 (2000).
[48] A. Omurtag, B. Knight, and L. Sirovich, J. Comput. Neurosci. 8, 51 (2000).
[49] A. Omurtag, E. Kaplan, B. Knight, and L. Sirovich, Network 11, 247 (2000).
[50] D. Nykamp and D. Tranchina, Neural Comput. 13, 511 (2001).
[51] E. Haskell, D. Nykamp, and D. Tranchina, Network: Comput. Neural Syst. 12, 141 (2001).
[52] A. Casti, A. Omurtag, A. Sornborger, E. Kaplan, B. Knight, J. Victor, and L. Sirovich, Neural Comput. 14, 957 (2002).
[53] N. Fourcaud and N. Brunel, Neural Comput. 14, 2057 (2002).
[54] D. Cai, L. Tao, M. Shelley, and D. McLaughlin, Proc. Natl. Acad. Sci. USA 101, 7757 (2004).
[55] D. Cai, L. Tao, A. V. Rangan, and D. W. McLaughlin, Commun. Math. Sci. 4, 97 (2006).
[56] A. V. Rangan and D. Cai, Phys. Rev. Lett. 96, 178101 (2006).
[57] A. N. Burkitt, Biol. Cybern. 95, 1 (2006).
[58] S. Redman, Physiol. Rev. 70, 165 (1990).
[59] N. Otmakhov, A. M. Shirke, and R. Malinow, Neuron 10, 1101 (1993).
[60] C. Allen and C. F. Stevens, Proc. Natl. Acad. Sci. USA 91, 10380 (1994).
[61] N. R. Hardingham and A. U. Larkman, J. Physiol. 507, 249 (1998).
[62] S. J. Pyott and C. Rosenmund, J. Physiol. 539, 523 (2002).
[63] M. Volgushev, I. Kudryashov, M. Chistiakova, M. Mukovski, J. Niesmann, and U. T. Eysel, J. Neurophysiol. 92, 212 (2004).
[64] E. Cinlar, in Stochastic Point Processes: Statistical Analysis, Theory, and Applications, edited by P. Lewis (Wiley, New York, 1972), pp. 549-606.
[65] M. Shelley, D. McLaughlin, R. Shapley, and J. Wielaard, J. Comput. Neurosci. 13, 93 (2002).
[66] Y. Kuramoto, Physica D 50, 15 (1991).
[67] K. A. Newhall, G. Kovacic, P. R. Kramer, D. Zhou, A. V. Rangan, and D. Cai, Commun. Math. Sci., submitted (2009).
[68] L. J. Slater, Confluent Hypergeometric Functions (Cambridge University Press, New York, 1960).
[69] L. J. Slater, Generalized Hypergeometric Functions (Cambridge University Press, Cambridge, 1966).
[70] K. E. Muller, Numer. Math. 90, 179 (2001).
[71] N. M. Temme, Methods Appl. Anal. 3, 335 (1996).
[72] F. W. J. Olver, Asymptotics and Special Functions, AKP Classics (A K Peters, Wellesley, MA, 1997).
[73] R. E. Mirollo and S. H. Strogatz, SIAM J. Appl. Math. 50, 1645 (1990).
[74] R. E. L. DeVille and C. S. Peskin, Bull. Math. Biol. 70, 1608 (2008).
[75] P. L. Nunez, Neocortical Dynamics and Human EEG Rhythms (Oxford University Press, New York, 1995).
[76] V. Bringuier, F. Chavane, L. Glaeser, and Y. Fregnac, Science 283, 695 (1999).
[77] R. Eckhorn, A. M. Gail, A. Bruns, A. Gabriel, B. Al-Shaikhli, and M. Saam, IEEE Trans. Neural Netw. 15, 1039 (2004).
[78] D. Pare, E. Shink, H. Gaudreau, A. Destexhe, and E. Lang, J. Neurophysiol. 79, 1450 (1998).
[79] L. Sirovich, A. Omurtag, and B. W. Knight, SIAM J. Appl. Math. 60, 2009 (2000).
[80] N. Brunel, J. Comput. Neurosci. 8, 183 (2000).
[81] W. Zhu, M. Shelley, and R. Shapley, J. Comput. Neurosci. 26, 271 (2009).
[82] W. J. Cody, K. A. Paciorek, and H. C. Thacher, Jr., Math. Comp. 24 (1970).