

Hindawi Publishing Corporation
Computational Intelligence and Neuroscience
Volume 2012, Article ID 968272, 15 pages
doi:10.1155/2012/968272

Research Article

Spike-Timing-Dependent Plasticity and Short-Term Plasticity Jointly Control the Excitation of Hebbian Plasticity without Weight Constraints in Neural Networks

Subha Fernando1 and Koichi Yamada2

1 Information Science and Control Engineering, Graduate School of Engineering, Nagaoka University of Technology, 1603-1 Kamitomioka-machi, Nagaoka, Niigata 940-2188, Japan

2 Management and Information Systems Science, Faculty of Engineering, Nagaoka University of Technology, 1603-1 Kamitomioka-machi, Nagaoka, Niigata 940-2188, Japan

Correspondence should be addressed to Subha Fernando, [email protected]

Received 3 September 2012; Accepted 28 November 2012

Academic Editor: Vince D. Calhoun

Copyright © 2012 S. Fernando and K. Yamada. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Hebbian plasticity precisely describes how synapses increase their synaptic strength according to the correlated activity between two neurons; however, it fails to explain how the same activity can weaken those synapses. Recent literature has proposed spike-timing-dependent plasticity and short-term plasticity acting on multiple dynamic stochastic synapses as mechanisms that can control synaptic excitation and remove many user-defined constraints. Under this hypothesis, a network model was implemented that grants more computational power to receptors, so that the behavior at a synapse is defined by the collective dynamic activities of stochastic receptors. An experiment was conducted to analyze two questions: can spike-timing-dependent plasticity interplay with short-term plasticity to balance the excitation of Hebbian neurons without weight constraints? If so, what underlying mechanisms help neurons maintain such excitation in a computational environment? According to our results, both plasticity mechanisms work together to balance the excitation of the neural network, as our neurons stabilized their weights for Poisson inputs with mean firing rates from 10 Hz to 40 Hz. The behavior generated by the two neurons was similar to that discussed under synaptic redistribution: synaptic weights were stabilized while there was a continuous increase in the presynaptic probability of release and a higher turnover rate of postsynaptic receptors.

1. Introduction

Even though Hebbian synaptic plasticity is a powerful concept that explains how correlated activity between presynaptic and postsynaptic neurons increases synaptic strength, its value as a learning postulate has been diminished because it does not sufficiently explain how synaptic weakening occurs. In a simple mathematical interpretation of the Hebbian learning algorithm, the synaptic strength between two neurons increases if their activity is correlated and decreases otherwise [1]. This interpretation of Hebbian plasticity allows boundless growth or weakening of the synaptic strength between the two neurons [2]. Even though Hebbian plasticity has been supported by biological experiments on long-term plasticity, it is still not completely understood how Hebbian plasticity can avoid synaptic saturation and introduce the competition between synapses needed to balance the excitation of Hebbian neurons. Normalization of weights [2], BCM theory [3], and spike-timing-dependent plasticity (STDP) [4] are the most biologically significant mathematical mechanisms discussed in the literature to address this issue effectively. Weight normalization has been introduced in either additive or multiplicative modes to scale the synaptic weights and to control the continuous growth or weakening of synaptic strength; however, these user-defined weight constraints significantly affect the dynamic behavior of the applied neural network and limit the performance of learning [5].


BCM theory is another significant approach; it explains synaptic activity as a temporal competition between input patterns. Synaptic inputs that drive postsynaptic firing above a threshold rate result in an increase of synaptic strength, while inputs that drive postsynaptic firing below the threshold result in a decrease of synaptic strength. The BCM approach mainly considers instantaneous postsynaptic firing frequencies for its threshold-updating mechanism, rather than spike arrival times at the synapses. Recent literature has recognized STDP [4] as a key mechanism of how information is processed in the brain. STDP is a form of long-term plasticity that depends merely on the relative timing of presynaptic and postsynaptic action potentials [6, 7]. Although the process and the role of STDP in information transfer in some areas of the human brain during development are still not clear [8, 9], it has been shown that average-case versions of the perceptron convergence theorem hold for STDP in simple models of spiking neurons for both uncorrelated and correlated Poisson input spike trains. It has further been shown that STDP not only changes the weight of synapses but also modulates the initial release probability of dynamic synapses [10]. Moreover, STDP has been tested in a variety of computational environments, especially to balance the excitation of Hebbian neurons by introducing synaptic competition [11–13] and to identify repetitive patterns in continuous spike trains [14, 15]. These experimental studies of synaptic competition using STDP come in two forms: additive and multiplicative. In the additive form, for example as in [11], synapses competed against each other to control the timing of postsynaptic firing, but this approach assumed that synaptic strength does not scale synaptic efficacy, and hard constraints were used to define the efficacy boundaries.
In the multiplicative form, synaptic scaling was separately introduced to the synaptic weight as a function of postsynaptic activity [12, 13]. However, because of the reduced competition between synapses, for strong spike input correlations all synapses stabilized into similar equilibria. In sum, many applications of STDP to control the excitation of Hebbian neurons depend on user-defined constraints in the weight algorithm, which ultimately limit the performance of learning. To alleviate this limitation of using hard weight constraints in the learning process, another significant approach has been discussed in the literature: removing the correlation in input spike trains by using recurrent neural networks [16]. Their results claim that the correlation in spike inputs can be reduced by recurrent network dynamics. The experiment was conducted on two types of recurrent neural networks: one with purely inhibitory neurons and one with mixed inhibitory-excitatory neurons. At low firing frequencies, response fluctuations were reduced in the recurrent network with inhibitory neurons compared to a feed-forward network with inhibitory neurons. Moreover, in the case of homogeneous excitatory and inhibitory subpopulations, negative feedback helps to suppress the population rate in both the recurrent and the feed-forward network. Because inhibitory feedback effectively suppresses pairwise correlations and population-rate fluctuations in recurrent neural networks,

they suggested using inhibitory neurons to decorrelate the input spike correlations. Moving one step further by combining the underlying concepts of [17, 18], that is, nonlinear temporally asymmetric Hebbian plasticity and the recent experimental observation of STDP at inhibitory synapses, Luz and Shamir [19] discussed the stability of Hebbian plasticity in feed-forward networks. Their findings supported the claim that temporally asymmetric Hebbian STDP of inhibitory synapses is responsible for balancing transient feed-forward excitation and inhibition. Using STDP rules, the stochastic weights on inhibitory synapses were defined to generate negative feedback and stabilized into a unimodal weight distribution. The approach was tested on two network structures: a feed-forward inhibitory synaptic population, and a feed-forward inhibitory and excitatory synaptic population. The former structure converged to a uniform solution for correlated input spikes, but the latter destabilized, and the excitatory synaptic weights segregated according to the correlation structure in the input spike train. Even though the proposed model with inhibitory neurons makes learning more sensitive to the correlation structure, the stability of the network still needs to be validated when correlation between the excitatory and inhibitory synapses is present.

However, the specifics of a biologically plausible model of plasticity that can account for the observed synaptic patterns have remained elusive. To obtain a biologically plausible model and remove the instability in Hebbian plasticity, many mechanisms have been discussed in recent findings. One remarkable suggestion is to combine STDP with multiple dynamic and stochastic synaptic connections, which enable neurons to contact each other simultaneously through multiple synaptic communication pathways that are highly sensitive to dynamic updates and stochastically adjust their states according to the activity history. Furthermore, the strength of these individual connections between neurons is necessarily a function of the number of synaptic contacts, the probability of neurotransmitter release, and postsynaptic depolarization [20]. These synapses are further capable of adjusting their own probability of neurotransmitter release (pr) according to the history of short-term activity [20, 21], which provides an elegant way of introducing activity-dependent modifications to synapses and of generating competition between synapses [22]. Based on this hypothesis, many approaches have been proposed that model the behavior at synapses stochastically [23, 24]; the model we propose here differs from the others because of the computational power granted to the modeled receptors, so that the behavior at a single synapse is determined by the collective activities of these dynamic stochastic receptors. Using this model, an experiment was conducted to answer the following two questions: first, can STDP and short-term plasticity control the excitation of Hebbian neurons in neural networks without weight constraints? Second, if the excitation was controlled, what parameters help STDP in such a controlling activity?

A fully connected neural network was developed with two neurons, in which each neuron consisted of thousands of computational units. These computational units were


categorized as transmitters and receptors according to the role they played in the network. A unit was called a transmitter if it transmitted signals to other neurons, and a unit was called a receptor if it received signals into the neuron. The receptors of a given neuron were clustered into receptor groups. According to the excitation and inhibition of the model neuron, these computational units could update their states dynamically from the active state to the inactive state or vice versa. Only when a computational unit was in the active state could it successfully transmit signals between neurons. Transmitters of the presynaptic neuron and receptors of the corresponding receptor group of the postsynaptic neuron together simulated the process of a single synapse. A transmitter at the presynaptic neuron can be considered a synaptic vesicle that can release only a single neurotransmitter at a time, and the model receptors can be considered postsynaptic receptors at the synaptic cleft. With these features, the excitation of a neuron at a particular synapse in our network was determined as a function of the number of active transmitters in the presynaptic neuron, the transmitters' release probability, and the number of active receptors in the corresponding receptor group of the postsynaptic neuron. First, in order to analyze how a network with two neurons could balance its excitation when Poisson inputs with mean rates of 10 Hz and 40 Hz were applied, only one neuron was fed Poisson inputs while the other neuron was left to adjust itself according to the presynaptic fluctuations. The neurons stabilized their weights for both Poisson inputs, while the weight values for the 10 Hz Poisson inputs stabilized into a higher range compared to when the 40 Hz Poisson inputs were applied. The analysis of the internal dynamics of the neurons shows that the neurons behaved similarly to the process discussed in synaptic redistribution, where long-term plasticity interacts with short-term depression.
Further, the neurons played complementary roles to maintain the network's excitation at an operational level. These compensatory roles did not damage the network's biological plausibility, as we could see that the neurons worked as integrators, integrating higher synaptic weighted inputs into lower outputs and vice versa. Finally, the network behavior was evaluated for other Poisson inputs with mean rates in the range of 10 Hz to 40 Hz; we observed that as the mean rate of the Poisson inputs increases, the immediate postsynaptic neuron increases its synaptic weights, while the immediate presynaptic neuron of those inputs settles into a state complementary to the immediate postsynaptic neuron.

2. Method

A fully connected network with two neurons was created. Each neuron was attached to thousands of computational units, which were in either an active state or an inactive state according to the excitation and inhibition of the attached neuron. Units attached to a neuron were classified into two groups based on the role they played in the neuron. A computational unit that transmitted signals from the attached neuron to other neurons was called a transmitter

Figure 1: Structure of neuron A, with its receptor groups (thresholds θ1, . . . , θn) and its transmitter set (threshold θA).

Figure 2: Structure of the neural network: the transmitter sets (TA, θA) and (TB, θB) of neurons A and B form synapses with receptor groups, each characterized by a receptor group R, a weight W, and a threshold θ.

and a computational unit that received signals into the attached neuron from other neurons was called a receptor. Further, the receptors attached to a neuron were clustered into groups so that transmitters from the presynaptic neuron could contact the postsynaptic neuron simultaneously through multiple synaptic connections. Figure 1 shows the structure of our modeled neuron A, with n receptor groups and a transmitter set. Moreover, the transmitters in our presynaptic neurons were similar to synaptic vesicles in real neurons, each holding a single neurotransmitter. The states, either active or inactive, of these transmitters and receptors were modeled using a two-state stochastic process, as explained in the next section. Only when the units were in active states could they reliably transmit or receive signals to or from other neurons.

The transmitters of a presynaptic neuron contacted the receptors of a particular receptor group of the postsynaptic neuron, forming a synapse between the two neurons; see Figure 2. Through the multiple receptor groups of the postsynaptic neuron, presynaptic transmitters could make multiple synaptic connections simultaneously, forming dynamic and stochastic synapses. As depicted in Figure 2, each receptor group R of the postsynaptic neuron and transmitter set T of the presynaptic neuron jointly measured the excitation at


the attached synapse w and balanced the excitation using a threshold θ, as discussed in the next section.

2.1. Process at Dynamic Stochastic Synapses. In defining the process at dynamic stochastic synapses, we are concerned only with the properties and mechanisms of use-dependent plasticity on time scales from a few milliseconds to several minutes. Therefore, use-dependent activity was introduced to our modeled network using short-term plasticity: facilitation and depletion [25, 26]. In defining the probability of neurotransmitter release at our modeled transmitters, it was assumed that facilitation at biological synapses depends only on the external Ca2+ ions that flow into the biological synapse after the arrival of an action potential and on the residual Ca2+ ion concentration that the synapse already has, and that depletion has no influence on Ca2+ ion concentrations and depends merely on the use activity of the synapse. The signal release probability at a transmitter in a synapse was then adopted from the model proposed in [27], which determines the signal release probability pr as a function of the Ca2+ influx into the synapse, vesicle depletion, and the signal arrival time at the transmitter. Only the Ca2+ influx following the arrival of neurotransmitters at the receptors of the postsynaptic neuron was considered when determining the states of the receptors.

Let Ps(ti) be the probability that a signal is released by a transmitter S at time ti, let the train t = {t1, t2, . . . , tn, . . .} consist of the exact signal-releasing times of S, and let S(t) consist of the sequences of times at which S successfully released signals. The map t → S(t) at S forms a stochastic process with two states, Release (R) for ti ∈ S(t) and Failure of Release (F) for ti ∉ S(t). The probability Ps(ti) in (1) describes the signal release probability at time ti by S as a function of the facilitation C(t) ≥ 0 in (2) and the depletion V(t) > 0 in (4) at time t. C0 and V0 are the facilitation and depression constants, respectively. The function C′(s) given in (3) defines the response of C(t) to a presynaptic signal that reached S at time t − s; α is the magnitude of the response. Similarly, V′(s) given in (5) models the response of V(t) to the preceding releases of the synapse S at times t − s ≤ t, and τc and τv are the time-decay constants of facilitation and depression. Maass and Zador [27] allowed S to release the received signal at time t if Ps(ti) > 0. We updated this rule by introducing a threshold θ, so that a transmitter S is allowed to release the received signal if Ps(ti) > θ; we then call S active. Receptors in the postsynaptic neuron were modeled using the same model of Maass and Zador, except that they were not involved in the process of vesicle depletion; therefore, the states of the receptors were determined by setting the depletion V(t) in (1) to unity. Following the recent biological findings of [22], the parameters were initialized to C0 = 20, V0 = 10, τc = 100 ms, τv = 800 ms, and α = 4:

P_s(t_i) = 1 − exp(−C(t_i) · V(t_i)),  (1)

C(t) = C_0 + Σ_{t_i < t} C′(t − t_i),  (2)

C′(s) = α · exp(−s/τ_c),  (3)

V(t) = max(0, V_0 − Σ_{t_i < t, t_i ∈ S(t)} V′(t − t_i)),  (4)

V′(s) = exp(−s/τ_v).  (5)
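As a minimal sketch, the release-probability model of (1)–(5) can be written as follows; parameter values follow the text (C0 = 20, V0 = 10, τc = 100 ms, τv = 800 ms, α = 4), while the function names are illustrative, not from the paper.

```python
import math

C0, V0 = 20.0, 10.0
TAU_C, TAU_V = 100.0, 800.0  # facilitation / depression decay constants (ms)
ALPHA = 4.0                  # magnitude of the facilitation response

def facilitation(t, arrival_times):
    """C(t): baseline C0 plus alpha * exp(-s / tau_c) per earlier arrival (eqs. (2)-(3))."""
    return C0 + sum(ALPHA * math.exp(-(t - ti) / TAU_C) for ti in arrival_times if ti < t)

def depletion(t, release_times):
    """V(t): baseline V0 minus exp(-s / tau_v) per earlier release, floored at 0 (eqs. (4)-(5))."""
    used = sum(math.exp(-(t - ti) / TAU_V) for ti in release_times if ti < t)
    return max(0.0, V0 - used)

def release_probability(t, arrival_times, release_times):
    """P_s(t) = 1 - exp(-C(t) * V(t))  (eq. (1))."""
    return 1.0 - math.exp(-facilitation(t, arrival_times) * depletion(t, release_times))
```

A transmitter would then count as active at time t whenever release_probability(t, . . .) exceeds its threshold θ; for a receptor, V(t) is simply replaced by unity, as described above.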

A modeled neuron maintained threshold values θ for each of its receptor groups and for its set of transmitters. Let R^J_jI denote the jth receptor group of postsynaptic neuron J that contacts transmitters of presynaptic neuron I, and let X^J_jI(t) be the output and θ^J_jI(t) the threshold value of R^J_jI at time step t. Similarly, let T_I denote the transmitters of neuron I, and let O_I(t) be the output and θ_I(t) the threshold value of T_I at time step t. The threshold value of the receptor group R^J_jI was defined as in (6); it increased exponentially as the activity of R^J_jI relative to T_I increased (and decreased when that activity decreased). The threshold value for the transmitters of neuron I, that is, T_I, was defined in (7) as a function of the total synaptic input from all of neuron I's synaptic connections and the total output of the neuron. The threshold values of both neurons were updated every 60 time steps:

θ^J_jI(t) = f(X^J_jI(t) / O_I(t)).  (6)

Let g be the number of receptor groups a neuron has; then

θ_I(t) = f(O_I(t) · Σ_{j=1}^{g} X^I_jJ(t)),  (7)

where f(x) = 1/(1 − e^{−x}), X^J_jI(t) = |R^{Act,J}_jI(t)| / |R^J_jI| for j = 1, 2, . . . , g, and O_I(t) = |T^{Act}_I(t)| / |T_I|. Here |G| is the number of components in G, and |G^{Act}(t)| is the number of active components in G at time step t.

Moreover, signals were propagated between neurons according to the following predefined behavioral rule.

Rule 1. When a receptor receives a signal from the corresponding presynaptic neuron at time step t, the signal is propagated within the network according to the following conditions.

Condition 1. Once a received signal is applied to a receptor, if the receptor is updated to the inactive state, then the received signal is inactivated; otherwise, the signal is propagated to a randomly selected transmitter of the same neuron.

Condition 2. Once a transmitter of a particular neuron receives a signal at time step t, the signal is transmitted to a randomly selected receptor of a randomly selected receptor group of the postsynaptic neuron if the updated state of the transmitter is active; otherwise, the received signal is inactivated.
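One hop of Rule 1 can be sketched as follows; the Neuron container and all names are illustrative, not from the paper, and unit states are reduced to booleans (True = active).

```python
import random

class Neuron:
    """Illustrative container: boolean states for transmitters and grouped receptors."""
    def __init__(self, transmitter_states, receptor_groups):
        self.transmitters = list(transmitter_states)
        self.receptor_groups = [list(g) for g in receptor_groups]

def propagate(neuron, next_neuron, group_idx, receptor_idx, rng=random):
    """One application of Rule 1 to a signal arriving at a receptor of `neuron`.

    Returns the (group, receptor) indices reached in `next_neuron`, or None
    if the signal is inactivated on the way.
    """
    # Condition 1: an inactive receptor inactivates the signal; an active one
    # forwards it to a randomly selected transmitter of the same neuron.
    if not neuron.receptor_groups[group_idx][receptor_idx]:
        return None
    t = rng.randrange(len(neuron.transmitters))
    # Condition 2: an active transmitter passes the signal to a randomly selected
    # receptor of a randomly selected receptor group of the postsynaptic neuron.
    if not neuron.transmitters[t]:
        return None
    g = rng.randrange(len(next_neuron.receptor_groups))
    r = rng.randrange(len(next_neuron.receptor_groups[g]))
    return (g, r)
```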

The above behavioral rule defines the underlying mechanism of signal transmission between the presynaptic neuron and the postsynaptic neuron; that is, only when the related computational units of the two neurons are both active is the signal successfully transmitted. Therefore, the number of active receptors in a receptor group of the postsynaptic neuron and the number of active transmitters in the presynaptic neuron jointly define the efficacy of a given synapse. In addition to these short-term plasticity and homeostatic synaptic plasticity [28, 29] adjustments (it was shown that under similar conditions, neurons behaved like Hebbian neurons [30] and that the defined threshold mechanism functioned as a homeostatic synaptic plasticity process; see [31]), our dynamic stochastic synapses are subject to long-term plasticity induced by STDP, as discussed next.

2.2. Binning the Process at Synapses. The process at a synapse, where transmitters of the presynaptic neuron contact the receptors of a particular receptor group of the postsynaptic neuron, was binned to analyze the synapse's excitation. A bin is an array of seven cells, nb = 7, which stores the data of a given synapse over seven successive time steps. A single cell of a bin contains the data at one time step t, namely, the number of active transmitters in the presynaptic neuron, the number of active transmitters in the postsynaptic neuron, the number of active receptors in the corresponding receptor group of the postsynaptic neuron, and the mean release probability of the transmitters in the presynaptic neuron. Let Ci be the ith cell of the kth bin; the time gap between two consecutive cells is set to 5 ms, as in (8).

This allows us to define the time represented by each cell of a bin, measured from its first cell, as in (9); see Figure 3. This arrangement of the bin was necessary in our model to satisfy the condition (t_c1 = 0 ms) < (τ+ = τ− = 20 ms) < (t_c7 = 30 ms), where τ+ and τ− are the membrane constants for potentiation and depression (discussed later):

Δt_{c_{i+1} − c_i} = 5 ms,  i = 1, . . . , 6,  (8)

t_{c_i} = 5 · (i − 1) ms,  i = 1, . . . , 7.  (9)

Let AT_pre = {AT_pre,1, AT_pre,2, . . . , AT_pre,nb} be the random variables of the number of active transmitters in the presynaptic neuron over the seven successive time steps of a bin; similarly, let AT_post = {AT_post,1, AT_post,2, . . . , AT_post,nb} be the random variables of the number of active transmitters in the postsynaptic neuron, and let AR_post,s = {AR_post,s,1, AR_post,s,2, . . . , AR_post,s,nb} be the random variables of the number of active receptors in the receptor group s that corresponds to synapse s in the kth bin (B_k). Since the activity of the presynaptic transmitters and of the receptors in receptor group s is not independent, we defined the mean, μ_{B_k,s}, and variance, σ²_{B_k,s}, of the kth bin on synapse s as in (10) and (11):

μ_{B_k,s} = μ_{AT_pre} + μ_{AR_post,s},  (10)

σ²_{B_k,s} = Var(AT_pre + AR_post,s) = σ²_{AT_pre} + σ²_{AR_post,s} − 2 Cov(AT_pre, AR_post,s),  (11)

where μ_{AT_pre} and σ²_{AT_pre} are the mean and variance of AT_pre, and μ_{AR_post,s} and σ²_{AR_post,s} are the mean and variance of AR_post,s. The means and variances of both AT_pre and AR_post,s were estimated using maximum-likelihood estimators, so that μ_{B_k,s} in (10) can be written as in (12), where ĀT_pre = Σ_{j=1}^{nb} AT^j_pre / nb and ĀR_post,s = Σ_{j=1}^{nb} AR^j_post,s / nb are the sample means, and σ²_{B_k,s} in (11) can be written as in (13), where S²_{AT_pre} = Σ_{j=1}^{nb} (AT^j_pre − ĀT_pre)² / (nb − 1) and S²_{AR_post,s} = Σ_{j=1}^{nb} (AR^j_post,s − ĀR_post,s)² / (nb − 1) are the sample variances of AT_pre and AR_post,s, respectively. The covariance of AT_pre and AR_post,s is defined in (14):

μ_{B_k,s} = ĀT_pre + ĀR_post,s,  (12)

σ²_{B_k,s} = S²_{AT_pre} + S²_{AR_post,s} − 2 Cov(AT_pre, AR_post,s),  (13)

Cov(AT_pre, AR_post,s) = Σ_{j=1}^{nb} (AT^j_pre − ĀT_pre)(AR^j_post,s − ĀR_post,s) / (nb − 1).  (14)
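A small sketch of these bin statistics, with the sample means, sample variances, and covariance computed over the nb = 7 cells of one bin; the −2 Cov term is kept exactly as printed in (11) and (13), and the function names are illustrative.

```python
N_B = 7  # cells per bin

def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    """Unbiased sample variance with the (n - 1) denominator used in the text."""
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def covariance(xs, ys):
    """Sample covariance, eq. (14)."""
    mx, my = sample_mean(xs), sample_mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def bin_stats(at_pre, ar_post):
    """(mu, sigma^2) of one bin per eqs. (12)-(13), with the sign as printed."""
    mu = sample_mean(at_pre) + sample_mean(ar_post)
    sigma2 = sample_var(at_pre) + sample_var(ar_post) - 2.0 * covariance(at_pre, ar_post)
    return mu, sigma2
```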

The mean release probability of the presynaptic transmitters within a given bin, say B_k, can be defined as in (15), where PT_i is the mean release probability of the transmitters in the presynaptic neuron at time step t + i:

MT_{B_k,s} = Σ_{i=1}^{nb} PT_i / nb.  (15)

2.3. Defining a Synapse's Activity Using the Bins' Activity. STDP is a form of long-term modification of synaptic strength that depends on the relative arrival timing of the action potentials of the presynaptic neuron, t_pre, and the postsynaptic neuron, t_post [4], and can be described by the weight window function defined in (16). This weight function defines how the strength between the two neurons is adjusted for a single pair of action potentials within the time window Δt = |t_pre − t_post|. As defined in (16), if the presynaptic action potential occurs before the postsynaptic action potential, it strengthens the synapse; this is called long-term potentiation. Conversely, if the postsynaptic action potential occurs before the presynaptic action potential, it weakens the synapse; this is called long-term depression:

W(Δt) = { A+ · e^{−(t_post − t_pre)/τ+}  if t_pre < t_post;  −A− · e^{−(t_pre − t_post)/τ−}  if t_pre ≥ t_post }.  (16)
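The window function of (16) is straightforward to state in code; the A+ and A− values below are illustrative (the paper does not fix them at this point) and are chosen to satisfy the stability condition A+τ+ < A−τ− discussed next, with τ+ = τ− = 20 ms.

```python
import math

A_PLUS, A_MINUS = 0.005, 0.006    # illustrative amplitudes with A+ < A-
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # ms, per the estimates cited in the text

def stdp_window(t_pre, t_post):
    """W(dt) of eq. (16): potentiation when the presynaptic spike leads, depression otherwise."""
    if t_pre < t_post:
        return A_PLUS * math.exp(-(t_post - t_pre) / TAU_PLUS)
    return -A_MINUS * math.exp(-(t_pre - t_post) / TAU_MINUS)
```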

Here A+, A− > 0, and τ+, τ− are the membrane constants of long-term potentiation and long-term depression. The values of A+ and A− need to satisfy the condition A+τ+ < A−τ−, since the integral of the weight window must be negative to generate stable synaptic strength based on STDP [4]. Furthermore, recent biological observations [32] have estimated τ+ and τ− at roughly 20 ms. Thus, in order to generate stable synaptic strength, it is required that A+ < A−. In our model, the weight window function was applied at the bin level at each synapse in order to apply long-term modifications to the neuron. Let H^{B_k}_{pre,c_pre} be the highest number of active transmitters recorded from the presynaptic neuron during bin B_k, occurring at cell c_pre, as defined


[Figure 3 illustrates a single bin: seven cells covering time steps t, t + 1, . . . , t + 6, whose time gaps from the first cell are 0, 5, 10, 15, 20, 25, and 30 ms.]

Figure 3: Binning the process at a single synapse.

in (17). Similarly, let H^{B_k}_{post,c_post} be the highest number of active transmitters recorded from the postsynaptic neuron during bin B_k, occurring at cell c_post, as defined in (18). The STDP weight window function was then applied at the bin level by treating H^{B_k}_{pre,c_pre} as an action potential occurring in the presynaptic neuron during bin B_k, which could significantly update the synaptic strength presynaptically at the corresponding synapse, and H^{B_k}_{post,c_post} as the action potential occurring in the postsynaptic neuron during bin B_k, which could significantly update the synaptic strength postsynaptically at the same synapse. Here we have assumed that, within the duration of a bin, only the highest hitter of that bin can significantly update the synaptic strength. Subsequently, we mapped t_pre to (c_pre − 1) × 5 ms and t_post to (c_post − 1) × 5 ms. Therefore, if the postsynaptic hitter occurs after the presynaptic hitter, it leads to potentiation, and if the presynaptic hitter occurs after the postsynaptic hitter, it depresses the synapse during the given bin:

∀i ATpre,i > ATpre. j , i = 1, . . . , 7, j = 1, . . . , 7, i /= j

HBkpre,cpre

=(ATpre,i, cpre = i

)

(17)

∀i ATpost,i > ATpost, j , i = 1 . . . 7, j = 1 . . . 7, i /= j

HBkpost,cpost

=(ARpost,i, cpost = i

) (18)
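To make the bin-level pairing concrete, (16)–(18) can be sketched in Python. The τ values, the constraint A_+τ_+ < A_−τ_−, and the cell-to-time mapping t = (c − 1) × 5 ms follow the text; the specific A_+ and A_− values and the per-cell transmitter counts are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative constants: tau values follow the text; A_PLUS/A_MINUS are
# placeholder values chosen only to satisfy A+ * tau+ < A- * tau-.
TAU_PLUS = TAU_MINUS = 20e-3      # seconds, roughly 20 ms per [32]
A_PLUS, A_MINUS = 0.005, 0.00525

def highest_hitter(at_counts):
    """Eq. (17)/(18): return (value, cell index 1..7) of the bin's largest hit."""
    cell = max(range(len(at_counts)), key=lambda i: at_counts[i])
    return at_counts[cell], cell + 1

def stdp_window(c_pre, c_post):
    """Eq. (16) applied at bin level: cells map to times (c - 1) * 5 ms."""
    t_pre, t_post = (c_pre - 1) * 5e-3, (c_post - 1) * 5e-3
    if t_pre < t_post:   # presynaptic hitter before postsynaptic: potentiation
        return A_PLUS * math.exp(-(t_post - t_pre) / TAU_PLUS)
    # presynaptic hitter at or after postsynaptic: depression
    return -A_MINUS * math.exp(-(t_pre - t_post) / TAU_MINUS)
```

For example, `stdp_window(1, 3)` (pre hitter at 0 ms, post hitter at 10 ms) is positive, while `stdp_window(3, 1)` is negative.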

2.4. Mean and Variance of a Synapse. Learning based on STDP was implemented on synapses under the assumption that the bins of a given synapse are mutually independent and that the impact each bin makes on the synapse sums linearly. The mean μ_{S_s,k} and variance σ²_{S_s,k} of the sth synapse S_s, after the kth bin B_k has interacted with the synapse, can then be defined as in (19) and (20). They were estimated using the maximum likelihood estimators shown in (21) and (22), respectively. Further, the total mean release probability at synapse S_s at B_k was defined from the bins' mean release probabilities as in (23):

μ_{S_s,k} = μ_{S_s,k−1} + μ_{B_k,s};   μ_{S_s,0} = 0,  k = 1, 2, …,   (19)

σ²_{S_s,k} = Var(S_{s,k−1} + B_k) = σ²_{S_s,k−1} + σ²_{B_k,s};   σ²_{S_s,0} = 0,  k = 1, 2, …,   (20)

X̄_{S_s,k} = X̄_{S_s,k−1} + X̄_{B_k,s};   k = 1, 2, …,   (21)

S²_{S_s,k} = S²_{S_s,k−1} + S²_{B_k};   k = 1, 2, …,   (22)

M^T_{S_s,k} = M^T_{S_s,k−1} + M_{B_k,s};   M^T_{S_s,0} = 0,  k = 1, 2, ….   (23)
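Because the bins are assumed independent with linearly summing impacts, (19)–(23) reduce to simple additive accumulators. A minimal sketch (class and field names are illustrative, not from the paper):

```python
# Per-synapse statistics accumulate additively across bins, eqs. (19)-(23):
# for independent bins, Var(X + Y) = Var(X) + Var(Y), so means, variances,
# and total mean release probability each update by simple addition.

class SynapseStats:
    def __init__(self):
        self.mean = 0.0           # mu_{Ss,k},     eqs. (19)/(21)
        self.var = 0.0            # sigma^2_{Ss,k}, eqs. (20)/(22)
        self.total_release = 0.0  # M^T_{Ss,k},    eq. (23)

    def absorb_bin(self, bin_mean, bin_var, bin_release):
        self.mean += bin_mean
        self.var += bin_var
        self.total_release += bin_release

# Example: two bins interact with one synapse.
s = SynapseStats()
s.absorb_bin(0.2, 0.01, 0.05)
s.absorb_bin(0.3, 0.02, 0.07)
```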

In order to generate action potentials, real neurons need to be in a nonquiescent state. If a neuron is in a quiescent state, it cannot generate action potentials that change the synaptic strength significantly. Therefore, STDP was applied on synapses only if the model presynaptic and postsynaptic neurons were not in quiescent states. We defined that a neuron is not in a quiescent state when the average output it produced during bin B_k is greater than the average output it had produced so far. That is, if the mean number of active transmitters of a particular neuron up to bin B_k is less than its mean number of active transmitters during the kth bin, the neuron was recognized as being in a nonquiescent state at bin B_k. Mathematically, if M^{S_s,k−1}_{AT_pre} ≤ M^{B_k}_{AT_pre} and M^{S_s,k−1}_{AT_post} ≤ M^{B_k}_{AT_post}, the weight was updated on the synapse S_s at bin B_k as discussed next.
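The quiescence test above amounts to two running-mean comparisons, one per neuron. A hedged sketch, with illustrative variable names:

```python
# Nonquiescence test: STDP is applied at bin B_k only when each neuron's
# mean active-transmitter count during B_k is at least its running mean
# over all earlier bins. Variable names are illustrative.

def nonquiescent(running_mean_pre, bin_mean_pre,
                 running_mean_post, bin_mean_post):
    return (running_mean_pre <= bin_mean_pre and
            running_mean_post <= bin_mean_post)

# Example: both neurons currently at or above their running averages,
# so the weight update of Section 2.5 would be applied for this bin.
update_allowed = nonquiescent(10.0, 12.0, 8.0, 9.5)
```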

2.5. Learning Based on STDP and Release Probability. According to the model proposed in [33], the amplitude of


the excitatory postsynaptic current A_k of the kth spike in a spike train is proportional to the weight at that synapse and the release probability at the kth spike. In our approach, A_k is proportional to the impact made by the transmitters of the presynaptic neuron and the receptors in the corresponding receptor group of the postsynaptic neuron during B_k on the synapse S_s. Applying the model proposed in [33] to our kth bin instead of the kth spike, we can express A_k as in (24). Moreover, biological evidence supports the fact that the amount of change in weight also depends on the initial synaptic size [34]: depression is independent of the synaptic strength, whereas strong synapses are less potentiated than weak synapses. By assuming that there is an inverse relationship between the initial synaptic strength and the amount of potentiation, potentiation during the kth bin B_k can be expressed as in (25), where W^p_{k,S_s} is the amount of potentiation during the kth bin at synapse S_s:

A_{k,S_s} ∝ W_{k,S_s} · U_{k,S_s},   (24)

W_{k,S_s} ∝ 1 / W^p_{k,S_s}.   (25)

If we combine (16), (24), and (25), the amount of weight updated during the kth bin at synapse S_s, W_{S_s}(Δk), can be defined as in (26), and the synaptic weight at S_s at the end of bin B_k is determined as in (27):

W_{S_s}(Δk) =
    γ_p · (A_{k,S_s} / U_{k,S_s}) · (1 / W^p_{k,S_s}) · e^{−(t_post − t_pre)/τ_+}   if t_pre < t_post,
    −γ_d · (A_{k,S_s} / U_{k,S_s}) · e^{−(t_pre − t_post)/τ_−}   if t_pre ≥ t_post,
    k = 1, …; s = 1, …,   (26)

W_{S_s}(k) = W_{S_s}(k − 1) + W_{S_s}(Δk),   (27)

where γ_p = 0.005 and γ_d = 0.00525 are the potentiation and depression learning rates [11]. The amplitude A_k during the kth bin was estimated by the ratio of the deviation made by the bin relative to its mean to the deviation the synapse has made so far relative to its overall mean. Statistically, the amplitude A_k during the kth bin can therefore be expressed as the ratio of the coefficient of variation (CV = σ/μ) during the kth bin to the coefficient of variation of the synapse S_s, as given in (28). The release probability U_{k,S_s} during the kth bin was determined as the ratio of the mean release probability during the kth bin to the total mean release probability at synapse S_s, as in (29). The median of the weight distribution at synapse S_s was taken as the estimator for W^p_{k,S_s}, as in (30), simply because the median approximates the center of the weight distribution better than the mean: the median is not affected by outliers, whereas the mean is:

A_{k,S_s} = CV_{B_k,s} / CV_{S_s} = (σ²_{B_k,s}/μ_{B_k,s}) / (σ²_{S_s,k}/μ_{S_s,k});   k = 1, …, s = 1, …,   (28)

U_{k,S_s} = M_{B_k,s} / M^T_{S_s,k};   k = 1, …, s = 1, …,   (29)

W^p_{k,S_s} = median{W_{S_s}(i) | i = 1, …, k − 1};   s = 1, ….   (30)
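Equations (26)–(30) combine into a single per-bin update. A minimal sketch, assuming the amplitude A, release probability U, and the weight history are supplied externally rather than computed from a full synapse simulation; γ_p, γ_d, and the τ values follow the text:

```python
import math
import statistics

# Learning rates and time constants as given in the text.
GAMMA_P, GAMMA_D = 0.005, 0.00525
TAU_PLUS = TAU_MINUS = 20e-3  # seconds

def weight_update(weights, A, U, t_pre, t_post):
    """Return W_Ss(k) = W_Ss(k-1) + W_Ss(delta k), per eqs. (26)-(27).

    weights: history of past weights at this synapse (W_Ss(1..k-1));
    A, U: per-bin amplitude (28) and release probability (29), assumed given.
    """
    w_p = statistics.median(weights)  # eq. (30): median of past weights
    if t_pre < t_post:                # potentiation branch of (26)
        dw = GAMMA_P * (A / U) * (1.0 / w_p) \
             * math.exp(-(t_post - t_pre) / TAU_PLUS)
    else:                             # depression branch of (26)
        dw = -GAMMA_D * (A / U) * math.exp(-(t_pre - t_post) / TAU_MINUS)
    return weights[-1] + dw           # eq. (27)
```

Note how the potentiation branch scales by 1/W^p, so strong synapses are potentiated less, while the depression branch is independent of the synaptic strength, matching [34].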

3. Balancing the Excitation of the Network

An experiment was arranged to test whether STDP and short-term plasticity together can balance the excitation of a network of two Hebbian neurons without defining any constraints on the weight learning algorithm. A fully connected network with two neurons, neuron A and neuron B, was developed; each neuron had ten receptor groups, so a presynaptic neuron contacts the postsynaptic neuron through ten dynamic stochastic synapses simultaneously. Both neurons A and B had equal numbers of transmitters (n_T = 30000) and receptors (n_R = 30000), and the receptors attached to each neuron were uniformly distributed among its receptor groups. At the onset, one percent of the transmitters and one percent of the receptors in each receptor group were set to the active state. Poisson inputs with mean firing rates (λ) of 10 Hz and 40 Hz were applied to all the receptor groups of neuron A simultaneously, while giving enough space for neuron B to adjust itself according to the feedback of neuron A; see Figure 4. Each input was applied to neuron A continuously for around two hours, and the behavior of the network was analyzed after the synaptic connections had established the effect of the altered activity and the network activity had developed. The generated Poisson distribution was converted to a byte stream by the following rule: if a generated value is greater than the median of the Poisson distribution, it represents the value 1; otherwise it represents the value 0. Only when the represented value equals one was a signal generated and fed to neuron A.
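The input-encoding rule above can be sketched as follows; the Poisson sampler (Knuth's multiplicative method) and the seeding are implementation choices for this illustration, not details from the paper:

```python
import math
import random
import statistics

# Sketch of the input rule: Poisson samples are thresholded at the sample
# median; a bit of 1 means a signal is generated and fed to neuron A.

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate via Knuth's method (fine for lam <= ~40)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def input_bits(lam, n, seed=0):
    """Convert n Poisson(lam) draws into the 0/1 stream fed to neuron A."""
    rng = random.Random(seed)
    samples = [poisson_sample(lam, rng) for _ in range(n)]
    med = statistics.median(samples)
    return [1 if s > med else 0 for s in samples]
```

Because the threshold is the median of the generated samples, roughly half of the time steps carry a signal regardless of λ; what varies with λ is the underlying count distribution.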

Figures 5 and 6 show the distribution of weights of both postsynaptic neurons A and B for Poisson inputs with mean rates 10 Hz and 40 Hz. As shown in these figures, the weight distributions of both neurons at each synapse stabilized around 175 bins. After the weight distributions stabilized, the median of each weight distribution was calculated; the calculated median values are shown in Figure 7. As shown in that figure, for Poisson inputs with a low mean rate, that is, at 10 Hz, the medians of all the synapses of the postsynaptic neurons reach higher values than when Poisson inputs with a higher mean rate, that is, at 40 Hz, were applied. The network balanced its excitation by pushing synaptic weights towards higher values for inputs with a low mean rate, and for inputs with a higher mean rate it pulled the synaptic weights down to lower values. This dynamic behavior of both neurons is a necessary


[Figure 4 panels: (a) network schematic with external inputs to neuron A, outputs T_A and T_B, receptor groups R_{AB} and R_{BA}, and synapses S_{AB} and S_{BA}; (b) information flow between the immediate presynaptic and postsynaptic neurons under external Poisson inputs.]

Figure 4: Network structure with ten synaptic connections. (a) shows the developed fully connected network used to test how neurons could balance the excitation after an external input was applied to part of it. T_A and T_B are the outputs (the number of active transmitters attached to the neuron at a given time step) of neuron A and neuron B, respectively. Receptor groups R_AB and R_BA symbolize the number of active receptors in the corresponding receptor groups of postsynaptic neuron A and postsynaptic neuron B, respectively. S_AB are the synapses where receptor groups of postsynaptic neuron A contact the transmitters of neuron B. Similarly, S_BA are the synapses where receptor groups of postsynaptic neuron B contact the transmitters of neuron A. (b) shows how information flows between the two neurons. When signals pass from A to B, A is called the presynaptic neuron and B the postsynaptic neuron; when signals pass from B to A, B is called the presynaptic neuron and A the postsynaptic neuron. Since the external Poisson inputs are applied to neuron A only, neuron A becomes the immediate presynaptic neuron and B the immediate postsynaptic neuron for the external inputs.

adjustment to balance neurons’ excitation and subsequentlyto balance the network excitation while adjusting to externalmanipulations.

Next, we were interested in what makes a neuron stabilize its activity without becoming overexcited or overdepressed in a network that has no controlling constraints. To understand this, we analyzed the internal behavior of neurons A and B in terms of their mean release probability (the mean of the release probabilities of the transmitters attached to the neuron) and the coefficient of variation, which measures the given synapse's excitation (CV = CV_{B_k,s} in (28)) in terms of the number of active transmitters in the presynaptic neuron and the number of active receptors in the corresponding receptor groups of the synapse. The value of CV thus shows the extent of variability of the given synapse in relation to the synapse's mean and effectively portrays the synapse's internal dynamics: a higher CV value implies higher internal fluctuations and a larger deviation from the synaptic mean. Figure 5 shows the mean release probability of the presynaptic neurons at 10 Hz, while Figure 8 depicts what happens inside the synapse in terms of CV at 10 Hz. As shown in these figures, the neuron that built the higher synaptic weight maintained a higher CV than the other neuron in the network. Notably, the neuron with the higher synaptic weight produced a lower mean release probability. For example, if we analyze the behavior of neuron B, as shown in Figure 7, its synapses S_BA, from presynaptic neuron A to postsynaptic neuron B, scored higher synaptic weights at 10 Hz than the synaptic weights of postsynaptic neuron A. Further, neuron B maintained a higher CV at all its synapses S_BA than the CV values of the synapses of postsynaptic neuron A, that is, S_AB. In contrast, neuron B maintained a lower mean release probability as a presynaptic neuron at all its synapses for Poisson inputs at 10 Hz compared to neuron A. These opposing and balancing behaviors of the two neurons are consistent for Poisson inputs with mean firing rate 40 Hz, as shown in Figures 6 and 9.

Moreover, if the difference between the CV values of the two neurons is considered, Figures 8 and 9 clearly show that this difference was reduced to about 0.0001 after the two neurons had adjusted to the external input and stabilized themselves. Most importantly, however, even though we could see stabilized activity of the two neurons in terms of synaptic weights and CV, the mean release probabilities did not reach any stable position but instead continued to increase or decrease monotonically. The positive correlation between the synaptic weights and the CV, and the negative correlation between the synaptic weights and the mean release probability at the same neuron, show that neurons could act as integrators that integrate the excited synaptic weights, control the excitation via higher CV fluctuations, and produce a balanced output that helps to balance the network activity. This reduced excitation in the output flow allowed the other neuron to play a compensatory role and to balance the network activity.
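The signs of these correlations can be checked with Pearson's r; the series below are hypothetical stand-ins for the simulation's per-bin weight, CV, and release-probability logs:

```python
import math
import statistics

# Illustrative check of the correlation signs discussed in the text.
# The three series are hypothetical; real values come from simulation logs.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

weights = [1.0, 1.2, 1.5, 1.6, 1.8]       # synaptic weight per bin
cv      = [0.8, 1.1, 1.6, 1.7, 2.0]       # rises with weight (positive r)
release = [0.30, 0.26, 0.21, 0.20, 0.17]  # falls with weight (negative r)
```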

Finally, we wanted to understand the behavior of the network when applying Poisson inputs in the range between 10 Hz and 40 Hz. Poisson inputs with mean rates 15 Hz, 20 Hz, 25 Hz, 30 Hz, and 35 Hz were also presented to neuron A's receptor groups, and the behavior of both neurons on the same network was studied. Figure 10 shows the average value of the medians of the synapses of each neuron. When the magnitude of the Poisson input mean rate is greater than the STDP potentiation and depression time constants


[Figure 5 panels: for each of the ten synapses, the weight distribution (range 1–2) and the mean release probability (range −0.05–0.3) are plotted over bins 1–200 at λ = 10 Hz, with fitted slopes m^I_A = 0.000798, m^II_A = 0.00118, m^I_B = 0.00126, and m^II_B = 0.00149.]

Figure 5: Distribution of the weights and release probabilities of neurons for Poisson inputs with mean rate 10 Hz. Each subfigure depicts the weight distribution and the mean release probability at the given synapse of postsynaptic neuron A and postsynaptic neuron B for Poisson inputs with mean firing rate 10 Hz. For example, the leftmost top subfigure shows the variation of the mean release probability and the weight distribution at the first of the ten synapses. As shown in the figure, the network, both neuron A and neuron B, spent around 150 bins adjusting to the external Poisson inputs before reaching stability. W_BA gives the distribution of the weights of the synaptic connections from presynaptic neuron A to postsynaptic neuron B; similarly, W_AB gives the distribution of the weights of the synaptic connections from presynaptic neuron B to postsynaptic neuron A. R_B is the distribution of the mean release probability of the transmitters of presynaptic neuron B, and R_A is the distribution of the mean release probability of the transmitters of presynaptic neuron A at the postsynaptic connections of neuron B. Moreover, the slopes of the mean release probabilities, m^I_B and m^II_B, were determined using linear regression analysis and give the slope of the mean release probability of S_BA from bin 1 to 150 and from bin 150 to 200, respectively; similarly, m^I_A and m^II_A give the slope of the mean release probability of S_AB from bin 1 to 150 and from bin 150 to 200, respectively.
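The slopes m^I and m^II reported in the figure captions come from linear regression over bins 1–150 and 150–200. A minimal ordinary-least-squares sketch (the per-bin series here is synthetic, for illustration only):

```python
# Minimal ordinary-least-squares slope, as used for the m^I / m^II values
# in the figure captions. The series below is synthetic and exactly linear.

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

bins = list(range(1, 201))
series = [0.001 * b for b in bins]           # synthetic release-probability trend
m_I = ols_slope(bins[:150], series[:150])    # slope over bins 1-150
m_II = ols_slope(bins[149:], series[149:])   # slope over bins 150-200
```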

, that is, 25 Hz < λ, the medians of the stabilized synaptic weights of the synapses of neuron B, as the immediate postsynaptic neuron of the external inputs, continuously increased as the mean rate of the Poisson inputs increased. Again, compensatory behavior from neuron A could be seen, as it generally decreased the medians of its stabilized synaptic weights as the mean rate increased. These complementary behaviors of the two neurons seem necessary to stabilize the overall network activity. For λ < 20 Hz, both neurons worked together to control the overall excitement of the network. Intriguingly, when λ = 20 Hz and λ = 25 Hz, the excitation of the entire network was equally balanced between the two neurons, as their average values of the medians of the stabilized synaptic weights became almost equal to each other. This might be an effect of the τ_+ = τ_− = 20 ms that we selected for the STDP potentiation and depression time constants. This is an important observation, which suggests that postsynaptic neurons could be excited and stabilized to the same level as the presynaptic neuron if the STDP time constants are highly correlated with the mean rate of the applied Poisson inputs. Therefore, STDP with different time constants for potentiation and depression might be a good solution for scaling external inputs down effectively to the neuronal level.

4. Discussion

As per the literature, a synapse can be strengthened either by increasing the probability of transmitter release presynaptically or by increasing the number of active receptors postsynaptically. This general functionality of synapses can be varied by the interplay between long-term plasticity


[Figure 6 panels: for each of the ten synapses, the weight distribution (range 0.6–2.2) and the mean release probability (range −0.05–0.3) are plotted over bins 1–200 at λ = 40 Hz, with fitted slopes m^I_B = 0.00118, m^II_B = 0.00136, m^I_A = 0.000541, and m^II_A = 0.00156.]

Figure 6: Distribution of the weights and release probabilities of neurons for Poisson inputs with mean rate 40 Hz. Each subfigure depicts the weight distribution and the mean release probability at the given synapse of postsynaptic neuron A and postsynaptic neuron B for Poisson inputs with mean firing rate 40 Hz. For example, the leftmost top subfigure shows the variation of the mean release probability and the weight distribution at the first of the ten synapses. As shown in the figure, the network, both neuron A and neuron B, spent around 150 bins adjusting to the external Poisson inputs before reaching stability. W_BA gives the distribution of the weights of the synaptic connections from presynaptic neuron A to postsynaptic neuron B; similarly, W_AB gives the distribution of the weights of the synaptic connections from presynaptic neuron B to postsynaptic neuron A. R_B is the distribution of the mean release probability of the transmitters of presynaptic neuron B, and R_A is the distribution of the mean release probability of the transmitters of presynaptic neuron A at the postsynaptic connections of neuron B. Moreover, the slopes of the mean release probabilities, m^I_B and m^II_B, were determined using linear regression analysis and give the slope of the mean release probability of S_BA from bin 1 to 150 and from bin 150 to 200, respectively; similarly, m^I_A and m^II_A give the slope of the mean release probability of S_AB from bin 1 to 150 and from bin 150 to 200, respectively.

and short-term dynamics, especially short-term depression. Short-term depression is mainly based on vesicle depletion, the use-dependent reduction of neurotransmitter release from the readily releasable pool [4]. When long-term plasticity interacts with short-term depression, the result is called synaptic redistribution [7]. The role of this synaptic redistribution is not yet clearly identified; however, it allows the presynaptic neuron to increase the probability of release and thereby increase the signal transmission between the two neurons. In our developed network, the two neurons simulated a behavior similar to the effect of synaptic redistribution. Neuron B, as the immediate postsynaptic neuron of the external inputs, scored higher synaptic weights than neuron A. That is, its synapses, where the transmitters from immediate presynaptic neuron A contact each receptor group of postsynaptic neuron B, scored higher weights than those of the other neuron. The presynaptic neuron A in this functional process maintained a higher mean release probability. Therefore, first, the synaptic weights of neuron B were increased presynaptically by increasing the probability of neurotransmitter release. Second, the analysis of the CV of these synapses of postsynaptic neuron B shows that it lies in a higher range than the CV function of neuron A, confirming the possibility of increasing the synaptic weights through a higher turnover rate of the active receptor component of the postsynaptic neuron. This behavior of postsynaptic neuron B is also supported for Poisson inputs with mean firing rate 40 Hz. Intriguingly, when the behavior of postsynaptic neuron B at 40 Hz was analyzed after the neuron had adjusted to the external inputs and stabilized, higher fluctuations of CV and comparatively smaller synaptic weights of neuron


[Figure 7 panels: medians of the synaptic weights (range 0–2) at synapses 1–10, one panel for λ = 10 Hz and one for λ = 40 Hz.]

Figure 7: Distribution of the medians of synaptic weights at each synapse for the Poisson inputs. The figure shows the variation of the median of the weight distribution for each Poisson input; λ is the mean Poisson firing rate. The medians in the figure were determined after the synaptic connections had established the effect of the external inputs and the network had stabilized; this stabilization happened after about 170 bins, see Figure 5. Synapse_BA shows the variations of the medians of the weight distributions of the synaptic connections from presynaptic neuron A to postsynaptic neuron B. Similarly, Synapse_AB shows the variations of the medians of the weight distributions of the synaptic connections from presynaptic neuron B to postsynaptic neuron A.

B were observed for Poisson inputs with mean rate 40 Hz, compared to postsynaptic neuron B's behavior for Poisson inputs with mean rate 10 Hz. This shows that synaptic redistribution can increase the synaptic weights at steady state for Poisson inputs with a low mean rate but not for Poisson inputs with a higher mean rate; for higher mean rates, only a higher turnover rate of active receptor components remains. Further, the STDP potentiation and depression time constants had a strong impact on the behavior of the two neurons; that is, they controlled the level of excitation of each neuron equally when the magnitude of the mean rate lay near the magnitude of the STDP time constants. As the mean rate of the Poisson inputs increased, complementary roles were initiated in the two neurons.

STDP successfully interplayed with short-term plasticity to control the excitation or inhibition of the neural network according to external adjustments. Notably, these adjustments are consistent and also biologically plausible. The stabilization of synaptic weights at an operational level without controlling constraints seems to be possible if STDP, as long-term plasticity, interacts with short-term dynamics. The dynamic behavior of short-term activity is necessary to propagate and balance the excitation of the neural network without damaging the synaptic weight distribution, similar to how CV and the probability of release played together with STDP to balance the excitation. Compared to the findings of Luz and Shamir [19], instead of specifically using inhibitory neurons to generate negative feedback to stabilize the excitation and inhibition of the network, here we have used basic plasticity mechanisms observed in biology to regulate the excitation and inhibition of the network. Even though the two approaches use different derivatives of temporally asymmetric STDP to implement the stochastic response of neurons, both have once again demonstrated the possibility of stabilizing the network excitation due to Hebbian plasticity using STDP. However, our approach differs from their mechanism because it integrates the sensitivity of the release probability and the turnover rate of active components attached to a synapse. Instead of inhibition imposed on the network by an inhibitory neuron whose negative feedback suppresses the excitation generated by correlated spikes, our mechanism absorbed the high-firing-frequency excitation, or overcame the low-firing-frequency inhibition, through an appropriate turnover rate of the active components attached to a given synapse and by adjusting the release probabilities of the attached active transmitters. The mechanism underlying this manipulation of excitation and inhibition is similar to the synaptic redistribution discussed in biology, which drives our approach further towards biological plausibility. However, both systems still need to be evaluated and examined on larger networks, and the approach of Luz and Shamir [19] needs to be tested when correlation between the inhibitory and excitatory synapses is present. Moreover, we compared the result of [11] against our findings on how excitation was balanced. In their model, excitation was balanced by introducing synaptic competition, in which synapses competed against


[Figure 8 panels: CV (×10^−3, range 0–4) over bins 1–200 for S_BA and S_AB at each of the ten synapses at λ = 10 Hz, with per-synapse regression slopes m^I and m^II (e.g., m^I_A ≈ −1.4e−6 to −1.8e−6, m^II_A ≈ −2.3e−6 to −3.4e−6, m^I_B ≈ −1.0e−5, m^II_B ≈ 6e−7 to 1.1e−6).]

Figure 8: Distribution of the coefficient of variation (CV) at each synapse for Poisson inputs at 10 Hz. Each subfigure depicts the distribution of the CV at the given synapse of both neuron A and neuron B for Poisson inputs with mean firing rate 10 Hz. For example, in the leftmost subfigure, S_AB shows the variation of the CV of the synapses from presynaptic neuron B to postsynaptic neuron A. Similarly, S_BA depicts the distribution of the CV of the synapses from presynaptic neuron A to postsynaptic neuron B. Moreover, the slopes m^I_B and m^II_B were determined using linear regression analysis and give the slope of the CV of S_BA from bin 1 to 150 and from bin 150 to 200, respectively; similarly, m^I_A and m^II_A give the slope of the CV of S_AB from bin 1 to 150 and from bin 150 to 200. As depicted in all these subfigures, at all the synapses around bin 200, the difference between the CVs of neuron A and neuron B was about 0.001, and the time functions of the CVs of both neurons ran parallel to each other.

each other to control the postsynaptic firing times; further, this competition was introduced by scaling the synaptic efficacy using hard boundary conditions. That model balanced the excitation for Poisson inputs at 10 Hz and 40 Hz, so that at 10 Hz more synapses approached the upper limit of synaptic efficacy, while at higher input rates more synapses remained near the lower limit. However, once the system reached stability it was hardly disturbed by the presynaptic firing frequency; the stability that system reached is therefore moderately stronger than in our case. Although our model also exhibited similar characteristics at Poisson inputs of 10 Hz and 40 Hz, no boundary conditions were defined to achieve this stability. Furthermore, compared to their moderately strong equilibrium, which is expressed in terms of synaptic efficacy, the internal dynamics of our neurons fluctuated continuously around the equilibrium, allowing the neurons to remain dynamically active even at equilibrium, as in many natural systems.
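The contrast drawn above can be sketched as an additive STDP rule applied with and without hard efficacy bounds. All parameters below are illustrative assumptions, not the values of either model; the point is only that clipping forces saturation while the unbounded update keeps growing under repeated causal pairings.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.005, a_minus=0.00525, tau=0.020,
                w_min=None, w_max=None):
    """Additive STDP weight change for a pre/post spike-time difference dt (s).

    With w_min/w_max set, the weight is clipped to hard boundaries, as in
    models that enforce competition by bounding synaptic efficacy; with
    None, the weight evolves without explicit limits.
    """
    if dt > 0:          # post fires after pre -> potentiation
        w = w + a_plus * np.exp(-dt / tau)
    else:               # pre fires after post -> depression
        w = w - a_minus * np.exp(dt / tau)
    if w_min is not None:
        w = max(w, w_min)
    if w_max is not None:
        w = min(w, w_max)
    return w

w_bounded = w_free = 0.5
for dt in [0.005, 0.010, 0.005] * 100:   # repeated causal pairings
    w_bounded = stdp_update(w_bounded, dt, w_min=0.0, w_max=1.0)
    w_free = stdp_update(w_free, dt)

print(w_bounded, w_free)  # bounded weight saturates at 1.0; free weight exceeds it
```

In the bounded case the weight pins to the upper limit and stays there regardless of further input, which mirrors the "hardly disturbed" stability noted above; without bounds, stability has to come from the interplay of the plasticity mechanisms themselves.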

The model proposed in this research is a computational model for investigating the internal dynamics of neural networks when STDP, Hebbian plasticity, and short-term plasticity interact with each other. The model itself has a few drawbacks. Mainly, the neural network of our model spent around 150 bins to show the adjustment to external



[Figure 9 panels: CV (×10−3, range 0–4) versus bin number (0–200) for synapses 1–10 of SAB and SBA at λ = 40 Hz. Each panel is annotated with regression slopes on the order of −10−6, e.g., mIA = −1.26e−6, mIIA = −4.08e−6, mIB = −8.98e−6, mIIB = −4.76e−6 for one panel.]

Figure 9: Distribution of the CV at each synapse for Poisson inputs with mean firing rate 40 Hz. Each subfigure depicts the distribution of the CV at the given synapse of both neuron A and neuron B. SAB shows the variation of the CV of the synapses from presynaptic neuron B to postsynaptic neuron A; similarly, SBA depicts the distribution of the CV of the synapses from presynaptic neuron A to postsynaptic neuron B. The slopes mIB and mIIB were determined using linear regression analysis and give the slope of the CV of SBA from bin 1 to bin 150 and from bin 150 to bin 200, respectively; similarly, mIA and mIIA give the slopes of the CV of SAB from bin 1 to bin 150 and from bin 150 to bin 200, respectively. As all subfigures show, by around bin 200 the difference between the CVs of neuron A and neuron B was about 0.001, and the CV time courses of the two neurons ran parallel to each other.

modifications; this is mainly because we selected the median of the weight distribution as the amount of synaptic potentiation in response to a pair of presynaptic and postsynaptic spikes (in (30)). This statistic is less sensitive to sudden changes that occur in the tail of the distribution until those changes become visible through many elements of the distribution. On the other hand, it effectively summarizes the weight distribution of the network by the range in which most of its elements lie. Even though the mean could also be a good option for such an indicator, it is very sensitive to sudden changes and could easily forget the history of the distribution. The median is therefore better than the mean, but it is still necessary to look for an unbiased statistical quantifier of the amount of potentiation in response to a presynaptic-postsynaptic spike pair, one that can represent the history as well as the sudden changes of the weight distribution effectively. The other main drawback we see in our approach is the use of bins to chunk the processes at synapses. The size of the bin might be a constraint that also limits the



[Figure 10 panels: average MSAB (range 0.5–1.9) and average MSBA (range 0.5–2) versus λ (Hz, 10–40); τ+ = τ− = 20 ms.]

Figure 10: Distribution of the median of the synaptic weights of the neurons at different mean firing rates. The figure illustrates the average of the weight medians calculated after both neurons had stabilized in response to Poisson inputs with mean rate λ. MSAB is the average of the median values of all the synapses of postsynaptic neuron A from presynaptic neuron B, and similarly MSBA is the average of the median values of all the synapses of postsynaptic neuron B from presynaptic neuron A. τ+ = τ− = 20 ms are the STDP time constants for potentiation and depression.

performance of STDP on short-term dynamics. Nevertheless, the model proposed in this research successfully balanced the synaptic excitation of the two neurons at an operational level without damaging their biological plausibility.
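The median-versus-mean trade-off discussed above can be seen in a small sketch: a sudden change in the tail of a weight distribution moves the mean immediately but leaves the median almost unchanged until many elements have shifted. The numbers here are illustrative, not the model's weight values.

```python
import numpy as np

weights = np.full(100, 0.5)          # a stable weight distribution
before = (np.mean(weights), np.median(weights))

weights[:5] = 5.0                    # sudden change in the tail (5% of synapses)
after = (np.mean(weights), np.median(weights))

print(before)  # (0.5, 0.5)
print(after)   # mean jumps to 0.725; median stays at 0.5
```

This is why a median-based quantifier reacts only after roughly 150 bins in our simulations: it tracks where most of the distribution lies, at the cost of responding slowly to tail events.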

References

[1] D. O. Hebb, The Organization of Behavior. The First Stage of the Perception: Growth of the Assembly, John Wiley & Sons, New York, NY, USA, 1949.

[2] K. D. Miller and D. J. MacKay, “The role of constraints in Hebbian learning,” Neural Computation, vol. 6, pp. 100–126, 1994.

[3] E. L. Bienenstock, L. N. Cooper, and P. W. Munro, “Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex,” Journal of Neuroscience, vol. 2, no. 1, pp. 32–48, 1982.

[4] L. F. Abbott and W. Gerstner, “Homeostasis and learning through spike-timing dependent plasticity,” presented at the Summer School in Neurophysics, Les Houches, France, July 2004.

[5] G. J. Goodhill and H. G. Barrow, “The role of weight normalization in competitive learning,” Neural Computation, vol. 6, no. 2, pp. 255–269, 1994.

[6] G. Q. Bi and M. M. Poo, “Synaptic modification by correlated activity: Hebb’s postulate revisited,” Annual Review of Neuroscience, vol. 24, pp. 139–166, 2001.

[7] L. F. Abbott and S. B. Nelson, “Synaptic plasticity: taming the beast,” Nature Neuroscience, vol. 3, pp. 1178–1183, 2000.

[8] J. Lisman and N. Spruston, “Postsynaptic depolarization requirements for LTP and LTD: a critique of spike timing-dependent plasticity,” Nature Neuroscience, vol. 8, no. 7, pp. 839–841, 2005.

[9] D. A. Butts and P. O. Kanold, “The applicability of spike time dependent plasticity to development,” Frontiers in Synaptic Neuroscience, vol. 2, p. 30, 2010.

[10] R. Legenstein, C. Naeger, and W. Maass, “What can a neuron learn with spike-timing-dependent plasticity?” Neural Computation, vol. 17, no. 11, pp. 2337–2382, 2005.

[11] S. Song, K. D. Miller, and L. F. Abbott, “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity,” Nature Neuroscience, vol. 3, no. 9, pp. 919–926, 2000.

[12] M. C. W. van Rossum, G. Q. Bi, and G. G. Turrigiano, “Stable Hebbian learning from spike timing-dependent plasticity,” Journal of Neuroscience, vol. 20, no. 23, pp. 8812–8821, 2000.

[13] M. C. W. van Rossum and G. G. Turrigiano, “Correlation based learning from spike timing dependent plasticity,” Neurocomputing, vol. 38–40, pp. 409–415, 2001.

[14] T. Masquelier, R. Guyonneau, and S. J. Thorpe, “Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains,” PLoS ONE, vol. 3, no. 1, Article ID e1377, 2008.

[15] F. Henry, E. Dauce, and H. Soula, “Temporal pattern identification using spike-timing dependent plasticity,” Neurocomputing, vol. 70, no. 10–12, pp. 2009–2016, 2007.

[16] T. Tetzlaff, M. Helias, G. T. Einevoll, and M. Diesmann, “Decorrelation of neural-network activity by inhibitory feedback,” PLoS Computational Biology, vol. 8, no. 8, Article ID e1002596, 2012.

[17] R. Gutig, R. Aharonov, S. Rotter, and H. Sompolinsky, “Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity,” Journal of Neuroscience, vol. 23, no. 9, pp. 3697–3714, 2003.

[18] J. S. Haas, T. Nowotny, and H. D. I. Abarbanel, “Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex,” Journal of Neurophysiology, vol. 96, no. 6, pp. 3305–3313, 2006.

[19] Y. Luz and M. Shamir, “Balancing feed-forward excitation and inhibition via Hebbian inhibitory synaptic plasticity,” PLoS Computational Biology, vol. 8, no. 1, Article ID e1002334, 2012.

[20] T. Branco and K. Staras, “The probability of neurotransmitter release: variability and feedback control at single synapses,” Nature Reviews Neuroscience, vol. 10, no. 5, pp. 373–383, 2009.

[21] T. Branco, K. Staras, K. J. Darcy, and Y. Goda, “Local dendritic activity sets release probability at hippocampal synapses,” Neuron, vol. 59, no. 3, pp. 475–485, 2008.

[22] R. S. Zucker and W. G. Regehr, “Short-term synaptic plasticity,” Annual Review of Physiology, vol. 64, pp. 355–405, 2002.

[23] P. A. Appleby and T. Elliott, “Multispike interactions in a stochastic model of spike-timing-dependent plasticity,” Neural Computation, vol. 19, no. 5, pp. 1362–1399, 2007.

[24] H. S. Seung, “Learning in spiking neural networks by reinforcement of stochastic synaptic transmission,” Neuron, vol. 40, no. 6, pp. 1063–1073, 2003.

[25] A. M. Thomson, “Facilitation, augmentation and potentiation at central synapses,” Trends in Neurosciences, vol. 23, no. 7, pp. 305–312, 2000.

[26] L. F. Abbott and W. G. Regehr, “Synaptic computation,” Nature, vol. 431, no. 7010, pp. 796–803, 2004.

[27] W. Maass and A. M. Zador, “Dynamic stochastic synapses as computational units,” Neural Computation, vol. 11, no. 4, pp. 903–917, 1999.

[28] G. G. Turrigiano, “Homeostatic plasticity in neuronal networks: the more things change, the more they stay the same,” Trends in Neurosciences, vol. 22, no. 5, pp. 221–227, 1999.

[29] G. G. Turrigiano and S. B. Nelson, “Homeostatic plasticity in the developing nervous system,” Nature Reviews Neuroscience, vol. 5, no. 2, pp. 97–107, 2004.

[30] S. D. Fernando, K. Yamada, and A. Marasinghe, “Observed Stent’s anti-Hebbian postulate on dynamic stochastic computational synapses,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN ’11), pp. 1336–1343, San Jose, Calif, USA, 2011.

[31] S. Fernando, K. Yamada, and A. Marasinghe, “New threshold updating mechanism to stabilize activity of Hebbian neuron in a dynamic stochastic ‘multiple synaptic’ network, similar to homeostatic synaptic plasticity process,” International Journal of Computer Applications, vol. 36, no. 3, pp. 29–37, 2011.

[32] G. Q. Bi and M. M. Poo, “Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type,” Journal of Neuroscience, vol. 18, no. 24, pp. 10464–10472, 1998.

[33] H. Markram, Y. Wang, and M. Tsodyks, “Differential signaling via the same axon of neocortical pyramidal neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 95, no. 9, pp. 5323–5328, 1998.

[34] G. G. Turrigiano, K. R. Leslie, N. S. Desai, L. C. Rutherford, and S. B. Nelson, “Activity-dependent scaling of quantal amplitude in neocortical neurons,” Nature, vol. 391, no. 6670, pp. 892–896, 1998.

