Second Order Spiking Perceptron

Xuyan Xiang
Hunan University of Arts and Science
College of Mathematics and Computing Science
Changde, China
[email protected]

Yingchun Deng, Xiangqun Yang
Hunan Normal University
College of Mathematics and Computer Science
Changsha, China

Abstract

According to the usual approximation scheme, we present a more biologically plausible Second Order Spiking Perceptron with renewal process inputs, which employs both first and second order statistics, i.e. the means, variances and correlations of the synaptic input. We show that such a perceptron, even a single neuron, is able to perform complex non-linear tasks like the XOR problem, which cannot be solved by traditional single-layer perceptrons. The perceptron offers a significant advantage over classical models, in that it includes second order statistics in its computations and introduces variance into the error representation. This opens up the possibility of carrying out random computation in neuronal networks.

1. Introduction

Recently developed computational neural models owe more to their biological counterparts than their predecessors did. It has been shown that renewal process inputs represent a more accurate approximation of synaptic inputs. However, given the difficulty of dealing analytically with a system whose inputs are of renewal form, suitable continuous approximations are required. In [7], Tuckwell shows that renewal processes can be approximated using the diffusion approximation. Feng in [3] further develops a more accurate usual approximation scheme (UAS).

In addition, it has emerged that the spike activity of a neuron is determined not only by the mean rate but also by higher order statistics of its input [2, 5]. The question then arises of whether it is possible to develop a framework of neural computation which includes not only the first order statistics (means) but also the second order statistics (fluctuations, correlations), and possibly higher order statistics, of firing. In this letter we demonstrate that this is indeed possible.

According to the above UAS, we present a more biologically plausible Second Order Spiking Perceptron (SOSP) based on the IF model with renewal process inputs, which employs both first and second order statistics, i.e. the means, variances and correlations of the synaptic input. We show that such a perceptron, even a single neuron, can successfully perform complicated non-linear tasks like the XOR problem, which cannot be solved by traditional single-layer perceptrons.

Such a perceptron offers a significant advantage over classical models, in that it includes second order statistics in its computations and introduces variance into the error representation.

This paper is organized as follows. We first lay the foundation of the second order spiking network framework based on the UAS in Sec. 2. Then we apply an error minimization technique to train the network and derive the corresponding learning rule in Sec. 3. Finally we demonstrate applications in Sec. 4.

2. The Second Order Spiking Network

We begin by considering the IF model [7] as follows. Suppose the i-th neuron in the (k+1)-th layer receives excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs). When the membrane potential $V_i^{k+1}(t)$ is between its resting state $V_r$ and its threshold $V_t$, it satisfies the following dynamics:
$$dV_i^{k+1}(t) = -L\big(V_i^{k+1}(t) - V_r\big)\,dt + dI_{i,\mathrm{syn}}^{k+1}(t) \qquad (1)$$
where $L$ is the decay rate and $I_{i,\mathrm{syn}}^{k+1}(t)$ is the synaptic input
$$dI_{i,\mathrm{syn}}^{k+1}(t) = \sum_{j=1}^{p_k} \omega_{ij}^{E,k}\, dN_j^{E,k}(t) - \sum_{j=1}^{q_k} \omega_{ij}^{I,k}\, dN_j^{I,k}(t) \qquad (2)$$
Here $\omega_{ij}^{E,k}$ is the magnitude of EPSPs, $N_j^{E,k}$ is the renewal process arriving at the j-th synapse (the superscript 'I' denotes IPSPs), and $p_k$ and $q_k$ are the total numbers of active excitatory and inhibitory synapses in the k-th layer. Once $V_i^{k+1}(t)$ is greater than or equal to $V_t$, it is reset to $V_r$.
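To make the model concrete, the dynamics (1)-(2) can be integrated directly. The sketch below is an illustration added here, not code from the paper; it assumes Poisson spike trains as a particular choice of renewal input, and the rates and weights in the usage line are hypothetical.

```python
import numpy as np

def simulate_if_neuron(w_e, w_i, rate_e, rate_i, L=0.05, V_r=0.0, V_t=20.0,
                       dt=0.1, t_max=1000.0, seed=0):
    """Euler simulation of Eqs. (1)-(2) with Poisson spike trains as renewal inputs."""
    rng = np.random.default_rng(seed)
    V, spike_times = V_r, []
    for step in range(int(t_max / dt)):
        # Increment of the synaptic input, Eq. (2): weighted EPSP counts minus IPSP counts
        dI = (np.dot(w_e, rng.poisson(rate_e * dt)) -
              np.dot(w_i, rng.poisson(rate_i * dt)))
        # Membrane dynamics, Eq. (1): leak towards V_r plus the synaptic increment
        V += -L * (V - V_r) * dt + dI
        if V >= V_t:                      # threshold crossing: record a spike and reset
            spike_times.append(step * dt)
            V = V_r
    return spike_times

# Hypothetical usage: two excitatory and two inhibitory synapses, rates in spikes per ms
spikes = simulate_if_neuron(w_e=np.ones(2), w_i=0.5 * np.ones(2),
                            rate_e=np.array([0.5, 0.5]), rate_i=np.array([0.5, 0.5]))
```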


Based on the usual approximation scheme (UAS) [3], equation (2) can be approximated as
$$dI_{i,\mathrm{syn}}^{k+1}(t) \equiv \mu_i^k\,dt + \sigma_i^k\,dB(t) \qquad (3)$$
where
$$\mu_i^k = \sum_{j=1}^{p_k} \omega_{ij}^k \mu_j^k (1-r), \qquad
\big(\sigma_i^k\big)^2 = (1+r^2)\sum_{j=1}^{p_k} \omega_{ij}^k \sigma_j^k \Big(\omega_{ij}^k \sigma_j^k + \rho \sum_{n\neq j,\, n=1}^{p_k} \omega_{in}^k \sigma_n^k\Big) \qquad (4)$$
with
$$\mu_j^k = \frac{1}{T_j^k + T_{\mathrm{ref}}}, \qquad \big(\sigma_j^k\big)^2 = \frac{TT_j^k}{(T_j^k)^3},$$
$$T_i^{k+1} = \frac{2}{L}\int_{A_i^k(V_r)}^{A_i^k(V_t)} g(x)\,dx, \qquad
TT_i^{k+1} = \frac{4}{L^2}\int_{A_i^k(V_r)}^{A_i^k(V_t)} a(x)\,dx, \qquad (5)$$
$$A_i^k(y) = \frac{yL - \mu_i^k}{\sigma_i^k \sqrt{L}}, \qquad
g(x) = \exp(x^2)\int_{-\infty}^{x}\exp(-u^2)\,du, \qquad
a(x) = \exp(x^2)\int_{-\infty}^{x}\exp(-u^2)\,g^2(u)\,du \qquad (6)$$
Furthermore, the mean and variance of the output in the (n+1)-th layer can be written as follows:
$$\mu_i^{n+1} = \frac{1}{T_i^{n+1} + T_{\mathrm{ref}}} \equiv f_i^{n+1} \equiv f_i^{n+1}(\mu_j^n, \sigma_j^n), \qquad
\big(\sigma_i^{n+1}\big)^2 = \frac{TT_i^{n+1}}{(T_i^{n+1})^3} \equiv g_i^{n+1} \equiv g_i^{n+1}(\mu_j^n, \sigma_j^n). \qquad (7)$$

The above relationship between inputs and outputs lays the foundation of the Second Order Spiking Network (SOSN).
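For readers who wish to evaluate the map (4)-(7) numerically, the following sketch (ours, not the authors') computes the output mean and standard deviation from the input statistics. The refractory period T_ref = 5 and the lower integration cut-off are assumptions made purely for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

L, V_r, V_t, T_ref = 0.05, 0.0, 20.0, 5.0   # L, V_r, V_t as in Sec. 4; T_ref assumed

def g(x):
    # g(x) = exp(x^2) * int_{-inf}^{x} exp(-u^2) du, Eq. (6), written via the error function
    return np.exp(x**2) * 0.5 * np.sqrt(np.pi) * (1.0 + erf(x))

def a(x):
    # a(x) = exp(x^2) * int_{-inf}^{x} exp(-u^2) g(u)^2 du, Eq. (6); -10 approximates -inf
    val, _ = quad(lambda u: np.exp(-u**2) * g(u)**2, -10.0, x)
    return np.exp(x**2) * val

def synaptic_moments(w, mu_in, sigma_in, r, rho):
    # Eq. (4): mean and standard deviation of the aggregated synaptic input
    mu = np.dot(w, mu_in) * (1.0 - r)
    ws = w * sigma_in
    var = (1.0 + r**2) * np.sum(ws * (ws + rho * (np.sum(ws) - ws)))
    return mu, np.sqrt(var)

def output_moments(mu, sigma):
    # Eqs. (5)-(7): first-passage moments T, TT and the output rate / standard deviation
    A = lambda y: (y * L - mu) / (sigma * np.sqrt(L))
    T = 2.0 / L * quad(g, A(V_r), A(V_t))[0]
    TT = 4.0 / L**2 * quad(a, A(V_r), A(V_t))[0]
    return 1.0 / (T + T_ref), np.sqrt(TT / T**3)
```

Chaining synaptic_moments and output_moments layer by layer gives the feed-forward pass of the network described below.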

We now confine ourselves to feed-forward networks, although the results presented below can be easily generalized to feedback and recurrent networks. Here we consider an output layer (label number l+1) of h = 1, ..., P neurons; l-1 hidden layers (label number i = 2, ..., l) with j = 1, ..., M^i neurons on each layer; and an input layer (label number 1) with j = 1, ..., N input units.

3. A Learning Rule of SOSN

For the purpose of this definition we assume the output units are themselves second order spiking neurons, though this need not be the case. The learning algorithm we use seeks to minimize the error between the actual network output and the desired target output response patterns. We therefore define the error function as follows:
$$E = \frac{1}{2}\sum_{h=1}^{P}\big(\mu_h^{l+1} - d_h\big)^2 + \gamma \sum_{h=1}^{P}\big(\sigma_h^{l+1}\big)^2 \qquad (8)$$
where $\mu_h^{l+1}$ and $\sigma_h^{l+1}$ are the mean and variance at output neuron h, as defined above in (7), $d_h$ is the desired target response of output neuron h, and $\gamma$ is a penalty factor.

The learning rule can be written as follows:
$$\Delta\omega_{ij}^{m} = -\eta\,\frac{\partial E}{\partial \omega_{ij}^{m}}, \qquad m = 1,\cdots,l, \qquad (9)$$
where $\eta$ is the learning rate.
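As an illustration of (8)-(9) (not part of the paper), the weight update can be sanity-checked against a finite-difference gradient. Here forward stands for any map from the weights to the output means and standard deviations, for example one built from Eqs. (4)-(7), and the default values of gamma and eps are placeholders.

```python
import numpy as np

def sosp_error(mu_out, sigma_out, d, gamma):
    # Eq. (8): squared mean error plus a gamma-weighted variance penalty
    return 0.5 * np.sum((mu_out - d) ** 2) + gamma * np.sum(sigma_out ** 2)

def finite_difference_update(w, forward, d, gamma=0.1, eta=3e-4, eps=1e-6):
    # Eq. (9), Delta w = -eta dE/dw, with the gradient estimated numerically;
    # the analytical gradients derived in Secs. 3.1-3.2 should agree with this estimate.
    grad = np.zeros_like(w)
    for idx in np.ndindex(*w.shape):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[idx] += eps
        w_minus[idx] -= eps
        grad[idx] = (sosp_error(*forward(w_plus), d, gamma)
                     - sosp_error(*forward(w_minus), d, gamma)) / (2.0 * eps)
    return w - eta * grad
```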

3.1. The Output Layer

Let $h_h^n \equiv \sqrt{g_h^n}$ for $n = 1, 2, \cdots, l+1$, and replace $\mu_h^{l+1}$ and $\sigma_h^{l+1}$ in (8) by $f_h^{l+1}$ and $g_h^{l+1}$ from (7); then we get
$$\frac{\partial E}{\partial \omega_{hi}^{l}} = \frac{\partial E}{\partial f_h^{l+1}}\frac{\partial f_h^{l+1}}{\partial T_h^{l+1}}\frac{\partial T_h^{l+1}}{\partial \omega_{hi}^{l}} + \gamma\left(\frac{\partial g_h^{l+1}}{\partial T_h^{l+1}}\frac{\partial T_h^{l+1}}{\partial \omega_{hi}^{l}} + \frac{\partial g_h^{l+1}}{\partial TT_h^{l+1}}\frac{\partial TT_h^{l+1}}{\partial \omega_{hi}^{l}}\right) \equiv \delta_{1h}\frac{\partial T_h^{l+1}}{\partial \omega_{hi}^{l}} + \delta_{2h}\frac{\partial TT_h^{l+1}}{\partial \omega_{hi}^{l}}, \qquad (10)$$
where
$$\delta_{1h} = \frac{\partial E}{\partial f_h^{l+1}}\frac{\partial f_h^{l+1}}{\partial T_h^{l+1}} + \gamma\frac{\partial g_h^{l+1}}{\partial T_h^{l+1}}, \qquad \delta_{2h} = \gamma\frac{\partial g_h^{l+1}}{\partial TT_h^{l+1}}. \qquad (11)$$
Furthermore,
$$\frac{\partial E}{\partial f_h^{l+1}} = f_h^{l+1} - d_h, \qquad \frac{\partial f_h^{l+1}}{\partial T_h^{l+1}} = \frac{-1}{(T_h^{l+1} + T_{\mathrm{ref}})^2},$$
$$\frac{\partial g_h^{l+1}}{\partial T_h^{l+1}} = \frac{-3\,TT_h^{l+1}}{(T_h^{l+1})^4}, \qquad \frac{\partial g_h^{l+1}}{\partial TT_h^{l+1}} = \frac{1}{(T_h^{l+1})^3},$$
$$\frac{\partial T_h^{l+1}}{\partial \omega_{hi}^{l}} = \frac{2}{L}\, g\big(A_h^l(y)\big)\, \frac{\partial A_h^l(y)}{\partial \omega_{hi}^{l}}\Big|_{y=V_r}^{y=V_t}, \qquad \frac{\partial TT_h^{l+1}}{\partial \omega_{hi}^{l}} = \frac{4}{L^2}\, a\big(A_h^l(y)\big)\, \frac{\partial A_h^l(y)}{\partial \omega_{hi}^{l}}\Big|_{y=V_r}^{y=V_t},$$
$$\frac{\partial A_h^l(y)}{\partial \omega_{hi}^{l}} = -\frac{\dfrac{\partial \mu_h^l}{\partial \omega_{hi}^{l}}\,\sigma_h^l - \big(yL - \mu_h^l\big)\dfrac{\partial \sigma_h^l}{\partial \omega_{hi}^{l}}}{\big(\sigma_h^l\big)^2\sqrt{L}},$$
$$\frac{\partial \mu_h^l}{\partial \omega_{hi}^{l}} = f_i^l(1-r), \qquad \frac{\partial \sigma_h^l}{\partial \omega_{hi}^{l}} = \frac{\Big(2\omega_{hi}^l g_i^l + \rho\, h_i^l \sum_{n\neq i,\, n=1}^{M^l}\omega_{hn}^l h_n^l\Big)\big(1+r^2\big)}{2\sigma_h^l}. \qquad (12)$$
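To make the bookkeeping in (11)-(12) explicit, the following small sketch (ours, not from the paper) evaluates the two delta terms for a single output neuron from its first-passage moments T and TT, its target d and the penalty gamma; the default T_ref is an assumed value.

```python
def output_deltas(T, TT, d, gamma, T_ref=5.0):
    # Eqs. (11)-(12) for a single output neuron h (T_ref value assumed):
    f = 1.0 / (T + T_ref)                   # output mean, Eq. (7)
    dE_df = f - d                           # dE/df
    df_dT = -1.0 / (T + T_ref) ** 2         # df/dT
    dg_dT = -3.0 * TT / T ** 4              # dg/dT
    dg_dTT = 1.0 / T ** 3                   # dg/dTT
    delta1 = dE_df * df_dT + gamma * dg_dT  # first delta term of Eq. (11)
    delta2 = gamma * dg_dTT                 # second delta term of Eq. (11)
    return delta1, delta2
```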


3.2. The Hidden Layers

For the hidden layer neurons we can write the chain rule:

$$\begin{aligned}
\frac{\partial E}{\partial \omega_{ij}^{l-1}} &= \frac{\partial E}{\partial f_i^{l}}\frac{\partial f_i^{l}}{\partial \omega_{ij}^{l-1}} + \frac{\partial E}{\partial h_i^{l}}\frac{\partial h_i^{l}}{\partial \omega_{ij}^{l-1}}\\
&= \sum_{h=1}^{P}\delta_{1h}\frac{\partial T_h^{l+1}}{\partial f_i^{l}}\frac{\partial f_i^{l}}{\partial T_i^{l}}\frac{\partial T_i^{l}}{\partial \omega_{ij}^{l-1}} + \sum_{h=1}^{P}\delta_{2h}\frac{1}{2h_i^{l}}\frac{\partial TT_h^{l+1}}{\partial h_i^{l}}\left(\frac{\partial g_i^{l}}{\partial T_i^{l}}\frac{\partial T_i^{l}}{\partial \omega_{ij}^{l-1}} + \frac{\partial g_i^{l}}{\partial TT_i^{l}}\frac{\partial TT_i^{l}}{\partial \omega_{ij}^{l-1}}\right)\\
&\equiv \delta_{1i}^{l}\frac{\partial T_i^{l}}{\partial \omega_{ij}^{l-1}} + \delta_{2i}^{l}\frac{\partial TT_i^{l}}{\partial \omega_{ij}^{l-1}},
\end{aligned} \qquad (13)$$
where
$$\delta_{1i}^{l} = \sum_{h=1}^{P}\delta_{1h}\frac{\partial T_h^{l+1}}{\partial f_i^{l}}\frac{\partial f_i^{l}}{\partial T_i^{l}} + \sum_{h=1}^{P}\delta_{2h}\frac{\partial TT_h^{l+1}}{\partial h_i^{l}}\frac{1}{2h_i^{l}}\frac{\partial g_i^{l}}{\partial T_i^{l}}, \qquad
\delta_{2i}^{l} = \sum_{h=1}^{P}\delta_{2h}\frac{\partial TT_h^{l+1}}{\partial h_i^{l}}\frac{1}{2h_i^{l}}\frac{\partial g_i^{l}}{\partial TT_i^{l}}, \qquad (14)$$
and $\frac{\partial f_i^{l}}{\partial T_i^{l}}$, $\frac{\partial g_i^{l}}{\partial T_i^{l}}$, $\frac{\partial g_i^{l}}{\partial TT_i^{l}}$ can be calculated by induction according to Eq. (12).

Furthermore, by (5)-(6), we get
$$\frac{\partial T_h^{l+1}}{\partial f_i^{l}} = \frac{2}{L}\, g\big(A_h^l(y)\big)\,\frac{\partial A_h^l(y)}{\partial f_i^{l}}\Big|_{y=V_r}^{y=V_t}, \qquad
\frac{\partial TT_h^{l+1}}{\partial h_i^{l}} = \frac{4}{L^2}\, a\big(A_h^l(y)\big)\,\frac{\partial A_h^l(y)}{\partial h_i^{l}}\Big|_{y=V_r}^{y=V_t},$$
$$\frac{\partial A_h^l(y)}{\partial f_i^{l}} = -\frac{\partial \mu_h^l}{\partial f_i^{l}}\cdot\frac{1}{\sigma_h^l\sqrt{L}}, \qquad
\frac{\partial A_h^l(y)}{\partial h_i^{l}} = -\frac{yL - \mu_h^l}{\big(\sigma_h^l\big)^2\sqrt{L}}\cdot\frac{\partial \sigma_h^l}{\partial h_i^{l}}. \qquad (15)$$
Finally, by (4),
$$\frac{\partial \mu_h^l}{\partial f_i^{l}} = \omega_{hi}^l(1-r), \qquad
\frac{\partial \sigma_h^l}{\partial h_i^{l}} = \frac{\Big(h_i^l\omega_{hi}^l + \rho\sum_{n\neq i,\, n=1}^{M^l}\omega_{hn}^l h_n^l\Big)\,\omega_{hi}^l\,\big(1+r^2\big)}{2\sigma_h^l}. \qquad (16)$$

So we can give a general δ rule for any hidden layer:
(i) For layer l connected to the output layer l+1,
$$\delta_{1i}^{l} = \frac{\partial f_i^{l}}{\partial T_i^{l}}\sum_{h=1}^{P}\delta_{1h}\frac{\partial T_h^{l+1}}{\partial f_i^{l}} + \frac{1}{2h_i^{l}}\frac{\partial g_i^{l}}{\partial T_i^{l}}\sum_{h=1}^{P}\delta_{2h}\frac{\partial TT_h^{l+1}}{\partial h_i^{l}}, \qquad
\delta_{2i}^{l} = \frac{1}{2h_i^{l}}\frac{\partial g_i^{l}}{\partial TT_i^{l}}\sum_{h=1}^{P}\delta_{2h}\frac{\partial TT_h^{l+1}}{\partial h_i^{l}}. \qquad (17)$$
(ii) For all other hidden layers,
$$\delta_{1i}^{l-1} = \frac{\partial f_i^{l-1}}{\partial T_i^{l-1}}\sum_{h=1}^{M^l}\delta_{1h}^{l}\frac{\partial T_h^{l}}{\partial f_i^{l-1}} + \frac{1}{2h_i^{l-1}}\frac{\partial g_i^{l-1}}{\partial T_i^{l-1}}\sum_{h=1}^{M^l}\delta_{2h}^{l}\frac{\partial TT_h^{l}}{\partial h_i^{l-1}}, \qquad
\delta_{2i}^{l-1} = \frac{1}{2h_i^{l-1}}\frac{\partial g_i^{l-1}}{\partial TT_i^{l-1}}\sum_{h=1}^{M^l}\delta_{2h}^{l}\frac{\partial TT_h^{l}}{\partial h_i^{l-1}}. \qquad (18)$$

Thus the weight correction for the hidden layers is
$$\Delta\omega_{ij}^{m} = -\eta\left(\delta_{1i}^{m+1}\frac{\partial T_i^{m+1}}{\partial \omega_{ij}^{m}} + \delta_{2i}^{m+1}\frac{\partial TT_i^{m+1}}{\partial \omega_{ij}^{m}}\right) \qquad (19)$$
where m = 1, ..., l-1.

A SOSN combined with the above learning rule is termed a Second Order Spiking Perceptron (SOSP).
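The recursions (17)-(18) amount to a two-channel backpropagation of the mean and variance errors. The sketch below is our own illustration under hypothetical argument names: it folds the deltas of the next layer back into those of a single neuron i, given the relevant partial derivatives computed as in (12) and (15)-(16).

```python
import numpy as np

def backprop_deltas(delta1_next, delta2_next, dT_next_df, dTT_next_dh,
                    df_dT, dg_dT, dg_dTT, h_i):
    # Eqs. (17)-(18): combine the delta terms of the next layer into those of neuron i.
    # delta1_next, delta2_next, dT_next_df, dTT_next_dh are arrays over the next layer's
    # neurons h; df_dT, dg_dT, dg_dTT, h_i are scalars belonging to neuron i.
    s1 = np.dot(delta1_next, dT_next_df)    # sum_h delta_1h * dT_h/df_i
    s2 = np.dot(delta2_next, dTT_next_dh)   # sum_h delta_2h * dTT_h/dh_i
    delta1_i = df_dT * s1 + dg_dT * s2 / (2.0 * h_i)
    delta2_i = dg_dTT * s2 / (2.0 * h_i)
    return delta1_i, delta2_i
```

The resulting deltas are then combined with the derivatives of T and TT with respect to the weights, as in (19), to obtain the weight correction.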

4. Applications

For all simulations in this letter, the decay rate, threshold and resting potential are set to L = 0.05, V_t = 20 mV and V_r = 0 mV, respectively.

4.1. Decision Boundary I

In the experiments shown here a single neuron with two weighted inputs and initial CV = 1 was tested, where CV denotes the coefficient of variation of the ISIs, $CV = \sqrt{TT/T^2}$. In the simulations, the initial value of CV is used to initialize the variance of the ISIs.

Varying the ratio r between excitatory and inhibitory inputs. To exhibit how the output decision boundary (ODB) varies with the ratio r, we first choose the correlation coefficient ρ = 0.2, as suggested by previous experimental research [4], and fix unitary weight values.

Figures 1 to 3 show the contour plots (left panels) and surface plots (right panels) of the ODB, shown here for input pairs in the range (-2, -2) to (2, 2), for three different cases: r = 1, r = 0.5 and r = 2. For instance, Fig. 1 shows that for the case r = 1, i.e. a neuron receiving balanced excitatory and inhibitory inputs, the ODB is a 'quasi-lozenge'.

This shows that the ratio between excitatory and inhibitory input is able to change the ODB. The comparison of the perceptron models with different ratios, moreover, highlights the role played by inhibition in information transmission. In fact, inhibition appears to have a more sophisticated role in spiking networks than the dampening effect it exhibits in classical modelling.


Figure 1. The ODB for the case r = 1, ρ = 0.2: 2-D contour plot (left) and 3-D surface plot (right) over Input1 and Input2.

Figure 2. The ODB for the case r = 0.5, ρ = 0.2: 2-D contour plot (left) and 3-D surface plot (right).

Figure 3. The ODB for the case r = 2, ρ = 0.2: 2-D contour plot (left) and 3-D surface plot (right).

Figure 4. The contour plots of the ODB for the cases r = 1, ρ = −0.5 (left) and r = 1, ρ = −0.2 (right).

Figure 5. The contour plots of the ODB for the cases r = 1, ρ = 0 (left) and r = 1, ρ = 0.5 (right).

Varying the correlation coefficient ρ. To illustrate the effect of input correlations on the system's output, we then run a series of simulations for the case r = 1 while varying the input correlation coefficient ρ. Figures 4-5 (together with Fig. 1) show the ODB for ρ ranging from -0.5 to 0.5. It is apparent that various types of non-linear ODB can be obtained by adjusting the input correlation coefficient ρ. In more detail, the ODB is quadrate when ρ = 0; the boundaries become more and more contracted as ρ increases (see the left panels of Fig. 1 and Fig. 5), and more and more outstretched as ρ decreases (see the left panels of Fig. 5 and Fig. 4). For instance, the contour plot of the ODB with ρ = -0.5 resembles a 'flower', while that with ρ = 0.5 looks like a 'star'. Therefore we can perform various non-linear tasks by adjusting the input correlation coefficient. Moreover, the correlation coefficient changes the scale of the boundary, and the output is monotonically increasing with respect to the correlation coefficient. This can be interpreted as meaning that a neuron with a given mean input is more likely to fire when there are fluctuations about this mean, and correlated inputs increase these fluctuations.

In conclusion, our perceptrons can produce various complicated non-linear output decision boundaries. More importantly, the ratio r and the correlation coefficient ρ seem to play the roles of 'shape' and 'scale' parameters, respectively.

4.2. XOR Problem

We consider solving the XOR problem with a SOSP with CV = 1. The model with r = 1 and ρ = 0.2 was tested using polar coordinate input pairs: (-1, 0), (1, 0), (0, 1) and (0, -1). During the training cycle the weights were adjusted using the algorithm introduced in Sec. 3. The left plot of Fig. 6 shows the ODB changing over the input space (-2, -2) to (2, 2) during the training process. Prior to training the weights were given unitary values and a learning rate of η = 0.0003 was set. The contour plot shows how the ODB changes at each iteration in the training cycle. The right plot of Fig. 6 shows the final surface plot after training. The final ODB clearly encompasses the input pairs (-1, 0) and (1, 0), effectively classifying the XOR input with a single neuron, whereas this is impossible for a traditional single-layer perceptron.

4.3. Decision Boundary II

The models presented above are merely single neuronsand a progression would be to examine some of the proper-ties of a multilayer network.

Without loss of generality, we only discuss the multilayer SOSP with one hidden layer. Here the simulations are based on unitary weight values. Various boundaries can be produced by varying the ratio as well as the correlation coefficients on each layer.


Figure 6. The ODB for a SOSP trained on the XOR problem with CV = 1, r = 1, ρ = 0.2: initial and final decision boundaries (2-D contour plot, left) and the final decision boundary (3-D surface plot, right).

Figure 7. The ODB (3-D surface plots) for r = (0.5, 0), ρ = (0.2, 0.2) (left) and r = (1, 2), ρ = (0.2, 0.2) (right), where the paired values give the parameters on the first and second layers.

Some of the resulting boundaries are exhibited in Figures 7-9. For instance, Fig. 7 (left) shows the ODB with r = 0.5, ρ = 0.2 on the first layer and r = 0, ρ = 0.2 on the second layer.

It follows that a network with two layers can produce a more complicated ODB, since the shape can be adjusted through the ratios r and the correlation coefficients ρ on each layer. Moreover, the decision boundary may become more and more complicated as the number of network layers increases; we omit the corresponding details for simplicity. We can draw the general conclusion that the non-linear tasks that can be performed by a multilayer SOSP are more complicated and varied than those performed by a single-layer SOSP.

Figure 8. The ODB (3-D surface plots) for r = (2, 0), ρ = (0.2, 0.2) (left) and r = (0, 2), ρ = (0.2, 0.2) (right).

Figure 9. The ODB (3-D surface plots) for r = (0, 0), ρ = (0.5, −0.9) (left) and r = (0, 0), ρ = (−0.9, 1) (right).

5. Discussions

It is obvious that the total error is composed of the mean error and the variance error. During training, not only the mean but also the variance can be trained. It is therefore natural to ask whether both of them can reach the desired tolerance simultaneously, whether the variance error can be minimized once the mean error has reached the desired tolerance, whether we can easily achieve the trade-off between the mean and variance errors by adjusting the penalty factor for a given learning task, and so on. We will explore these questions in future publications.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (10571051) and the Scientific Research Fund of Hunan Provincial Education Department (08C588).

References

[1] J. F. Feng, Computational Neuroscience: A Comprehensive Approach. Chapman & Hall/CRC Press (2003).

[2] J. F. Feng and D. Brown, Impact of correlated inputs on the output of the integrate-and-fire models, Neural Computation, 12: 671-692 (2000).

[3] J. F. Feng, Y. C. Deng and E. Rossoni, Dynamics of moment neuronal networks, Phys. Rev. E, 73: 041906 (2006).

[4] P. B. C. Matthews, Relationship of firing intervals of human motor units to the trajectory of post-spike after-hyperpolarization and synaptic noise, Journal of Physiology, 492: 596-628 (1996).

[5] E. Salinas and T. J. Sejnowski, Correlated neuronal activity and the flow of neural information, Nat. Rev. Neurosci., 2: 539-550 (2001).

[6] R. Stein, Some models of neuronal variability, Biophysical Journal, 7: 37-68 (1967).

[7] H. C. Tuckwell, Introduction to Theoretical Neurobiology. Cambridge University Press, Cambridge (1988).

[8] X. Y. Xiang, Y. C. Deng and X. Q. Yang, Spike-rate perceptrons, Proceedings of the Fourth International Conference on Natural Computation, 3: 326-333 (2008).
