Effects of spike-frequency adaptation on neural
models, with applications to biologically inspired
robotics
David McMillen
A thesis submitted in conformity with the requirements
for the degree of Doctor of Philosophy, Graduate Department of Aerospace Engineering
University of Toronto
National Library of Canada / Bibliothèque nationale du Canada
Acquisitions and Bibliographic Services / Acquisitions et services bibliographiques
395 Wellington Street, Ottawa ON K1A 0N4, Canada
The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.
The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.
"I hate quotations." - Ralph Waldo Emerson
Thesis title: Effects of spike-frequency adaptation on neural models, with applications to
biologically inspired robotics
David Ross McMillen
Doctor of Philosophy
Year of convocation: 2000
Department of Aerospace Engineering
University of Toronto
Abstract
Animals are impressive biological machines, and their ability to handle unstructured environments is something roboticists wish to emulate. The behavioural competence of animals derives largely from the functioning of their nervous systems. Mathematical modelling of the functioning of neurons may enable us to extract useful principles from biology to be applied in robotics. Here, several systems with relevance to biologically inspired robotics are analyzed. The qualitative dynamics of a biological property called spike-frequency adaptation are added to existing analog neural models, and analysis shows the conditions under which the augmented model can generate oscillatory solutions. A network of these augmented analog neurons is then used to generate a walking gait for a six-legged robot in such a way that the system recovers rapidly from perturbations to the legs. The dynamics of oscillations arising in two coupled populations of integrate-and-fire neurons are studied; an analysis of the system provides good predictions of the oscillatory period and the range of coupling strengths for which oscillations will occur. A signal-processing phenomenon known as noise-shaping, wherein noise in a system is shifted out of the low frequencies up into higher frequency ranges, is demonstrated in networks of integrate-and-fire and conductance-based neurons; it is shown that spike-frequency adaptation provides certain signal-processing advantages in such networks. The effect of spike-frequency adaptation on the variability in integrate-and-fire neurons' firing records is analyzed.
Acknowledgements
I wish to thank Gabriele D'Eleuterio for his support and guidance during this thesis, and Janet Halperin for her patience in helping me to learn about the initially foreign world of biology. Many other people have provided helpful advice during this thesis; I would especially like to thank James Collins and Raymond Kapral. And though it may be a bit of a cliché to thank one's spouse in the Acknowledgements, I'm going to do it anyway: Cynthia, thanks for everything.
Contents
Abstract iii
Acknowledgements iv
Lists of Tables and Figures viii
1 Introduction and Background 1
1.1 Motivation
1.2 Nervous systems
1.3 Neuron morphology
1.3.1 Dendrites
1.3.2 Soma
1.3.3 Axon
1.3.4 Axon terminals
1.4 Neuron electrical properties
1.4.1 Membrane potential
1.4.2 Action potential generation
1.4.3 Synaptic coupling
1.4.4 Spike-frequency adaptation
1.5 Neural models
1.5.1 Compartmental models
1.5.2 Conductance-based (Hodgkin-Huxley) models
1.5.3 FitzHugh-Nagumo model
1.5.4 Integrate-and-fire model
1.5.5 Hopfield's analog model
1.6 Thesis overview
1.7 Local abbreviations

2 Phasic analog neurons 24
2.1 Local abbreviations 24
2.2 Introduction and background 25
2.3 Phasic analog neurons 28
2.4 Oscillatory solutions: Hopf bifurcation 29
2.5 Discussion 44
2.6 Future directions 44

3 Walking gait generation 46
3.1 Introduction 46
3.2 Gaits 47
3.3 Coupled neural oscillators 47
3.3.1 Individual oscillators 47
3.3.2 Neural coupling 49
3.4 Single leg 49
3.5 Two legs 52
3.6 Six legs 52
3.7 Future directions 56

4 Oscillations in pools of coupled neurons 59
4.1 Local abbreviations 59
4.2 Introduction and background 60
4.3 Neuron model 61
4.3.1 Dimensional form 61
4.3.2 Dimensionless form 63
4.3.3 Synaptic coupling 66
4.4 Population activity 66
4.4.1 Activity in a single population 68
4.4.2 Activity in two coupled populations 73
4.5 Onset of oscillations 78
4.6 Period of oscillations 83
4.7 Oscillator death 88
4.8 Future directions 90

5 Noise-shaping in populations of coupled neurons 93
5.1 Acknowledgement 93
5.2 Background: analog to digital conversion 93
5.3 Neural noise-shaping 96
5.3.1 Calculation of power spectra 99
5.4 Adapting integrate-and-fire neurons 99
5.4.1 Neuron model 99
5.4.2 Effect of Poisson noise 100
5.4.3 Effect of adaptation on DR and SNR 101
5.5 Conductance-based neurons 104
5.5.1 Neuron model 104
5.5.2 Noise-shaping results 109
5.5.3 Effect of adaptation 111

6 Effect of adaptation on neural variability 115
6.1 Local abbreviations 115
6.2 Introduction and background 115
6.2.1 Notation for probability calculations 116
6.3 Neuron model 118
6.4 Random voltage reset 119
6.4.1 Calculation of CV 119
6.4.2 Approximations for c dynamics 121
6.5 Random threshold reset 123
6.5.1 Calculation of CV 123
6.5.2 Approximation of c dynamics 125
6.6 Discussion 125
6.6.1 Difference between voltage and threshold resets 125
6.6.2 Effect of adaptation 128

7 Summary and Conclusions 130
List of Tables
1.1 Numbers of neurons in various species 3
1.2 Functional roles of neural regions 5
1.3 Neuron models 14
4.1 Parameters for adapting integrate-and-fire model, dimensional form 63
4.2 Dimensionless constants for nondimensionalized adapting IF model 65
4.3 Values of C1 and C2: theory and numerical simulation 79
5.1 Currents used in the conductance-based model 104
5.2 Parameters in conductance-based model 109
List of Figures
1.1 Schematic pictures of biological neurons 4
1.2 A section of a neuron's cellular membrane 8
1.3 Gradients affecting ion flows across the neural membrane 9
1.4 Voltage trace from a rat somatosensory cortex neuron 11
1.5 Structure of a chemical synapse 13
1.6 Channel variables in the Hodgkin-Huxley equations 17
1.7 Spiking in a conductance-based (Hodgkin-Huxley type) model 19
1.8 Schematic of integrate-and-fire model 20
2.1 "Half-center" oscillator 26
2.2 Phasic and tonic analog neurons 27
2.3 Frequency response of phasic analog neuron 30
2.4 Sigmoidal firing rate function 35
2.5 Stability boundaries, sigmoidal activation function 36
2.6 Stability coefficient a vs. 0, sigmoidal activation function 37
2.7 Nonsigmoidal firing rate function 39
2.8 Stability boundaries, nonsigmoidal activation function 40
2.9 Multistability in the half-center oscillator 41
2.10 Neural outputs, stable regime 42
2.11 Neural outputs, unstable regime 43
3.1 Phase relationships for two common hexapod gaits 48
3.2 Base of support in the tripod gait 48
3.3 Oscillators coupled to give anti-phase oscillations 50
3.4 Single leg position vs. time 51
3.5 Antiphase coupling for two legs 53
3.6 Trajectories of two coupled legs 54
3.7 Coupling pattern for tripod gait generating network 55
3.8 Tripod gait, generated by six coupled oscillators 57
3.9 Tripod gait in the presence of noise 58
4.1 Two coupled pools of individually spiking neurons
4.2 Alternating bursts of firing in two coupled pools
4.3 Response of adapting IF neuron to constant input current
4.4 Synaptic current output from a single neuron
4.5 Raster plot illustrating population activity
4.6 Time course of activity and average calcium level in an uncoupled population
4.7 Determination of the activity in two coupled pools, weak coupling
4.8 Eigenvalues for calcium dynamics
4.9 Movement in C1-C2 space, weak coupling
4.10 Strong coupling leads to multiple intersections in F1, F2 curves
4.11 Oscillatory solution for two coupled pools
4.12 Plot of g(C1), a function used to solve for C1
4.13 First derivative of g(C1), a function used to solve for C1
4.14 Comparison of theory with simulation results: T vs. K, tau
4.15 Fluctuation-induced transitions
5.1 Effect of quantization on a signal
5.2 Schematic of a delta-sigma converter
5.3 Effect of delta-sigma modulator on noise power spectrum
5.4 Schematic of an integrate-and-fire neuron
5.5 Power spectra with Poisson noise vs. random reset
5.6 Power spectra with and without adaptation, IF model
5.7 Voltage and calcium traces of conductance-based model
5.8 Firing rate vs. applied current for conductance-based model
5.9 Firing in conductance-based neuron with Poisson noise
5.10 Noise-shaping in a network of conductance-based neurons
5.11 Raster plots: comparing coupled network with and without adaptation
5.12 Effective number of neurons in network vs. N
6.1 Effect of adaptation on IF neuron variability: numerical results
6.2 Random voltage reset, uniform pdf
6.3 Random voltage reset: theory and numerical results
6.4 Random threshold reset, uniform pdf
6.5 Random threshold reset: theory and numerical results
6.6 Difference between voltage and threshold resets
6.7 Variation of vr with firing rate, with and without adaptation
Chapter 1
Introduction and Background
1.1 Motivation
When watching an animal moving around in its environment, it is always impressive to consider the fluidity of its movement, the complexity of the decision-making it exhibits, and the speed with which it reacts to new situations. Considered simply as devices, biological organisms are remarkable machines. They have been tuned by aeons of natural selection to be able to handle a complex world, moving over difficult surfaces and making rapid and effective decisions in response to a barrage of sensory stimuli.
Robots are not currently able to match this level of performance, and any roboticist observing an animal feels a sense of awe, and of envy. We would like to be able to build devices as robust and flexible as animals: imagine a robot capable of scampering around on the surface of Mars and gathering rock samples as competently as a squirrel collects and caches nuts. This sense of wonder has led, in recent years, to an increased interest in understanding the operation of these biological "machines," with the goal of extracting principles that may be applied in robotics. The idea is essentially to perform a sort of reverse engineering: the mechanisms that generate these behaviours exist, and are available for study, so why not use this information in future designs? This approach has led to the formation of a new field, called variously "biological robotics," "biologically inspired robotics," or "biorobotics."
Of course, the concept of emulating animal forms when building artificial devices is nothing new. For centuries, engineers have looked to nature for inspiration for the design of various automata. Biologically inspired robotics may be seen as simply a revival of this tradition. However, the tremendous advances in modern neuroscience offer the possibility that we may be able to move beyond emulating the external forms of animals, and start to gain some understanding of the mechanisms which underlie their behaviour.
Strictly speaking, everything about an animal's physiology contributes to its behaviour. It is clear, however, that the key player in generating behaviour is the nervous system, with its huge networks of coupled, signal-processing cells, the neurons. Most workers have focused on neurons as the keys to understanding animal behaviour, and I will do so in this thesis. It seems clear that if we are to transfer insights from biological "wetware" to artificial hardware, we will require an understanding of the operation of nervous systems at a mathematical level. My thesis work has thus concentrated on mathematical models of neurons. Clearly we are nowhere near truly understanding how nervous systems operate; the work presented here consists of a few problems to which mathematical and computational techniques may be successfully applied. In particular, I consider the effects of a common property of biological neurons, spike-frequency adaptation, on various neural models, showing how it alters the dynamics of neurons in ways which alter their oscillatory behaviour, their signal-processing capabilities, and their response to noise. My hope is that work such as this may serve as a starting point for future attempts to transfer information between the realms of biology and robotics.
The following sections will provide an introduction to the structure and function of neurons, then proceed to discuss a few of the common mathematical models used to represent them. The chapter will conclude with an overview of the contents of the remaining chapters.
1.2 Nervous systems
An organism's nervous system is simply the complete set of neurons (and associated support cells) that it uses to process signals and produce behavioural responses. In vertebrates, this is divided into two main sections: the central nervous system (CNS), consisting of the brain and spinal cord; and the peripheral nervous system (PNS), consisting of the nerve cells extending out from the spinal cord into the body, carrying signals to and from the brain.

The most striking feature of nervous systems is the massive number of individual neurons involved. Although the simplest nervous systems have a relatively small number of individual cells, the number grows rapidly as the size and "complexity" (a word I will not attempt to define here) of the organism increases; see Table 1.1.
In addition to neurons, nervous systems contain large numbers of glial (or neuroglial) cells, outnumbering neurons by about nine to one [5]. The glia serve a number of support functions for the neurons: they act as a structural scaffold; certain glial cells produce the myelin sheath which coats the neural axons (see section 1.3.3); and they provide active buffering to maintain the required ionic concentrations in the vicinity of neurons. It is possible that glial cells may have important signal-processing properties [6], but the work presented here will focus exclusively on neural modelling, leaving aside the influence of glial cells.
A great deal is known about the neuroanatomy of various organisms (the structure, interconnections, and functional roles of different areas of the central nervous system). Rather than
Table 1.1: Numbers of neurons in various species. Exact numbers of neurons are difficult to establish, so these numbers are approximate; sources for the figures are given in the third column. The figure for the number of neurons in the human brain is particularly variable: the most commonly cited figures range from 100 billion to 1 trillion.
discussing anatomy here, the remainder of the chapter will concentrate on the basic structure and operation of individual neurons (sections 1.3 and 1.4), and on some of the mathematical models that have been proposed (section 1.5).
There is a vast literature on neuroscience. Good introductions can be found in [7, 8, 9, 10, 11, 12]; any of these will provide a starting point for the interested reader, and these introductory remarks have drawn extensively on these sources. Zigmond et al. [7] and Kandel and Schwartz [9] are particularly comprehensive and clearly written. Arbib [8] provides an encyclopaedia-style reference, with short review pieces on a wide range of topics.

The sheer variety of cells and organisms in the biological world means that it is difficult to make definitive statements of the form, "Neurons always display property X." Inevitably, there will be examples of cells that do not display X, or indeed display the opposite of X. The material presented here will address the most common properties rather than dwelling on the exceptions.
Table 1.1:

Common name     Latin name                Approximate number of neurons
Nematode worm   Caenorhabditis elegans    300 [1]
House fly       Musca domestica           340 x 10^3 [2]
Honeybee        Apis mellifica            850 x 10^3 (worker); 1.2 x 10^6 (drone) [2]
Frog            Rana esculenta            16 x 10^6 [3]
Octopus         Octopus vulgaris          300 x 10^6 [4]
Elephant        Loxodonta africana        200 x 10^9 [4]
Dave            Homo sapiens              100 x 10^9 - 10^12 [4, 5]

1.3 Neuron morphology

There are many different varieties of neurons, perhaps as many as a thousand distinct types [5]. However, they may all be placed into one of three broad categories: sensory neurons; motor neurons; and interneurons. Sensory neurons transduce external energies (light, mechanical vibrations, heat, and so on) into neural electrical signals. Motor neurons stimulate muscle spindles and thus cause movements in the organism's body. Interneurons do not have direct connections to either sensory or motor systems; rather, they receive their inputs from, and send their outputs to, other neurons. (For those with a background in artificial neural network models: sensory neurons, interneurons, and motor neurons are roughly equivalent to input, hidden layer, and output nodes in artificial neural networks.)
Figure 1.1: Schematic pictures of various types of biological neurons. From Molecular Cell Biology by Lodish et al. [13], © 1986, 1990, 1996 by Scientific American Books, Inc. Used with permission of W. H. Freeman and Company.
Table 1.2: Functional roles of the four main neural regions. These are, of course, not the only functions these portions of the neuron carry out, but they are the most significant in terms of signalling in the nervous system.

Neural region     Main functional role
Dendrites         Input
Soma              Integration of input
Axon              Propagation of output signal
Axon terminals    Output

Figure 1.1 shows stylized representations of the main types of neuron. Neurons are divided into four main morphologically distinct regions (to be discussed in the next four sections): the dendrites; the soma (or cell body); the axon; and the axon terminals. Table 1.2 summarizes the functional roles of each of these regions. The flow of signals through a neuron is typically as follows:

- incoming signals (from sensory receptors or from the axons of other cells) are received at the dendrites
- the signals from the dendrites are integrated at the soma
- when the cellular membrane voltage at the beginning of the axon (the axon hillock) rises far enough, an action potential (see section 1.4.2) is generated
- the action potential, a brief pulse of increased membrane voltage, travels down the axon
- when an action potential (or "spike") arrives at the end of the axon, it triggers the release of chemicals known as neurotransmitters, which diffuse across a gap (the synapse) between the axon terminals and the dendrites of another cell, influencing the generation of action potentials in this second cell
The electrical properties of the neural membrane will be discussed in section 1.4. In the following sections, I briefly describe each of the four main divisions found in the typical neuron.
1.3.1 Dendrites

The dendrites are a set of fine, highly branched structures which convey inputs from other neurons to the cell body; typical neurons will receive dendritic inputs from hundreds or thousands of other cells [5, 7]. These inputs arrive in the form of neurotransmitters diffusing across a gap (the synapse) between the dendrite and the axon terminal of an incoming axon. These neurotransmitters are chemicals which affect the electrical potential of the cell (increasing or decreasing it; see section 1.4.3), and this voltage change is conveyed along the dendrite to the cell body. Until recently, dendrites were viewed as passive "cables" conveying voltage changes towards the cell body, but it is now known that some dendrites are active elements that generate pulses similar to the action potential produced in the axon [14, 15].
1.3.2 Soma

The soma, or cell body (also called the perikaryon), contains the cell's nucleus and much of the biochemical machinery of the neuron. It is where gene expression takes place, and where inputs from the dendritic tree are combined.

Like all cells, the neuron is surrounded by a cellular membrane. This membrane has an electrical potential difference across it, arising from differences in ionic concentrations on the interior and exterior surfaces of the membrane. This potential is the key to the neuron's ability to process signals. See section 1.4 for more detail; briefly, the dendritic inputs are integrated by combining their effects on the soma's membrane voltage. When the membrane voltage rises far enough, an action potential is generated in the axon.
1.3.3 Axon

The axon is a tubular structure that emerges from the cell body and extends for some distance away from it. Axons generally reach much farther away from the soma than the dendrites do, and some axons may be over one meter long. (For example, in humans there are single axons that extend from the hands and feet to the spinal cord.) At the end of the axon, many fine branches called axon terminals (or presynaptic terminals, or synaptic boutons) emerge from the axon and connect it to other cells.

The axon's functional role is to propagate the neuron's output signals from the cell body to the axon terminals, where contact is made with other cells (either neurons or muscle cells), which may be influenced by the output. The neuron's output takes the form of action potentials, brief pulses of high membrane voltage that propagate in a self-regenerating manner down the axon's length. Many axons have an insulating sheath made of a fatty substance called myelin; this insulation is broken at intervals, and the exposed sections of bare axon are called the nodes of Ranvier (see Figure 1.1). The myelin sheath greatly increases propagation speed in the axon, by allowing the action potential to "jump" from one node of Ranvier to the next, a process called saltatory conduction; see [13, 16] for more detail.
1.3.4 Axon terminals

The axon terminals are specialized bulbs at the ends of fine branches emerging from the end of the axon. They rest near the cellular membranes of other cells (muscles or other neurons), and affect the state of these other cells when action potentials arrive after being propagated down the axon.

Motor neurons have axon terminals that attach to muscle fibres (see Figure 1.1), and the effect of incoming action potentials is to stimulate these fibres to contract. The axon terminals of interneurons typically make contact with the dendrites of other neurons (axo-dendritic connections), but it is also possible for them to contact the cell bodies (axo-somatic connections) or the axons (axo-axonic connections) of other cells [17].

The small distance between the axon terminal and the cell it influences is called the synapse (or synaptic cleft); see Figure 1.5 on page 13. The cell whose axon terminal is doing the signalling is therefore known as the presynaptic cell, while the cell which is being influenced is called the postsynaptic cell. There are two main types of synapse: chemical and electrical. In chemical synapses, the presynaptic cell exudes neurotransmitters which affect the postsynaptic cell. Electrical synapses have direct electrical coupling through membrane-bound proteins called gap junctions, through which the two cells can directly exchange ions.
1.4 Neuron electrical properties
1.4.1 Membrane potential
Neurons, like most other cells, have an electrical potential difference (called the membrane potential or membrane voltage) between the inside and outside of their cellular membrane, maintained by differences in the distribution of ions on the interior and exterior membrane surfaces. The main ions which vary across the cellular membrane are: potassium (K+); sodium (Na+); chloride (Cl-); and calcium (Ca2+).

The cellular membrane is selectively permeable. Most of the membrane consists of a lipid bilayer which prevents almost all substances from crossing, but bound into this are proteins called ion channels that span the membrane and allow ions to travel from the extracellular fluid to the cytoplasm. Ion channels are typically ion-specific, only allowing ions of a particular species to pass through. Some ion channels are voltage-dependent, changing their permeability as the membrane voltage changes. Figure 1.2 shows a section of the cellular membrane, with ion channels indicated.
There are two gradients which act to drive ions across the membrane: electrical potential, and concentration. See Figure 1.3. Positive ions flow to regions of negative electrical potential, while negative ions seek positive potentials. At the same time, ions tend to flow from high concentration to low concentration regions. A steady state is reached when the fluxes induced by the electrical potential and concentration gradients are equal, and ions flow out of the cell as quickly as they flow in. The voltage at which this occurs is called the equilibrium potential (also known as the Nernst potential), and is given by the Nernst equation,
Figure 1.2: A section of a neuron's cellular membrane. The structures shown spanning the membrane are ion channels, each of which is associated with a particular species of ion. Next to each channel, typical concentrations of its affiliated ion are given (in mM, except for intracellular Ca2+). The values E_Na, E_K, and so on are the equilibrium potentials for each ion (in mV). (In the text, the more common notation V_K and so on has been used, as in equation (1.1).) The Na+-K+ pump is shown at the bottom of the figure: this is an ion pump that acts to keep the intracellular and extracellular concentrations of sodium and potassium from approaching their equilibrium values; the pump expends energy to bring K+ into the cell, while removing Na+. From Fundamental Neuroscience, edited by Zigmond et al. [7] (c) 1999 by Academic Press. Used with permission by Academic Press.
Figure 1.3: Gradients affecting ion flows across the neural membrane. The ion, here potassium (K+), is at a higher concentration inside the cell than outside, and thus tends to flow down the concentration gradient and out of the cell. Opposing this is the voltage difference across the membrane: the inside is more negative than the outside, so that the positive K+ ions tend to flow down the voltage gradient and into the cell. At the equilibrium (Nernst) potential, the flows induced by the two gradients are balanced, and there is no net change in concentration. From Fundamental Neuroscience, edited by Zigmond et al. [7] (c) 1999 by Academic Press. Used with permission by Academic Press.
    V_ion = (RT / zF) ln([ion]_out / [ion]_in),                    (1.1)

where: R is the gas constant (8.314 J K^-1 mol^-1); T is the absolute temperature; F is Faraday's constant (96,485 C mol^-1); z is the charge of the ion; and [ion]_out and [ion]_in are the concentrations of the ion outside and inside the cell, respectively.
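Equation (1.1) is easy to evaluate numerically. A minimal sketch (the potassium concentrations are the illustrative values from Figure 1.2, and a temperature of 310 K is an assumption made here for concreteness):

```python
import math

R = 8.314     # gas constant, J K^-1 mol^-1
F = 96485.0   # Faraday's constant, C mol^-1

def nernst_mV(conc_out, conc_in, z=1, T=310.0):
    """Equilibrium (Nernst) potential, equation (1.1), in millivolts."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Potassium: roughly 3 mM outside and 135 mM inside the cell.
V_K = nernst_mV(3.0, 135.0)   # about -102 mV, as computed in the text below
```

Reversing the concentration ratio (or the sign of z) flips the sign of the potential, as the logarithm makes clear.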
If the membrane were permeable to only a single ion species, the membrane potential would be exactly the equilibrium potential for that ion; for example, a membrane permeable only to potassium would have an equilibrium potential of V_K = (RT/zF) ln(3/135) = -102 mV. In fact, the membrane is permeable to multiple species of ions, and the membrane voltage is the result of the combined effects of all of them, weighted by the relative permeability of the membrane to each ion. The resulting membrane voltage varies among different types of neurons, from about -75 mV to -40 mV [7].

Since the membrane potential sits at a value which is not equal to the equilibrium potential for any particular ion species, the flows of ions into and out of the cell do not balance. For example, a cell with a membrane voltage of -60 mV is above the equilibrium potential for potassium, and below the equilibrium potential for sodium. This means that K+ ions flow out of the cell while Na+ ions flow into it, each following its concentration gradient. To maintain the high potassium and low sodium concentrations inside the cell, proteins called ion pumps act to transport ions against their concentration gradients, expending energy to do so. The Na+-K+ pump brings potassium back into the cell while removing sodium, obtaining the energy for this from the hydrolysis of ATP (as shown in Figure 1.2). Other ion pumps perform similar operations for the other ionic species.
Early neurophysiologists discovered this potential difference across the cellular membranes of neurons, and described the neuron as being polarized. This has led to the following use of terminology, which can be rather confusing initially: increasing the membrane voltage is called depolarization (making the cell less polarized, i.e. moving it away from its negative resting value and up towards 0 mV), while decreasing the voltage is called hyperpolarization (polarizing the cell further, i.e. moving it to a more negative value, further away from 0 mV). Synaptic or other influences on a neuron are often described in this language, as either depolarizing or hyperpolarizing. Equivalent terms for synaptic inputs are excitatory (depolarizing) and inhibitory (hyperpolarizing).
1.4.2 Action potential generation
The action potential is a brief, large-magnitude increase in the membrane potential: typical action potentials last 1 to 10 ms, and increase the membrane voltage by 70 to 110 mV [8]. Action potentials are initiated at the axon hillock, and propagate down the axon, being actively regenerated as they travel. Figure 1.4 shows a typical sequence of action potentials from a real neuron.
The main players in action potential generation are voltage-dependent sodium and potassium ion channels. As the membrane voltage (described in section 1.4.1) rises to a certain level (which varies from neuron to neuron), voltage-dependent Na+ channels open, allowing more sodium ions to flow into the cell. Past a certain threshold voltage, this becomes a positive feedback process: the voltage increase induced by the influx of Na+ ions causes more sodium channels to open, causing even more depolarization, and thus the membrane voltage "explodes" upwards. The action potential is terminated by two effects. First, the sodium channels spontaneously inactivate, reducing the influx of sodium ions. And second, the high membrane voltage activates voltage-dependent potassium channels, which open and permit K+ ions to flow out of the cell; this has the effect of hyperpolarizing the membrane.
After an action potential, the membrane voltage generally drops to below its resting value before recovering. Immediately after a spike is generated, the neuron enters a phase known as the refractory period, during which it is difficult or impossible to elicit another action potential. During the absolute refractory period, no amount of stimulation will generate a spike; this period is typically 2-3 ms in length [18]. Following this is the relative refractory period, during which a higher stimulus level is required to elicit an action potential than would be required in a resting neuron; this period may last on the order of 5-10 ms [18]. Note that the absolute refractory period places an upper limit on the maximum firing rate a neuron may achieve, regardless of the input intensity, meaning that neurons cannot act as high-frequency devices: with an absolute refractory period of 2 ms, for example, the spiking frequency is limited to less than 500 Hz. The dynamics of action potential generation are well described by the Hodgkin-Huxley model: see section 1.5.2.
1.4.3 Synaptic coupling
In a chemical synapse, the increased membrane voltage associated with an incoming action potential causes synaptic vesicles to release their contents, chemicals called neurotransmitters, into the synaptic cleft. These chemicals diffuse across the gap and attach to receptors on the other cell, causing changes in the membrane voltage of the postsynaptic cell: see Figure 1.5.
1.4.4 Spike-frequency adaptation
Every chapter in this thesis relates, in one way or another, to the effects of a behaviour displayed by many neurons, called spike-frequency adaptation. Rather than responding to a constant stimulus with a constant rate of firing (or a constant average rate, allowing for noise), spike-frequency adaptation causes a neuron to respond with less and less frequent action potentials as the input is sustained. The main mechanism underlying this effect is thought to be calcium-dependent potassium currents [20, 21]: each action potential triggers an influx of Ca2+ ions, and the accumulation of calcium triggers K+ currents that slow down the rate at which the neuron approaches the threshold for action potential generation. Each neuron, then, maintains a trace of its past activity in the form of its current internal level of calcium ions; this trace decays over time, as the Ca2+ ions leak back to their resting levels in the absence of new action potentials. Sections 2.3, 4.3, and 5.5 all discuss neural models incorporating, at increasing levels of biophysical detail, the dynamics of spike-frequency adaptation.
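The calcium-trace idea can be sketched with a toy model: a leaky integrator that spikes at a threshold, plus an adaptation variable c standing in for intracellular calcium, incremented at each spike and decaying between spikes while feeding a hyperpolarizing current. The specific equations and all parameter values below are illustrative assumptions, not a model used later in the thesis:

```python
def adapting_neuron(I=2.0, g=0.5, tau_c=100.0, V_th=1.0,
                    dt=0.01, t_max=300.0):
    """Return spike times of a unit with dV/dt = I - V - g*c, dc/dt = -c/tau_c.

    Each spike resets V to 0 and increments the 'calcium' trace c by 1,
    so the effective drive I - g*c weakens as spiking continues."""
    V, c, t, spikes = 0.0, 0.0, 0.0, []
    for _ in range(int(t_max / dt)):
        V += dt * (I - V - g * c)       # leaky integration, minus adaptation current
        c += dt * (-c / tau_c)          # calcium trace decays between spikes
        t += dt
        if V >= V_th:
            spikes.append(t)
            V = 0.0
            c += 1.0                    # each spike adds to the calcium trace
    return spikes

spikes = adapting_neuron()
isis = [b - a for a, b in zip(spikes, spikes[1:])]
# Under the constant input, the interspike intervals grow steadily longer.
```

The qualitative signature of adaptation is visible in `isis`: the early intervals are short, and the later ones much longer, even though the input never changes.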
Figure 1.5: Structure of a chemical synapse. When an action potential arrives at the axon terminal of the presynaptic cell, the associated depolarization causes synaptic vesicles to release their contents (the neurotransmitters) into the synaptic cleft, a process called exocytosis. When the neurotransmitters bind to receptors on the postsynaptic cell, they have an excitatory or inhibitory effect on the postsynaptic cell's membrane voltage, depending on the type of neurotransmitter involved. From Molecular Cell Biology by Lodish et al. [19] (c) 1986, 1990, 1996 by Scientific American Books, Inc. Used with permission by W. H. Freeman and Company.
            | Rate             | Spiking
    Static  | Linear units     | Binary units
            | Nonlinear units  |
    Dynamic | Analog neurons   | Integrate-and-fire [1-D]
            |                  | FitzHugh-Nagumo [2-D]
            |                  | Hodgkin-Huxley [4-D]
            |                  | Biochemical compartmental models [many-D]
            |                  | Neurons in slice preparations
            |                  | In vitro neurons

Table 1.3: Summary of some of the neural models that have been proposed. The table indicates two sets of divisions for the models: static vs. dynamic, and rate vs. spiking. Static models do not have internal dynamics, simply producing an output for a given input; dynamic models have internal states governed by some appropriately chosen dynamics. Rate models generate analog values as their output, representing the firing rate of a neuron (or the collective average firing rate of a group of neurons); spiking models produce individual spikes (of varying degrees of complexity) as their output. Binary units refer to highly simplified models in which the neuron's state is given as either excited (1) or resting (0); it is also possible to employ dynamic binary models, where the state remains binary but is governed by some set of internal dynamics. Linear units use a linear relationship to map inputs to outputs. Nonlinear units replace this mapping with some nonlinear function, typically some form of sigmoid. The next five models in the table (analog neurons through to biochemical compartmental models) are discussed in the text. Note that the final two entries, although they involve actual, biological neurons, are still "models": isolated neurons in vitro, or even in slices sectioned out of a brain, do not have precisely the same behaviour observed in neurons inside a fully functioning nervous system.
1.5 Neural models
Various mathematical models have been used to represent neurons. One major division can be
made between static and dynamic models. Static models (used mainly in artificial neural network (ANN) research) act as functions mapping inputs into outputs, while dynamic models have internal states governed by some set of dynamical (differential or difference) equations. My thesis
work has been entirely on the dynamic side of this division. Within dynamic models, a distinction may be made between rate-based (or analog) models and spiking models. In rate-based models, the output of each neuron is considered to be the rate (frequency) with which it produces spikes; this is a real-valued quantity, and the individual spiking times are not considered. Spiking models generate individual action potentials as their output. In this work, I have used both rate-based and spiking models. Table 1.3 lists a few common models, arranged approximately in order of increasing complexity.
1.5.1 Compartmental models
Some of the most elaborate neural models are based on breaking the neuron into many coupled regions known as compartments, then modelling the ionic flows and conductances in each compartment, along with appropriate coupling terms between compartments [22, 23]. The model by Traub et al. [24], for example, uses 19 compartments to represent a pyramidal cell in the CA3 region of the guinea pig hippocampus. Each compartment has up to six active ionic conductances, controlled by up to 10 ion channel variables, leading to a system with literally hundreds of dimensions. None of the models used in this thesis approach this level of detail.

The individual compartments in compartmental models often obey dynamics similar to those described in the next section.
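The flavour of the approach can be seen in a two-compartment passive sketch: each compartment has a leak conductance, and a coupling conductance carries current between them. The equations and all parameter values here are illustrative assumptions (real compartmental models add the active conductances of section 1.5.2):

```python
# Two passive compartments:
#   C dV_i/dt = -g_L (V_i - E_L) + g_c (V_j - V_i) + I_i
C, g_L, g_c, E_L = 1.0, 0.1, 0.05, -65.0   # illustrative values

def two_compartments(I1=1.0, I2=0.0, dt=0.1, t_max=500.0):
    """Integrate the coupled pair with forward Euler; return (V1, V2)."""
    V1 = V2 = E_L
    for _ in range(int(t_max / dt)):
        dV1 = (-g_L * (V1 - E_L) + g_c * (V2 - V1) + I1) / C
        dV2 = (-g_L * (V2 - E_L) + g_c * (V1 - V2) + I2) / C
        V1 += dt * dV1
        V2 += dt * dV2
    return V1, V2

V1, V2 = two_compartments()
# Current injected into compartment 1 depolarizes both, with attenuation:
# the steady state here is V1 = -57.5 mV, V2 = -62.5 mV.
```

The attenuation from V1 to V2 is the discrete analogue of voltage decay along a dendrite; adding more compartments in a chain recovers the cable-like behaviour.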
1.5.2 Conductance-based (Hodgkin-Huxley) models
Hodgkin and Huxley [25], in work that ultimately won them the Nobel prize, carried out a series of experiments on the giant axon of the squid, measuring the conductances (inverse of resistance) associated with the Na+ and K+ ions under varying voltage conditions. They then constructed a model that fit the observed behaviour using a small number of dynamical variables; see Weiss [18] and Koch [26] for useful discussions.
The model consists of an equation for the membrane potential,

    C dV/dt = I_0 + I_Na + I_K + I_L,                              (1.2)

where C is the membrane capacitance, I_0 is the applied current, and

    I_Na = g_Na m^3 h [V_Na - V]                                   (1.3)

is the current associated with flows of sodium ions across the membrane, with maximum conductance g_Na and reversal (Nernst) potential V_Na (see section 1.4.1). Similarly,

    I_K = g_K n^4 [V_K - V]                                        (1.4)

is the potassium current with conductance g_K and reversal potential V_K, and I_L = g_L [V_L - V] is a leak current with conductance g_L and reversal potential V_L. The sodium and potassium conductances are modulated by the gating variables m, h, and n, each of which is in the range [0, 1] and represents the degree to which some hypothetical voltage-sensitive gate is open (0 being fully closed and 1 being fully open). The exponents on m and n in equations (1.3) and (1.4) represent the assumption that 3 and 4 such gates, respectively, must be simultaneously open for maximum conductance to occur. The gating variables are assumed to obey first-order kinetics: gating variable x makes transitions from closed to open with rate constant α_x(V), and from open to closed with rate constant β_x(V). These kinetics correspond to the following set of differential equations:

    dx/dt = α_x(V)[1 - x] - β_x(V) x = [x_∞(V) - x] / τ_x(V),    x ∈ {m, h, n},

where x_∞ ≡ α_x/(α_x + β_x) and τ_x ≡ 1/(α_x + β_x). Thus, each of these variables asymptotically approaches the value x_∞(V), with time constant τ_x(V).
The voltage-dependent rate constants were obtained by fitting curves to experimentally measurable currents and conductances; the original expressions obtained for the squid giant axon are given in [18, 25]. Using these expressions to find x_∞ and τ_x for each of the variables yields the plots in Figure 1.6.
The Hodgkin-Huxley model captures the dynamics of action potential generation, as follows. Imagine that we have a cell sitting near its resting voltage, V_rest < 0. For V « 0, m_∞ → 0, h_∞ → 1, and n_∞ → 0. From (1.3)-(1.4), we see that the sodium current I_Na = g_Na m^3 h [V_Na - V] → 0 as m → 0, and the potassium current I_K = g_K n^4 [V_K - V] → 0 as n → 0. Near the resting voltage, then, the cell's behaviour is dominated by the leak current I_L, and any applied current I_0. If we now apply a depolarizing input to the cell, the membrane voltage increases towards 0 mV. This change has the effect of increasing m_∞, and the variable m rises as it tracks m_∞; this leads to an
Figure 1.6: Asymptotic values (top) and time constants (bottom) of the channel variables in the Hodgkin-Huxley equations, as functions of the membrane voltage, V. The dynamics of each channel variable is of the form dx/dt = [x_∞(V) - x]/τ_x(V), for x ∈ {m, h, n}.
increase in the magnitude of I_Na (I_Na is positive for V < V_Na, and since V_Na > 0 (typically), the effect of increasing the sodium current at this point is to further depolarize the cell). This leads to a positive feedback cycle in which the increasing voltage leads to a larger sodium current (as m increases), which increases I_Na and causes the voltage to increase even faster; this generates the upward spike of the action potential. The positive feedback is terminated by two factors as V increases: h_∞ decreases and h falls, which decreases I_Na (I_Na → 0 as h → 0); and n_∞ increases, causing an increase in the magnitude of I_K. I_K is negative for V > V_K, and since V_K < 0, the effect of the potassium current during the upward spike is to hyperpolarize the cell, pulling the voltage back down towards the resting potential. Under the influence of the potassium current, the cell's voltage typically "overshoots," being reduced to somewhere below the original resting potential. At this point both the potassium and sodium currents are once again inactivated, and the cell converges back to its resting state. If a sustained depolarizing stimulus is applied, the cell will generate another action potential, and repeat this cycle with a frequency dependent on the magnitude of the stimulus current; if no sustained depolarization is present, the cell will remain in its resting state indefinitely, until another stimulus generates enough depolarization to start the positive feedback cycle leading to an action potential.
One obvious question is why the increase in m_∞ and the decrease in h_∞ do not simply cancel one another out, preventing the increase in I_Na and shutting down the positive feedback cycle before it can begin. The answer lies in the time constants: τ_m « τ_h, so m increases towards m_∞ much more quickly than h falls towards h_∞. The upward stroke of the spike takes place in the time window during which m has increased but h and n have not yet "caught up."
The Hodgkin-Huxley model was specific to the squid giant axon, but the same basic formulation is still commonly used as a model for other neurons. These are called "conductance-based" models; recent examples include [21, 27]. The spiking output from one such conductance-based model (from [21]) is shown in Figure 1.7.
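The full model is straightforward to integrate numerically. The sketch below uses the standard textbook squid-axon rate constants and parameters in the modern voltage convention (rest near -65 mV); these particular expressions and values are drawn from the common literature form of the model, not from this thesis, whose conventions may differ:

```python
import math

def vtrap(x, y):
    """x / (1 - exp(-x/y)), written to avoid the 0/0 singularity at x = 0."""
    return y if abs(x / y) < 1e-6 else x / (1.0 - math.exp(-x / y))

# Textbook squid-axon rate constants (1/ms), voltages in mV
def alpha_m(V): return 0.1 * vtrap(V + 40.0, 10.0)
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * vtrap(V + 55.0, 10.0)
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

# Conductances in mS/cm^2, reversal potentials in mV, capacitance in uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3
V_Na, V_K, V_L = 50.0, -77.0, -54.4
C = 1.0

def count_spikes(I0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of (1.2)-(1.4); counts upward crossings of 0 mV."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V_Na - V)
        I_K  = g_K * n**4 * (V_K - V)
        I_L  = g_L * (V_L - V)
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        V += dt * (I0 + I_Na + I_K + I_L) / C
        if V > 0.0 and not above:       # rising edge of a spike
            spikes, above = spikes + 1, True
        elif V < -30.0:
            above = False
    return spikes
```

With no applied current the model sits at rest, while a sustained depolarizing current such as I0 = 10 produces repetitive firing, in line with the description above.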
1.5.3 FitzHugh-Nagumo model
The FitzHugh-Nagumo equations [28, 29] represent a reduction of the four-dimensional Hodgkin-Huxley dynamics (discussed in section 1.5.2) to a two-dimensional system. (Helpful discussions of the FHN equations are found in [30] and [31].) The first simplification is carried out by noting that, since τ_m is very small, m ≈ m_∞; taking m(t) = m_∞(V) reduces equation (1.7) to an algebraic relationship. The next step is to take h(t) = h_0. This is not biophysically realistic, but the reduced system still retains the desired characteristics: the system has a single fixed point for small inputs; it is excitable in the sense that a large perturbation can cause a large-magnitude excursion through phase space (corresponding to a single spike) before it returns to the fixed point; and there is a bifurcation to an oscillatory state for some sufficiently high input level.
A further simplification is then made, replacing the remaining V and n equations with the
Figure 1.7: Spiking in a conductance-based (Hodgkin-Huxley type) model. The plot shows membrane voltage versus time for a conductance-based model proposed by Wang [21]; see section 5.5.
Figure 1.8: Schematic of the integrate-and-fire (IF) model. An input current I(t) is applied to a parallel resistance (R) and capacitance (C). The output is a membrane voltage V, which is reset when some threshold is reached. For finite R, this is known as a "leaky" IF neuron. As R → ∞, the leak term disappears, and the unit becomes a "perfect" IF neuron.
qualitatively similar dimensionless equations

    dv/dt = v(a - v)(v - 1) - w + I,
    dw/dt = bv - γw,

where 0 < a < 1, b > 0, and γ > 0.
The equations of the FitzHugh-Nagumo model appear very different from the original Hodgkin-Huxley dynamics, but the model retains the correct qualitative behaviours while being much more analytically tractable.
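The excitability property is easy to see numerically in the cubic form dv/dt = v(a - v)(v - 1) - w + I, dw/dt = bv - γw: a perturbation below the threshold a decays back, while one above it produces a full spike-like excursion. A minimal sketch (this particular cubic form and all parameter values are assumptions, chosen for illustration):

```python
def fhn_peak(v0, I=0.0, a=0.25, b=0.002, g=0.003, dt=0.01, t_max=200.0):
    """Integrate the cubic FitzHugh-Nagumo system from (v0, 0) with
    forward Euler and return the largest v reached (excursion size)."""
    v, w, peak = v0, 0.0, v0
    for _ in range(int(t_max / dt)):
        dv = v * (a - v) * (v - 1.0) - w + I
        dw = b * v - g * w
        v += dt * dv
        w += dt * dw
        peak = max(peak, v)
    return peak

small = fhn_peak(0.1)   # below threshold a = 0.25: perturbation decays
large = fhn_peak(0.4)   # above threshold: full spike-like excursion toward v = 1
```

The all-or-nothing contrast between `small` and `large` is exactly the excitability described in the text: the system has a single fixed point, but large perturbations take a long trip through phase space before returning to it.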
1.5.4 Integrate-and-fire model
The integrate-and-fire (IF) model was first discussed by Lapicque [32] (a helpful discussion is found in [33]). It is a very simple model that treats the cellular membrane as a parallel capacitance and resistance to which an input current is applied: see Figure 1.8. This leads to the differential equation

    C dV/dt = I(t) - V/R

for the membrane voltage V. For finite R, the model is called a "leaky" integrate-and-fire neuron, since the -V/R term makes the voltage into a leaky integrator of the input current; this simulates the presence of leakage currents passing through the membrane. As R → ∞, the unit becomes a nonleaky or "perfect" IF model.
Spiking in the IF model is simulated by resetting the voltage to some value V_reset when a threshold value V_th is crossed. At this point, the neuron is considered to have produced an action potential. No attempt is made to replicate the action potential shape: in IF models, action potentials are instantaneous, point events, generally written as δ-functions in mathematical descriptions.
Although it greatly simplifies the dynamics of real neurons, the IF model captures the two most crucial features of neural spiking dynamics: a prethreshold, integrating phase, followed by the generation of stereotypical, brief impulses once threshold is reached [33]. In this thesis, IF models will be discussed in chapters 4, 5, and 6.
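A leaky IF unit takes only a few lines to simulate: integrate C dV/dt = I - V/R forward in time and apply the threshold-and-reset rule. The parameter values below are arbitrary illustrative choices:

```python
def lif_spike_times(I, C=1.0, R=10.0, V_th=1.0, V_reset=0.0,
                    dt=0.001, t_max=100.0):
    """Forward-Euler leaky integrate-and-fire; returns the list of spike times."""
    V, t, spikes = 0.0, 0.0, []
    for _ in range(int(t_max / dt)):
        V += dt * (I - V / R) / C
        t += dt
        if V >= V_th:        # threshold crossed: emit a point-event spike
            spikes.append(t)
            V = V_reset      # and reset the membrane voltage
    return spikes

# Constant input with I*R > V_th fires regularly, with period
# R*C*ln(I*R / (I*R - V_th)); with I*R < V_th the voltage saturates
# below threshold and the unit never fires.
regular = lif_spike_times(0.2)    # I*R = 2 > V_th = 1: periodic spiking
silent  = lif_spike_times(0.05)   # I*R = 0.5 < V_th: no spikes
```

The existence of a hard cutoff current below which the leaky unit is silent (the rheobase) is one qualitative difference from the perfect integrator, which fires for any positive input.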
1.5.5 Hopfield's analog model
In [34], Hopfield presents an analog neural model that uses essentially the same equation as the integrate-and-fire model,

    C dx/dt = I(t) - x/R,

where x may be seen as the mean membrane potential of a neuron (though Hopfield also discussed other possible interpretations; see [34]), C and R are capacitance and resistance values, and I(t) is an input current. Rather than producing individual spikes with a threshold mechanism, the output is taken to be a firing rate, calculated as a nonlinear function y = f(x), where y is the firing rate output and f(x) is called the firing rate function. Often a sigmoidal firing rate function is used, such as

    f(x) = 1/(1 + e^(-x)).
Hopfield was able to show that coupled networks of these analog neurons possess fixed-point attractors, and that desired attractors could be created using a simple algorithm [34]. This enables such networks, now often called "Hopfield networks," to perform associative memory tasks: by inserting an attractor corresponding to each desired "memory," the network will perform reconstruction on corrupted versions of the original pattern, usually converging to the stored pattern which the corrupt version most closely resembles. (See Hertz et al. [35] for a good discussion of the applications of the Hopfield model.)
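The associative-memory idea can be illustrated with a small discrete sketch. For simplicity this uses ±1 threshold units and the standard Hebbian outer-product rule rather than Hopfield's continuous dynamics; the network size, pattern, and corruption below are all invented for illustration:

```python
def hebbian_weights(patterns, n):
    """Outer-product (Hebbian) weight matrix storing the given ±1 patterns."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=10):
    """Repeatedly update each unit to the sign of its input field."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            field = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if field >= 0 else -1
    return s

n = 16
stored = [1 if i % 3 == 0 else -1 for i in range(n)]
W = hebbian_weights([stored], n)
noisy = list(stored)
noisy[0], noisy[5] = -noisy[0], -noisy[5]   # corrupt two of the 16 bits
recovered = recall(W, noisy)                # relaxes back to `stored`
```

The corrupted state lies in the basin of attraction of the stored pattern, so the update dynamics pull it back: exactly the "reconstruction on corrupted versions" described above.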
1.6 Thesis overview
This document will address the following sequence of topics:
• A technique for adding a version of spike-frequency adaptation to any existing analog neuron model is described; the resulting models are called "phasic analog neurons." When two phasic analog neurons are coupled in mutual inhibition, oscillatory solutions can emerge where otherwise only fixed-point solutions would be possible. An application of techniques from nonlinear dynamics reveals the conditions under which oscillations occur, and the stability properties of the oscillatory cycles which arise. Such a two-cell system is a very simple model of a common biological mechanism known as a central pattern generator. [Chapter 2]
• As an application of the simple pattern generators analyzed above, a network of phasic analog neurons is used to generate the gait for a hexapod walking robot. Simple insights from biology help structure the network, leading to an architecture that generates the appropriate phase relationships among the six legs, and recovers the gait quickly when the legs are perturbed. [Chapter 3]
• Moving from analog to individually-spiking models, I consider the behaviour of populations of integrate-and-fire neurons coupled in mutual inhibition and displaying spike-frequency adaptation. As in the phasic analog neuron case, oscillatory behaviour occurs for sufficiently strong coupling, and it is possible to analyze the system's behaviour at the population level despite the large numbers of individual elements involved. Reasonably accurate predictions are made for the point of onset of oscillations, the period and amplitude of the oscillations, and the point at which oscillator death occurs. [Chapter 4]
• Networks of coupled neurons can carry out a signal-processing operation known as noise-shaping, in which noise is shifted from low to high frequencies. The addition of spike-frequency adaptation improves noise-shaping in networks of integrate-and-fire neurons; this extends previous work by Mar et al. [36]. Networks consisting of more complex conductance-based neurons also show the noise-shaping behaviour. In the conductance-based case, the noise-shaping performance is not directly improved by introducing adaptation, but adaptation does offer an advantage in terms of distributing the signal representation more evenly across a heterogeneous network. [Chapter 5]
• Random reset is a popular method of introducing variability into the otherwise fully deterministic firing of integrate-and-fire models. Two types of random reset are commonly used: random voltage reset, in which the membrane voltage is reset to a stochastic initial value after each spike; and random threshold reset, in which the firing threshold is chosen stochastically after each spike. At low firing frequencies, the two forms of random reset have opposite effects on the level of variability seen in the neuron's firing record; in the presence of spike-frequency adaptation, this difference is seen even at higher firing rates. A few simple calculations serve to demonstrate why this is the case. [Chapter 6]
1.7 Local abbreviations
In some of the chapters of this thesis, the algebraic expressions become unwieldy unless terms are collected into conveniently defined groups. In some cases these groupings have clear meanings, and have been named appropriately. Other groups have no obvious physical meaning, and are defined purely for algebraic convenience; I have used the symbols Z_i, i an integer, for all such definitions. Each Z_i is defined at an appropriate place in the text, and also reproduced in a table at the start of each chapter, along with any other definitions used in the chapter.
I use the term local abbreviations because each Z_i (or other definition) applies only within its chapter of origin. Thus, the definition of Z_i used in chapter 2 is not the same as that in chapter 4. This allows a standard format to be used for all such abbreviations, without requiring the reader to search through large global lists to find a particular definition.
Chapter 2
Phasic analog neurons
2.1 Local abbreviations
The following table lists the abbreviations used for convenience in this chapter; as discussed in section 1.7, they are "local" in the sense that they apply only within this chapter.
Abbreviation | Definition | Section
2.2 Introduction and background
As discussed in the introductory chapter, neurons respond to stimuli by generating action potentials, voltage spikes which travel down the cell's axon. Arriving at synaptic junctions, these spikes influence, typically through neurotransmitters diffused across a synaptic gap, the states of other neurons (or of muscles or other tissues). To model the behavior of a neuron, we may work at the level of the biochemistry of the cell, or we may propose simplified models which capture, at varying levels of detail, the important aspects of neural behaviour. Several common models, in order of increasing abstractness, are the Hodgkin-Huxley equations [25], the FitzHugh-Nagumo equations [28, 29] (see also discussions in [30] and [31]), and integrate-and-fire models [32, 35, 38]; refer back to the discussions in chapter 1 for more detail.
At a higher level of abstraction, we may replace the individual spiking times with a time-averaged firing rate. Information is lost in this process (see [39] for a discussion of this point), but the result is a considerably simplified model in which each neuron may be considered to output an analog value, its spiking rate. Such "analog" or "graded-response" neural models have been proposed by Hopfield [34] and Cohen and Grossberg [40], and may be applied in cases where the time scale of interest is long relative to the typical interspike time, or when each analog neuron is taken to model a population of individually spiking neurons rather than a single cell. Analog models may be explicitly derived from spiking-time models by carrying out the time averaging process [38, 41]. Analog neurons have proven useful in modelling associative memory [34, 42], as behavior controllers for autonomous robots [43, 44], and in solving optimization problems [45].

In associative memory or optimization problems, networks of analog neurons produce their "answer" by converging to a fixed-point attractor. In the memory problem, we create an attractor corresponding to each stored pattern, and expect the network to recover the original pattern when presented with a noisy version of it. For such applications, we always want the network to converge to a fixed point, and oscillatory solutions are to be avoided. Extensive analysis has been performed on networks of the types introduced in [34, 40], and it has been shown (see, in addition to the original papers, [46, 47, 48, 49, 50, 51]) that they do indeed have the property of always converging to a fixed point.
There are many biological situations, however, in which oscillations are necessary, for example to drive autonomic functions and in locomotion (see [52] and references therein). It is thus of interest to examine situations in which the much-studied analog neuron models may be made to generate oscillatory solutions. Many of the oscillatory neural signals seen in biology are generated by central pattern generators (CPGs): networks of neurons whose interconnections are such that the neurons collectively produce rhythmic outputs. CPGs often work on the principle of mutual inhibition, in which neurons (or groups of neurons) are reciprocally connected so that the output of each neuron inhibits the other [52]. Perhaps the earliest description of a CPG of the type
Figure 2.1: "Half-center" oscillator. Each analog neuron receives a constant input I_i and is coupled to the other in mutual inhibition (w_ij < 0).
shown in Figure 2.1 is Brown's "half-center model" [53, 54]. As Brown noted, oscillations in two mutually inhibitory neurons can occur if the inhibition is limited in duration. If an initial asymmetry allows the first neuron to dominate, it will "gain the upper hand," suppressing the other while firing strongly itself. If this inhibition is of limited duration, the second neuron will eventually cease to be suppressed, allowing it to dominate and inhibit the first, and so on, yielding a cycle of alternating bursts of activity in the two neurons. Despite its simplicity, the half-center model does capture the essential dynamics of CPGs actually observed in biology: Satterlie [55], for example, describes the signals used in swimming in the pteropod mollusc Clione limacina as being generated by this mechanism.
What causes the limited duration of inhibition which the half-center model assumes? There are several possible neurophysiological mechanisms, including fatigue, post-inhibitory rebound, and spike-frequency adaptation [52, 56]. This chapter will focus on the last of these. While some biological neurons are "tonic," responding with a steady firing rate output when stimulated with a constant input, many others are "phasic" or "adapting," initially responding to a constant stimulus, but gradually ceasing to respond as the stimulation persists [11]. (Figure 2.2 shows the different responses of tonic and phasic analog neurons to a constant input.) Clearly, if the two neurons in Figure 2.1 are phasic, oscillations become possible: once a given neuron has come to dominate, its input becomes constant and it will eventually "adapt out," reducing its output and allowing the other neuron to take over.
Suppose that we wish to model the half-center CPG using analog neurons. If we use two
standard analog neurons [34, 40], the system will converge to a fixed point, and no oscillations will occur. If we wish this simple two-neuron system to oscillate, we must introduce some mechanism
to limit the inhibitory duration. We shall do this by proposing a simple means by which the
Figure 2.2: Firing rate outputs for single phasic and tonic analog neurons with no self-connections. The dynamics are τ dx/dt = −x + I; da/dt = k(x − a). The firing rate output is given by y = f(γ(x − a) + θ), where γ = 4 and θ = −2 are scaling and shifting parameters and f(x) = 1/(1 + e^(−x)). (Dashed line) Tonic neuron: k = 0; τ = 1; I = 1. (Solid line) Phasic neuron: k = 1; τ = 1; I = 1.
qualitative dynamics of neural adaptation may be added to existing analog neuron models.
Beyond simply allowing us to model the half-center CPG, the addition of neural adaptation to
existing analog neurons enriches their dynamics and extends the range of neurological phenomena
to which they may be applied. I will begin by introducing the units I have called "phasic analog neurons," then proceed to
discuss a model of the half-center CPG formed by connecting two such neurons with mutual
inhibition. A Hopf bifurcation analysis of the model will enable us to calculate the inhibitory
connection strength at which oscillations begin to occur, and show us how to tune the system
parameters to yield cycles with desired characteristics.
2.3 Phasic analog neurons
Networks of the Hopfield or Cohen–Grossberg type capture the essential dynamics of temporal summation: biological neurons maintain a decaying trace of their past excitation levels [11].
Models of this type generally omit, however, the dynamics of spike-frequency adaptation (but
see [57, 58]): many real neurons (called "phasic" or "adapting") respond only at the onset of a
constant or slowly varying stimulus, then cease responding as the stimulus persists [11]; biological
neurons which respond steadily to constant input also exist, and are called "tonic." (Adaptation
is most often discussed in relation to sensory neurons, so it is perhaps worth pointing out that
motor neurons can also display this behaviour. Atwood and Nguyen [59], for example, discuss
phasic and tonic motor neurons in crayfish.)
I propose a simple, computationally efficient method by which a form of neural adaptation
may be added to existing analog neuron models. Consider a variant of the equations introduced
by Hopfield [34] (the tonic version of these equations has also been used by Beer and Gallagher [43, 44]), and augment each neuron's description with a second linear differential equation. The dynamics of a network are then written as

τ_i dx_i/dt = −x_i + Σ_j w_ji y_j + I_i,
da_i/dt = k_i (x_i − a_i),    (2.1)

for i = 1, …, n. The x_i represent activation levels (corresponding to a membrane potential)
with time constants τ_i > 0. The a_i represent firing thresholds, with rate constants k_i ≥ 0. The y_i represent output firing rates, and are functions of the difference between x and a: we use
y_i = f(γ(x_i − a_i) + θ), where f(·) is the firing rate function and γ > 0 and θ are scaling and
shifting parameters. I will not specify the form of the firing function at this point; in section 2.4 I will discuss the effects of two different forms. I take the connection strengths (w_ij from neuron i to neuron j) to be constant. Each neuron receives an external input I_i, which may be time-
varying. Figure 2.2 shows the result of integrating (2.1) for a single node with no self connection
(w_ii = 0). Matsuoka [57, 58] proposes a similar approach to adding neural adaptation to an analog
model, but one in which an adaptation term is incorporated directly into the activation equation; in the present model, the threshold equation may be appended to any form of activation equation (for
concreteness, we will use the form in (2.1) throughout this chapter). Horn and Usher [60] describe
a form of adaptation for discrete-time, binary-state neurons, as does Halperin [61].
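The tonic/phasic contrast of Figure 2.2 is easy to reproduce numerically. The sketch below is my own illustration, not code from the thesis: it Euler-integrates the single-neuron case of (2.1) with the parameters given in the figure caption (τ = 1, I = 1, γ = 4, θ = −2).

```python
import math

def simulate(k, tau=1.0, I=1.0, gamma=4.0, theta=-2.0, dt=0.01, t_end=10.0):
    """Euler-integrate tau*dx/dt = -x + I, da/dt = k*(x - a);
    output firing rate y = f(gamma*(x - a) + theta)."""
    f = lambda z: 1.0 / (1.0 + math.exp(-z))
    x, a = 0.0, 0.0
    ys = []
    for _ in range(int(t_end / dt)):
        x += dt * (-x + I) / tau
        a += dt * k * (x - a)
        ys.append(f(gamma * (x - a) + theta))
    return ys

tonic = simulate(k=0.0)   # threshold a never moves: sustained response
phasic = simulate(k=1.0)  # a tracks x: response decays back toward f(theta)

print(round(tonic[-1], 2))   # 0.88, i.e. f(gamma*I + theta) = f(2)
print(round(phasic[-1], 2))  # 0.12, i.e. back near the spontaneous rate f(theta)
```

With k = 0 the output settles at f(γI + θ); with k > 0 the threshold catches up to the activation and the output adapts away, exactly the dashed/solid pair in Figure 2.2.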
The addition of the threshold equation is equivalent to passing x through an RC high-pass filter
circuit, with k = 1/RC. Since the effect of temporal summation is low-pass filtering of the
input [62, 63], a phasic neuron acts as a band-pass filter. Consider a single neuron of the type
given in (2.1), with no self-connection: τ dx/dt = −x + I(t), da/dt = k(x − a). With input I(t) = cos ωt,
the steady-state output may be shown to be (x − a)(t) = A cos(ωt + ψ), with

A = ω / [(k² + ω²)(1 + τ²ω²)]^(1/2)    (2.2)

and

ψ = π/2 − arctan(ω/k) − arctan(τω).    (2.3)

The amplitude A drops to zero as ω → 0 and as ω → ∞, reaching a maximum value of A =
1/(1 + kτ) at ω = (k/τ)^(1/2). The phase ψ is zero at ω = (k/τ)^(1/2), approaches π/2 as ω → 0, and
approaches −π/2 as ω → ∞. See Figure 2.3.
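The band-pass behaviour follows from the transfer function from I to (x − a), H(s) = s/[(s + k)(1 + τs)]. The quick check below is mine, not the thesis's code; it evaluates the gain and phase at the peak frequency for the document's k = τ = 1.

```python
import math

def amplitude(w, k=1.0, tau=1.0):
    # |H(i*w)| for H(s) = s / ((s + k)(1 + tau*s)): band-pass gain
    return w / math.sqrt((k**2 + w**2) * (1.0 + (tau * w)**2))

def phase(w, k=1.0, tau=1.0):
    # arg H(i*w): +pi/2 lead at low frequency, -pi/2 lag at high frequency
    return math.pi / 2 - math.atan2(w, k) - math.atan(tau * w)

w_peak = math.sqrt(1.0 / 1.0)   # sqrt(k/tau) with k = tau = 1
print(amplitude(w_peak))        # 0.5 = 1/(1 + k*tau)
print(phase(w_peak))            # 0.0: phase crosses zero at the peak
```

The gain vanishes at both frequency extremes and peaks at ω = √(k/τ), matching the statements around equations (2.2)–(2.3).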
2.4 Oscillatory solutions: Hopf bifurcation
I will now consider the behaviour of two phasic neurons, reciprocally connected as shown in
Figure 2.1. This represents the dynamics of a simple CPG, the half-center model [52, 53, 56], and I will show that oscillatory solutions arise for sufficiently strong mutual inhibition. Consider
the case of two identical neurons (τ1 = τ2 = τ, k1 = k2 = k) with a symmetric connection
(w12 = w21 = w) and no self-connections (w11 = w22 = 0). The system has a single fixed point; shifting this point to the origin, the equations become

τ dx̃1/dt = −x̃1 + w[f(γ(x̃2 − ã2) + θ) − f(θ)],    (2.4)
dã1/dt = k(x̃1 − ã1),    (2.5)
τ dx̃2/dt = −x̃2 + w[f(γ(x̃1 − ã1) + θ) − f(θ)],    (2.6)
dã2/dt = k(x̃2 − ã2),    (2.7)

where I have defined x̃_i = x_i − I_i − w f(θ) and ã_i = a_i − I_i − w f(θ).
Figure 2.3: Frequency response of a single phasic analog neuron (with no self-connection) to an input I(t) = cos ωt. (Top) Steady-state output amplitude, A, from equation (2.2). (Bottom) Phase, ψ, from equation (2.3). Parameters: τ = k = 1.
Treating w as a bifurcation parameter, I will perform a Hopf bifurcation analysis of (2.4–2.7) using standard techniques. (See [64] for a discussion of Hopf bifurcations in a general class
of coupled nonlinear oscillators, and [37] for a demonstration of the bifurcation in a pair of
asymmetrically connected neurons with self-connections.) For the reader's convenience, I will
reproduce the Hopf theorem here (the following is a slightly modified version of the statements
given in [65] and [66]):
Theorem 2.1 (Hopf bifurcation theorem) Suppose that ẋ = F(x, y; μ) and ẏ = G(x, y; μ) (where
μ is a parameter such that the bifurcation point occurs at μ = 0), with F(0, 0, μ) = G(0, 0, μ) = 0,
and that the Jacobian matrix (∂(F, G)/∂(x, y)) evaluated at the origin when μ = 0 is

( 0   −ω )
( ω    0 )

for some ω ≠ 0; this implies that the Jacobian has the purely imaginary eigenvalues ±iω. If

F_μx + G_μy ≠ 0    (2.8)

and

a = (1/16)(F_xxx + F_xyy + G_xxy + G_yyy)
  + (1/16ω)[F_xy(F_xx + F_yy) − G_xy(G_xx + G_yy) − F_xx G_xx + F_yy G_yy]    (2.10)

is a constant [all partial derivatives (F_x = ∂F/∂x, and so on) in (2.8) and (2.10) are evaluated at
(0, 0, 0)], then a curve of periodic solutions bifurcates from the origin into μ < 0 if a(F_μx + G_μy) > 0 or into μ > 0 if a(F_μx + G_μy) < 0. The origin is stable for μ > 0 (resp. μ < 0) and unstable
for μ < 0 (resp. μ > 0) if F_μx + G_μy < 0 (resp. F_μx + G_μy > 0). If a < 0 the periodic solutions
are stable, while if a > 0 the periodic solutions are repelling; the bifurcation is supercritical if the
bifurcating periodic orbits are stable, otherwise it is subcritical. The amplitude of the periodic
orbits grows as |μ|^(1/2) while their periods tend to 2π/|ω| as |μ| tends to zero.
(Note that the Hopf theorem addresses a two-dimensional system. Higher-dimensional systems may be reduced to two dimensions by considering only the dynamics on the center manifold;
see Theorem 2.2 below. All of the calculations relevant to the Hopf bifurcation analysis may be carried out in this reduced system; see [65, 66] for more information on this point.)
Evaluating the Jacobian of (2.4–2.7) at the origin, the first condition of the Hopf theorem is
that we must have a pair of purely imaginary eigenvalues. This condition is satisfied at w = ±w*,
where w* = (1 + kτ)/(γ f′(θ)) with f′(x) = ∂f(x)/∂x; at these points, we have the eigenvalues
λ1,2 = [−(1 + kτ) ± (1 + kτ + (kτ)²)^(1/2)]/τ and λ3,4 = ±(−k/τ)^(1/2) = ±iω̃, where ω̃ = (k/τ)^(1/2). Note
that λ1 and λ2 are real and both strictly negative for k > 0. Setting w = −w* − μ (or w = w* + μ)
puts the bifurcation point at μ = 0, as in the statement of Theorem 2.1.
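These eigenvalues can be verified without symbolic algebra. The linearized four-dimensional system decouples into symmetric and antisymmetric 2 × 2 blocks; the bookkeeping below is my own (not notation from the thesis), with s denoting the effective gain w γ f′(θ) seen by each mode. At w = −w*, the antisymmetric (out-of-phase) block gives the purely imaginary pair ±i√(k/τ) and the symmetric block the two negative real eigenvalues.

```python
import cmath

def block_eigs(s, k=1.0, tau=1.0):
    """Eigenvalues of the 2x2 block [[(-1 + s)/tau, -s/tau], [k, -k]],
    where s is the effective linearized gain of the mode considered."""
    a11, a12, a21, a22 = (-1.0 + s) / tau, -s / tau, k, -k
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    d = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + d) / 2.0, (tr - d) / 2.0

k = tau = 1.0
s_star = 1.0 + k * tau   # |w*gamma*f'(theta)| at the bifurcation, |w| = w*

# Antisymmetric mode at w = -w*: s = +(1 + k*tau)
print(block_eigs(+s_star))   # purely imaginary pair, +/- i*sqrt(k/tau)

# Symmetric mode at w = -w*: s = -(1 + k*tau)
print(block_eigs(-s_star))   # two real eigenvalues, approx -0.268 and -3.732
```

That the oscillatory pair belongs to the antisymmetric mode is consistent with the half-center picture: the two neurons leave the fixed point in alternation, not in phase.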
To examine the second condition of the theorem (F_μx + G_μy ≠ 0), let us apply a linear change of coordinates to (2.4–2.7), bringing the system into the normal form (2.11),
where the Φ_i contain all the nonlinear terms; a set of local abbreviations allows the resulting expressions to be written compactly.
The second condition of the Hopf bifurcation theorem, inequality (2.8), then becomes

∂²Φ3/∂μ∂q3 + ∂²Φ4/∂μ∂q4 ≠ 0,    (2.16)

where the partial derivatives are evaluated at the origin and the Φ4 derivative vanishes since Φ4 = 0. Using (2.11), the partial derivative may be found to be

∂²Φ3/∂μ∂q3 = ∓ γ f′(θ)/(2τ),    (2.17)

evaluated at μ = q3 = 0.
Since γ > 0 and τ > 0, condition (2.16) is satisfied for f′(θ) ≠ 0. Thus, as long as the firing rate
function f(·) and the shifting parameter θ are such that f′(θ) ≠ 0, the first two conditions of
the Hopf theorem are satisfied for w = ±w*. The next step is to consider the stability coefficient
a, given by equation (2.10); recall that the sign of this coefficient tells us whether the periodic
solutions are attracting or repelling.
Since the half-center model relies on mutual inhibition, let us consider w = −w* − μ, and
examine what occurs as μ crosses from negative to positive values. Examining the expression for
a in equation (2.10), we see that in the nomenclature of (2.11), we have F = Φ3 and G = Φ4; since G = Φ4 = 0, equation (2.10) simplifies considerably, yielding

a = (1/16)(∂³Φ3/∂q3³ + ∂³Φ3/∂q3∂q4²) + (1/16ω̃)[∂²Φ3/∂q3∂q4 (∂²Φ3/∂q3² + ∂²Φ3/∂q4²)],    (2.18)

where as before the partials are evaluated at the origin.
To evaluate the partial derivatives in (2.18), we need to find a local expression for an invariant
manifold called the center manifold. The center manifold theorem [65, 66] is a well-known result
that allows the center manifold for a nonlinear system to be calculated (in some local region of
interest) from a linearized version of the system. I reproduce the theorem here for the convenience
of the reader (this statement of the theorem is from [65]):
Theorem 2.2 (Center manifold theorem) Let F ∈ C^r(ℝ^n) with F(0) = 0. Divide the eigenvalues, λ, of DF(0) (the Jacobian matrix evaluated at the origin) into three sets, σ_u, σ_s,
and σ_c, where λ ∈ σ_u if Re(λ) > 0, λ ∈ σ_s if Re(λ) < 0, and λ ∈ σ_c if Re(λ) = 0. Let E^u, E^s, and E^c be the corresponding generalized eigenspaces. Then there exist C^r unstable and stable manifolds (W^u and W^s) tangential to E^u and E^s respectively at x = 0 and a C^(r−1) center
manifold, W^c, tangential to E^c at x = 0. All are invariant, but W^c is not necessarily unique.
Equation (2.11) indicates that our system has a two-dimensional stable eigenspace, the q1–q2
plane in the transformed coordinates (recall that λ1 < 0, λ2 < 0). It also has a two-dimensional
center eigenspace, the q3–q4 plane. The center manifold is an invariant subspace of the full four-dimensional space, which from Theorem 2.2 is tangent to the center eigenspace at the origin.
I will approximate the center manifold in the vicinity of the origin using an expansion (2.19) for q1 in powers of q3 and q4,
and
a corresponding expansion (2.20) for q2.
Note that zero- and first-order terms have been omitted, since the requirement that the center
manifold is tangent to the q3–q4 plane at the origin means that all these terms must vanish.
The coefficients are obtained by comparing terms in
the expression derived from equation (2.11) with those in the expansions (2.19) and (2.20).
The algebra involved in determining the coefficients in (2.19–2.20) is cumbersome but
straightforward; I will not reproduce the full details here. Defining the local
abbreviation Z5 = [γ f″(θ)(1 + Z1)]/[f′(θ) Z2 Z3 (γ²Z1² + 4Z1 + 16)], the coefficients are:
With these coefficients in hand, it is possible to evaluate the partial derivatives in (2.18);
when this is done (once again, the algebra is lengthy but uninteresting), we obtain the stability coefficient a in (2.21),
using the local abbreviations.
Both supercritical (a < 0; stable oscillatory solutions) and subcritical (a > 0; unstable
oscillatory solutions) Hopf bifurcations occur for the half-center oscillator model, depending on
the choice of the parameters k, τ, γ, and θ. The sign of a is a function only of θ and the product
kτ, which reflects the ratio of the time scales of the activation and adaptation equations. The
Figure 2.4: Sigmoidal firing rate function f1(x) = 1/(1 + e^(−x)). The plot shows f1(γx + θ), where γ and θ are scaling and shifting parameters. (Dashed line) The unscaled, unshifted function (γ = 1, θ = 0). (Dash-dotted line) Scaled but not shifted (γ = 4, θ = 0). (Solid line) Shifted and scaled (γ = 4, θ = −2).
magnitude of a depends on all four parameters; note from (2.21), however, that the dependence
on γ is a simple proportionality to γ.
The details of the stability of the oscillatory solutions depend on the form of the firing
rate function, f(·). First, consider a common sigmoidal choice, f1(x) = 1/[1 + exp(−x)] (see
Figure 2.4). Since f1′(θ) > 0 for all θ, the condition (2.8) is satisfied for all parameter
settings. The stability boundaries for this case are shown in Figure 2.5. Note that for this firing rate function, whether the bifurcation is supercritical or subcritical depends mainly on the value of θ, which is associated with the spontaneous firing rate of each neuron. Figure 2.6 shows a plot
of a(θ) with the other parameters fixed at k = τ = 1, γ = 4.
Figure 2.5: Stability boundaries for the sigmoidal activation function f1(x) = 1/[1 + exp(−x)]. The periodic solutions arising at the Hopf bifurcation point are stable in the central region (a < 0), and unstable above and below it (a > 0).
Figure 2.6: Stability coefficient a vs. shifting parameter θ, for the sigmoidal activation function f1(x) = 1/[1 + exp(−x)]. The other system parameters are fixed at τ = k = 1, γ = 4. A transition from stable (a < 0) to unstable (a > 0) oscillations occurs at θ = ±1.68.
Although the sigmoidal firing rate function is a popular choice in work with analog neurons,
and in particular with the Hopfield equations, it does not reproduce the form of the firing rate
function often seen in real neurons. The f–I curves (firing rate vs. applied current) of biological
neurons frequently have the general shape seen in Figure 2.7; many conductance-based model
neurons also have this form, as does the refractory integrate-and-fire model. (See Figure 5.8 and
[33, 38].) A simplified function which has the desired qualitative form is f2(x) = Θ(x)/[1 + ln(1 + 1/x)], where Θ(x) is the Heaviside step function (Θ(x) = 1 for x > 0, Θ(x) = 0 otherwise). For
this choice of firing function, we must consider only θ > 0; otherwise we have f′(θ) = 0, which renders the bifurcation value w* = (1 + kτ)/(γ f′(θ)) undefined, and violates the second condition
of the Hopf theorem, inequality (2.8). Figure 2.8 shows the stability boundaries when this firing
rate function is used. Note that this choice of function makes the product kτ the main factor in
determining the stability of solutions; this product represents the ratio of the time scales of the
excitation and adaptation processes.
Numerical simulation has been used to test the algebraic results. (All numerical results have
been generated using the sigmoidal firing rate function f1(·) introduced above.) With k = τ = 1, γ = 4, θ = 0, we find w* = 2, a = −1. With w = −w* − μ, the Hopf theorem predicts circular
trajectories on the center manifold (here, the q3–q4 plane) with radius r = [−γ f′(θ)μ/(2τa)]^(1/2).
Setting μ = 0.02 and integrating (2.11) numerically, we find that the projection of the
trajectory onto the q3–q4 plane converges to a circle of radius r = 0.1, as expected.
Note that Hopf bifurcation analysis is strictly local: it tells us that oscillatory solutions will
arise in the vicinity of the origin of (2.4–2.7) when |w| exceeds w*. It does not guarantee that
oscillatory solutions will not occur for smaller values of w. With a > 0, a large-magnitude
limit cycle appears; this cycle is globally stable for μ > 0, while for μ < 0 the system becomes
multistable, with some solutions converging to the origin and some to the limit cycle. Figure 2.9
shows such a case. With a < 0, the simulations indicate that the origin is globally stable for
μ < 0.
This analysis allows us to choose the system parameters in (2.4–2.7) to yield the type of
oscillatory solutions we desire. Selecting parameters for which a < 0 and taking μ to be small, we obtain small limit cycles in the vicinity of the fixed point, corresponding to small fluctuations
in the base firing rate of the two neurons, as shown in Figure 2.10. With a > 0 and μ > 0, the oscillatory solutions near the origin are unstable, and the trajectories are repelled outwards, finally being intercepted by a larger limit cycle in which the two neurons are alternately strongly
activated and strongly inhibited; see Figure 2.11. Either of these cases may be used to represent
a half-center CPG.
I have examined only the vicinity of −w*, but another Hopf bifurcation with identical stability
properties occurs for w = w* + μ. The oscillatory solutions arising for this case have the two
neurons becoming active in phase with each other, rather than being activated in alternation as
Figure 2.7: Nonsigmoidal firing rate function f2(x) = Θ(x)/[1 + ln(1 + 1/x)]. The plot shows f2(γx + θ), where γ and θ are scaling and shifting parameters. (Dashed line) The unscaled, unshifted function (γ = 1, θ = 0). (Dash-dotted line) Scaled but not shifted (γ = 4, θ = 0). (Solid line) Shifted and scaled (γ = 4, θ = 2).
Figure 2.8: Stability boundaries for the nonsigmoidal activation function f2(x) = Θ(x)/[1 + ln(1 + 1/x)]. Oscillatory solutions are stable to the right of the boundary line (a < 0) and unstable to the left (a > 0).
Figure 2.9: Multistability in the half-center oscillator. The plot shows trajectories projected onto the q3–q4 plane, for τ = k = 1, γ = 4, and θ = −4; the sigmoidal firing rate function f1(x) has been used. The bifurcation point is w* = 28.308, and the stability coefficient is a = 0.973. With w = −w* − μ, the plot shows trajectories (obtained by numerical integration) for μ = −15. The system is multistable, with coexistence between a stable fixed point and a stable limit cycle.
Figure 2.10: Neural outputs vs. time, obtained by numerical integration with parameters τ = k = 1, γ = 4, θ = 0, w* = 2, w = −2.1 (μ = 0.1). The stability coefficient in this case is a = −1 < 0, so the oscillatory solutions in the vicinity of the origin are attracting; past the bifurcation point, small limit cycles appear in the region around the origin, and the trajectories remain on these cycles; movement on such a limit cycle causes the fluctuations in firing rate output seen here.
Figure 2.11: Neural outputs vs. time, obtained by numerical integration with parameters τ = k = 1, γ = 4, θ = −4, w* = 28.308, w = −28.408 (μ = 0.1). The stability coefficient in this case is a = 0.973 > 0, so the oscillatory solutions in the vicinity of the origin are repelling; trajectories move away from the origin and are captured by a large-magnitude limit cycle similar to the one shown in Figure 2.9. Motion on this large limit cycle causes the alternating bursts of activity seen here.
in the half-center model.
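The alternating-burst regime of Figures 2.10–2.11 can be reproduced by direct integration of the two-neuron system. The sketch below is my own code (the update equations are the chapter's model, but the integration details and initial conditions are mine); it uses the Figure 2.10 parameters, for which w* = 2, and contrasts behaviour just past the bifurcation with the subthreshold case.

```python
import math

def f(z):
    """Sigmoidal firing rate function f1."""
    return 1.0 / (1.0 + math.exp(-z))

def half_center(w, k=1.0, tau=1.0, gamma=4.0, theta=0.0, I=1.0,
                dt=0.005, t_end=200.0):
    """Euler-integrate the mutually inhibitory pair of phasic neurons:
    tau*dx_i/dt = -x_i + w*y_j + I,  da_i/dt = k*(x_i - a_i),
    with y_i = f(gamma*(x_i - a_i) + theta). Returns a list of (y1, y2)."""
    x, a = [0.1, -0.1], [0.0, 0.0]   # small asymmetry to break the tie
    out = []
    for _ in range(int(t_end / dt)):
        y = [f(gamma * (x[i] - a[i]) + theta) for i in (0, 1)]
        dx = [(-x[i] + w * y[1 - i] + I) / tau for i in (0, 1)]
        da = [k * (x[i] - a[i]) for i in (0, 1)]
        for i in (0, 1):
            x[i] += dt * dx[i]
            a[i] += dt * da[i]
        out.append(tuple(y))
    return out

# Just past the bifurcation (w* = 2 here): sustained oscillation in y1
y_osc = [p[0] for p in half_center(-2.1)][20000:]
print(max(y_osc) - min(y_osc) > 0.01)   # True

# Below threshold: the fixed point is stable and the output settles
y_fix = [p[0] for p in half_center(-1.5)][20000:]
print(max(y_fix) - min(y_fix) < 1e-3)   # True
```

The second half of each run is examined so that transients have died away; the two outputs oscillate in antiphase, as the antisymmetric center-manifold mode predicts.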
2.5 Discussion
I have introduced a simple means of adding the qualitative dynamics of neural adaptation to
any existing analog (also known as graded-response) neuron model. Using these phasic analog
neurons, I have shown that one may model the dynamics of the simplest central pattern generator,
the half-center model: two phasic neurons connected in a mutually inhibitory fashion, producing
alternating bursts of activity. A Hopf bifurcation analysis shows the inhibitory strength past
which oscillatory solutions will certainly arise, and allows oscillations of a desired type to be
produced by tuning the system parameters.
In the absence of neural adaptation, two mutually inhibitory neurons will end up with one
neuron fully inhibiting the other, a situation known as "oscillator death." This has been discussed
in the context of both analog and integrate-and-fire neural models, by Atiya and Baldi [37] and
by Bressloff and Coombes [38]. As we have seen in this chapter, mutual inhibition can in fact
lead to oscillatory behavior in a pair of neurons, provided that the inhibitory effect is of limited
duration. This limited duration is the key to the appearance of oscillatory solutions, and thus I
would not expect the details of the time course of the neural adaptation to affect the existence
of a bifurcation to oscillatory solutions.
Adaptation in biological neurons is a complex process, depending on the details of the biochemical dynamics. The firing thresholds introduced in (2.1), while yielding the qualitative
dynamics of adaptation, are not proposed as a physiologically realistic model. More realistic
adaptation equations could replace the linear firing threshold equations with physiologically motivated nonlinear equations; see chapters 4 and 5 for more physiologically detailed models of
adaptation.
Analog neurons have proven to be a useful tool both in modelling some of the functions of the brain and in attempting to reproduce animal behavior in the context of robotics and artificial
intelligence. The addition of neural adaptation to these models may enhance their usefulness in
each of these areas.
2.6 Future directions
In the model presented here, a constant input leads to complete adaptation: that is, the fully
adapted firing rate is the same as the rate in the absence of any input, namely f(θ). As the more
detailed models used in chapters 4 and 5 will show, it would be more realistic to assume only
partial adaptation, in which the neuron drops to some fraction of its initial rate, but not all the
way to f(θ). I have made one attempt along these lines [67], modifying the network dynamics
(equation (2.1)) to have the form
with the firing rate output being y_i = f(γ(x_i − a_i) + θ) as before. The firing threshold a now
has two new properties: it is always positive (due to the Heaviside step function Θ(x)), and it
has a "leak" term with rate r > 0. For a fixed point x̄ > 0, the fully adapted firing rate now
drops only to ȳ = f(γ(x̄ − ā) + θ) > f(θ). This new model is somewhat more physiologically realistic than the model discussed above,
but it is no longer so amenable to complete mathematical analysis. It may be worthwhile
to pursue this line of inquiry further, but as chapter 4 will show, it is also possible to start
from populations of individual neurons and generate a set of analog equations corresponding to
the aggregate behaviour of the whole population. The phasic analog model is valuable for its
simplicity, but for biological realism I believe that the approach of chapter 4 will be the more
fruitful one to pursue.
Chapter 3
Walking gait generation
3.1 Introduction
For getting around over a wide variety of terrains, legs have a substantial advantage over wheels
or tracks: animals can walk or run over ground on which wheeled machines would quickly become
stuck. Only a small percentage of the Earth's surface is reachable with wheeled vehicles, and
of course other planets are notorious for their lack of adequately paved roads. The robustness
of legged locomotion has led to an interest in the robotics community in constructing walking
machines [68, 69, 70, 71, 72, 73] and in studying the control systems used by biological organisms
to coordinate the motions of their legs [74, 75, 76, 77, 78, 79, 80, 81]. Work aimed at transferring principles of biological locomotion to robotics has been one of the most productive areas in
biologically inspired robotics.
A great deal of work has been done on the generation of locomotor patterns in biological
organisms, both experimental [55, 82, 83, 84, 85, 86] and theoretical and computational [80, 81, 87, 88, 89, 90, 91, 92]. In this chapter, I do not propose to offer any new theoretical insight on
walking gait generation, but simply to apply some principles from biology to create a surprisingly
simple network which robustly generates the leg position commands for a six-legged robot. The
work illustrates: 1) the usefulness of the phasic analog neurons discussed in chapter 2, which
allow an elegant network architecture; and 2) the fact that drawing on simple principles from
biology can aid in the implementation of a control system for a robot.
The standard picture of how rhythmic locomotor patterns arise in animals is a combination
of central pattern generators (CPGs) and sensory feedback [93, 94, 95]. CPGs are groups of
neurons which produce oscillatory output in the absence of any sensory input, and these are
thought to provide the main rhythm in locomotor tasks such as swimming, flying, and walking. The rhythm is modulated by sensory feedback from the limbs involved, which adjusts the stepping
(or flapping/swimming) pattern to compensate for perturbations from the outside world. The
simplest form of sensory feedback is the stretch reflex, wherein a muscle contracts when it is
stretched, pulling the limb back towards a central position.
Here, I describe a simple network which generates a gait for a hexapod robot, using these two
principles: central pattern generators to produce the main rhythmic pattern, combined with a
stretch reflex to compensate for perturbations. The network produces the tripod gait (see below),
and recovers neatly from perturbations to the legs, quickly reestablishing the proper gait.
3.2 Gaits
When an animal (or robot) walks, each of its legs cycles through two main types of motion: the
stance phase, wherein the leg is in contact with the ground, supporting the animal and propelling
it forward; and the swing phase, wherein the leg is off the ground, moving forward to prepare for
the next stance phase. If we view each leg as an oscillator and choose some reference point in the
stance–swing cycle (for example, the start of the stance phase), then each leg may be assigned a phase based on how far along the cycle it is. Then, different possible gaits may be described
simply as different sets of phase relationships among the legs [64, 82, 96, 97]. Six-legged insects
use a variety of gaits [98], two of which are shown in Figure 3.1. By far the most common
hexapod gait is the tripod gait [99], in which one group of three legs (forming a triangle across
the body) is swung forward while the other three legs remain on the ground, propelling the body
forward and providing a tripod of support for the body so that static equilibrium is maintained
at all times. See Figure 3.2.
This work considers only the problem of generating the appropriate phases for the six legs of
a hexapod walking with the tripod gait. For a complete control system, additional commands
are of course required to control the detailed position of each leg, and in particular to raise and lower the legs appropriately. However, once the correct phases have been obtained, it is a
simple matter to generate a function that maps the phase into both front/back and up/down
positions [72, 100].
3.3 Coupled neural oscillators
3.3.1 Individual oscillators
The spontaneously oscillating output corresponding to the central pattern generator portion of
the neural control system is produced by two coupled phasic analog neurons, as described in
chapter 2; the relevant equations are:

τ dx1/dt = −x1 + w y2 + I0,    (3.1)
da1/dt = k(x1 − a1),    (3.2)
τ dx2/dt = −x2 + w y1 + I0,    (3.3)
da2/dt = k(x2 − a2),    (3.4)
Figure 3.1: Phase relationships for two common hexapod gaits: the tripod gait (left) and the metachronal gait (right). Relative phases for each leg are shown inside the circle representing the leg, and are given as a fraction of unity (with 0 and 1 being identical). (Left) In the tripod gait, legs L1, R2, and L3 are in phase with one another, and half a cycle out of phase with legs R1, L2, and R3. The tripod gait is shown again in Figure 3.2. (Right) In the metachronal gait, a "wave" of stepping proceeds from back to front along one side of the body, then from back to front along the other side.
Figure 3.2: Base of support in the tripod gait. Dashed circles indicate legs off the ground (swing phase); filled circles indicate legs in contact with the ground (stance phase). The body's base of support is indicated by a triangle connecting the stance-phase legs. The gait alternates between the left-hand state (legs L1, R2, and L3 down) and the right-hand state (legs R1, L2, and R3 down).
where y_i = f(γ(x_i − a_i) + θ), i = 1, 2, is the firing rate output of each analog neuron. The
function f(·) maps the net activation level (x_i − a_i) to a firing rate output, with γ > 0 and θ as scaling and shifting parameters. Throughout this chapter the firing rate function is taken to
be the sigmoid function f(x) = 1/[1 + exp(−x)], which maps all values into the range [0, 1]. For
w < 0 and |w| sufficiently large, equations (3.1–3.4) have a stable limit cycle with y1 and y2 out
of phase with one another [92]; see chapter 2 for more detail.
The firing rates of the two analog neurons are used as the control signals driving the leg positions
in a very simple simulated hexapod. In fact, it is a hexapod only in the sense that six leg positions
are simulated; no kinematics or dynamics are included to represent the body these legs should
be carrying.
3.3.2 Neural coupling
To generate gaits, an oscillator unit is employed to produce the back and forth rhythm for each
leg; these oscillators are then coupled together so that the tripod gait emerges from the network.
Since gaits are defined by a set of phase relationships among the legs, the problem of designing
a gait-generating network becomes one of coupling the individual oscillators such that they
produce the desired phase relationships between the six legs. In the tripod gait, the requirement
is that every neighbouring pair of legs have an antiphase relationship (one-half of a cycle out
of phase). Coupling between nonlinear oscillators often leads to antiphase relationships [87].
Numerical simulations of two coupled oscillators of the type described in section 3.3.1 confirm
that antiphase behaviour is quite easily obtained in a pair of oscillators by presenting the firing
rate outputs (y_i) of each oscillator as inputs to the other in the arrangement shown in Figure 3.3.
3.4 Single leg
We move on now to the problem of using the oscillator outputs to drive the position of a leg.
The leg of an insect, or even of a walking robot, is a complex piece of machinery, and would
require detailed modelling to represent accurately. This will not be addressed here; instead,
a very simple representation of each leg will be employed: a position variable φ, ranging over
[−1, 1], with the boundaries of the range representing the extreme positions ("back as far as possible" and "forward as far as possible") attainable by the limb.
The motor neurons in insects often output velocity commands rather than positions or
torques [101]. In this form of control, the firing rate outputs of an individual oscillator (y1
Figure 3.3: Oscillators coupled to give antiphase oscillations. Two oscillators are shown, in horizontal pairs: one pair has its member neurons drawn with solid lines, the other with dashed lines. In the synaptic connections, filled circles indicate excitatory connections, while open circles indicate inhibitory connections. Each individual oscillator is driven by constant excitatory input I0, and has an internal mutual inhibition of strength w. Coupling between the oscillators consists of inhibitory and excitatory connections with strength w_c.
and y2) are used as velocity commands; the leg dynamics are then

    dφ/dt = V (y1 − y2),     (3.5)

where V is a rate parameter.
Equation (3.5) is open loop: it contains no feedback from the leg's position. Animal walking
is believed [93] to depend on a combination of the open-loop signal from a neural oscillator and
proprioceptive feedback based on the limb's position. A simple form of feedback thought to be
significant in leech swimming, locust flying, and cockroach walking [93] is known as a stretch
reflex. When a muscle is stretched past a certain length, this reflex acts to oppose further
lengthening of the muscle. This may be added to our simple leg model as follows:

    dφ/dt = V (y1 − y2) − g S(φ).     (3.6)
φsr ≥ 0 is the position at which the stretch reflex is activated and g is the gain. Note that φ = 0
is the limb's neutral position, and that the reflex is activated both for φ > φsr and for φ < −φsr; in each case it acts to move the leg back towards φ = 0. The reflex becomes more strongly
Figure 3.4: Single leg position vs. time. A perturbation is introduced by fixing φ = 0 between t = 20 and t = 22. (Top) Using the open-loop equation (3.5), we see that the perturbation permanently alters the oscillatory range of the leg. (Bottom) Using the stretch reflex equation (3.6), the perturbation is rapidly erased, and the original oscillatory range is recovered. Parameters: τ = k = 1; γ = 4; θ = 0; w* = 2; w = −3; I0 = 1; φsr = 0.5; g = 2; V = 1.
activated the further forward or back the leg is moved.
This simple addition stabilizes the system, allowing rapid recovery from perturbations. Figure 3.4 shows the results of numerical integration of equations (3.5) and (3.6). In each case,
a perturbation has been introduced by setting φ = 0 between t = 20 and t = 22. As the
upper plot demonstrates, the open-loop equations may be pushed into a new range by such a
perturbation, while the lower plot shows the system recovering to the original cycle under the
influence of the stretch reflex.
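The open-loop/closed-loop contrast of Figure 3.4 can be reproduced with a minimal sketch. Here a fixed sinusoidal velocity command stands in for the oscillator output y1 − y2, and the stretch reflex S is assumed to be piecewise linear with a dead zone of width 2φsr; both the command and the reflex form are illustrative assumptions, not the thesis's exact functions.

```python
import numpy as np

def simulate(closed_loop, T=60.0, dt=0.001, V=1.0, g=2.0, phi_sr=0.5):
    """Integrate the leg position under a sinusoidal velocity command.
    A position offset of 1.5 is injected at t = 20 as a perturbation."""
    def S(phi):  # assumed piecewise-linear stretch reflex with a dead zone
        return np.sign(phi) * max(abs(phi) - phi_sr, 0.0)
    phi, t, out = 0.0, 0.0, []
    for _ in range(int(T / dt)):
        u = np.cos(t)                                   # stand-in for y1 - y2
        phi += dt * (V * u - (g * S(phi) if closed_loop else 0.0))
        t += dt
        if abs(t - 20.0) < dt / 2:                      # one-time perturbation
            phi += 1.5
        out.append(phi)
    return np.array(out)

open_loop = simulate(False)
with_reflex = simulate(True)
w = int(2 * np.pi / 0.001)     # samples in one command period
print(open_loop[-w:].mean(), with_reflex[-w:].mean())
```

The open-loop leg keeps the offset indefinitely, since nothing in (3.5) depends on φ; the reflex term drives the cycle back toward the neutral position.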
3.5 Two legs
To obtain antiphase coupling between two legs, the legs' oscillators may be coupled in the manner
indicated in Figure 3.3. However, this is an open-loop form of coupling: each oscillator receives
information only about the command signals being sent to each leg, not about the leg's actual
position. To close the loop, the legs are coupled through their actual positions φ, using a mixture of excitatory and inhibitory connections as shown in Figure 3.5. (The idea of coupling
oscillating limbs through their actual positions rather than through their command signals is
discussed in [102].) Labelling the two legs L and R (for "left" and "right," since we want this type
of antiphase coupling for legs on opposite sides of the body), the relevant ODEs are as follows.
Denote the leg positions by φl, with l ∈ {L, R}. Each leg has its own individual neural oscillator,
with variables xl,1, xl,2, al,1, and al,2, and associated with each neuron in the oscillator is its
firing rate output, yl,i = f(γ(xl,i − θ)) for i = 1, 2. There are ten equations in total:

where S(x) is as defined in equation (3.7), and the other parameter values are as previously
discussed.
As Figure 3.6 shows, the closed-loop stretch reflex coupling both maintains an antiphase
relationship between the two legs and recovers quickly from perturbations.
3.6 Six legs
The tripod gait is generated simply by extending the coupling described in the previous sections
to a set of six legs. In the tripod gait, each leg has an antiphase relationship to its ipsilateral
(same side) and contralateral (opposite side) neighbours. The coupling described in section 3.5
Figure 3.5: Antiphase coupling for two legs. Filled circles indicate excitatory connections, open circles indicate inhibitory connections; symbols next to connections represent coupling strengths. The upper oscillator produces a pair of outputs yL,1, yL,2, which drives the leg position φL according to equation (3.6). Similarly, the lower oscillator drives leg position φR.
Figure 3.6: Trajectories of two legs, coupled as indicated in Figure 3.5. Perturbations are introduced by setting φL = 0 for t ∈ [20, 22] and setting φR = 0 for t ∈ [35, 40]. Note that the system recovers rapidly from these disturbances, returning to the original oscillatory solution. Parameters: τ = k = 1; γ = 1; θ = 0; w* = 2; w = −3; wc = 0.25; I0 = 1; φsr = 0.5; g = 2; and V = 1.
Figure 3.7: Coupling pattern for tripod gait generating network. The dashed lines indicate antiphase coupling of the type shown in Figure 3.5; this is made explicit in equations (3.18-3.23).
is used to connect each neighbouring pair of legs, as shown in Figure 3.7.
The full set of equations used to generate the tripod gait is as follows. Each leg is represented
by a position φl, with l ∈ {L1, L2, L3, R1, R2, R3}. The neural oscillator associated with each
leg has four variables, denoted xl,1, xl,2, al,1, and al,2; associated with each analog neuron in
the oscillator is its firing rate output, yl,i = f(γ(xl,i − θ)) for i = 1, 2. There are thirty
differential equations in total, five for each leg:

where, as before, τ is the neural activation time constant, k is the adaptation rate, w < 0 is
the strength of the mutual inhibition within each neural oscillator, I0 is the tonic (constant)
excitation to which the neurons are subjected, wc > 0 is the strength of the coupling between legs, the stretch reflex function S(x) is as defined in equation (3.7), g is the stretch reflex gain, and V scales the rate of leg movement. The actual coupling between legs occurs through the Il terms, whose values define which neighbouring legs influence leg l's position:
Figure 3.8 shows the result of numerically integrating the thirty equations defined in (3.18-
3.22), with the legs started at random positions φl ∈ [-1, 1] and the other variables initialized to
small random values. The differential equations have been implemented in Simulink, a MATLAB package for ODE solving. Runs consistently show that a tripod gait is quickly established, and
the same gait is rapidly resumed if the legs are perturbed. Note that we do not need to use a
different value of coupling strength wc for the central legs L2 and R2, even though they have
three neighbours where the other legs have only two.
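The tripod grouping itself can be read off the coupling graph: antiphase coupling on every neighbouring pair is consistent exactly when the neighbour graph is two-colourable, and the two colour classes are the two tripods. A sketch (the edge list is assumed from the Figure 3.7 description of ipsilateral and contralateral neighbours):

```python
from collections import deque

# Neighbour graph for the six legs: ipsilateral and contralateral pairs,
# as described for the tripod gait coupling (structure assumed from Figure 3.7).
edges = [("L1", "L2"), ("L2", "L3"), ("R1", "R2"), ("R2", "R3"),
         ("L1", "R1"), ("L2", "R2"), ("L3", "R3")]
adj = {}
for a, b in edges:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

def two_colour(adj, start="L1"):
    """BFS two-colouring: antiphase on every edge is consistent only if the
    graph is bipartite; the colour classes are the two phase groups."""
    colour = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in colour:
                colour[v] = 1 - colour[u]
                q.append(v)
            elif colour[v] == colour[u]:
                raise ValueError("graph not bipartite: no consistent antiphase gait")
    return colour

colour = two_colour(adj)
tripods = (sorted(k for k, c in colour.items() if c == 0),
           sorted(k for k, c in colour.items() if c == 1))
print(tripods)  # (['L1', 'L3', 'R2'], ['L2', 'R1', 'R3'])
```

The two colour classes recovered here are exactly the tripods {L1, R2, L3} and {R1, L2, R3} seen in Figure 3.8.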
Figure 3.9 shows the result of adding white noise to the velocity of each leg, so that the ODE for each leg position becomes

where σ(t) is a stochastic band-limited white noise term (supplied, in this case, by the Simulink
ODE solver). The proper phase relationships are maintained despite the presence of the noise.
3.7 Future directions
The outputs of the network described in this chapter have been used to control the walking of a six-legged robot named Kafia, built during my Master's work [72]; the implementation was carried out by Joseph Yang [100]. Unfortunately, technical difficulties prevented the use of real-time feedback, so that the legs were controlled in an open-loop manner. It should be possible to
correct this difficulty and feed the leg position feedback into the differential equations.
In [100], the set of ODEs was numerically integrated on a 486 computer in real time. How-
ever, the form of the equations is such that it should be possible to implement them in analog
electronics directly [103, 104], meaning that no external computer would be required to carry
out the numerical integration.
In terms of analysis, the basic features of the individual oscillators would be preserved if they
were replaced by two-dimensional van der Pol oscillators (see [65]), or some other simple form of relaxation oscillator; this would halve the dimensionality of the system and facilitate more
detailed analysis of issues such as the global stability of the limit cycles seen in the network.
Figure 3.8: Tripod gait, generated by six coupled oscillators. The upper plot shows one set of three legs, {φL1, φR2, φL3}, while the lower plot shows the other tripod, {φR1, φL2, φR3}. Perturbations are introduced by setting φL1 = φR1 = 0 for t ∈ [15, 20] and φL2 = 0 for t ∈ [30, 35]. Parameters: τ = k = 1; γ = 4; θ = 0; w* = 2; w = −3; wc = 0.1; φsr = 0.5; g = 2; and V = 1.
Figure 3.9: Tripod gait, generated by six coupled oscillators, in the presence of noise. White noise with power 0.005 has been added to the velocity commands; see text. The upper plot shows one set of three legs, {φL1, φR2, φL3}, while the lower plot shows the other tripod, {φR1, φL2, φR3}. Perturbations are introduced by setting φL1 = φR1 = σ(t) for t ∈ [15, 20] and φL2 = σ(t) for t ∈ [30, 35]; during these periods, the affected legs drift under the influence of the white noise σ(t) (independent noise sources are provided for each leg). Parameters: τ = k = 1; γ = 4; θ = 0; w* = 2; w = −3; wc = 0.1; φsr = 0.5; g = 2; and V = 1.
Chapter 4
Oscillations in pools of coupled neurons
4.1 Local abbreviations
The following table lists the abbreviations used in this chapter; as discussed in section 1.7, they
are "local" in the sense that they apply only within this chapter.
Abbreviation        Definition
τ̃i                  τ̃(Ci) = (1 + ks Ci)^(−1)
Īi                  I0 − kI Ci
Ci                  pool-averaged calcium level in pool i
4.2 Introduction and background
Chapter 2 presented the concept of half-center oscillators, groups of neurons which oscillate due to mutual inhibition combined with a limited duration of inhibitory activity. The limited duration
of inhibition was provided by a form of spike-frequency adaptation, and an analog neuron model
was used.
Analog neurons, while useful for many analytical and practical purposes, are not generally
regarded as realistic models for individual neurons [39, 105], since many significant neural pro-
cessing events happen on such short time scales that only a small number of action potentials is
involved, making the notion of an "average firing rate" problematic [33]. Rather than viewing an
analog neuron as a single neuron whose output is its own individual firing rate, we can conceive
of an analog neuron as representing a population of individually spiking neurons, and view its
output as a firing rate averaged over the entire population [41, 105, 106, 107]; a time average
for a single neuron is replaced with an ensemble (population) average for a group of neurons,
thus avoiding the problem of the long averaging times required to define a meaningful rate in
the single-neuron case [103].
In this chapter, this population-averaging approach will be used to create and analyze a
half-center oscillator made up of two populations of individually spiking neurons. For simplicity,
I will consider only the case in which each population of neurons has no internal connections,
and has all-to-all connections between the pools, as shown in Figure 4.1. Once again, the factor
which limits the duration of inhibition will be spike-frequency adaptation, this time implemented
at the level of individual neurons rather than in the analog neuron equations.
Two pools coupled as shown in Figure 4.1 will, given that the individual neurons display spike-frequency adaptation and that the mutual inhibition is sufficiently strong, produce alternating
bursts of firing in the two pools, as shown in Figure 4.2.
For large populations of individual neurons, each of which has its own internal dynamics,
the complete system representing the two coupled pools has a large number of dimensions. In this chapter, I will show that it is possible to reduce this high-dimensional system to a two-
dimensional set of dynamics, a drastic simplification. These simplified dynamics allow quite accurate predictions to be made about the behaviour of the full, high-dimensional system, in-
cluding the period of the oscillations and the approximate range of coupling strengths for which
oscillations will occur.
The connection to robotics is the prospect of using oscillators of the type described in this
chapter to produce the rhythms required for locomotion (or other functions) in a robot. Such oscil-
lators would enjoy one of the central advantages of biological systems, namely redundancy: since
the oscillations come from a large number of individual units, loss of any single element would
not halt the oscillations.
Figure 4.1: Two coupled pools of individually spiking neurons. There are no connections within each pool. Between the pools, coupling is all-to-all: each neuron in pool 1 sends its output to every neuron in pool 2 with an inhibitory coupling strength K, and vice versa. Each pool receives a constant (tonic) input, I0; see section 4.3.
4.3 Neuron model
4.3.1 Dimensional form
The model neurons used in this chapter are of the integrate-and-fire (IF) type (see section 1.3.4).
The model, described in [108], includes spike-frequency adaptation, an effect wherein an accu-
mulating ionic concentration causes the neuron to fire less rapidly; the equations for an individual
neuron are

    C dV/dt̃ = −g (V − Vr) − G [Ca²⁺] (V − VK) + Ĩ(t̃),     (4.1)
    d[Ca²⁺]/dt̃ = −[Ca²⁺]/τCa + ACa Σf δ(t̃ − t̃f),     (4.2)

where V is the membrane voltage (see section 1.4), [Ca²⁺] is the concentration of calcium ions
inside the neuron, δ(·) is the Dirac delta function, and t̃ is the (dimensional) time, marked with
a tilde to distinguish it from the dimensionless "time" appearing below in section 4.3.2. The meanings and typical ranges of values for the other parameters appearing in (4.1-4.2) are given in Table 4.1. Note that a very similar adaptation model for IF neurons is described in [33].
In the model, each time the membrane potential V hits some threshold value Vth, a spike is
generated and V is reset to Vreset; here, I will always take Vreset = Vr, so that the voltage is
reset to the resting potential after each spike. Equation (4.2) shows that each spike produces
an increment of size ACa in the concentration of calcium inside the cell. Clearly this is an
Figure 4.2: Alternating bursts of firing in two coupled pools of 100 neurons each. This is what is known as a "raster plot": each horizontal line represents one of the 200 neurons in the population, with spiking times indicated by dots. One pool of neurons fires rapidly, suppressing firing in the other pool, then reduces its firing rate due to spike-frequency adaptation. When the firing rate drops far enough, the previously suppressed pool is able to recover and dominate the previously firing pool. The figure shows the output from the nondimensionalized model presented in section 4.3.2; the quantity t is thus a dimensionless time, scaled by the membrane time constant. Parameters: I0 = 10; K0 = 48; τs = 0.025 (K0 τs = 1.2); ks = 1.2; kI = 0.75; β = 0.2; τc = 2.5. (These parameters will be discussed in section 4.3.2.) All numerical simulations in this chapter are carried out using the fourth-order Runge-Kutta method with maximum step size h = 10^(−4) (dimensionless).
Parameter   Meaning                         Value            Notes
C           membrane capacitance            1 pF             1 pF is standard
g           membrane conductance            0.05 mS          S = Siemens = 1/Ohms
τm ≡ C/g    membrane time constant          20 ms
Vr          resting potential               −70 mV
Vreset      reset potential                 −70 mV           here, Vreset = Vr
VK          K reversal potential            −80 mV
ACa         [Ca²⁺] increment                0.2 µM
τCa         [Ca²⁺] decay time constant      50 to 100 ms
Vth         spiking threshold               −54 to −40 mV
Ĩ           input current                   0 to 10 µA
G           Ca-dependent K conductance      0 to 0.1 mS/µM

Table 4.1: Parameters for adapting integrate-and-fire model, dimensional form. The values cited are taken from [108].
approximation, since real cells do not change ionic concentrations instantaneously. However, the
influx of calcium in real cells is sufficiently rapid that it may be approximated by a step; see
[21, 108]. The effect of incoming calcium is to activate a calcium-dependent potassium current:
the presence of calcium acts to open potassium channels in the cellular membrane, making
the membrane more permeable to potassium ions and thus inducing a current flow across the membrane. (See section 1.4.1 for more information about flows of ions across membranes.)
For a constant input current Ĩ(t̃) = Ĩ0, an adapting neuron fires at some initial rate that
decreases as the calcium accumulates; see Figure 4.3. Denoting the initial rate finit and the final
steady-state rate fss, the degree (or "strength") of the adaptation effect may be summarized as

    Fadap = (finit − fss)/finit = 1 − fss/finit.

A value of Fadap = 0 indicates no adaptation, while Fadap = 1 indicates "complete" adaptation,
in which the neuron ceases firing entirely after it has adapted. Biological neurons rarely exhibit
complete adaptation; typical experimental values for Fadap range from 0 to 0.6 [21, 108].
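The adaptation-strength measure is a one-line computation; using the rates quoted in Figure 4.3:

```python
def adaptation_strength(f_init, f_ss):
    """F_adap = (f_init - f_ss) / f_init = 1 - f_ss / f_init.
    0 means no adaptation; 1 means firing stops entirely after adapting."""
    return 1.0 - f_ss / f_init

# Rates from Figure 4.3: f_init = 374 Hz, f_ss = 218 Hz
F = adaptation_strength(374.0, 218.0)
print(round(F, 2))  # 0.42
```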
4.3.2 Dimensionless form
Converting equations (4.1-4.2) to a dimensionless form helps to clarify the analysis, and the
remainder of this chapter will deal only with dimensionless quantities.
Let us nondimensionalize V by setting v = (V − Vr)/(Vth − Vr) = (V − Vr)/θ, where θ ≡ Vth − Vr. This maps the rest (and reset) voltage Vr to v = 0, and the spiking voltage Vth to v = 1. A sensible time scale is provided by the membrane time constant τm ≡ C/g; define a dimensionless "time" t = t̃/τm. The calcium concentration [Ca²⁺] may be nondimensionalized with respect to
Figure 4.3: Response of adapting IF neuron to constant input current. (Top) The "instantaneous firing rate" f, calculated as 1/t*, where t* is the interval between successive spikes. The initial rate finit = 374 Hz, while the steady-state rate fss = 218 Hz; the strength of adaptation is thus Fadap = 1 − fss/finit = 0.42. (Bottom) The calcium concentration in the neuron. Note that [Ca²⁺] rises to cycle around a steady-state value. Parameters: C = 1 pF; τm = 20 ms; Vr = −70 mV; Vth = −54 mV; ACa = 0.2 µM; VK = −80 mV; Ĩ0 = 6.4 µA; G = 0.06 mS/µM; τCa = 50 ms.
Table 4.2: Values of the dimensionless constants for various choices of the dimensional parameters. The dimensional parameters not listed are C = 1 pF, τm = 20 ms, Vrest = Vreset = −70 mV, ACa = 0.2 µM, and VK = −80 mV.
an arbitrary reference level, [Ca²⁺]ref; set c = [Ca²⁺]/[Ca²⁺]ref, with [Ca²⁺]ref = 1 µM. These definitions lead to the single-neuron equations

    dv/dt = −(1 + ks c) v + I(t) − kI c,     (4.4)
    dc/dt = −c/τc + β Σf δ(t − tf),     (4.5)

where ẋ = dx/dt. The dimensionless parameters are given by ks = G [Ca²⁺]ref τm/C, kI = ks (Vr − VK)/θ, β = ACa/[Ca²⁺]ref, and τc = τCa/τm; all of these are positive, except in the nonphysiological case where VK > Vr. The dimensionless input current is given by I(t) = Ĩ(t̃)(τm/Cθ) − (Vreset − Vr)/θ, which becomes just I(t) = Ĩ(t̃)(τm/Cθ) since Vreset = Vr. Table 4.2
shows typical values for the dimensionless parameters.
The dimensionless voltage rises until it reaches v = 1, at which point it is reset to v = 0. A convenient way to rewrite equations (4.4-4.5) is

    dv/dt = −v/τ̃(c) + Ī(t, c),     (4.6)
    dc/dt = −c/τc + β Σf δ(t − tf),     (4.7)

where τ̃(c) = [1 + ks c]^(−1) and Ī(t, c) = I(t) − kI c. These definitions cast the integrate-and-fire
model into its most basic form, making the between-spikes solution particularly clear. Setting
t = 0 when v is reset to v = 0, and taking a constant current I = Ī0, the solution to (4.6) for a
given (constant) c is

    v(t) = Ī0 τ̃ (1 − e^(−t/τ̃)).     (4.8)

The solution asymptotically approaches v∞ = Ī0 τ̃. Provided that v∞ > 1, v(t) will reach the
firing threshold v = 1 at some time t*, which is easily determined from (4.8) to be

    t* = −τ̃ ln(1 − 1/(Ī0 τ̃)).     (4.9)

Using this, we can define an "instantaneous firing rate" as the reciprocal of the interspike interval t*:

    f = (1/t*) H(Ī0 τ̃ − 1),     (4.10)

where H(·) is the Heaviside step function (H(x) = 1 if x ≥ 0, otherwise H(x) = 0).
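Equations (4.6)-(4.10) can be checked directly: the sketch below computes the closed-form instantaneous rate and compares it against naive Euler integration of the between-spikes dynamics. Parameter values are illustrative, matching the ks = 1.2, kI = 0.75 entries of Table 4.2.

```python
import math

def firing_rate(I0, c, ks=1.2, kI=0.75):
    """Closed-form instantaneous rate from (4.9)-(4.10):
    f = 1/t* with t* = -tau*ln(1 - 1/(I_bar*tau)); zero if I_bar*tau <= 1."""
    tau = 1.0 / (1.0 + ks * c)      # tau_tilde(c)
    Ibar = I0 - kI * c              # effective input current
    if Ibar * tau <= 1.0:
        return 0.0                  # subthreshold: the neuron never fires
    return 1.0 / (-tau * math.log(1.0 - 1.0 / (Ibar * tau)))

def firing_rate_euler(I0, c, ks=1.2, kI=0.75, dt=1e-5):
    """Integrate dv/dt = -v/tau + I_bar from reset (v=0) to threshold (v=1)."""
    tau = 1.0 / (1.0 + ks * c)
    Ibar = I0 - kI * c
    v, t = 0.0, 0.0
    while v < 1.0:
        v += dt * (-v / tau + Ibar)
        t += dt
    return 1.0 / t

print(firing_rate(10.0, 0.0))        # ~9.49 for I0 = 10, c = 0
print(firing_rate_euler(10.0, 0.0))  # should agree closely
```

Accumulated calcium both slows the membrane (through τ̃) and subtracts from the drive (through kI c), which is exactly the adaptation mechanism of Figure 4.3.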
4.3.3 Synaptic coupling
To form networks of the above-described IF neurons, I coupled them using simulated synaptic
currents. Each time a neuron outputs a spike, its associated synaptic current is incremented, for all subsequent times, by a synaptic kernel γ(t − tf). If {tf} is the set of times at which neuron i has fired,
its synaptic output current at time t is

    Ii^syn(t) = Σf γ(t − tf).

This is a common method of simplifying the intricacies of synaptic dynamics; see [105, 107, 36].
Here I will use the synaptic kernel γ(s) = e^(−s/τs) H(s), where τs is the synaptic decay time
constant. (Since I am working with the nondimensionalized model, τs is also dimensionless.)
This kernel corresponds to a synapse with a very rapid onset time and an exponential decay:
the modelled "neurotransmitters" begin to diffuse across the synaptic cleft instantaneously after
the neuron generates a spike, and the process persists for some (typically brief) time, with the
synaptic influence decaying exponentially.
In the numerical simulations, I have generally taken τs = 0.025. Assuming a membrane
time constant of 20 ms, this corresponds to a dimensional synaptic time constant of 0.5 ms, which implies a quite rapid decay in synaptic influence, meaning spikes in the distant past have
a negligible effect on the present value of the synaptic current. Figure 4.4 shows the synaptic current output for a single IF neuron.
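A quick sketch makes the "only the most recent spike matters" point quantitative: for regular firing at interspike interval t* ≈ 0.105 (the I0 = 10 rate computed in section 4.3.2) and τs = 0.025, the tail contributed by all earlier spikes is a geometric series summing to about 1.5% of the total.

```python
import math

def syn_current(t, spike_times, tau_s=0.025):
    """I_syn(t) = sum over past spikes of exp(-(t - t_f)/tau_s)."""
    return sum(math.exp(-(t - tf) / tau_s) for tf in spike_times if tf <= t)

# Regular spiking at the interspike interval t* ~ 0.1054 (I0 = 10, c = 0)
t_star = 0.10536
spikes = [k * t_star for k in range(1, 20)]
t = spikes[-1]                  # evaluate just as the latest spike arrives
total = syn_current(t, spikes)
last = 1.0                      # contribution of the spike at time t itself
print(total, last / total)      # older spikes contribute only ~1.5% of the total
```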
4.4 Population activity
The collective firing of a group ("pool") of neurons produces a net activity for the pool: if we
observe the spike train formed by superimposing the spiking times of all neurons in a pool, we
can average over a short time window and define an average firing rate for the pool. Following
Figure 4.4: Synaptic current output from a single neuron, Ii^syn(t) = Σf γ(t − tf), with synaptic kernel γ(s) = e^(−s/τs) H(s). The rapid synaptic decay time means that, for this firing rate, the synaptic output is essentially a function only of the most recent spike time. Parameters: I0 = 10; τs = 0.025.
Gerstner [106], I define this average firing rate, or "activity," as

    Fi(t) = ni^spikes(t; t + Δt) / (Δt Ni),     (4.12)

where Fi is the population activity for pool i, Ni is the number of neurons in the population, Δt is some small time interval, and ni^spikes(t; t + Δt) is the total number of spikes generated by the
pool in the interval from t to t + Δt. Figure 4.5 shows a raster plot (the stacked firing records
of all neurons in a population, plotted against time) illustrating this definition. For very large populations, we may formally consider the limit

    Fi(t) = lim(Δt→0) lim(Ni→∞) ni^spikes(t; t + Δt) / (Δt Ni).
In this limit, Fi(t) is independent of the choice of Δt, and at every time an "instantaneous"
firing rate is defined. For the smaller networks I will consider here, with population sizes in
the hundreds of neurons, the choice of Δt does make a difference when calculating Fi from the
output of a simulation: Fi becomes highly discontinuous if Δt is made too small, since in a finite
population some small slices of time will contain no firings at all. However, it is possible to
analyze the system's behaviour under the assumption that pool activities are smooth functions;
as we will see, this approach yields reasonable results despite the finite population size.
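The definition (4.12) is just spike-count binning. The sketch below applies it to synthetic, independent Poisson spike trains (a stand-in for an asynchronous pool; the rate and sizes are arbitrary) and illustrates the Δt trade-off described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_activity(spike_times, N, t_end, dt):
    """F(t) per the binned definition: spikes per bin / (dt * N)."""
    counts, _ = np.histogram(spike_times, bins=np.arange(0.0, t_end + dt, dt))
    return counts / (dt * N)

# Synthetic pool: N independent Poisson neurons at rate r (arbitrary units)
N, r, t_end = 200, 10.0, 5.0
spikes = np.concatenate(
    [np.cumsum(rng.exponential(1.0 / r, size=int(3 * r * t_end))) for _ in range(N)])
spikes = spikes[spikes < t_end]

F_fine = population_activity(spikes, N, t_end, dt=0.001)   # jagged: many empty bins
F_coarse = population_activity(spikes, N, t_end, dt=0.1)   # smoother estimate
print(F_coarse.mean())   # close to the true rate r = 10
```

A small Δt gives a noisy, highly discontinuous F(t); a larger Δt smooths it at the cost of temporal resolution, exactly the trade-off visible in Figure 4.5.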
4.4.1 Activity in a single population
The activity observed in a population of neurons depends on whether the population is displaying
asynchronous or synchronous behaviour. In asynchronous firing the firing times of the individual
neurons are uncorrelated, and we may assume that the population activity is a simple average
of the individual firing rates. When the population becomes synchronized, the firing times of
the neurons are strongly correlated; in the case of a fully synchronized system, all neurons fire
simultaneously. The population activity in the synchronous case is a series of brief pulses of high
activity, separated by intervals of no activity (see [105, 109]).
For a single pool of uncoupled neurons, the population could maintain a synchronous state
only if all neurons had identical initial conditions and no noise was present in the system; since
noise is in fact present in all real neural systems, synchrony is not realistic for a pool of uncoupled
neurons.
Given that the population is in an asynchronous state, we may calculate the population
activity as the average of the instantaneous firing rates of the individual neurons:

    Fi = (1/Ni) Σj fj,
Figure 4.5: Raster plots illustrating population activity. (Top) Each horizontal line represents one of the 100 neurons in the population, with spiking times indicated by dots. (Bottom) If we consider the slice of time from t = 2 to t = 2.01 (Δt = 0.01), we see that 16 neurons fire in the interval; the population activity at that instant is thus F = 16/(100 · 0.01) = 16 Hz. Considering the interval between t = 2.01 and t = 2.02 illustrates the effect of a finite population size: the activity in this instant is only F = 4 Hz, implying that F(t) is a very jagged function. It is generally necessary to take a larger sampling time (for example, Δt = 0.1) to obtain a smooth F(t).
where Ni is the number of neurons in the i-th pool, and fj is the firing rate of the j-th neuron.
For a constant input current I0, each neuron has a firing rate given by (4.10). If we define the
pool-averaged value of c as

    Ci = (1/Ni) Σj cj,     (4.14)

then the population activity in a pool of identical neurons is just

    Fi = [−τ̃(Ci) ln(1 − 1/(Ī0(Ci) τ̃(Ci)))]^(−1) H(Ī0(Ci) τ̃(Ci) − 1),     (4.15)

where τ̃(Ci) = [1 + ks Ci]^(−1) and Ī0(Ci) = I0 − kI Ci are the pool-averaged membrane time constant
and effective input current, respectively.
From (4.7), the dynamics of the pool-averaged calcium level is given by

    dCi/dt = −Ci/τc + (β/Ni) Σj Σf δ(t − tj^f).     (4.16)

For firing which is rapid relative to the calcium decay rate, as is generally the case, we may
replace the individual delta functions in (4.16) with the firing rate itself [108], obtaining

    dCi/dt = −Ci/τc + β Fi.     (4.18)
Note that the population activity is a function of the pool-averaged calcium level. The problem of determining the activity of pool i is thus reduced to finding Ci. Equation (4.15) may be made more tractable using the approximation [108]

    [−ln(1 − x)]^(−1) = 1/x − 1/2 + O(x)     (4.19)

for x « 1. For input I0, if Ī0τ̃ » 1, we may use (4.19) in (4.15) to obtain

    Fi ≈ [(I0 − 1/2) − (kI + ks/2) Ci] H(Ī0(Ci) τ̃(Ci) − 1).     (4.22)
Using (4.22) makes (4.18) into a simple linear ODE:

    dCi/dt = β (I0 − 1/2) − Ci/τad,     (4.24)

where the adaptation time constant τad = [β (kI + ks/2) H(Ī0τ̃ − 1) + 1/τc]^(−1). For Ci(0) = 0, equation (4.24) has solution

    Ci(t) = β τad (I0 − 1/2) (1 − e^(−t/τad)),     (4.25)

where it has been assumed that Ī0τ̃ > 1 throughout, allowing us to drop the Heaviside step functions.
Equations (4.22) and (4.25) predict the time course of the population activity for a pool of
uncoupled identical neurons with constant input I0. As t → ∞,
and the activity approaches its steady-state value
Figure 4.6 shows the population activity and pool-averaged calcium level from a numerical simulation of a pool of uncoupled neurons; the simulation values are compared with the predictions
of equations (4.22) and (4.25).
Note that the time courses of the pool activity and calcium level shown in Figure 4.6 resemble
the firing rate and calcium concentration plots for a single neuron, shown in Figure 4.3. An uncoupled pool of neurons is equivalent to a single neuron in which the effect of the noise in the
individual firing rates has been smoothed by averaging over the population. In fact, a pool of neurons behaves in many ways like an analog neuron, producing a collective firing rate output
that may be seen as an analog value [103]. The average calcium concentration in the population
effectively encodes the firing rate; this has been discussed in [20, 21].
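The reduced description can be integrated directly: treating the pool as a single unit with state C, with dC/dt = βF(C) − C/τc and F(C) the single-neuron rate formula evaluated at the pool average, reproduces the adapting time course of Figure 4.6. Parameters follow that figure; the forward-Euler scheme here is an illustrative sketch, not the thesis's RK4 setup.

```python
import math

def pool_rate(C, I0=15.0, ks=1.2, kI=0.75):
    """Population activity F(C) for an asynchronous pool: the single-neuron
    rate formula evaluated at the pool-averaged calcium level C."""
    tau = 1.0 / (1.0 + ks * C)
    Ibar = I0 - kI * C
    if Ibar * tau <= 1.0:
        return 0.0
    return 1.0 / (-tau * math.log(1.0 - 1.0 / (Ibar * tau)))

def run(beta=0.2, tau_c=2.5, dt=1e-3, T=30.0):
    """Forward-Euler integration of dC/dt = beta*F(C) - C/tau_c from C = 0."""
    C, Fs = 0.0, []
    for _ in range(int(T / dt)):
        F = pool_rate(C)
        C += dt * (beta * F - C / tau_c)
        Fs.append(F)
    return C, Fs

C_end, Fs = run()
print(Fs[0], Fs[-1])   # activity adapts downward to a steady state
```

At the steady state the calcium influx balances its decay, β F(C) = C/τc, which is the fixed-point condition behind the steady-state values shown in Figure 4.6.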
Figure 4.6: Time course of the activity and average calcium level in an uncoupled population. (Top) The population activity decays exponentially with time constant τad to a steady-state value Fss. Dashed line: results from a numerical simulation. Solid line: predicted time course, from equation (4.22). Finite size effects mean that the population rate fluctuates in the numerical run. (Bottom) The pool-averaged calcium level increases with the same time constant, τad, to a steady-state value Css. Dashed line: results from a numerical simulation. Solid line: predicted time course, from equation (4.26). Parameters: N = 500; I0 = 15; ks = 1.2; kI = 0.75; β = 0.2; τc = 2.5.
4.4.2 Activity in two coupled populations
When coupling is added to the system, as described in section 4.3.3 and illustrated in Figure 4.1,
the pools may no longer be in an asynchronous state. The coupling allows the possibility that
synchronization will occur within or across pools. Here, I will assume that enough noise is injected
into the individual neurons that any synchronizing tendencies of the coupling are overcome, and each pool remains in an asynchronous state. (In all numerical simulations in this chapter, a
random voltage offset in the range [−0.3, 0.3] is used, along with Poisson noise with λ+ = λ− = 40
and Δv = 0.01 (see sections 5.4.2 and 6.4); this high noise level does indeed suppress all synchronization effects, and the population is kept in an asynchronous state.) Asynchronous
firing will allow me to calculate the activity in two coupled pools using the same approximations as in section 4.4.1.
Define an effective input current for pool i, Īi, combining the effect of its own adaptation (calcium) level with the coupling from the other pools:

where Kji is the coupling strength from pool j to pool i, γ(s) is the synaptic coupling kernel,
and Fj is the activity in pool j. If the pool activities vary slowly compared to the synaptic time
constant, we may write

where I have used the synaptic kernel γ(s) = e^(−s/τs) H(s). Here I will only consider the case of two
pools with no internal coupling (Kii = 0), symmetric coupling between the pools (K12 = K21 = K), and identical population sizes N1 = N2 = N. For this case, the effective input currents are

where K0 = NK. Note that this is essentially Gerstner's spike response method; see [107, 109].
In [107], Gerstner carries out the analysis of the population activity of a single asynchronously
firing population of neurons, and sketches the approach for multiple populations; he does not
consider spike-frequency adaptation effects.
Using the currents (4.29-4.30) in equation (4.15) gives

where for brevity I have defined τ̃i ≡ τ̃(Ci) = (1 + ks Ci)^(−1) for i = 1, 2. For instantaneous
values of C1 and C2, (4.33) and (4.34) provide a set of simultaneous equations to be solved for
F1 and F2. (Note that Ī1 = Ī1(F2) and Ī2 = Ī2(F1).) In their full nonlinear form the equations
are not soluble analytically, though a solution may of course be found numerically. To obtain a
closed-form solution, I use the linear approximation given in equation (4.22), and write
Note that these equations are not linear, due to the presence of the step functions; they are,
however, piecewise linear. Figure 4.7 shows the intersection of the F1 and F2 equations for a case with weak coupling, and illustrates the difference between the full nonlinear equations and
the piecewise linear approximation.
For sufficiently weak coupling (see section 4.5), there is a single solution to equations (4.35-4.36), given by the intersection of the two lines in Figure 4.7 (bottom plot). Defining the
local abbreviations Z1 ≡ kI + ks/2, Z2 ≡ (I0 − 1/2)/(1 + K0τs), Z3 ≡ Z1/(1 − K0²τs²), and
Z4 ≡ K0τs Z1/(1 − K0²τs²), this solution is given by
Assuming F1(t) = F̄1 and F2(t) = F̄2, we can substitute into equation (4.18) and reduce the
system's dynamics to a two-dimensional system on the C1-C2 plane.
Figure 4.7: Determination of the activity in two coupled pools, weak coupling. (Top) The full nonlinear equations (4.33-4.34) are shown. The quantities Fi^max refer to the maximum possible firing rate of pool i, given the current input level and value of Ci. The values Fi^* refer to the firing rate in pool i past which the opposite pool is completely suppressed. These quantities will be referred to later, in sections 4.5 and 4.6. (Bottom) The piecewise linear approximations, equations (4.35-4.36). Parameters: K0τs = 0.5; I0 = 10; ks = 1.2; kI = 0.75; β = 0.2; τc = 2.5; C1 = C̄1; C2 = C̄2.
Defining another abbreviation, Z_5 ≡ βZ_4 + 1/τ_c, we may write the fixed points C̄_i (at which
Ċ_i = 0).
Shifting this fixed point to the origin by defining C̃_i = C_i - C̄_i, we find
The matrix eigenvalues λ_1 and λ_2 follow directly; substituting the abbreviations and simplifying,
these expressions become

λ_1 = -1/τ_c - β(k_I + k_T/2)/(1 - K_oτ_s),
λ_2 = -1/τ_c - β(k_I + k_T/2)/(1 + K_oτ_s).
Note that the eigenvalues depend on the coupling only through the product K_oτ_s. λ_2 is always
negative for inhibitory coupling (K_o > 0). λ_1 is negative for K_oτ_s < 1; at K_oτ_s = 1 it has a
singularity, with

lim_{K_oτ_s → 1⁻} λ_1 = -∞ (4.45)

and

lim_{K_oτ_s → 1⁺} λ_1 = +∞.
(4.46)
For 1 < K_oτ_s < 1 + β(k_I + k_T/2)τ_c, λ_1 > 0. Figure 4.8 shows λ_1 and λ_2 plotted against K_oτ_s. For the remainder of this section, I will consider K_oτ_s < 1, in which case the fixed point C_1 =
C̄_1, C_2 = C̄_2 is locally stable: λ_1 < 0 and λ_2 < 0. The linearized dynamics indicate that the system will
converge to these values starting from any point in the C_1-C_2 plane (linear systems, of course,
cannot have multiple fixed points or multistable solutions). Unfortunately, the corresponding
nonlinear system is not tractable: to solve it, I would need to be able to solve for F_1 and F_2 in their full nonlinear form, equations (4.33-4.34). Therefore I cannot make rigorous claims about the behaviour of the nonlinear equations, except to argue that their behaviour will be
locally similar to the behaviour of the piecewise linear equations. Based on extensive numerical
simulations, it appears to be reasonable to claim that all nonoscillatory solutions in the C_1-C_2 plane do in fact converge to the near vicinity of a fixed point that is well approximated by (4.41).
See Figure 4.9 for the results of several simulations, starting with various initial values of C_1 and
Figure 4.8: Eigenvalues for calcium dynamics. Note the singularity in λ_1 at K_oτ_s = 1; the eigenvalue is undefined at this point. For 1 < K_oτ_s < 1 + β(k_I + k_T/2)τ_c = 1.675, λ_1 > 0, after which it becomes negative once more. Parameters: k_T = 1.2; k_I = 0.75; β = 0.2; τ_c = 2.5.
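The sign structure of the eigenvalues is easy to check numerically. The sketch below uses my reconstruction of the two eigenvalue formulas (the coefficients are an assumption) with the parameter values of Figure 4.8; the variable `upper` is the upper edge of the band in which λ_1 > 0.

```python
# Eigenvalues of the linearized calcium dynamics, as reconstructed here:
#   lam1 = -1/tau_c - beta*Z1/(1 - K_o*tau_s)
#   lam2 = -1/tau_c - beta*Z1/(1 + K_o*tau_s),   with Z1 = k_I + k_T/2.
# Parameters follow Figure 4.8.
k_T, k_I, beta, tau_c = 1.2, 0.75, 0.2, 2.5
Z1 = k_I + k_T / 2.0   # = 1.35

def lam1(K_tau_s):
    return -1.0 / tau_c - beta * Z1 / (1.0 - K_tau_s)

def lam2(K_tau_s):
    return -1.0 / tau_c - beta * Z1 / (1.0 + K_tau_s)

upper = 1.0 + beta * Z1 * tau_c   # upper edge of the lam1 > 0 band (1.675)
```

Evaluating `lam1` just below 1, between 1 and `upper`, and above `upper` reproduces the sign pattern described in the caption.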
Figure 4.9: Movement in C_1-C_2 space, weak coupling. The results of several numerical simulations are shown, using different initial values for C_1 and C_2. (Recall that C_1 and C_2 are quantities averaged over each pool. Random initial values of c were assigned to each member of each pool, with the means chosen to yield approximately the desired C_1(0) and C_2(0).) Results are from simulations with two pools of 500 neurons each (N_1 = N_2 = 500). Parameters: I_o = 10; K_o = 20, τ_s = 0.025 (K_oτ_s = 0.5); k_T = 1.2; k_I = 0.75; β = 0.2; τ_c = 2.5.
C_2, and Table 4.3 for a comparison of the predicted and actual values of C̄_1 and C̄_2 at various
values of the coupling strength.
4.5 Onset of oscillations
In Figure 4.7, there is only one solution for F_1 and F_2. But as we increase the coupling strength,
additional solutions appear. Figure 4.10 shows a situation with multiple intersections of the F_1
Table 4.3: Values of C̄_1 and C̄_2. The "Theory" column gives the value of equation (4.41) for each coupling strength. The "Simulation" columns show results from numerical runs with two pools of 500 neurons each. Since fluctuations exist in the C_i in the simulations, the value reported is the average over 15 time units, after the system has converged to its steady state; the numbers in square brackets are the associated standard deviations. Parameters: I_o = 10; k_T = 1.2; k_I = 0.75; β = 0.2; τ_c = 2.5.
and F_2 curves. The exact nonlinear equations (4.33-4.34) can have up to five intersections, but
two of these occur only in a narrow range of parameters, and I will be concerned only with the
three that also arise in the piecewise linear equations (4.35-4.36). In addition to the F_1 = F_2 intersection point, we now have one point with F_1 = F_1^max > 0, F_2 = 0, and another with
F_2 = F_2^max > 0, F_1 = 0. The quantities F_i^max refer to the maximum firing rate a pool may
attain; this occurs in the absence of coupling, or when the other pool is completely suppressed.
The other significant points marked on Figure 4.10 are F_i^*; these are the firing rates above which
pool i completely suppresses the opposite pool.
Which of the three possible solutions for the F_i applies is a function of the C_i: when the
system moves away from C_1 = C_2, it can enter a regime in which one pool fully suppresses the
firing of the other. The firing rate solution then becomes one of the two corner points, with
either F_1 = 0 or F_2 = 0. Once this occurs, oscillatory behaviour arises in the system, as follows
(for concreteness, consider the case where the system moves first to a state with F_2 = 0):

• Owing to an imbalance in the calcium levels (C_2 > C_1), pool 2 is fully suppressed by pool 1.
Thus, we have F_1 = F_1^max > 0, F_2 = 0.
• The pool dynamics become Ċ_1 = βF_1 - C_1/τ_c and Ċ_2 = -C_2/τ_c.

• Under these dynamics, C_1 increases, while C_2 decreases.

• At some point, F_1^max drops below F_1^*, and the firing rate solution F_1 = F_1^max, F_2 = 0
no longer exists. The system moves quickly towards the only other stable solution, F_2 =
F_2^max > 0, F_1 = 0.
Figure 4.10: Strong coupling leads to multiple intersections in the F_1, F_2 curves. (Top) Full nonlinear equations (4.33-4.34) are shown. Note the three intersection points: the lower right (F_1 = F_1^max, F_2 = 0); the upper left (F_1 = 0, F_2 = F_2^max); and the centre (F_1 = F_2). The quantities F_i^max refer to the maximum possible firing rate of pool i, given the current input level and value of C_i. The values F_i^* refer to the firing rate in pool i past which the opposite pool is completely suppressed. Note that here F_i^max > F_i^*; compare this to the weak-coupling case in Figure 4.7. (Bottom) The piecewise linear approximations, equations (4.35-4.36). Parameters: K_oτ_s = 1.2; I_o = 10; k_T = 1.2; k_I = 0.75; β = 0.2; τ_c = 2.5; C_1 = C_2 = C̄_1 = C̄_2.
• This time, C_2 increases while C_1 decreases.

• When F_2^max falls below F_2^*, the system transfers back to the F_1 = F_1^max, F_2 = 0 point,
and the cycle repeats.
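The switching cycle just listed can be simulated directly at the level of the pool-averaged calcium variables. The sketch below uses my reconstruction of the linearized rate expressions (the coefficients are an assumption): F_max(C) = I_o - 1/2 - (k_I + k_T/2)C for the active pool, and F_req(C') = [I_o - 1 - (k_T + k_I)C']/(K_oτ_s) for the rate needed to keep the pool with calcium C' suppressed. The transition between corner solutions is treated as instantaneous, and the parameter values follow Figure 4.12.

```python
# Two-pool relaxation oscillation on the (C1, C2) plane. Rate expressions
# are a reconstruction (an assumption of this sketch, not verbatim from the
# thesis); parameters follow Figure 4.12.
I_o, K_tau_s = 15.0, 1.25
k_T, k_I, beta, tau_c = 1.2, 0.75, 0.2, 2.5
Z1 = k_I + k_T / 2.0

def F_max(C):            # maximum (uninhibited) rate of a pool
    return I_o - 0.5 - Z1 * C

def F_req(C_opp):        # rate needed to keep the opposite pool suppressed
    return (I_o - 1.0 - (k_T + k_I) * C_opp) / K_tau_s

dt = 1e-3
C = [0.5, 3.0]           # pool-averaged calcium levels
active = 0               # index of the currently firing pool
switch_times = []
t = 0.0
while t < 60.0:
    a, q = active, 1 - active
    C[a] += dt * (beta * F_max(C[a]) - C[a] / tau_c)  # active pool adapts
    C[q] += dt * (-C[q] / tau_c)                      # quiet pool recovers
    if F_max(C[a]) < F_req(C[q]):   # burst terminates; roles swap
        switch_times.append(t)
        active = q
    t += dt

half_period = switch_times[-1] - switch_times[-2]  # approaches T
```

After a brief transient the switch intervals settle to a constant half-period, the quantity T analyzed in section 4.6.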
The existence of oscillations of the form described depends on two conditions. First, we must
have F_i^max > F_i^* at C_i = C̄_i; if this condition is not satisfied, there is only one possible solution
for F_1 and F_2, namely the F_1 = F_2 intersection discussed above, and the system will tend to that fixed point. Once F_i^max > F_i^* at this fixed point, the two new solutions for the F_i appear, and
oscillations become possible. This condition is necessary but not sufficient: if the F_1 = F_2 fixed point is stable, the system may never visit the other fixed points, and oscillations may not occur.
I will discuss these two conditions in turn.
At the steady-state calcium level C̄_1 = C̄_2 = C̄, we have F^max = I_o - 1/2 - Z_1C̄ and F^* =
[I_o - 1 - (k_T + k_I)C̄]/(K_oτ_s). C̄ is a function of K_oτ_s (see equation (4.41)), and with some manipulation we may rearrange F^max(K_oτ_s) > F^*(K_oτ_s) to yield a condition involving a further local abbreviation, Z_6, a combination of the parameters above. The value of the right-hand
side is less than 1 for typical parameter values. (For I_o = 10, k_T = 1.2, k_I = 0.75,
β = 0.2, and τ_c = 2.5, we have that F^max > F^* if K_oτ_s > 0.76.)
The point at which the fixed point C̄_1 = C̄_2 = C̄ becomes unstable is determined simply by
examining Figure 4.8. At K_oτ_s = 1, the local (linearized) dynamics around the fixed point change
from a node (both eigenvalues negative) to a saddle (one negative and one positive eigenvalue).
Thus, for K_oτ_s > 1, fluctuations (which are always present in the system due to the finite
population size) will drive the system away from C_1 = C_2 = C̄ and towards one of the two other
fixed points, which exist provided that K_oτ_s satisfies the inequality (4.47).
Thus, we expect that oscillations will certainly occur for K_oτ_s > 1, and this is confirmed
by numerical simulations. However, oscillations in the simulations still occur for K_oτ_s < 1,
and in fact tiny oscillations around C̄ are seen even for K_oτ_s less than the right-hand side in
inequality (4.47); this mismatch between the theory and the observed behaviour is due to the
linearizing approximations used to derive (4.47), and also to the presence of fluctuations in a
finite population. At the other extreme, for large K_oτ_s one pool permanently suppresses the
other and oscillations cease; see section 4.7.
Figure 4.11 shows the results from a numerical simulation with K_oτ_s = 1.2, and as expected,
there is an oscillatory solution (this is the same simulation whose raster plot was shown in
Figure 4.2).
Figure 4.11: Oscillatory solution for two coupled pools. Results are from a simulation with two pools of 100 neurons each. (Top) C_1 and C_2 versus t. Note that the oscillation consists of the average calcium levels in the two pools, C_1 and C_2, rising to a high value (when the pool is active) and falling to a low value (when the pool is suppressed). (Bottom) F_1 and F_2 versus t, where the F_i are as defined in (4.11), using a sliding window with Δt = 0.15. Note the fluctuations in firing rate; these are due to the finite population size. Parameters: I_o = 10; K_o = 48, τ_s = 0.025 (K_oτ_s = 1.2); k_T = 1.2; k_I = 0.75; β = 0.2; τ_c = 2.5.
4.6 Period of oscillations
As Figure 4.11 shows, oscillations in the two coupled pools consist of the average calcium concentration levels in the two pools (C_1 and C_2) cycling back and forth between a high value (which
I shall call C_h) and a low value (called C_l); the period of this cycle is also the period of the oscillatory bursts of firing. I will denote by T the length of a single pool's burst of firing; the
period of the oscillations is thus approximately 2T. To find the period, I will solve for the three
unknown quantities (C_h, C_l, and T) in terms of the system parameters.
I will assume that one pool always suppresses the other, and neglect the time during which both pools have nonzero firing rates. (As Figure 4.11 shows, this time is very brief.) To solve for
T, I will examine the time course of the C_i during one burst of firing; for concreteness, consider
the case where F_1 > 0 and F_2 = 0. I set time t = 0 at the beginning of this period, and time
t = T as the instant at which the system changes back to F_1 = 0, F_2 > 0. Since pool 1 has been
quiescent immediately before t = 0, its calcium level has been falling while the level in pool 2 has been rising. This implies the following set of boundary conditions:

C_1(0) = C_l, C_1(T) = C_h; C_2(0) = C_h, C_2(T) = C_l.

Since F_2 = 0, the dynamics of pool 2 become Ċ_2 = -C_2/τ_c, which gives C_2(t) = C_h e^{-t/τ_c}. Applying the boundary condition at t = T gives C_l = C_h e^{-T/τ_c}, allowing me to write an expression for T in terms of the other two variables,

T = τ_c ln(C_h/C_l). (4.52)
The calcium dynamics of the active pool are only slightly more involved. Since pool 2 is not
firing, pool 1 receives no input from the synaptic coupling between the pools. With only the
global input I_o influencing it, pool 1's average firing rate is given by

F_1 = I_o - 1/2 - (k_I + k_T/2)C_1. (4.55)

Substituting this into (4.18), we obtain

Ċ_1 = Z_7 - C_1/τ_ad, (4.56)

using the local abbreviations Z_7 ≡ β(I_o - 1/2) and τ_ad ≡ [β(k_I + k_T/2) + 1/τ_c]^{-1}. Solving (4.56)
gives C_1(t) = Z_7τ_ad + (C_l - Z_7τ_ad)e^{-t/τ_ad}; applying the final condition, C_h = Z_7τ_ad + (C_l - Z_7τ_ad)e^{-T/τ_ad}. Using equation (4.52) to eliminate T gives

C_h = Z_7τ_ad + (C_l - Z_7τ_ad)(C_l/C_h)^{τ_c/τ_ad}. (4.57)
One more equation is required to be able to solve the system, and it is obtained by examining
the condition that causes the termination of a burst of firing in one pool (in this case, pool 1).
At t = 0, F_1^max > F_1^*: pool 1's firing rate in the absence of inhibition is greater than the rate
required to fully suppress pool 2's firing. As C_1 rises, F_1^max falls; at the same time, C_2 falls
and F_1^* rises. When F_1^max < F_1^*, the fixed point of the firing rate dynamics disappears, and
the system moves rapidly towards the only remaining stable fixed point (the state with F_2 > 0,
F_1 = 0). The final equation is obtained using the simplifying assumption that this transition
occurs instantaneously when the two critical values of F_1 cross: F_1^max(T) = F_1^*(T). The value of F_1^max is given by (4.33) evaluated at t = T; since C_1(T) = C_h, this gives

F_1^max(T) = I_o - 1/2 - (k_I + k_T/2)C_h. (4.58)
F_1^* is found by calculating the effective input current to pool 2, Ī_2 ≡ I_o - k_TC_2 - F_1K_oτ_s (see
equation (4.32) in section 4.4.2), and the effective membrane time constant, τ̃_2 ≡ (1 + k_IC_2)^{-1}. Pool 2 is fully suppressed when Ī_2τ̃_2 ≤ 1; taking Ī_2τ̃_2 = 1 at t = T yields

F_1^*(T) = [I_o - 1 - (k_T + k_I)C_l]/(K_oτ_s). (4.59)
Equating (4.58) and (4.59) provides a simple linear relationship between C_h and C_l,

C_h = Z_8 + Z_9C_l, (4.60)

using the local abbreviations Z_8 ≡ [I_o - 1/2 - (I_o - 1)/(K_oτ_s)]/Z_1 and Z_9 ≡ (k_T + k_I)/(K_oτ_sZ_1). We now have the required three equations, namely (4.52), (4.57), and (4.60). Using (4.60)
we may eliminate C_h in (4.57), obtaining an expression only in terms of C_l:
Equation (4.61) is not analytically tractable, but we may write a function based on the above
Figure 4.12: Plot of g(C_l) against C_l for typical parameter values; the zero crossing of this function gives the solution for C_l. Solid line: Exact function, from equation (4.62); the zero crossing is at C_l = 1.24. Dashed line: First-order approximation given by equations (4.66-4.68), expanding about C_l = 1/2; the zero crossing is at C_l = 1.19. Parameters: I_o = 15; K_o = 50, τ_s = 0.025 (K_oτ_s = 1.25); k_T = 1.2; k_I = 0.75; β = 0.2; τ_c = 2.5.
equation,

g(C_l) ≡ Z_7τ_ad + (C_l - Z_7τ_ad)[C_l/(Z_8 + Z_9C_l)]^{τ_c/τ_ad} - (Z_8 + Z_9C_l), (4.62)

and solve (4.61) by finding g(C_l) = 0; see Figure 4.12 for a plot of g(C_l) against C_l for typical
parameter values.

To determine if there exists a unique solution for C_l, we must examine the behaviour of the
function g(C_l). Negative values of C_l are unphysical, so we consider only C_l ≥ 0; at C_l = 0
we have g(0) = Z_7τ_ad - Z_8. This value is generally greater than zero; more specifically, if we
consider K_oτ_s = 1, we may write that g(0) > 0 requires

I_o > 1 + 1/[2βτ_c(k_I + k_T/2)].

For k_T = 1.2, k_I = 0.75, β = 0.2, and τ_c = 2.5 this gives I_o > 1.74, a moderate value. (Recall that we must have I_o > 1 to have any activity at all in the population.) The above criterion is
conservative; if K_oτ_s > 1, the value of I_o required to make g(0) positive will be smaller. Thus,
for most reasonable parameter regimes we will have g(0) > 0.
The first derivative of g(C_l) with respect to C_l is

g'(C_l) = -Z_9 + [C_l/(Z_8 + Z_9C_l)]^{τ_c/τ_ad} {1 + (τ_c/τ_ad) Z_8(C_l - Z_7τ_ad)/[C_l(Z_8 + Z_9C_l)]}. (4.64)
Figure 4.13 shows a plot of this function for typical parameter values. The derivative is negative
at C_l = 0: g'(0) = -Z_9, and Z_9 > 0. As C_l grows, g'(C_l) approaches an asymptotic value given
by

lim_{C_l → ∞} g'(C_l) = Z_9^{-τ_c/τ_ad} - Z_9. (4.65)

This asymptotic value is negative if Z_9^{-τ_c/τ_ad} < Z_9, or simply if Z_9 > 1 since τ_c/τ_ad > 0. Z_9 > 1
for K_oτ_s < 1 + [1 + 2k_I/k_T]^{-1}; for k_T = 1.2 and k_I = 0.75, this becomes K_oτ_s < 1.44.
To summarize: near K_oτ_s = 1, the function g(C_l) has g(0) > 0, and g'(C_l) < 0 for all C_l. This implies that the function will cross zero in only one place, and thus there exists a unique
solution for C_l. We obtain this solution by finding the zero crossing of equation (4.62). This is easily done numerically, and routines to do so are available in many software packages (for
example, both MATLAB and Maple have zero-finding routines). If a closed-form solution is desired, the function in (4.62) may be expanded as a power series. A singularity in the second
derivative means that the function is not analytic at C_l = 0, so we must expand about another
point, C_l = a. Expanding to first order gives
where the slope m(a) and intercept b(a) are functions of our choice of point of expansion, a.
Defining the abbreviations γ(a) ≡ Z_8 + aZ_9 and u(a) ≡ [a/γ(a)]^{τ_c/τ_ad}, we may write the slope
and intercept as
and
Figure 4.13: Plot of g'(C_l) against C_l. The asymptotic value is Z_9^{-τ_c/τ_ad} - Z_9 = -0.37. Parameters: I_o = 15; K_o = 50, τ_s = 0.025 (K_oτ_s = 1.25); k_T = 1.2; k_I = 0.75; β = 0.2; τ_c = 2.5.
With the slope and intercept in hand, the approximate solution of equation (4.61) is given by
C_l ≈ -b(a)/m(a). This solution is a function of a, and a poor choice of expansion point could lead to inaccurate
results. However, choosing a = 1/2, for example, provides a reasonable approximation over most
nonpathological parameter settings.
Once a value of C_l has been obtained, C_h is then found from C_h = Z_8 + Z_9C_l, and finally
the half-period is T = τ_c ln(C_h/C_l). Figure 4.14 compares the theoretical prediction to results
from a series of numerical simulations, plotting the half-period T against K_oτ_s, with all other
parameters fixed. A good match between theory and numerical results is obtained across a broad range of values of K_oτ_s. The theory predicts that oscillator death (see the next section)
will set in earlier than is actually the case in the numerical simulations, and therefore the theory diverges from the simulation results as the theoretical oscillator death point is approached.
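The whole solution procedure of this section fits in a few lines of code. The sketch below uses my reconstruction of equation (4.62) and its abbreviations (an assumption of this sketch, not a verbatim transcription), with the parameter values of Figure 4.12, where the quoted zero crossing is C_l = 1.24; a simple bisection plays the role of the zero-finding routines mentioned in the text.

```python
import math

# Find the zero crossing of the reconstructed g(C_l), then recover
# C_h = Z8 + Z9*C_l and the half-period T = tau_c * ln(C_h / C_l).
# Parameters follow Figure 4.12.
I_o, K_tau_s = 15.0, 1.25
k_T, k_I, beta, tau_c = 1.2, 0.75, 0.2, 2.5
Z1 = k_I + k_T / 2.0
Z7 = beta * (I_o - 0.5)
tau_ad = 1.0 / (beta * Z1 + 1.0 / tau_c)
Z8 = (I_o - 0.5 - (I_o - 1.0) / K_tau_s) / Z1
Z9 = (k_T + k_I) / (K_tau_s * Z1)

def g(C_l):
    C_h = Z8 + Z9 * C_l
    return (Z7 * tau_ad
            + (C_l - Z7 * tau_ad) * (C_l / C_h) ** (tau_c / tau_ad)
            - C_h)

# Bisection: g is positive near zero and monotonically decreasing here.
lo, hi = 1e-9, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
C_l = 0.5 * (lo + hi)
C_h = Z8 + Z9 * C_l
T = tau_c * math.log(C_h / C_l)
```

With these parameter values the root lands close to the figure's quoted value of 1.24, and T is the predicted burst length (half the oscillation period).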
4.7 Oscillator death
If the coupling strength becomes too large, oscillations will cease because the fully-adapted firing
rate of one pool will still be enough to completely suppress the other, so one pool will permanently
dominate; this is known as "oscillator death" [37, 38]. The theory in the above sections provides
a prediction for when this should occur for this system.
As discussed in section 4.6, the transition from one pool firing to the other firing occurs when F_i^max drops below F_i^*: that is, when the maximum firing rate of the active pool drops below the
rate required to fully suppress the other pool. (Recall that this statement neglects the (small) finite time during which both pools have nonzero activity.) Using the terminology introduced
above, the condition for oscillator death is

F_i^* < F_i^max. (4.70)

If pool i fully dominates, then C_i → C_i^max, given by equation (4.26) on page 71. Thus,

F_i^max = (I_o - 1/2)/[1 + βτ_c(k_I + k_T/2)] (4.71)
Figure 4.14: Comparison of theory with simulation results: T vs. K_oτ_s. Two synaptic decay rates were used, τ_s = 0.025 and τ_s = 0.05. The other parameters were fixed at: I_o = 10; k_T = 1.2; k_I = 0.75; β = 0.2; τ_c = 2.5. The simulation results are for two pools of 500 neurons each. Note that the theory predicts that no oscillations will occur for K_oτ_s > 1.587; see equation (4.73) in section 4.7. The vertical dotted line marks K_oτ_s = 1.587; note that while the theory has T → ∞ as it approaches this point, the simulations still have oscillatory solutions for K_oτ_s = 1.6. By the time we reach K_oτ_s = 1.7, however, no oscillatory period can be defined, since the two pools are making essentially random transitions, driven by fluctuations rather than deterministic dynamics; see Figure 4.15 in section 4.7. The numerical simulation results shown here were generated by keeping τ_s fixed at one of two values and varying K_o; other runs (not shown), in which τ_s was varied, indicate that the product K_oτ_s is the significant factor, as predicted. For large values of τ_s the theory begins to break down, since the assumption of approximately constant population activity over the synaptic coupling time scale, used in equations (4.31-4.32), is violated.
from equation (4.27). If pool j is fully suppressed, C_j → 0, and from equation (4.59),

F_i^* = (I_o - 1)/(K_oτ_s). (4.72)

Substituting (4.71) and (4.72) into (4.70) and rearranging, the condition for oscillator death
becomes

K_oτ_s > (I_o - 1)[1 + βτ_c(k_I + k_T/2)]/(I_o - 1/2). (4.73)
For I_o = 10, k_T = 1.2, k_I = 0.75, β = 0.2, τ_c = 2.5, this becomes K_oτ_s > 1.587. As Figure 4.14
shows, this underestimates the true value somewhat, since oscillations still occur for K_oτ_s = 1.6.
The reason is the finite population size, which causes fluctuations in the firing rates. When the
fully adapted firing rate of one pool is only marginally high enough to suppress the other pool,
small fluctuations in firing rate will be able to cause transitions, allowing the other pool to become
dominant. As K_oτ_s grows, these transitions are expected to become less and less regularly spaced
in time, since their timing is dominated by random fluctuations rather than by the deterministic
population dynamics. Figure 4.15 shows a numerical simulation with K_oτ_s = 1.7, and we do in
fact see irregular transition intervals. By K_oτ_s = 1.8, oscillations have halted completely, and
one pool consistently comes to dominate despite the fluctuations; the precise value at which this
occurs should be a function of the population size, since fluctuations vanish as the number of
individual elements expands to infinity.
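The quoted threshold of 1.587 can be checked directly from the death condition; the following sketch assumes my reconstruction of equation (4.73), in which the bracketed factor is an assumption rather than a verbatim transcription.

```python
# Oscillator-death threshold, using the reconstructed condition (4.73):
#   K_o*tau_s > (I_o - 1) * [1 + beta*tau_c*(k_I + k_T/2)] / (I_o - 1/2)
# Parameters match the text: I_o = 10, k_T = 1.2, k_I = 0.75, etc.
I_o, k_T, k_I, beta, tau_c = 10.0, 1.2, 0.75, 0.2, 2.5
death_threshold = ((I_o - 1.0) * (1.0 + beta * tau_c * (k_I + k_T / 2.0))
                   / (I_o - 0.5))
```

For the parameter set above this evaluates to approximately 1.587, the value marked by the vertical dotted line in Figure 4.14.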
4.8 Future directions
Although synaptic coupling within each pool has been neglected in this chapter (K_ii = 0), adding
internal coupling does not fundamentally change the nature of the calculations. Consider the
case with all-to-all coupling between pools (K_12 = K_21 = K, as before), and all-to-all coupling
within each pool (K_11 = K_22 = W). The effective current equations (4.31-4.32) then acquire an additional self-coupling contribution W_oτ_sF_i in each pool,
where K_o = NK and W_o = NW. The calculations in sections 4.4.1 through 4.7 should be
repeatable for this expanded system, at the cost of more cumbersome algebra. In fact, any number of pools of neurons could be analyzed, representing each population's activity using
effective synaptic input currents and population-averaged calcium levels.
Throughout this chapter I have assumed that asynchronous firing prevails in the populations,
Figure 4.15: Fluctuation-induced transitions in a simulation with parameters I_o = 10, k_T = 1.2, k_I = 0.75, β = 0.2, and τ_c = 2.5. For these values, oscillator death is predicted for K_oτ_s > 1.587; here, K_oτ_s = 1.7. The plot of C_i against t shows that transitions are in fact still possible, but that they are irregular and induced by fluctuations in the population firing rates (due to the finite population size) rather than by the deterministic population dynamics.
with any potential synchrony being dispersed by noise. In [105], Gerstner carries out an elegant
derivation of the stability of the asynchronous and synchronous states in a single population,
in terms of the noise level and the axonal delay time (I have not considered axonal delays
in this work). It would be interesting to extend Gerstner's work to the coupled pools, and
consider the effect of spike-frequency adaptation on the stability boundaries between synchrony
and asynchrony.
The calculations presented here have been based on the simple integrate-and-fire neuron
model, but it is possible to carry out essentially the same derivations for more complex models.
The only requirement is that an expression must be available which relates an individual neuron's
firing rate to its current input and its internal calcium concentration, as equation (4.13) does
for the integrate-and-fire case. With such an expression in hand, much of the subsequent analysis would be essentially unchanged. One difficulty that might arise would be if the model to be analyzed had no reasonable linear approximation in the regime of interest, equivalent to equation (4.22);
this could make the system intractable, though computational studies could still be carried out
in such a case.
Chapter 5
Noise-shaping in populations of coupled
neurons
5.1 Acknowledgement
This chapter presents work carried out in the Applied Biodynamics Laboratory, part of the
Center for Biodynamics at Boston University, headed by Prof. James Collins. The research is
an extension of previous work on neural noise-shaping, presented in Mar et al. [36]. I wish to
acknowledge the collaborative input of Dr. Douglas Mar, Prof. Carson Chow, and Prof. James
Collins; however, everything presented here is primarily my own work.
5.2 Background: Analog to digital conversion
In this chapter, we move from locomotion to sensory processing, and consider some signal-processing characteristics of networks of spiking neurons. The ability to distinguish signals from
noise will be as vital to biologically inspired robots as it is to animals, and the work in this chapter
suggests some ways in which spike-frequency adaptation may help improve signal processing in
networks of neuron-like spiking units.

A common task in electronic signal processing is converting signals from analog form (for
example, the voltage output of a device such as a microphone) into digital form (for example, a
bit string encoding the sound registered by the microphone); this is known as A/D conversion.
As Figure 5.1 shows, the process typically introduces two forms of discretization into the original
continuous (in both time and amplitude) signal. First, the signal is sampled at discrete times,
with the individual samples still being continuous values (producing signal f_2(x) in the figure). Provided that the sampling frequency is at least double the highest frequency component of the
original signal, no information is lost in this process; this is the content of the Nyquist sampling
Figure 5.1: Effect of quantization on a signal. Function f_1(x) is the original, analog signal. In function f_2(x), this signal has been sampled at discrete times; as long as this sampling occurs at twice the highest frequency occurring in f_1(x), no information is lost in this process. The function f_3(x) is formed by quantizing f_2(x), in this case by rounding all sampled values to either 0 or 1. Information is lost in this last process. Put another way, noise (called quantization noise) is introduced into the original signal.
theorem [110]. The second discretization consists of allowing the sampled values to assume only
a limited number of discrete values (see signal f_3(x) in the figure, in which the discrete values are
0 and 1). This quantization does cause a loss of information. In effect, noise has been injected
into the original signal; this is known as quantization noise.
Electronics designers want their A/D converters to work as accurately as possible, so they
attempt to reduce the level of quantization noise as much as possible; the lower the quantization
noise, the more faithfully the doubly-discretized digital signal will reproduce the original analog
input. One method of reducing the quantization noise is oversampling. If the original signal is
contained in a band 0 ≤ f ≤ f_o, oversampling means using a sampling frequency f_s >> 2f_o. The
oversampling ratio (OSR) is defined as

OSR = (sampling frequency)/(Nyquist frequency) = f_s/(2f_o). (5.1)
For oversampling to be effective, the signal must be sufficiently "busy": it must have some
chance of changing quantized levels between successive sampling times. If this is not inherently
the case, it may be achieved by "dithering" the signal, adding enough external noise to allow
changes in quantization state to occur. Busy signals will have, to a good approximation, quantization noise with a flat power spectrum, the same at all frequencies (white noise) [111]. In this
case, the amount of noise in the signal band, n_o, may be shown to fall off with the square root of
the oversampling ratio,

n_o ∝ (OSR)^{-1/2}. (5.2)

This is the standard sampling effect: when M independent, identically distributed random variables are averaged, the standard deviation of the new random variable thus obtained is proportional to M^{-1/2} [110, 112].
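This M^{-1/2} rule is easy to demonstrate with a small Monte Carlo sketch (the sample counts below are illustrative values only): the standard deviation of the mean of M unit-variance samples shrinks like 1/sqrt(M).

```python
import random
import statistics

# Estimate the standard deviation of the mean of M i.i.d. N(0, 1) samples.
random.seed(0)

def std_of_mean(M, trials=4000):
    means = [statistics.fmean(random.gauss(0.0, 1.0) for _ in range(M))
             for _ in range(trials)]
    return statistics.stdev(means)

# Averaging 100 samples should shrink the spread by about sqrt(100) = 10.
ratio = std_of_mean(1) / std_of_mean(100)
```

The measured ratio clusters around 10, as the M^{-1/2} scaling predicts.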
It is, however, possible to do better than simple oversampling. Electronic devices called delta-sigma (ΔΣ) converters employ a technique known as noise-shaping to improve the effect of oversampling. Figure 5.2 shows a schematic of a ΔΣ quantizer. A first-order delta-sigma
converter reduces the noise in the signal band more rapidly than simple averaging:

n_o ∝ (OSR)^{-3/2}. (5.3)
The reduction in noise power is achieved by altering the shape of the power spectrum. With
simple oversampling, the quantization noise is approximately white: it has equal power at all
frequencies, E(f) = E, a constant. When the delta-sigma scheme is used, the quantization noise
power spectrum goes as N(f) = 2E sin(πf/f_s) [111]; see Figure 5.3. The power spectrum is
"shaped": it has lower power at the low frequencies which comprise the signal band, and higher
power in the high frequencies. This noise shaping is what enables the faster reduction of in-band
quantization noise with oversampling ratio.
Figure 5.2: Schematic of a first-order delta-sigma converter. The analog input x(t) is converted into a series of quantized outputs Y_i, where i indexes the discrete sampling times. Rather than simply oversampling at the clock frequency f_s, the delta-sigma converter subtracts its own output from the input signal, and operates on an integrated version of x(t) rather than on the signal itself. The effect of these two factors is to reduce the amount of quantization noise present in the signal band.
5.3 Neural noise-shaping
Neurons may use a form of noise-shaping in their signal-processing functions. Mar et al. [36] and
Adams [113] have demonstrated that a network of integrate-and-fire (IF) neurons, when coupled
in all-to-all mutual inhibition, displays first-order noise shaping.
Consider the schematic of an IF neuron shown in Figure 5.4, and compare this with the
delta-sigma modulator shown in Figure 5.2. Comparing the two figures points out the similarities
between a self-inhibiting IF neuron and a ΔΣ modulator: both accept an analog input, integrate
it, and apply negative feedback using a discretized version of the integrated signal. It is important
to note, however, that although the IF neuron does perform a sort of discretization of the input
through its spiking behaviour, it remains an entirely analog device; in particular, time is not
discretized in Figure 5.4.
In [36], a network of N integrate-and-fire neurons coupled in all-to-all inhibition is used to
produce noise-shaping. The basic operation of the network is once again illustrated by Figure 5.4, replacing the individual integrator with a set of N integrators receiving similar inputs, and
replacing the single-neuron synaptic current with a collective current generated by all of the
neurons in the network. The number N is then analogous to the oversampling ratio in the
discrete-time case. A coupled network shows an improved signal-to-noise ratio (SNR), as well as
CHAPTER 5. NOISE-SHAPING IN POPULATIONS OF COUPLED NEURONS
Figure 5.3: Effect of a ΔΣ modulator on the noise power spectrum. The signal of interest has a maximum frequency component of f_o, indicated by the vertical dotted line. (Dashed line) The result of simple oversampling: the noise is flat with frequency, E(f) = E. The in-band noise power, n_o, is the area under the dashed line in the range 0 ≤ f ≤ f_o, and this decreases with oversampling ratio as n_o ∝ (OSR)^{-1/2}. (Solid line) The result of applying a ΔΣ modulator: the noise is "shaped," with less power in the low frequencies and more in the high frequencies, N(f) = 2E sin(πf/f_s); total noise power is the same as in the simple oversampling case. The in-band noise power is the area under the solid line in the range 0 ≤ f ≤ f_o, which decreases with oversampling ratio as n_o ∝ (OSR)^{-3/2}. (Compare the area of the triangle formed by the solid line to the area of the rectangle formed by the dashed line.)
Figure 5.4: Schematic of an integrate-and-fire neuron. Each output spike generates a decaying exponential pulse of synaptic output current, I_syn = Σ_m ε(t - t^(m)), where {t^(m)} is the set of firing times and ε(s) = e^{-s/τ_s}Θ(s) is the synaptic kernel. The figure shows the low-leak limit, where the neuron acts as a perfect integrator of the input current: v̇(t) = I(t) - K·I_syn(t), for synaptic coupling strength K.
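The low-leak limit in the caption is straightforward to simulate. The sketch below uses illustrative parameter values (not taken from the thesis) and shows the basic effect of the feedback loop: self-inhibition through the decaying synaptic current lowers the firing rate relative to the uncoupled integrator.

```python
import math

# Self-inhibiting IF neuron in the low-leak limit of Figure 5.4:
#   dv/dt = I - K * I_syn,  spike and reset at threshold 1,
# with I_syn a sum of unit-height decaying exponentials from past spikes.
def simulate(K, I=10.0, tau_s=0.025, T=2.0, dt=1e-4):
    v, i_syn, n_spikes = 0.0, 0.0, 0
    decay = math.exp(-dt / tau_s)
    for _ in range(int(T / dt)):
        i_syn *= decay                 # synaptic current decays
        v += dt * (I - K * i_syn)      # perfect integrator
        if v >= 1.0:
            v = 0.0                    # reset after a spike
            i_syn += 1.0               # emit a unit synaptic pulse
            n_spikes += 1
    return n_spikes

uncoupled = simulate(K=0.0)   # rate = I, so about 20 spikes in 2 s
coupled = simulate(K=5.0)     # inhibitory feedback slows the firing
```

In the mean, the feedback reduces the rate from I to roughly I/(1 + K·τ_s), since each spike injects a current pulse of integral τ_s.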
an extended dynamic range (DR). (At a given frequency of interest, SNR measures the ratio of
the signal power to the noise level, while DR is the ratio between the maximum attainable signal
power in the system and the noise level; for a fixed maximum signal power, decreasing the noise level increases the dynamic range.)
This work has demonstrated several points related to noise-shaping in networks of coupled
neurons:
• The noise-shaping effect persists in the integrate-and-fire model with a source of noise different from the one used in the previous work. In [36], a random reset of the voltage was used to introduce noise into the firing times of the individual neurons; we shall see that a network using random Poisson processes as the noise source also shows the noise-shaping effect.
• The dynamic range and signal-to-noise ratio are improved by the addition of spike-frequency adaptation to the integrate-and-fire model used in [36].
• A more complex conductance-based model (with Hodgkin-Huxley type dynamics) also demonstrates the noise-shaping effect.
• In the conductance-based model, the improvement in dynamic range caused by adding spike-frequency adaptation to the integrate-and-fire model disappears. There is, however, a signal-processing benefit conferred by adaptation in the conductance-based case, namely that it evens out the firing rates in a heterogeneous population, helping to prevent the fastest-spiking neurons from completely suppressing the slowest.
I will discuss these points in sections 5.4 and 5.5.
5.3.1 Calculation of power spectra
To assess the noise-shaping behaviour of a network of neurons, one must convert a list of firing times (generated by an integrate-and-fire model or by a conductance-based model, in sections 5.4 and 5.5, respectively) into a power spectrum. This is done by replacing each delta-function spike with a narrow rectangular pulse of unit height, centred at the firing time. This operation is carried out over a full run; it converts a list of firing times into a continuous function of time. The autocorrelation of this function may then be found with standard techniques [114], and taking the Fourier transform of the autocorrelation function yields the power spectral density, which I shall refer to simply as the power spectrum. Code in C to carry out this operation was provided courtesy of Douglas Mar, Boston University.
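The spike-train-to-spectrum procedure described above can be sketched as follows (an illustrative Python reimplementation, not the C code credited to Douglas Mar; the pulse width and time step are assumed values, and by the Wiener-Khinchin theorem taking |FFT|² of the binned signal is equivalent to Fourier-transforming its autocorrelation):

```python
import numpy as np

def spike_power_spectrum(spike_times, t_total, dt=1e-3, pulse_width=1e-2):
    """Estimate the power spectrum of a spike train by replacing each
    delta-function spike with a narrow rectangular pulse of unit height
    centred on the firing time, then taking |FFT|^2 of the result."""
    n = int(round(t_total / dt))
    signal = np.zeros(n)
    half = pulse_width / 2.0
    for t in spike_times:
        lo = max(int((t - half) / dt), 0)
        hi = min(int((t + half) / dt) + 1, n)
        signal[lo:hi] = 1.0  # unit-height rectangular pulse
    # subtract the mean so the DC component does not dominate the spectrum
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, spectrum
```

For a perfectly periodic train at 10 spikes per unit time, the spectrum shows its dominant peak at frequency 10, with harmonics at integer multiples, which is the deterministic-network behaviour the noise is later added to suppress.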
5.4 Adapting integrate-and-fire neurons
5.4.1 Neuron model
The adapting IF model uses the same equations presented in section 4.3.2:

dv_i/dt = γ(c_i) [a_i I(t) − k_I c_i − K I_syn(t) − v_i],    (5.5)
dc_i/dt = β δ(v_i − θ) − c_i / τ_c,    (5.6)
for i = 1, ..., N. (The parameters k_s, k_I, β, and τ_c are discussed in section 4.3.2.) All neurons receive the same input current I(t) = I₀ + S(t), combining a constant DC input and a time-varying signal, S(t) = A sin 2πf_s t, with amplitude A and frequency f_s. This input is weighted by a heterogeneity factor a_i, representing a variation in intrinsic firing rates across the population; the a_i are chosen from a uniform distribution over some range, typically a_i ∈ [1, 1.25]. The coupling is all-to-all (each neuron couples to every other neuron in the network, including itself), and uses the same form as described in section 4.3.3: I_syn(t) = Σ_{j,m} γ(t − t_j^(m)), where {t_j^(m)} is the set of firing times for neuron j and γ(s) = e^(−s/τ_s) Θ(s) is the synaptic kernel, τ_s being the synaptic decay time constant. The coupling strength is given by the constant K, with K > 0 representing inhibitory coupling.
Since equations (5.5–5.6) are nondimensional, all frequencies cited in this section will also be dimensionless (note, however, that the conductance-based model of section 5.5 is dimensional). A frequency of 1 therefore implies one cycle per membrane time constant. The conversion to a dimensional frequency depends on the value of the membrane time constant assumed in the process of nondimensionalization. Here, I have in mind a neuron with a low level of leak and a long membrane time constant, perhaps τ_m > 50 ms; the exact value is not critical to the results. The average network frequency over each numerical run will be denoted F. In all cases shown here, the values of I₀ have been adjusted to make F ≈ 1000.
All numerical simulations are carried out using a fourth-order Runge-Kutta method with a fixed maximum (dimensionless) step size h. When two neurons happen to generate spikes within the same interval, the step size is halved until only one of them fires; the exact time at which the firing threshold is reached is then found by interpolation between v(t) and v(t + h).
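A minimal sketch of the threshold-crossing interpolation (illustrative Python; the simple leaky integrator dv/dt = I₀ − v stands in for the full adapting model, the parameter values are assumptions, and the step-halving logic for simultaneous spikes is omitted):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def first_spike_time(i0, theta=1.0, h=1e-3):
    """Integrate dv/dt = i0 - v from v(0) = 0 and locate the threshold
    crossing by linear interpolation between v(t) and v(t + h).
    Assumes i0 > theta so the threshold is eventually reached."""
    rhs = lambda t, v: i0 - v
    t, v = 0.0, 0.0
    while True:
        v_next = rk4_step(rhs, t, v, h)
        if v_next >= theta:
            # linear interpolation for the exact crossing time
            return t + h * (theta - v) / (v_next - v)
        t, v = t + h, v_next
```

For this stand-in model the crossing time has the closed form ln[i0/(i0 − θ)], so the interpolation accuracy is easy to verify.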
5.4.2 Effect of Poisson noise
In [36], noise is introduced into the firing times of the neurons via a random reset of the voltage after each spike [109, 115, 116]; rather than being reset to 0 once the threshold v = 1 is reached, the reset voltage v_r is chosen with a uniform probability distribution over some range. Noise is necessary for several reasons. A sufficient level of noise prevents synchronization, which often occurs in coupled networks, especially with inhibitory coupling [27, 91, 117, 118]. A synchronized network can only represent one signal, namely the synchronized network frequency; it is thus better for the network to remain in the asynchronous state, in which signals may be encoded in the network activity at any arbitrary frequency. Another, related reason for adding noise is to suppress the harmonics at multiples of the input signal frequency. In a network of purely deterministic neurons, even in the absence of synchronization, the signal will have strong peaks at multiples of the input frequency; we want a clear peak at the signal frequency, and no other harmonics, and adding noise accomplishes this. In addition to these factors, of course, is the fact that real biological neurons tend to be quite variable in their firing times [31, 39, 119, 120, 121, 122], so a model incorporating noise is more biologically reasonable than a purely deterministic one.
Rather than using a random reset, another common method of introducing noise into a deterministic neuron model is to simulate the effect of incoming synaptic inputs from neurons not forming part of the modelled network [31, 115, 123, 124]. Since these external neurons are not explicitly represented in the network, their spiking times may be treated as effectively random and modelled as a Poisson process with some rate λ. Considering two separate Poisson processes, one for excitatory inputs and one for inhibitory inputs, the noisy version of (5.5) becomes, for an individual, uncoupled neuron,

dv/dt = γ(c) [I(t) − k_I c − v] + Δv⁺ η⁺(t) − Δv⁻ η⁻(t),    (5.7)
where η⁺(t) and η⁻(t) are Poisson processes with rates λ⁺ and λ⁻, respectively. Since the output of a Poisson process is a series of δ-functions, the effect of the additional terms in (5.7) is to "kick" the voltage up and down instantaneously by Δv⁺ or Δv⁻ as each point event occurs. This discontinuous change in membrane voltage is not, of course, physically realistic, but it reasonably reproduces the effect of incoming synaptic inputs where the synaptic time constants are small; simulating the synaptic dynamics of each incoming spike is more computationally expensive, and gives essentially the same results in numerical simulations (results not shown). Setting λ⁺ = λ⁻ = λ and Δv⁺ = Δv⁻ = Δv makes the noise terms average out to zero, so that the average firing rate is the same as in the noiseless equations.
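The balanced Poisson "kick" noise can be sketched as follows (illustrative Python with assumed parameter values; a forward-Euler step stands in for the Runge-Kutta scheme used in the thesis, and adaptation is omitted):

```python
import numpy as np

def simulate_poisson_kicks(i0=1.5, lam=250.0, dv=0.025, theta=1.0,
                           t_max=50.0, h=1e-4, seed=0):
    """Leaky IF neuron, dv/dt = i0 - v, with balanced excitatory and
    inhibitory Poisson 'kicks' of size +/-dv at rate lam each.
    Returns the list of firing times."""
    rng = np.random.default_rng(seed)
    v, t, spikes = 0.0, 0.0, []
    p_kick = lam * h  # event probability per step (requires lam*h << 1)
    while t < t_max:
        v += h * (i0 - v)          # Euler step, deterministic part
        if rng.random() < p_kick:
            v += dv                # excitatory kick
        if rng.random() < p_kick:
            v -= dv                # inhibitory kick
        if v >= theta:
            spikes.append(t)
            v = 0.0                # deterministic reset; noise is in the kicks
        t += h
    return spikes
```

Because the kicks are balanced, the mean drive is unchanged, but the interspike intervals acquire the jitter that prevents synchronization and suppresses the deterministic harmonics.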
Figure 5.5 shows a comparison of the noise-shaping seen with Poisson noise to that seen with random reset noise; the results are qualitatively similar, though the slope of the curve in the Poisson noise case is slightly more shallow than in the random reset case. The improvement in dynamic range and signal-to-noise ratio seen in the random reset case (described in the next section) is also present for the Poisson noise case, though the improvement is slightly smaller (results not shown).

The near agreement in the power spectra for the two cases indicates that the noise-shaping effect is not a unique feature of the random reset method of adding noise to the firing rates.
5.4.3 Effect of adaptation on DR and SNR
Incorporating spike-frequency adaptation into the integrate-and-fire model has the effect of increasing the network's dynamic range and signal-to-noise ratio in the lower frequency ranges. Figure 5.6 compares power spectra obtained for networks with and without spike-frequency adaptation. At the signal frequency of 100 Hz, there is a gain of 2.6 dB in dynamic range (the noise level decreases by this amount, which for a constant maximum signal power corresponds to an increase in dynamic range). There is also a gain in signal-to-noise ratio of 3.3 dB compared to the no-adaptation case, again at a signal frequency of 100 Hz.

No theoretical derivation of the effect of adaptation on the power spectrum has been completed to date. Calculations using a first-order approximation for the influence of the adaptation terms on the firing rates of the individual neurons indicate that the effect is due to some higher-order influence, since the increased dynamic range does not come out of the first-order version (these calculations will not be reproduced here). Work is currently in progress on a higher-order approach to the theory; this is being carried out in collaboration with Dr. Douglas Mar of Boston University and Prof. Carson Chow of the University of Pittsburgh.
Figure 5.5: Power spectra with Poisson noise vs. random reset (β = 0, no adaptation). No input signal has been provided to the network (A = 0). (Dotted line) Uncoupled network, K = 0; Poisson noise with λ = 250, Δv = 0.025; I₀ = 18.36; a_i ∈ [1, 1.25]; F = 1000. (Dashed line) Coupled network, K = 50; random reset noise, v_r reset into the range [−.3, .3]; I₀ = 60.8; a_i ∈ [1, 1.25]; F = 999.6. (Solid line) Coupled network, K = 50; Poisson noise with λ = 250, Δv = 0.025; I₀ = 61.5; a_i ∈ [1, 1.25]; F = 1001. The power spectrum in the Poisson noise case is similar to that in the random reset case (except for a slightly shallower slope with frequency), indicating that the results reported in [36] are not specific to their choice of noise model.
Figure 5.6: Power spectra with and without spike-frequency adaptation, integrate-and-fire model. An input signal with A = 3 and f_s = 100 has been provided to the system in each of the following cases. In all cases, the heterogeneity factor a_i ∈ [1, 1.25], and noise is introduced via a random voltage reset, v_r reset in the range [−.3, .3]. (Dotted line) Uncoupled network: K = 0; I₀ = 18.455; F = 1001. (Dashed line) Coupled network, no adaptation: β = 0; K = 50; I₀ = 61.5; F = 1002. (Solid line) Coupled network with adaptation: k_s = 1.5; k_I = 0.9375; β = 0.2; τ_c = 2.5; K = 50; I₀ = 77.2; F = 999.9.
Table 5.1: Currents used in the conductance-based model. V, m, h, n, [Ca²⁺], and m_∞^(Ca) are dynamical variables, while the quantities V_X and g_X (e.g. V_K and g_K) are parameters, giving the reversal potential and conductance, respectively, associated with ion X. The values of the parameters used are shown in Table 5.2.
5.5 Conductance-based neurons
Current   Equation                                           Description
I_L       I_L = g_L [V_L − V]                                Leak current
I_Na      I_Na = g_Na m_∞³ h [V_Na − V]                      Sodium current
I_K       I_K = g_K n⁴ [V_K − V]                             Potassium current
I_Ca      I_Ca = g_Ca [m_∞^(Ca)]² [V_Ca − V]                 Calcium current
I_AHP     I_AHP = g_AHP ([Ca²⁺] / ([Ca²⁺] + K_D)) [V_K − V]  Afterhyperpolarization current
I₀        (constant)                                         Applied current

5.5.1 Neuron model
In this section we will consider noise-shaping using more complex model neurons than the integrate-and-fire neurons used in section 5.4. Here, the individual neurons will be described by a conductance-based model (see section 1.5.2) taken from [21]. In the original paper, a two-compartment approach is taken, with one set of equations for the soma and another for the dendrites; here, only a single compartment has been used.
The equations are of the standard conductance-based type, with all-to-all synaptic coupling between members of the network. The rate of change of the membrane voltage is

C dV_i/dt = I_L + I_Na + I_K + I_Ca + I_AHP + a_i I₀ − K I_syn(t).    (5.8)
The definitions of the various ionic currents are summarized in Table 5.1. As in the integrate-and-fire case, heterogeneity in the population is introduced by multiplying the applied current I₀ by a factor a_i for each neuron. The synaptic coupling current is, as before, I_syn(t) = Σ_{j,m} γ(t − t_j^(m)), where {t_j^(m)} is the set of firing times for neuron j and γ(s) = e^(−s/τ_s) Θ(s) is the synaptic kernel, τ_s being the synaptic decay time constant. Unlike the integrate-and-fire model, the conductance-based neuron has no explicit threshold at which the voltage is discontinuously reset. Firing times are therefore assigned as the times at which the voltage crosses some fixed value such as −45 mV; when this crossing occurs, a spike is generated and the neuron's synaptic current output is increased. The coupling strength is given by the constant K, with K > 0 representing inhibitory coupling.
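The assignment of firing times by upward threshold crossing can be sketched as follows (illustrative Python; the linear interpolation between samples is a refinement added for this sketch, not stated in the text):

```python
def firing_times(t, v, v_thresh=-45.0):
    """Assign firing times to a voltage trace as the moments of upward
    crossing of a fixed voltage level (here -45 mV, as in the text),
    using linear interpolation between consecutive samples."""
    times = []
    for i in range(1, len(v)):
        if v[i - 1] < v_thresh <= v[i]:  # upward crossings only
            frac = (v_thresh - v[i - 1]) / (v[i] - v[i - 1])
            times.append(t[i - 1] + frac * (t[i] - t[i - 1]))
    return times
```

Only upward crossings count, so the repolarizing flank of each action potential does not register a second, spurious spike.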
Three variables summarize the action-potential-generating dynamics of the neuron: m, the sodium channel activation; h, the sodium channel inactivation; and n, the potassium channel activation. (See section 1.5.2.) All of these have dynamics of the form dx/dt = φ_x [x_∞(V) − x] / τ_x(V), with

x_∞(V) = α_x(V) / [α_x(V) + β_x(V)]    (5.9)

and

τ_x(V) = 1 / [α_x(V) + β_x(V)].    (5.10)
The sodium activation is assumed to be fast, so that

m = m_∞(V)    (5.11)

at all times, with m_∞ computed from the fitted rate functions α_m(V) and β_m(V).
Note that V is expressed in mV throughout these equations. Equations of the form (5.12–5.13) are the result of fitting curves to experimental data on the ion channel kinetics in particular neurons; the neurons described here and in [21] are cortical pyramidal neurons, but all conductance-based models have a similar form.
The sodium inactivation variable obeys

dh/dt = φ_h [h_∞(V) − h] / τ_h(V),    (5.14)

where φ_h is a rate scaling parameter, and the channel kinetics are given by the fitted rate functions α_h(V) and β_h(V).
The potassium activation obeys

dn/dt = φ_n [n_∞(V) − n] / τ_n(V),    (5.17)

with channel kinetics given by the fitted rate functions α_n(V) and β_n(V).
Equations (5.8), (5.11), (5.14), and (5.17) define the usual four-dimensional system of equations used in conductance-based models; here, a common simplification has been made, reducing the system to three ordinary differential equations by replacing the ODE for m with the algebraic relationship (5.11). Wang [21] adds spike-frequency adaptation to the basic model by considering a calcium-dependent potassium current: as the concentration of calcium, [Ca²⁺], rises, a potassium current called I_AHP is activated, causing the cell to fire less rapidly. (The subscript "AHP" stands for "afterhyperpolarization," and refers to the fact that after a cell has fired a burst of spikes, its resting potential is hyperpolarized compared to the resting potential in the absence of calcium.) The calcium dynamics (again, from [21]) are given by

d[Ca²⁺]/dt = ρ I_Ca − [Ca²⁺] / τ_Ca,    (5.20)
where τ_Ca is the decay time constant, and ρ sets the rate of influx of calcium ions. The kinetics of the calcium channels are assumed to be fast, so that m^(Ca) = m_∞^(Ca)(V) at all times. At the resting potential (V_rest = −64.6 mV), m_∞^(Ca) is near zero. During a spike, as V rapidly increases to the vicinity of 50 mV, m_∞^(Ca) → 1 for a brief period, causing I_Ca to become nonzero. By equation (5.20), this causes an increase in calcium concentration, which then begins to decay away once the spike ends and m_∞^(Ca) → 0 again.
The neural model is thus made up of four ODEs: (5.8), (5.14), (5.17), and (5.20). The values of the parameters appearing in the equations are listed in Table 5.2.
With a constant applied current, the neuron spikes regularly, and the calcium concentration builds up until a steady state is reached, in which [Ca²⁺] is increased by each spike, then decays back to its original level before the next spike occurs; see Figure 5.7. Figure 5.8 shows the relationship between applied current and firing rate for the model, with and without adaptation.
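The per-spike build-up of [Ca²⁺] to a steady state can be illustrated by approximating each spike's calcium influx as an instantaneous increment ΔCa (an assumption made only for this sketch; in the model the influx comes from the ρ I_Ca term during the brief spike):

```python
import math

def ca_peak_steady_state(delta_ca, tau_ca, period, n_spikes=200):
    """Calcium concentration sampled just after each spike, when each
    spike adds delta_ca and the concentration decays exponentially with
    time constant tau_ca between spikes. The per-spike peak converges
    to the fixed point delta_ca / (1 - exp(-period / tau_ca))."""
    ca = 0.0
    last_peak = 0.0
    for _ in range(n_spikes):
        ca += delta_ca                       # influx during the spike
        last_peak = ca
        ca *= math.exp(-period / tau_ca)     # decay until the next spike
    return last_peak
```

The recursion peak_{n+1} = peak_n * exp(-T/τ_Ca) + ΔCa converges geometrically, which is exactly the "builds up until a steady state is reached" behaviour of Figure 5.7.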
Noise is added to the system using the same method described in section 5.4.2, namely incorporating two independent Poisson processes (one excitatory, one inhibitory) to simulate the influences of neurons not explicitly modelled. The only equation affected is (5.8), which becomes, for an individual uncoupled neuron,

C dV/dt = I_L + I_Na + I_K + I_Ca + I_AHP + I₀ + C [ΔV⁺ η⁺(t) − ΔV⁻ η⁻(t)],    (5.21)
Figure 5.7: Voltage and calcium traces of the conductance-based model. I₀ = 5 μA; ρ = 0.003 μM/(μA·ms).
Figure 5.8: (Top) Firing rate vs. applied current for the conductance-based model. (Bottom) As above, but zoomed in to show the vicinity of the onset of oscillations.
Table 5.2: Parameters in the conductance-based model; values taken from [21].

Parameter   Value               Significance
C           -                   Membrane capacitance
g_L         0.1 mS              Leak conductance
V_L         −65 mV              Leak reversal potential
g_Na        45 mS               Sodium conductance
V_Na        55 mV               Sodium reversal potential
g_K         18 mS               Potassium conductance
V_K         −80 mV              Potassium reversal potential
g_Ca        1 mS                Calcium conductance
V_Ca        120 mV              Calcium reversal potential
g_AHP       -                   Afterhyperpolarization conductance
ρ           0.003 μM/(μA·ms)    Calcium influx rate
τ_Ca        80 ms               Calcium decay time constant
K_D         30 μM               Calcium equilibrium constant
φ_h         4                   h rate scaling
φ_n         4                   n rate scaling
where η⁺(t) and η⁻(t) are Poisson processes with rates λ⁺ and λ⁻, respectively. The effect of the additional terms in (5.21) is, as before, to increment or decrement the voltage instantaneously by ΔV⁺ or ΔV⁻. This discontinuous change reasonably reproduces the effect of incoming synaptic inputs where the synaptic time constants are small. Simulations were run in which the Poisson processes drove a synaptic coupling current rather than influencing the voltage directly, and the effect was the same. Since the above form is computationally more convenient, it has been used, and I have taken ΔV⁺ = ΔV⁻ = ΔV and λ⁺ = λ⁻ = λ; balancing the excitatory and inhibitory noise in this way means that the average firing rate is still given by the curve shown in Figure 5.8. With the addition of this Poisson noise, firing in the neurons is no longer completely regular; see Figure 5.9.
5.5.2 Noise-shaping results
Figure 5.10 illustrates that the noise-shaping effect is still seen in a network of conductance-based neurons coupled in all-to-all inhibition (the spectra shown are for the no-adaptation case, in which ρ = 0 and thus [Ca²⁺](t) = 0). It is interesting to note, however, a significant change in the frequency range to which the low-frequency noise power is shifted. In the integrate-and-fire model, the power from the low-frequency noise was shifted to the range near the network frequency (see Figures 5.5 and 5.6). Here, however, the noise power is "piled up" at frequencies
Figure 5.9: Firing in the conductance-based neuron with Poisson noise. Parameters: I₀ = 5 μA; λ = 1 kHz; ΔV = 1 mV.
Figure 5.10: Noise-shaping in a network of fifty conductance-based neurons, no spike-frequency adaptation (ρ = 0). (Dashed line) Uncoupled network: K = 0; I₀ = 0.26 μA; F = 1.001 kHz. The vertical dotted lines indicate the range of individual neuron frequencies, from 17.9 Hz to 22.6 Hz. (Solid line) Coupled network: K = 12; τ_s = 0.5 ms; I₀ = 5.2015 μA; F = 0.9993 kHz. The vertical dashed line indicates the fastest individual neuron frequency; the range is from 0.12 Hz (off the scale) to 86 Hz. Parameters (common to both cases): a_i ∈ [1, 1.25]; λ = 1 kHz; ΔV = 1 mV; ρ = 0.
below the average network frequency, but still higher than the individual neuron frequencies. As yet, no theoretical explanation for this difference exists.
5.5.3 Effect of adaptation
The improvement in dynamic range and signal-to-noise ratio offered by adding spike-frequency adaptation to the integrate-and-fire model disappears when we move to the conductance-based model of this section; the results are not shown, but the spectrum with adaptation is essentially indistinguishable from the nonadapting spectrum shown in Figure 5.10. It is unclear why this should be the case.

However, there is still an advantage to be gained from the presence of adaptation. In a
heterogeneous network, the individual neurons display a range of baseline firing rates: each neuron fires at a slightly different rate in response to the same level of current input, represented in the model by the range of values of the a_i in equation (5.8). When the network is coupled, the fastest-firing neurons tend to suppress their slower neighbours; note the wider range of individual firing rates in the coupled network shown in Figure 5.10, as compared to the range of individual rates for the uncoupled network. The coupling can create a situation in which the slowest neurons in the network are effectively silenced, either not spiking at all or spiking so rarely that they take no real part in representing the input signal. Spike-frequency adaptation reduces the tendency for the neurons with high intrinsic rates to dominate the slowest ones (since a fast neuron will adapt and lower its firing rate, giving slower neurons a chance to escape from the inhibition and fire). The signal-processing advantage of this lies in the fact that greater signal-to-noise ratios are attained by increasing N, the number of neurons in the network; if many neurons are completely suppressed, the effective network size is reduced, limiting the attainable signal-to-noise ratio.
Figure 5.11 shows two raster plots, in which horizontal lines represent individual neurons, with the spiking times indicated by dots. The upper plot shows a run without adaptation, while the lower one incorporates the adaptation dynamics. Both runs have been adjusted to have the same average network frequency, but in the no-adaptation case approximately 30 of the 100 neurons play effectively no role in the network's activity, while with adaptation all 100 neurons are at least somewhat active.
The effect grows more pronounced as we increase N, or as we increase the coupling strength K at a given N. This silencing of the slower neurons implies that a heterogeneous network cannot use its full capacity, and in effect the network contains only N_eff ≤ N neurons, where N_eff is calculated by setting a threshold firing rate below which a neuron will be counted as silent. Figure 5.12 shows the beneficial effect of adaptation on N_eff, using a threshold of 0.1 Hz as the cutoff for considering a neuron to be active.
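The N_eff bookkeeping is a simple count; a sketch of the cutoff rule (illustrative Python, with "at least 0.1 Hz" taken as an inclusive threshold):

```python
def effective_network_size(firing_rates, rate_threshold=0.1):
    """Count the neurons firing at or above rate_threshold (in Hz);
    neurons below it are considered silenced by the inhibitory coupling
    and do not contribute to the effective network size N_eff."""
    return sum(1 for r in firing_rates if r >= rate_threshold)
```

With the rates quoted in Figure 5.11 for the no-adaptation run (a spread from 0 to roughly 83 Hz), the silenced low-rate tail is what pulls N_eff below N.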
The reduction of N_eff below N places an upper limit on how many neurons can effectively participate in signal processing in a network. Since the SNR increases with increasing N in a coupled network [36], a reduced N_eff places a limit on the attainable signal-to-noise ratio.
Figure 5.11: Raster plots, comparing coupled networks with and without adaptation; the input current has been adjusted to give the same average individual neuron frequency in each case. The heterogeneity factor a_i ∈ [1, 1.25] in both cases. (Top) Coupled network, no adaptation (ρ = 0): I₀ = 10.25 μA. Range of individual neuron frequencies: 0 to 82.8 Hz, with an average of 19.99 Hz. At a threshold of 0.1 Hz, N_eff = 65. (Bottom) Coupled network, with adaptation (parameters as in Table 5.2): I₀ = 11.2 μA. Range of individual neuron frequencies: 0.7 to 45.1 Hz, with an average of 20.01 Hz. At a threshold of 0.1 Hz, N_eff = 100 = N. Parameters (common to both cases): N = 100; K = 12; τ_s = 0.5 ms; a_i ∈ [1, 1.25]; λ = 1 kHz; ΔV = 1 mV.
Figure 5.12: Effective number of neurons in the network (N_eff) vs. N. A neuron is counted towards N_eff if it fires at a rate of at least 0.1 Hz. In all cases, the value of I₀ has been adjusted to yield an average individual neuron frequency of 20 ± 0.1 Hz. (Solid line, circles) No adaptation (ρ = 0). (Dashed line, squares) With spike-frequency adaptation (parameters as in Table 5.2).
Chapter 6
Effect of adaptation on neural variability
6.1 Local abbreviations
The following table lists the abbreviations used for convenience in this chapter; as discussed in section 1.7, they are "local" in the sense that they apply only within this chapter.
Abbreviation   Definition   Section
6.2 Introduction and background
If biologically inspired robots are to use networks of spiking artificial neurons as their primary internal control mechanism, we will need to understand the properties of such neurons in considerable detail. Here, we will examine the effect of noise on the variability of a simple spiking neuron model, and consider how this variability changes under the influence of spike-frequency adaptation.

Spike trains recorded from actual neurons are not, typically, regular; rather, there is a high degree of variability in the interspike intervals (ISIs) [31, 39, 119, 120, 121, 122]. There are thought to be two main sources of this variability [31, 115, 123, 126]: intrinsic noise, internal to each neuron and arising from factors such as fluctuations in the opening and closing of finite populations of ion channels; and synaptic noise, caused by the influence on each individual neuron of a large and generally unmonitored population of other neurons, whose incoming spikes cause the neuron's state to vary in apparently random ways.
This chapter addresses a simple observation arising from numerical simulations of the integrate-and-fire (IF) neurons used in chapters 4 and 5. There are two particularly simple ways to introduce variability into the firing times of IF neurons: random voltage reset [36, 109, 115, 116], in which the membrane voltage is reset to a random value after each spike; and random threshold reset [33, 127, 128], in which the firing threshold for the next spike is reset to a random value each time the neuron generates a spike. The simulations point out two interesting facts:

• random voltage reset and random threshold reset have opposite effects as the firing rate decreases: with voltage reset, the variability drops rapidly as the firing rate approaches zero; with threshold reset, the variability increases rapidly;

• adding spike-frequency adaptation alters the variability, even at high firing rates: with voltage reset, adapting IF neurons are less variable than the equivalent tonic (nonadapting) neuron; with threshold reset, adapting neurons are more variable than their tonic equivalents.
Figure 6.1 shows the numerical results described above, using a standard quantity called the coefficient of variability (CV) to summarize the degree of variability of the IF neurons under various conditions. The CV is simply the ratio of the standard deviation to the mean, and is often used in experimental and theoretical work [129].

It is quite straightforward to calculate the probability density (and thus the CV) associated with the interspike interval distribution in the tonic IF case. A closed-form solution for the coefficient of variability in the presence of spike-frequency adaptation will be derived, subject to one approximation. Where this approximation breaks down will be discussed, and finally I will summarize the underlying cause of the difference in variability with and without spike-frequency adaptation.
6.2.1 Notation for probability calculations
Standard terms from probability theory have been used; to avoid ambiguity, these will be reviewed briefly here. The probability of an event will be denoted Pr{event}. A random variable X has a probability density function (pdf) f_X(x) such that the probability of X taking on a value in the set A is ∫_A f_X(x) dx. For a scalar random variable, as all variables will be in this chapter,
Figure 6.1: Effect of adaptation on IF neuron variability: numerical results. The coefficient of variability (CV) is used to summarize the level of variability; CV is defined as the ratio of the standard deviation to the mean, CV = σ/μ. Squares show the results for tonic neurons (no spike-frequency adaptation), while circles indicate the results when adaptation is present (k_s = 1.2, k_I = 0.6, β = 0.2, τ_c = 2.5; see section 4.3.2 for a description of these parameters). (Top) Random voltage reset: after each spike, v is reset to a random value, uniformly chosen from the range [−.3, .3]; the firing threshold is θ = 1. The horizontal axis shows the average firing rate, ⟨f⟩. (Bottom) Random threshold reset: after each spike, the firing threshold θ is reset to a random value, uniformly chosen from the range [.7, 1.3]; the voltage is reset to v₀ = 0.
Pr{a ≤ X ≤ b} = ∫_a^b f_X(x) dx. Another representation of a random variable's distribution is the cumulative distribution function (cdf), F_X(x) = Pr{X ≤ x}. The pdf and cdf are related by

f_X(x) = dF_X(x)/dx.

The mean of a random variable is given by

E[X] = ∫ x f_X(x) dx,

and in general the n-th moment of X is given by

E[Xⁿ] = ∫ xⁿ f_X(x) dx.

The variance of X is given by

Var[X] = E[X²] − (E[X])².

Finally, the standard deviation of X is given by

σ_X = √(Var[X]),

and the coefficient of variability (CV) is given by CV = σ_X / E[X] = σ_X / μ_X.
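The CV of a list of interspike intervals follows directly from these definitions (illustrative Python; as a sanity check, exponential ISIs, such as those produced by a Poisson process, give CV near 1, while a perfectly regular train gives CV = 0):

```python
import math

def coefficient_of_variability(samples):
    """CV = sigma/mu of a list of interspike intervals, computed from
    the population mean and variance as defined in the text."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n   # Var[X] = E[X^2] - mu^2
    return math.sqrt(var) / mu
```

This is the quantity plotted on the vertical axis of Figure 6.1.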
6.3 Neuron model
Once again, the integrate-and-fire model described in [108], which includes spike-frequency adaptation, has been used. (This model was also used in chapters 4 and 5.) The nondimensionalized form of the equations is

dv/dt = γ(c) [Î(t, c) − v],    (6.6)
dc/dt = β δ(v − θ) − c / τ_c,    (6.7)

where γ(c) = [1 + k_s c]⁻¹, Î(t, c) = I₀ − k_I c, and θ is the spiking threshold. See section 4.3.2 for information on the parameters. The tonic (nonadapting) version of the equations is obtained simply by eliminating equation (6.7) and using c = 0 in equation (6.6).
For a fixed value of c,

v(t) = v_∞ + (v₀ − v_∞) e^(−γt),    (6.8)

where v(0) = v₀ is the voltage reset value. This solution is exact in the tonic case, but only an approximation when the neuron displays spike-frequency adaptation, since c is not constant in that case. Setting v_∞ = Î, we see that v(t) → v_∞ as t → ∞; provided that v_∞ > θ, the neuron will eventually cross the spiking threshold and be reset to v₀.
6.4 Random voltage reset
In random voltage reset, the voltage v is reset stochastically rather than to a constant value v₀. To determine the resulting variability in the interspike intervals, we must define a random variable giving the distribution of the ISIs, then find the coefficient of variability (CV) of this variable.
First, note that for a given reset voltage v₀, we can solve equation (6.8) for the time, t*, at which v = θ:

θ = v_∞ + (v₀ − v_∞) e^(−γt*),    (6.9)

which yields (for fixed v₀ and θ)

t*(v₀) = (1/γ) ln[(v_∞ − v₀) / (v_∞ − θ)],    (6.10)

where v_∞ = Î, as before.
If v₀ is replaced with a random variable V₀ with some known distribution f_V₀(v₀), then the interspike interval also becomes a random variable. Call this random variable T; its distribution f_T(t) is determined by the distribution of V₀. The variability of the interspike intervals is thus determined by the variability of the random variable T.

Before proceeding, we need to find the inverse of equation (6.10), which gives the voltage reset value required to produce a given ISI:

V₀(t) = v_∞ − (v_∞ − θ) e^(γt).    (6.11)
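The map from reset voltage to ISI, and its inverse, can be sketched as follows (illustrative Python, assuming the voltage relaxes exponentially toward an asymptotic value v_inf at rate gamma, consistent with the solution quoted in the text; the parameter values used below are illustrative only):

```python
import math

def t_star(v0, v_inf, theta, gamma=1.0):
    """Time for v(t) = v_inf + (v0 - v_inf)*exp(-gamma*t) to reach the
    threshold theta, assuming v0 < theta < v_inf."""
    return math.log((v_inf - v0) / (v_inf - theta)) / gamma

def v0_of_t(t, v_inf, theta, gamma=1.0):
    """Inverse map: the reset voltage that produces an ISI of exactly t."""
    return v_inf - (v_inf - theta) * math.exp(gamma * t)
```

The inverse is monotonically decreasing in t (a lower reset voltage takes longer to reach threshold), which is the property used next to turn the distribution of V₀ into a distribution of T.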
The cdf for T is

F_T(t) = Pr{T ≤ t} = Pr{0 ≤ T ≤ t},    (6.12)

where the second equality arises simply from the fact that ISIs may not be negative. Since V₀(t) is monotonically decreasing, equation (6.12) may be rewritten as

F_T(t) = Pr{V₀ ≥ V₀(t)}.    (6.13)
Figure 6.2: Random voltage reset, uniform pdf. The shaded area corresponds to the integral in equation (6.16); this area gives the probability that the random variable T falls between 0 and t, which is the definition of F_T(t).
Using the known distribution f_V₀(v₀), (6.13) can be evaluated, and the pdf f_T(t) may then be found by taking f_T(t) = dF_T(t)/dt. The details of solving (6.13) depend, of course, on the form of the random reset distribution. Possible cases include a uniform distribution [36] and a Gaussian distribution [105]; here, I will present only the former.
Assume that the voltage is reset with uniform probability into the voltage range [A, B], with B ≤ θ; this gives the following probability density function for V₀ (see Figure 6.2):

f_V₀(v₀) = 1/(B − A) for A ≤ v₀ ≤ B, and 0 otherwise.    (6.14)
The minimum possible ISI occurs when V₀ = B (highest voltage = shortest time), while the maximum ISI occurs when V₀ = A (lowest voltage = longest time). Define these limiting times as T₁ = t*(B) and T₂ = t*(A), where t*(v₀) is given by equation (6.10). For ISIs with t < T₁ or t > T₂, the pdf f_T(t) = 0. For T₁ ≤ t ≤ T₂, we have A ≤ V₀(t) ≤ B, and equation (6.13) may be applied as follows:

F_T(t) = Pr{V₀ ≥ V₀(t)} = ∫_{V₀(t)}^{B} f_V₀(v₀) dv₀ = [B − V₀(t)] / (B − A).    (6.15–6.16)
The complete expression for the cdf for T is then

F_T(t) = 0,  t < T_1;
F_T(t) = [B - v_∞ - (θ - v_∞)e^{t/τ̃}]/(B - A),  T_1 ≤ t ≤ T_2;
F_T(t) = 1,  t > T_2.
Taking the derivative with respect to t, we find the pdf:

f_T(t) = [(v_∞ - θ)/(τ̃(B - A))] e^{t/τ̃},  T_1 ≤ t ≤ T_2;  f_T(t) = 0, otherwise.   (6.21)
To evaluate the coefficient of variability (CV), we will require the mean and standard deviation of the interspike interval distribution given by equation (6.21). Considering distributions of the form f_T(t) = Xe^{Yt} and carrying out the appropriate integrations, we find

E[T] = X[e^{Yt}(t/Y - 1/Y^2)]_{T_1}^{T_2}   (6.22)

and

E[T^2] = X[e^{Yt}(t^2/Y - 2t/Y^2 + 2/Y^3)]_{T_1}^{T_2}.   (6.23)

Setting

Z_1 = (v_∞ - θ)/(τ̃(B - A)),  Z_2 = 1/τ̃,   (6.24-6.25)

and using X = Z_1, Y = Z_2 in equations (6.22-6.23), the coefficient of variability may be found: E[T] and E[T^2] combine to give Var[T] = E[T^2] - (E[T])^2, which in turn gives σ_T = (Var[T])^{1/2} and CV = σ_T/μ_T.
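As a check on this derivation, the closed-form CV can be compared against a direct Monte Carlo simulation of the random reset. The sketch below assumes the tonic leaky integrate-and-fire form dv/dt = I_0 - v/τ with v_∞ = τI_0; the parameter values and function names are illustrative assumptions, not taken from the thesis.

```python
import math
import random

# Sketch (not the thesis's exact code): leaky integrate-and-fire neuron
# dv/dt = I0 - v/tau, spike when v reaches theta, so v_infinity = tau*I0.
# Parameter values below are illustrative assumptions.
tau, I0, theta = 1.0, 1.75, 1.0
v_inf = tau * I0

def t_star(v_r):
    """First-passage time to threshold from reset voltage v_r."""
    return tau * math.log((v_inf - v_r) / (v_inf - theta))

def cv_uniform_voltage_reset(A, B):
    """Analytic CV of the ISI for a uniform reset on [A, B], using the
    pdf form f_T(t) = X*exp(Y*t) on [T1, T2] described in the text."""
    T1, T2 = t_star(B), t_star(A)            # shortest and longest ISIs
    X = (v_inf - theta) / (tau * (B - A))    # normalization coefficient
    Y = 1.0 / tau
    m1 = lambda t: X * math.exp(Y * t) * (t / Y - 1.0 / Y**2)                  # antiderivative for E[T]
    m2 = lambda t: X * math.exp(Y * t) * (t**2 / Y - 2*t / Y**2 + 2.0 / Y**3)  # antiderivative for E[T^2]
    ET = m1(T2) - m1(T1)
    ET2 = m2(T2) - m2(T1)
    return math.sqrt(ET2 - ET**2) / ET

def cv_monte_carlo(A, B, n=200_000, seed=1):
    """Empirical CV from direct sampling of the random reset."""
    rng = random.Random(seed)
    isis = [t_star(rng.uniform(A, B)) for _ in range(n)]
    mean = sum(isis) / n
    var = sum((t - mean) ** 2 for t in isis) / n
    return math.sqrt(var) / mean

print(cv_uniform_voltage_reset(-0.3, 0.3))
print(cv_monte_carlo(-0.3, 0.3))
```

The two estimates should agree to within sampling error, since the analytic value uses only the antiderivatives of tXe^{Yt} and t²Xe^{Yt} over [T_1, T_2].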
6.4.2 Approximations for c dynamics
In the case of no adaptation, τ̃ = 1 and Ī = I_0, and equations (6.22-6.25) may be applied directly. To apply these equations for the case with spike-frequency adaptation, the dependence of the adaptation variable c on t and I must be taken into account. Using the same approximation
Figure 6.3: Random voltage reset: theory and numerical results. After each spike, the reset voltage v_r is uniformly chosen in the range [-0.3, 0.3]; the firing threshold is constant at θ = 1. The neuron exhibits spike-frequency adaptation (k_τ = 1.2, k_I = 0.6, β = 0.2, τ_c = 2.5). The horizontal axis gives the average steady-state firing rate, <f>. (Circles) Results from numerical simulations. (Dashed line) Theoretical prediction for CV, using (6.24-6.25) in (6.22-6.23).
as in section 4.4.1, we replace ċ = βδ(v - θ) - c/τ_c with ċ ≈ βf - c/τ_c, where f is the firing rate. As before, this implies that c will rise to a steady-state value c_ss = βτ_c f, which may be solved self-consistently with the firing rate to express c_ss in terms of I_0. Then τ̃ = [1 + k_τ c_ss]^{-1} and Ī = I_0 - k_I c_ss, and the formulae derived in the previous section may be applied as before. This approximation assumes that the firing rate is fast relative to the decay rate τ_c; for slow rates (small values of the input current I_0), the approximation breaks down.
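The steady-state value can be sketched as a damped fixed-point computation. Note that the specific functional forms below for the adapted time constant, adapted drive, and firing rate (and the constant names k_tau and k_I) are assumptions modelled on this chapter's description, not the thesis's exact expressions.

```python
import math

# Hedged sketch of the c_ss approximation: under the firing-rate approximation
# c' = beta*f - c/tau_c, the steady state satisfies c_ss = beta*tau_c*f(c_ss).
# The adapted time constant, adapted drive, and rate formula below are
# assumptions modelled on the chapter's description.
beta, tau_c, k_tau, k_I = 0.2, 2.5, 1.2, 0.6
theta, v_r = 1.0, 0.0

def rate(c, I0):
    """Firing rate of the adapted leaky integrate-and-fire neuron (assumed form)."""
    tau_eff = 1.0 / (1.0 + k_tau * c)   # adapted time constant (assumption)
    I_eff = I0 - k_I * c                # adapted drive (assumption)
    v_inf = tau_eff * I_eff
    if v_inf <= theta:
        return 0.0                      # subthreshold: the neuron stops firing
    return 1.0 / (tau_eff * math.log((v_inf - v_r) / (v_inf - theta)))

def c_steady_state(I0, iters=200):
    """Damped fixed-point iteration for c_ss = beta*tau_c*rate(c_ss, I0)."""
    c = 0.0
    for _ in range(iters):
        c = 0.5 * c + 0.5 * beta * tau_c * rate(c, I0)
    return c

print(c_steady_state(5.0))
```

The damping factor of 0.5 keeps the iteration stable even though the bare map c → βτ_c f(c) is steeply decreasing near the fixed point.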
Figure 6.3 shows a comparison between numerical runs and the theoretical predictions for the random voltage reset case. Numerical runs without adaptation are not shown, but they have been performed, and show a very close match between the theory and the numerical results.
6.5 Random threshold reset
6.5.1 Calculation of CV
If the firing threshold θ is reset, and we hold v_r and v_∞ fixed, equation (6.10) becomes

t*(θ) = τ̃ ln[(v_∞ - v_r)/(v_∞ - θ)],

exactly as before, except that now the interspike interval is a function of θ rather than of v_r. Finding the inverse of this function gives us the voltage threshold required to produce a desired ISI, t:

θ*(t) = v_∞ + (v_r - v_∞)e^{-t/τ̃}.   (6.27)
Let Θ be a random variable whose value is the random threshold. The pdf of Θ, f_Θ(θ), determines the cdf of the random variable T (which gives the interspike intervals), as follows:

F_T(t) = Pr{Θ ≤ θ*(t)} = ∫_{-∞}^{θ*(t)} f_Θ(θ) dθ.   (6.32)
Once again, there are two possible cases for the distribution of the random threshold: uniform and Gaussian, and once more I will present only the uniform case, in which the threshold is reset uniformly into the range [C, D]. This gives the following pdf for Θ:

f_Θ(θ) = 1/(D - C),  C ≤ θ ≤ D;  f_Θ(θ) = 0, otherwise,   (6.33)

with D ≥ C, C ≥ v_r, and D ≤ v_∞; see Figure 6.4. Substituting (6.33) into (6.32) yields, for C ≤ θ*(t) ≤ D,
Figure 6.4: Random threshold reset, uniform pdf. The shaded area indicates the area corresponding to the integral in equation (6.35); this area gives the probability that the random variable T falls between 0 and t, which is the definition of F_T(t).
F_T(t) = [θ*(t) - C]/(D - C).   (6.36)
Define T_1 = t*(C) and T_2 = t*(D) as the minimum and maximum possible ISIs, respectively. Then substituting (6.27) into (6.36) produces the complete expression for F_T(t):
Taking the derivative of (6.37) with respect to t gives the pdf of the interspike intervals:

f_T(t) = [(v_∞ - v_r)/(τ̃(D - C))] e^{-t/τ̃},  T_1 ≤ t ≤ T_2;  f_T(t) = 0, otherwise.   (6.38)
The pdf here has the same form as it had in the uniform random voltage reset case, namely f_T(t) = Xe^{Yt}. Setting

Z_3 = (v_∞ - v_r)/(τ̃(D - C)),  Z_4 = -1/τ̃,   (6.39-6.40)

and using X = Z_3, Y = Z_4 in equations (6.22) and (6.23), we can find the coefficient of variability just as before.
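For the threshold-reset case the same moment formulae apply with the new coefficients. A sketch, again assuming the tonic leaky integrate-and-fire form dv/dt = I_0 - v/τ with v_∞ = τI_0 and illustrative parameter values:

```python
import math

# Sketch of the threshold-reset CV calculation, using the pdf form
# f_T(t) = X*exp(Y*t) with X = (v_inf - v_r)/(tau*(D - C)) and Y = -1/tau.
# Tonic leaky integrate-and-fire assumptions; parameter values illustrative.
tau, I0, v_r = 1.0, 1.75, 0.0
v_inf = tau * I0

def t_star(th):
    """ISI produced by a fixed threshold th (first passage from v_r)."""
    return tau * math.log((v_inf - v_r) / (v_inf - th))

def cv_uniform_threshold_reset(C, D):
    """CV of the ISI when the threshold is reset uniformly into [C, D]."""
    T1, T2 = t_star(C), t_star(D)           # shortest and longest ISIs
    X = (v_inf - v_r) / (tau * (D - C))
    Y = -1.0 / tau
    m1 = lambda t: X * math.exp(Y * t) * (t / Y - 1.0 / Y**2)                  # for E[T]
    m2 = lambda t: X * math.exp(Y * t) * (t**2 / Y - 2*t / Y**2 + 2.0 / Y**3)  # for E[T^2]
    ET = m1(T2) - m1(T1)
    ET2 = m2(T2) - m2(T1)
    return math.sqrt(ET2 - ET**2) / ET

print(cv_uniform_threshold_reset(0.7, 1.3))
```

With these parameters the threshold-reset CV comes out noticeably larger than the voltage-reset CV for a comparable reset range, consistent with the discussion later in the chapter.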
6.5.2 Approximation of c dynamics
As in section 6.4.2, I will use the steady-state value c_ss to find the values of τ̃ and Ī, then use (6.39-6.40) in (6.22-6.23) to find the CV.
Figure 6.5 compares the results from numerical simulations with the theoretical prediction obtained by using the c_ss approximation. As the plot indicates, the fit is not very accurate. This is because the approximation assumes that v_∞ ≫ θ, while in the adapting case v_∞ in fact remains quite small even at high firing rates, as will be discussed in section 6.6.2 (see Figure 6.7). The deviation from theory is greater for the random threshold reset than it was for the random voltage reset, because setting the threshold to values as high as θ = 1.3 brings the neuron into a significantly less linear regime, in which there are occasionally very long interspike intervals, making the approximation to the c dynamics less accurate.
6.6 Discussion
6.6.1 Difference between voltage and threshold resets
The reason that random voltage reset and random threshold reset have opposite effects on the coefficient of variability as the firing rate decreases is illustrated in Figure 6.6. Consider the tonic case (no spike-frequency adaptation), so that τ̃ = 1, Ī = I_0, and v_∞ = τ̃Ī = I_0. For large values of I_0, v_∞ ≫ θ, and the neuron's dynamics are essentially linear during the approach to threshold (in equation (6.6), the I_0 term dominates the leak term, -v/τ̃).
At smaller values of v_∞, the approach to threshold is no longer linear, as the leak term comes to dominate the dynamics. In the random voltage reset case, this reduces the variability because the influence of the initial condition v_r is overwhelmed by the effect of the leak term during the asymptotic approach to v_∞. For v_∞ = θ + ε (ε ≪ 1), the firing time increases to infinity, and trajectories arrive at the threshold at essentially the same instant regardless of their initial condition; thus, the coefficient of variability approaches zero.
In the random threshold reset case, a small value of v_∞ increases the variability, because some trajectories meet the threshold while they are still in the largely linear phase, while others enter the curved asymptotic approach to v_∞. For v_∞ = D + ε (ε ≪ 1), the shortest firing times have some small finite value, while the longest times (those for which the random variable Θ takes on the value D) approach infinity; the coefficient of variability thus approaches infinity.
Thus, the coefficients of variability shown in Figure 6.1 for tonic neurons diverge (upwards or downwards) as the firing rate decreases because the reduced firing rate is accomplished by reducing I_0, which corresponds directly to reducing v_∞ and thus moving the neuron into the regimes discussed in the preceding paragraphs. Note that reducing the firing rate while simultaneously increasing the time constant τ could keep the trajectories linear even
Figure 6.5: Random threshold reset: theory and numerical results. After each spike, the firing threshold θ is uniformly chosen in the range [0.7, 1.3]; the reset voltage is constant at v_r = 0. The neuron exhibits spike-frequency adaptation (k_τ = 1.2, k_I = 0.6, β = 0.2, τ_c = 2.5). The horizontal axis gives the average steady-state firing rate, <f>. (Circles) Results from numerical simulations. (Dashed line) Theoretical prediction for CV, using (6.39-6.40) in (6.22-6.23).
Figure 6.6: Difference between random voltage reset and random threshold reset, for v_∞ = 1.75. (Top) Random voltage reset. The solid and dashed lines represent the extremes of a random reset of v into the range [-0.3, 0.3]. When each of these traces reaches θ = 1, a spike is generated; the interval between these first-passage times thus represents the range of possible interspike intervals. (Bottom) Random threshold reset. The solid and dashed lines represent the extremes of a random reset of θ into the range [0.7, 1.3]. Each of these lines starts from the same reset value of v_r = 0. The solid line generates a spike when it reaches θ = 1.3, while the dashed line spikes at θ = 0.7. The interval between these first-passage times represents the range of possible interspike intervals. The range is much wider here than in the upper plot. For a much larger value of v_∞, all voltage traces would be nearly linear, and there would be no difference between the random voltage and random threshold resets.
at low rates, and the effects on the coefficient of variability would be eliminated.
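This argument can be illustrated numerically. Assuming the tonic leaky integrator dv/dt = I_0 - v with τ = 1 (so v_∞ = I_0), the sketch below compares the spread of first-passage times under the two reset schemes at a small and a large value of v_∞, using the reset ranges from the figure captions:

```python
import math

# Numerical illustration (cf. Figure 6.6): spread of first-passage times
# under voltage reset vs. threshold reset, for a small and a large v_infinity.
# Tonic leaky integrator dv/dt = I0 - v with tau = 1 is an assumption here.
def t_star(v_r, theta, v_inf):
    """First-passage time from reset voltage v_r to threshold theta."""
    return math.log((v_inf - v_r) / (v_inf - theta))

def isi_ranges(v_inf):
    # voltage reset into [-0.3, 0.3] with fixed threshold theta = 1
    volt = t_star(-0.3, 1.0, v_inf) - t_star(0.3, 1.0, v_inf)
    # threshold reset into [0.7, 1.3] with fixed reset voltage v_r = 0
    thr = t_star(0.0, 1.3, v_inf) - t_star(0.0, 0.7, v_inf)
    return volt, thr

for v_inf in (1.75, 50.0):
    volt, thr = isi_ranges(v_inf)
    print(f"v_inf={v_inf}: voltage-reset ISI spread {volt:.3f}, "
          f"threshold-reset ISI spread {thr:.3f}")
```

At v_∞ = 1.75 the threshold-reset spread is more than twice the voltage-reset spread, while at v_∞ = 50 the two spreads are nearly identical, matching the linear-regime argument above.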
6.6.2 Effect of adaptation
The reason for the increase or decrease in variability for adapting neurons compared with tonic neurons at the same frequency is essentially the same effect discussed in section 6.6.1: reducing v_∞ brings the neuron into a regime wherein the approach to threshold becomes dominated by leak terms, and the trajectories diverge from linearity. Figure 6.7 compares the value of v_∞ at various firing rates for a tonic neuron with those for a neuron with adaptation. While v_∞ increases linearly with the firing rate for the tonic neuron, the adapting neuron asymptotes to a small value of v_∞ even at high rates, due to the decrease in τ̃ and Ī caused by increasing c. Thus the adapting neuron is in the small-v_∞ regime discussed above, even when it is firing rapidly, and so it experiences the corresponding effects on its CV (a decrease in the random voltage reset case, and an increase in the random threshold reset case).
These effects on neural variability are something to be borne in mind in future modelling work using the integrate-and-fire model: noise introduced by a random reset may have unexpected effects on the variability in the presence of spike-frequency adaptation, depending on the choice of reset method.
Figure 6.7: Variation of v_∞ with firing rate, with and without adaptation; results are from numerical runs with no random reset. Since the value of v_∞ varies slightly during each interspike interval (as c is incremented, then decays), the value on the horizontal axis represents the average of v_∞ over many intervals. Note that the tonic (no adaptation) case has v_∞ increasing linearly with firing rate, while in the adapting case v_∞ asymptotes to a small value.
Chapter 7
Summary and Conclusions
The work presented in this thesis constitutes part of a much larger effort, namely an attempt to extract principles from animal behaviour which may be applied to produce robots with the sort of behavioural competence displayed by real animals. The challenges involved are formidable, and we are still at an early stage of research: it is not clear which aspects of animal physiology are vital to their efficient operation, and which may be neglected. Progress towards robots rivalling the performance of biological organisms will depend on the efforts of many researchers, from many different fields; no single researcher can hope to carry out work in all of the required directions. My goal has been to find certain aspects of the larger problem on which I could make some progress, and to pursue work in these directions with the hope of contributing something to our current understanding.
Animals are built of muscles, bones, neurons, and so on; these are very different materials than those found in present-day robots, which are made mainly of metal, plastic, motors, microchips, and the like. There will therefore be a limit to how directly we may apply the information we obtain by studying animals to the task of building robots: many details of the operation of an animal's physical body will be neither reproducible in a robot, nor relevant to the robot's operation. (Does the fact that a squirrel comes equipped with a heart mean that our robotic squirrel must have an artificial heart, even if it has no "blood"?) Which aspects of an animal's physical constitution must be emulated to make a biologically inspired robot?
To address this question, we need to abstract the details of animal physiology into mathematical models; this abstraction is what will enable common principles to be implemented in substrates as different as cytoplasm and silicon. Like most other researchers, I have focussed on neurons, the signal-processing cells which are most directly responsible for an animal's behavioural patterns. Each chapter of the thesis has presented a model of some aspect of the dynamical behaviour of networks of neurons.
Chapter 2 described a simple method for adding spike-frequency adaptation (a common property of real neurons) to existing analog neuron models; the resulting models were called "phasic analog neurons." Adaptation extends the dynamics of networks of analog neurons, and in particular, it introduces the possibility of oscillatory solutions in situations where there would otherwise be only fixed-point solutions. The mathematical analysis presented in the chapter enabled us to characterize the conditions under which oscillations would arise in a simple two-neuron network, and to predict the stability of the oscillatory cycles arising. Since analog neuron models are a popular means of representing neural activities in artificial systems, and especially in robotics, this extended range of dynamical behaviours may be of relevance to other researchers. The mathematical analysis provides guidance in choosing the parameter settings for a two-neuron network such that the resulting oscillations have some desired form; this minimizes the amount of trial-and-error searching required.
In Chapter 3, one application of phasic analog neurons was presented: a network of twelve such neurons was used to generate command signals to be sent to the legs of a six-legged walking robot. Two concepts from biological locomotion were used to guide the design of the network: each leg was driven by a "central pattern generator" producing a baseline rhythm, and the motion of each leg was influenced by a "stretch reflex" which resisted motion past a certain distance away from the neutral position. A number of biologically inspired networks based on analog neuron models have been used by other researchers to generate hexapod gaits. The network presented here constitutes a new variation on the usual form of such networks, and one which is unusual in its use of adaptation to generate the oscillatory rhythms.
Chapter 4 considered neural oscillators similar to those discussed in the previous two chapters, but from a different perspective: rather than examining two analog neurons, two populations of individually spiking neurons were considered, with coupling between the two populations. Once again, oscillations arose in the presence of spike-frequency adaptation, and mathematical analysis provided predictions of the point of onset and the period of these oscillations. The main point of interest from a theoretical perspective was the ability to make such predictions for a system with a large number of dimensions, by simplifying the dynamics to a two-dimensional form. The analysis was based on earlier work describing the dynamics of neural populations [105], but extends the previous approach by considering the effect of spike-frequency adaptation. From an engineering perspective, oscillators consisting of a large number of individual units could have advantages in terms of robustness: since no single unit is responsible for the collective oscillatory behaviour, failure of one or more units will not destroy the oscillations.
Moving from motor behaviour to sensory processing, Chapter 5 discussed a signal-processing effect called noise shaping, which has recently [36] been shown to arise in networks of coupled model neurons. The chapter presented computational results extending the previous work: the effect is shown not to rely on the specific type of noise introduced into the system; the addition of spike-frequency adaptation is shown to improve the network's ability to detect signals; and the noise-shaping effect is shown to be present in a more elaborate model than that previously used, suggesting that the phenomenon may arise generically in networks of spiking units, rather than as a consequence of the specific model selected for the initial studies. Once again, the point of interest from an engineering perspective is the illustration that a highly parallel, highly redundant system can carry out effective signal processing. Since such a system would be expected to be very robust, it is of interest to attempt to understand the principles involved, and these computational studies have provided the basis for theoretical work now in progress.
Various methods have been used to inject noise into model neurons, and Chapter 6 presented some simulations and analysis of the effect of spike-frequency adaptation on two popular sources of noise used with the integrate-and-fire model. If researchers are to use integrate-and-fire models with spike-frequency adaptation present, they must bear in mind that it may change the effect of the noise that they inject into the model; when dealing with coupled networks, such a change is not always clearly visible in the network output. By considering individual neurons, the work presented in the chapter clarifies the source of the effect, allowing researchers using such models to account for it.
The task of trying to understand how animals work is one of the most challenging facing modern science, and we have only just begun. The vast complexity of biological systems is attracting more and more engineers, physical scientists, and mathematicians to study them; as the efforts of many researchers accumulate over time, we may one day reach the point where we have a level of understanding which will enable us to build robots able to deal with their environment as well as animals do. I hope that this thesis has provided some small contribution towards that goal.
Bibliography
111 Mark Edgiey. What is Caenorhabditis elegans and why work on it? From web site:
http:,'i www.biotech.missouri.edu~ Dauer-FVorld: Wormintro.htrn1. 1999.
[21 Xicholas S trausfeld. -4 tlns of an insect bmin. Springer-Verlag, Berlin. 1976.
[31 Valentino Braitenberg md Milena Kemali. .Itlas of the hog 's brain. Springer-Verlag. Berlin.
1969.
[4\ Robert W. Williams and Karl Hemp. The control of neuron number. Annual Review of
Neuroscience, 11:423453? 1988.
[5] Eric R. Kandel. Nerve cells and behavior. In Eric R. Kandel and James H. Schwartz.
editors? Ptinciples oJ neuml science. chapter 2. pages 14-26. Elsevier, 'Yorth-Holland. Sew
York. 1981.
(61 Rolf Dermietzel and David C. Spray. Gap junctions in the brain: where, what type. how
many. and why? 'lFends in Neurosciences. l6(5) :l86-l92. 1993.
[7] Michael Zigmond, Floyd Bloom, Story Landis, James Roberts, and Larry Squire, editors. Fundamental neuroscience. Academic Press, San Diego, first edition, 1999.

[8] Michael A. Arbib, editor. The handbook of brain theory and neural networks. MIT Press, Cambridge, MA, first paperback edition, 1998.

[9] Eric Kandel and James Schwartz, editors. Principles of neural science. Elsevier/North-Holland, New York, first edition, 1981.

[10] Irwin B. Levitan and Leonard K. Kaczmarek. The neuron. Oxford University Press, New York, 1991.

[11] John G. Nicholls, A. Robert Martin, and Bruce G. Wallace. From neuron to brain. Sinauer Associates, Sunderland, MA, third edition, 1992.

[12] William A. MacKay. Neuro 101: Neurophysiology without tears. Sefalotek, Toronto, 1997.
[13] Harvey Lodish, David Baltimore, Arnold Berk, S. Lawrence Zipursky, Paul Matsudaira, and James Darnell. Nerve cells. In Molecular Cell Biology, chapter 21, pages 923-990. Scientific American Books (W. H. Freeman and Company), New York, 1995.

[14] Christof Koch. Computation and the single neuron. Nature, 385:207-210, 16 January 1997.

[15] Christof Koch. Biophysics of computation, chapter 19 (Voltage-dependent events in the dendritic tree), pages 428-451. Oxford University Press, New York, first edition, 1999.

[16] John Koester. Passive electrical properties of the neuron. In Eric R. Kandel and James H. Schwartz, editors, Principles of neural science, chapter 4, pages 36-43. Elsevier/North-Holland, New York, 1981.

[17] Michael D. Gershon, James H. Schwartz, and Eric R. Kandel. Morphology of chemical synapses and patterns of interconnection. In Eric R. Kandel and James H. Schwartz, editors, Principles of neural science, chapter 9, pages 91-103. Elsevier/North-Holland, New York, 1981.
[18] Thomas F. Weiss. Cellular biophysics, volume 2 (Electrical properties). MIT Press, Cambridge, MA, 1996.

[19] Harvey Lodish, David Baltimore, Arnold Berk, S. Lawrence Zipursky, Paul Matsudaira, and James Darnell. Molecular Cell Biology. Scientific American Books (W. H. Freeman and Co.), New York, third edition, 1995.

[20] Christof Koch. Biophysics of computation, chapter 9 (Beyond Hodgkin and Huxley: calcium and calcium-dependent potassium currents), pages 212-231. Oxford University Press, New York, first edition, 1999.

[21] Xiao-Jing Wang. Calcium coding and adaptive temporal computation in cortical pyramidal neurons. Journal of Neurophysiology, 79(3):1549-1566, March 1998.
[22] Idan Segev and Robert E. Burke. Compartmental models of complex neurons. In C. Koch and I. Segev, editors, Methods in neuronal modeling, chapter 3, pages 93-136. The MIT Press, Cambridge, MA, 1998.

[23] Wilfrid Rall. Perspective on neuron model complexity. In Michael A. Arbib, editor, The handbook of brain theory and neural networks, pages 728-732. The MIT Press, Cambridge, MA, 1998.

[24] R. Traub, R. Wong, R. Miles, and H. Michelson. A model of a CA3 hippocampal pyramidal neuron incorporating voltage-clamp data on intrinsic conductances. Journal of Neurophysiology, 66:635-650, 1991.

[25] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology (London), 117:500-544, 1952.

[26] Christof Koch. Biophysics of computation, chapter 6 (The Hodgkin-Huxley model of action potential generation), pages 142-171. Oxford University Press, New York, first edition, 1999.
[27] John A. White, Carson C. Chow, Jason Ritt, Cristina Soto-Treviño, and Nancy Kopell. Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neurons. Journal of Computational Neuroscience, 5:5-16, 1998.

[28] R. FitzHugh. Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1:445-466, 1961.

[29] J. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line simulating nerve axon. Proceedings of the IRE, 50:2061-2070, 1962.

[30] J. D. Murray. Mathematical biology, volume 19 of Biomathematics. Springer, New York, second edition, 1993.

[31] Henry C. Tuckwell. Introduction to theoretical neurobiology, volume 2: Nonlinear and stochastic theories. Cambridge University Press, Cambridge, U.K., 1988.
[32] Louis Lapicque. Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. Journal de Physiologie et de Pathologie générale (Paris), 9:620-635, 1907.

[33] Christof Koch. Biophysics of computation, chapter 14 (Simplified models of individual neurons), pages 330-349. Oxford University Press, New York, first edition, 1999.

[34] J. J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, USA, 81:3088-3092, May 1984.

[35] John A. Hertz, Richard G. Palmer, and Anders S. Krogh. Introduction to the theory of neural computation. Addison-Wesley, Redwood City, CA, 1991.

[36] D. J. Mar, C. C. Chow, W. Gerstner, R. W. Adams, and J. J. Collins. Noise shaping in populations of coupled model neurons. Proceedings of the National Academy of Sciences (USA), 96:10450-10455, August 1999.
[37] Amir Atiya and Pierre Baldi. Oscillations and synchronizations in neural networks: an exploration of the labeling hypothesis. International Journal of Neural Systems, 1(2):103-124, 1989.

[38] P. C. Bressloff and S. Coombes. Desynchronization, mode locking, and bursting in strongly coupled integrate-and-fire oscillators. Physical Review Letters, 81(10):2168-2171, September 7, 1998.

[39] F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek. Spikes: Exploring the neural code. MIT Press, Cambridge, Mass., 1997.

[40] Michael A. Cohen and Stephen Grossberg. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, 13(5):815-826, October 1983.

[41] D. J. Pinto, J. C. Brumberg, D. J. Simons, and G. B. Ermentrout. A quantitative population model of whisker barrels: re-examining the Wilson-Cowan equations. Journal of Computational Neuroscience, 3:247-264, 1996.
[42] F. R. Waugh and R. M. Westervelt. Analog neural networks with local competition. II. Application to associative memory. Physical Review E, 47(6), June 1993.

[43] Randall D. Beer and John C. Gallagher. Evolving dynamical neural networks for adaptive behavior. Adaptive Behavior, 1(1):91-122, 1992.

[44] John Gallagher, R. Beer, K. Espenschied, and R. Quinn. Application of evolved locomotion controllers to a hexapod robot. Robotics and Autonomous Systems, 19:95-103, 1996.

[45] J. J. Hopfield and D. W. Tank. "Neural" computation of decisions in optimization problems. Biological Cybernetics, 52:141-152, 1985.

[46] John A. Hertz, Richard G. Palmer, and Anders S. Krogh. Introduction to the theory of neural computation. Addison-Wesley, Redwood City, CA, 1991.

[47] R. Kühn and S. Bös. Statistical mechanics for networks of graded-response neurons. Physical Review A, 43(4):2084-2087, February 1991.
[48] F. R. Waugh and R. M. Westervelt. Analog neural networks with local competition. I. Dynamics and stability. Physical Review E, 47(6):4524-4536, June 1993.

[49] Hui Ye, Anthony N. Michel, and Kaining Wang. Global stability and local stability of Hopfield neural networks with delays. Physical Review E, 50(5):4206-4213, November 1994.

[50] Hui Ye, Anthony N. Michel, and Kaining Wang. Qualitative analysis of Cohen-Grossberg neural networks with multiple delays. Physical Review E, 51(3):2611-2618, March 1995.

[51] B. Fiedler and T. Gedeon. A class of convergent neural network dynamics. Physica D, 111:288-294, 1998.

[52] L. Glass and M. Mackey. From clocks to chaos: The rhythms of life. Princeton University Press, Princeton, NJ, 1988.
[53] T. Graham Brown. The intrinsic factors in the act of progression in the mammal. Proceedings of the Royal Society of London, Series B, 84(572):308-319, December 1911.

[54] Ronald L. Calabrese. Half-center oscillators underlying rhythmic movements. In Michael A. Arbib, editor, The handbook of brain theory and neural networks, pages 444-447. MIT Press, Cambridge, MA, 1998.

[55] Richard A. Satterlie. Reciprocal inhibition and rhythmicity: swimming in a pteropod mollusk. In Jon W. Jacklet, editor, Neuronal and cellular oscillators, chapter 6, pages 151-171. Marcel Dekker, New York, 1989.

[56] James Gordon. Spinal mechanisms of motor coordination. In Eric R. Kandel, James H. Schwartz, and Thomas M. Jessell, editors, Principles of neural science, chapter 38, pages 581-595. Elsevier, New York, 1991.

[57] Kiyotoshi Matsuoka. Sustained oscillations generated by mutually inhibiting neurons with adaptation. Biological Cybernetics, 52:367-376, 1985.

[58] Kiyotoshi Matsuoka. Mechanisms of frequency and pattern control in the neural rhythm generators. Biological Cybernetics, 56:345-353, 1987.
[59] Harold Atwood and Peter Nguyen. Neural adaptation in crayfish. American Zoologist, 35:28-36, 1995.

[60] D. Horn and M. Usher. Neural networks with dynamical thresholds. Physical Review A, 40(2):1036-1044, July 1989.

[61] Janet Halperin. Machine motivation. In J. Meyer and S. Wilson, editors, From Animals to Animats, pages 213-221. MIT Press, Cambridge, Mass., 1990.

[62] A. Herz, B. Sulzer, R. Kühn, and J. L. van Hemmen. Hebbian learning reconsidered: representation of static and dynamic objects in associative neural nets. Biological Cybernetics, 60:457-467, 1989.

[63] Alwyn C. Scott. Neurophysics. Wiley-Interscience, New York, 1977.

[64] J. J. Collins and I. Stewart. Hexapodal gaits and coupled nonlinear oscillator models. Biological Cybernetics, 68:287-298, 1993.
[65] Paul Glendinning. Stability, instability and chaos: an introduction to the theory of nonlinear differential equations. Cambridge University Press, Cambridge, U.K., 1994.

[66] John Guckenheimer and Philip Holmes. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, volume 42 of Applied Mathematical Sciences. Springer-Verlag, New York, 1983.

[67] David McMillen, Gabriele D'Eleuterio, and Janet Halperin. Oscillations in phasic analog neurons. Poster presentation at the Gordon Research Conference on Neuroethology, 1999.

[68] Ivan E. Sutherland. A walking robot. The Marcian Chronicles, Pittsburgh, PA, 1983.

[69] Marc Raibert. Legged robots that balance. MIT Press, Cambridge, MA, 1986.

[70] Shin-Min Song. Machines that walk: the adaptive suspension vehicle. MIT Press, Cambridge, MA, 1989.

[71] Rodney A. Brooks. A robot that walks: emergent behaviors from a carefully evolved network. Neural Computation, 1(2):253-262, 1989.

[72] David R. McMillen. Kafka: A hexapod robot. Master's thesis, University of Toronto Institute for Aerospace Studies, 1995.
[73] Luc Steels and Rodney Brooks, editors. The artificial life route to artificial intelligence: building embodied, situated agents. L. Erlbaum Associates, Hillsdale, NJ, 1995.

[74] Rodney A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1):14-23, March 1986.

[75] Randall D. Beer. Intelligence as adaptive behavior: an experiment in computational neuroethology. Academic Press, Boston, MA, 1990.

[76] David J. Manko. A general model of legged locomotion on natural terrain. Kluwer Academic Publishers, Boston, MA, 1992.

[77] H. Cruse, C. Bartling, G. Cymbalyuk, J. Dean, and M. Dreifert. A modular artificial neural net for controlling a six-legged walking system. Biological Cybernetics, 72:421-430, 1995.

[78] K. Espenschied, R. Quinn, R. Beer, and H. Chiel. Biologically based distributed control and local reflexes improve rough terrain locomotion in a hexapod robot. Robotics and Autonomous Systems, 18:59-64, 1996.
[79] Ronald C. Arkin. Behavior-based robotics. MIT Press, Cambridge, MA, 1998.

[80] H. J. Chiel, R. D. Beer, and J. C. Gallagher. Evolution and analysis of model CPGs for walking. I. Dynamical modules. Journal of Computational Neuroscience, 7(2):99-118, 1999.

[81] R. D. Beer, H. J. Chiel, and J. C. Gallagher. Evolution and analysis of model CPGs for walking. II. General principles and individual variability. Journal of Computational Neuroscience, 7(2):119-147, 1999.

[82] Keir Pearson. The control of walking. Scientific American, pages 72-86, December 1976.

[83] Fred Delcomyn. Neural basis of rhythmic behavior in animals. Science, 210:492-498, 31 October 1980.

[84] Sten Grillner, Peter Wallén, and Lennart Brodin. Neuronal network generating locomotor behavior in lamprey: circuitry, transmitters, membrane properties, and simulation. Annual Review of Neuroscience, 14:169-199, 1991.

[85] Y. Arshavsky, G. Orlovsky, Y. Panchin, A. Roberts, and S. Soffe. Neuronal control of swimming locomotion: analysis of the pteropod mollusc Clione and embryos of the amphibian Xenopus. Trends in Neurosciences, 16(6):227-233, 1993.
[86] Allen 1. Selventon. Yuri V. Pandiin. Yuri 1. Arshavsky. and Gngori S. Orlovsky. Shared
features of invertebrate central pattern generators. In P. Stein. S. Griher . -1. Selverston.
and D. Stuart, editors. Neuronï, networks. and motor behavior. chapter 10' pages 105-1 17. The MIT Press. Cambridge. MX. 1991.
[87] Ziancy Kopell. Toward a theory of modehng central pattern generators. In A. Cohen.
S. Rossignol, and S. Grillner. editors. Neural control of rhythmic mouements in uertebmtes. chapter 10. pages 369413. John Wiley St Sons, Yew York, 1988.
[88] Nancy Kopell. Coupled oscillators and the design of central pattern generators. Mathematical Biosciences, 90:87-109, 1988.
[89] Brian Mulloney and Donald H. Perkel. The roles of synthetic models in the study of central pattern generators. In A. Cohen, S. Rossignol, and S. Grillner, editors, Neural control of rhythmic movements in vertebrates, chapter 11, pages 415-453. John Wiley & Sons, New York, 1988.
[90] Eve Marder, Nancy Kopell, and Karen Sigvardt. How computation aids in understanding biological networks. In P. Stein, S. Grillner, A. Selverston, and D. Stuart, editors, Neurons, networks, and motor behavior, chapter 13, pages 139-149. The MIT Press, Cambridge, MA, 1997.
[91] D. Terman, N. Kopell, and A. Bose. Dynamics of two mutually coupled slow inhibitory neurons. Physica D, 117:241-275, 1998.
[92] David R. McMillen, Gabriele M. T. D'Eleuterio, and Janet R. P. Halperin. Simple central pattern generator model using phasic analog neurons. Physical Review E, 59(6):6994-6999, June 1999.
[93] Jeffrey M. Camhi. Neuroethology: Nerve cells and the natural behavior of animals. Sinauer Associates, Sunderland, Mass., 1984.
[94] U. Bässler. On the definition of central pattern generator and its sensory control. Biological Cybernetics, 54:65-69, 1986.
[95] Keir G. Pearson and Jan-Marino Ramirez. Sensory modulation of pattern-generating circuits. In P. Stein, S. Grillner, A. Selverston, and D. Stuart, editors, Neurons, networks, and motor behavior, chapter 21, pages 225-233. The MIT Press, Cambridge, MA, 1997.
[96] Martin Golubitsky, Ian Stewart, Pietro-Luciano Buono, and J. J. Collins. A modular network for legged locomotion. Physica D, 115:56-72, 1998.
[97] J. J. Collins. Gait transitions. In Michael A. Arbib, editor, The handbook of brain theory and neural networks, pages 420-423. The MIT Press, Cambridge, MA, 1998.
[98] D. Wilson. Insect walking. Annual Review of Entomology, 11:103-122, 1966.
[99] Fred Delcomyn. The walking of cockroaches-deceptive simplicity. In Biological neural networks in invertebrate neuroethology and robotics, chapter 2, pages 21-41. Academic Press, Boston, 1993.
[100] Joseph Yang. Implementation of dynamic neural network applied to a walking algorithm of a hexapod robot. Bachelor's thesis, University of Toronto, 1999.
[101] Holk Cruse. What mechanisms coordinate leg movement in walking arthropods? Trends in Neurosciences, 13(1):15-21, 1990.
[102] Matthew M. Williamson. Neural control of rhythmic arm movements. Neural Networks, 11(7-8):1379-1394, 1998.
[103] Clarence L. Johnson. Analog computer techniques. McGraw-Hill, New York, second edition, 1963.
[104] Merlin L. James. Analog and digital computer methods in engineering analysis. International textbooks in general engineering. International Textbook Company, Scranton, PA, 1964.
[105] Wulfram Gerstner. Populations of spiking neurons. In Wolfgang Maass and Christopher Bishop, editors, Pulsed neural networks, chapter 10, pages 261-295. MIT Press, Cambridge, Mass., 1999.
[106] Hugh R. Wilson and Jack D. Cowan. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12:1-24, 1972.
[107] Wulfram Gerstner. Time structure of the activity in neural network models. Physical Review E, 51(1):738-758, January 1995.
[108] Ying-Hui Liu and Xiao-Jing Wang. Neuronal adaptation and decorrelation: a generalized integrate-and-fire model. Preprint, 1999.
[109] Wulfram Gerstner. Spiking neurons. In Wolfgang Maass and Christopher Bishop, editors, Pulsed neural networks, chapter 1, pages 3-33. MIT Press, Cambridge, Mass., 1999.
[110] Sophocles J. Orfanidis. Introduction to signal processing. Prentice-Hall, Upper Saddle River, NJ, 1996.
[111] J. C. Candy. An overview of basic concepts. In Steven Norsworthy, Richard Schreier, and Gabor Temes, editors, Delta-Sigma Data Converters: Theory, Design, and Simulation, chapter 1, pages 1-25. IEEE Press, New York, 1997.
[112] Sheldon M. Ross. Introduction to probability models. Academic Press, Boston, fourth edition, 1989.
[113] Robert W. Adams. Spectral noise-shaping in integrate-and-fire neural networks. In Proceedings of the 1997 IEEE Conference on Neural Networks, pages 963-938, 1997.
[114] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical recipes in C: The art of scientific computing. Cambridge University Press, Cambridge, U.K., 1992.
[115] W. H. Calvin and C. F. Stevens. Synaptic noise and other sources of randomness in motoneuron interspike intervals. Journal of Neurophysiology, 31:574-587, 1968.
[116] Petr Lánský and Charles E. Smith. The effect of a random initial value in neural first-passage-time models. Mathematical Biosciences, 93:191-215, 1989.
[117] X.-J. Wang and J. Rinzel. Alternating and synchronous rhythms in reciprocally inhibitory model neurons. Neural Computation, 4:84-97, 1992.
[118] C. van Vreeswijk, L. F. Abbott, and G. B. Ermentrout. When inhibition, not excitation, synchronizes neural firing. Journal of Computational Neuroscience, 1:313-321, 1994.
[119] Christof Koch. Biophysics of computation, chapter 15 (Stochastic models of single cells), pages 142-171. Oxford University Press, New York, first edition, 1999.
[120] W. Softky and C. Koch. Cortical cells should fire regularly, but do not. Neural Computation, 4:643-646, 1992.
[121] W. Softky and C. Koch. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience, 13:334-350, 1993.
[122] Z. F. Mainen and T. J. Sejnowski. Reliability of spike timing in neocortical neurons. Science, 268:1503-1506, 1995.
[123] G. L. Gerstein and B. Mandelbrot. Random walk models for the spike activity of a single neuron. Biophysical Journal, 4:41-68, 1964.
[124] R. B. Stein. A theoretical analysis of neuronal variability. Biophysical Journal, 5:173-194, 1965.
[125] W. Calvin and C. F. Stevens. Synaptic noise as a source of variability in the interval between action potentials. Science, 155:842-844, 1967.
[126] John A. White, Jay T. Rubinstein, and Alan R. Kay. Intrinsic noise in neurons. To appear in Trends in Neurosciences, 2000.
[127] G. Gestri, H. A. K. Mastebroek, and W. H. Zaagman. Stochastic constancy, variability and adaptation of spike generation. Performance of a giant neuron in the visual system of the fly. Biological Cybernetics, 38:31-40, 1980.
[128] A. V. Holden. Models of the stochastic activity of neurones. Springer-Verlag, New York, 1976.
[129] Fabrizio Gabbiani and Christof Koch. Principles of spike train analysis. In C. Koch and I. Segev, editors, Methods in neuronal modeling, chapter 9, pages 313-360. The MIT Press, Cambridge, MA, 1998.