
Synaptic Transmission and Inverse-Neuron Dynamics

H.T. van der Scheer


Synaptic Transmission and Inverse-Neuron Dynamics

Thesis
Artificial Intelligence

Author: H.T. van der Scheer

Supervisor: Prof.dr. A. Doelman
Mathematical Institute

Leiden University

In partial fulfillment of the requirements for the degree of

Master of Science (M.Sc.)

September 9, 2011


Abstract

In this thesis a new method is introduced for moving from a neuron level of abstraction to a network level of abstraction. In this approach a verifiable synapse model is derived from a higher level hypothesis. The derived model depends crucially on the inverse of the neuron model that is used. This allows one to circumvent unknown synaptic mechanisms without ignoring their dynamic effect.

After reviewing some neurophysiology, neuron modeling, model reductions, bifurcations, and systems theory, it is shown that inverse models can be derived for a general class of neuron models including: the Hodgkin-Huxley model, the FitzHugh model, the integrate-and-fire model, the Izhikevich simple model, and many more.

At the squid giant synapse the method actually leads to a reproduction of recorded post-synaptic currents from a simple hypothesis at a higher input-output level of abstraction. In addition, it is shown that it is possible to incorporate plausible mechanisms into a state-space realization of the derived synapse model.

Since the approach aims for an abstract functional interpretation of nerve cell behavior, it could have a big impact on our current understanding of neuronal function. Hence, it may lead to useful new abstract neural networks that could be of interest to scientists in artificial intelligence, robotics and control.


Contents

1 Introduction
  1.1 Context
    1.1.1 Nerve Cells vs. Animal Behavior
    1.1.2 Goals
  1.2 A Handle on Complexity
    1.2.1 A Common Framework
    1.2.2 Simplification
    1.2.3 Attainable Sub-Goals
  1.3 Problem Statements
    1.3.1 Summary
  1.4 Motivation
    1.4.1 Use to Neuroscience
    1.4.2 Use to AI, Robotics and Control
    1.4.3 Summary
  1.5 Brief Overview

2 Brief Introduction to Neurophysiology
  2.1 Neurons
  2.2 The Cell Membrane
    2.2.1 Lipid Bilayers
    2.2.2 Ion Pumps
    2.2.3 Selective Permeability
  2.3 Synaptic Transmission
    2.3.1 Synapses
    2.3.2 Neurotransmitters
    2.3.3 Quantal Release
    2.3.4 Hypothetical Release Mechanisms
      Vesicular Hypothesis
      Vesigate Hypothesis
  2.4 Primitive Nervous Systems
    2.4.1 Cnidaria
  2.5 Concluding Remarks

3 Modeling
  3.1 Conductance Based Modeling
    3.1.1 Membrane Capacitance
    3.1.2 The Nernst Equilibrium Potential
    3.1.3 Membrane Currents and Conductances
    3.1.4 Summary
    3.1.5 The Equivalent Circuit
    3.1.6 Resting Potential
  3.2 Kinetic Schemes
    3.2.1 Voltage-Dependent Channels and Conductances
      Independent Subprocesses
      Conformations of the Entire Channel
      Voltage-Dependent Transition Rates
    3.2.2 The Hodgkin-Huxley Model
      Activation Functions and Time Constants
      Persistent and Transient Currents in the HH-Model
    3.2.3 Transmitter-Dependent Channels and Conductances
      A Simple Scheme
      Characteristic Time-Course: A Simplification
      A Second Order Scheme
  3.3 Other Considerations
    3.3.1 Spatial Structure
      Compartmental Modeling
      Dendrites, Axons and PDEs
    3.3.2 Models of Transmitter Release
  3.4 Concluding Remarks

4 Reductions
  4.1 Minimal Models
    4.1.1 A Top-Down Approach
    4.1.2 A Bottom-Up Approach
    4.1.3 The Persistent Sodium Plus Potassium Model
  4.2 Approximate Invariants
    4.2.1 The FitzHugh Model
    4.2.2 A Quantitative Reduction
  4.3 The Izhikevich Simple Model
  4.4 Concluding Remarks

5 Bifurcations in Neurodynamics
  5.1 Preliminaries: Basic Concepts
    5.1.1 Dynamical Systems, Flows and Orbits
    5.1.2 Qualitative Equivalence
    5.1.3 Bifurcations: Qualitative Changes in Behavior
    5.1.4 Near Equilibria
      Hyperbolic Equilibria and Linearization
      The Planar Case
      Nonhyperbolic Equilibria and Center Manifold Reduction
  5.2 Local Bifurcations
    5.2.1 Saddle-Node Bifurcation
    5.2.2 Poincare-Andronov-Hopf Bifurcation
  5.3 Global Bifurcations and Limit Cycles
    5.3.1 Fold Bifurcation of Cycles
    5.3.2 Homoclinic Orbit Bifurcation
  5.4 Summary of One-Parameter Bifurcations
    5.4.1 One-Parameter Bifurcations of Equilibria
    5.4.2 Planar One-Parameter Bifurcations of Periodic Orbits
  5.5 Examples in Neuron Models
    5.5.1 Bifurcations in the I_Na,p + I_K Model
      Saddle-Node/Homoclinic Hysteresis
      Saddle-Node on a Limit Cycle Bifurcation
      Subcritical Hopf/Fold of Cycles Hysteresis
      Supercritical Hopf Bifurcation
    5.5.2 Bifurcations in the Izhikevich Simple Model
  5.6 Concluding Remarks

6 Nonlinear Systems Analysis
  6.1 Inversion and Normal Form
    6.1.1 A One-Dimensional State Variable
    6.1.2 Byrnes-Isidori Normal Form
    6.1.3 Normal Form Inversion
  6.2 Conversion to Normal Form
    6.2.1 Lie Derivative Notation
    6.2.2 Relative Degree
    6.2.3 State Transformation
    6.2.4 More General Nonlinear Systems
  6.3 Nonlinear Realization Theory
    6.3.1 The Realization Problem
    6.3.2 Brief Intermezzo: The State Elimination Problem
    6.3.3 Extended State Space
  6.4 Concluding Remarks

7 A New Analysis of Synaptic Transmission
  7.1 The Network Level of Abstraction
  7.2 The Main Idea
  7.3 An Introductory Example
  7.4 Realization of Inverse Neuron Models
    7.4.1 A General Neuron Model
    7.4.2 Inverse of the Sub-Reset Part
      Formal Inverse
      Non-Realizability of the Inverse
      Realizability by Approximation
      Elimination of the Input Derivative
      Classical Realization
    7.4.3 Derivation of the Reset Map
      Notation
      Assumptions
      Derivation
    7.4.4 The Inverse-Neuron
  7.5 More Examples
  7.6 The Squid Giant Synapse: An Elaborate Example
    7.6.1 Biological Background
      The Nervous System of Molluscs
      Giant Fiber Systems and Startle Behavior
      Measurements at the Giant Synapse
    7.6.2 A Biological Realization of an Abstract Model
      The Initial Hypothesis
      Relative Degree from Biologically Motivated Form
      New Hypothesis due to Relative Degree
      Resulting Hypothetical Model of Synaptic Transmission
      Verification against Measurements
      Transformation into Biological Form
      Invertibility Conditions
      Summary
  7.7 Concluding Remarks

8 Conclusion
  8.1 Contributions
    8.1.1 A Structured Method
    8.1.2 A General Inverse-Neuron
    8.1.3 An Example Application: The Squid Giant Synapse
  8.2 Conclusions
    8.2.1 Neuron and Synapse Are Formally Related
    8.2.2 Hypotheses Lead to Synapse Models
    8.2.3 Stimuli Are Well-Represented
  8.3 Future Directions
    8.3.1 A General Neuronal Connection Model
    8.3.2 Combined Excitatory and Inhibitory Inputs
    8.3.3 Robustness against Noise
    8.3.4 Choice and Adjustment of Parameters

A Appendix
  A.1 Classical Realization by Generalized State Transformation
    First Step
    Second Step
  A.2 Brief Review of Abstract Neural Networks
    A.2.1 Artificial Neural Networks
      The Multilayer Feedforward Perceptron
      Recurrent Hopfield-Type Networks
    A.2.2 Spiking Networks
  A.3 Noise Considerations
  A.4 Euler Implementation
    A.4.1 Implementation of the Original System
    A.4.2 Implementation of the Inverse
      System with Stable Internal Dynamics
      System with Fully Available State
      Agreement with Continuous-Time Case

References


1 Introduction

1.1 Context

The adaptive behavior of state-of-the-art autonomous robots is still no match for the adaptive agility with which animals perform complex tasks. There is still much scientists can learn from nervous systems, whether active in neuroscience, artificial intelligence, robotics or control. Unfortunately, even the functional role of single cell behavior is still poorly understood. There is no shortage of data, but a good framework or theory within which to interpret this data seems to be lacking. Hence, a central question remains: how does the behavior of an animal arise from the behavior of its neurons? [36]

1.1.1 Nerve Cells vs. Animal Behavior

On the one hand, understanding single neuron dynamics is vital to understanding the architecture of nervous systems. There are many types of neurons with various types of behavior that are employed in different parts of a nervous system. Characterizing their functional behavior could help us to better understand these architectures.

On the other hand, when considering the limb movements and eye movements that constitute the outward behavior of an animal, the neurophysiological level of detail seems inappropriate. To describe such behavioral components, it seems, we need to move to a higher level of abstraction. So what is it that we want to achieve?

1.1.2 Goals

The ultimate goal is to understand the principles of adaptive behavior in animals to such an extent that we can reproduce it and possibly apply its principles elsewhere.

On the one hand, in modeling behavior, it is desirable to be able to use abstract representations and building blocks without having to worry each time whether or not these could be realized biologically. On the other hand, we would like such higher level models of adaptive behavior to be informed by neurophysiology.

Thus, at the level of neurophysiology, it is desirable to be able to realize basic components using biologically realistic canonical circuits. Similarly, when using scalars, vectors, functions and vector fields to represent limb orientation, muscle forces, retinal images and the like, we would like to know how to relate these representations back to the level of neuronal activity. Is there a sufficiently general mathematical framework that can deal with both levels of abstraction?


1.2 A Handle on Complexity

1.2.1 A Common Framework

A good candidate for a common framework seems to be nonlinear systems theory. This framework has the potential to deal with both the neuron level and the level of animal behavior. Traditionally, however, the flavors may differ. At the neuron level, dynamical systems are used to model observations or measurements in a descriptive manner. Bifurcation theory is then used to analyze the behavior of these systems. In this approach models are typically treated as autonomous systems, i.e. as systems that depend on time only implicitly. Consequently, inputs are reduced to parameters that are independent of time. Hence, the functional role of the observed phenomena remains speculative.

At the level of behavior, nonlinear systems theory, as it is used by engineers in robotics and control, seems appropriate. The body configuration of the animal, for instance, may be described in terms of joint coordinates. In this approach the ability to stabilize certain behaviors or trajectories plays an important role. It allows one to look at behavior from a functional design point of view. The connection with neurophysiology, however, is usually lost.

Thus, even if we express both levels in terms of systems theory, the relation between smooth limb movements and neuronal activity is not obvious. So, how can we move between the two levels? How could a smooth limb movement in joint coordinates be represented by the temporal activity of a 'population' of interconnected neurons? Conversely, how can we get to the higher level from a neuron level without losing the 'connection' with biology? Where to start?

1.2.2 Simplification

In general one attempts to simplify matters. For instance, many scientists already seem to agree that the action potential, or spike, is the basic element of neuronal signal transmission [14], [10]. Hence, neuron activity is often reduced to a sequence of spike times called a spike train. There are those, however, who do not agree with such simplifications; in fact, there is no general agreement on which details matter and which do not [36]. So, while some focus on the spike and use a Poisson model to generate a formal spike train from a time-dependent firing probability [4], others focus on sub-threshold behavior to reduce mathematical neuron models while retaining as much of their continuous-time dynamics as possible [24]. Both are born from a desire to simplify matters.
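To make the spike-train notion concrete, here is a minimal sketch of the Poisson approach: a formal spike train drawn from a time-dependent firing rate. The rate profile, bin width and duration are assumed for illustration only; they are not taken from any of the cited models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inhomogeneous Poisson spike train: in each small bin of width dt,
# a spike occurs with probability r(t)*dt, where r(t) is a
# time-dependent firing rate (assumed profile, in Hz).
dt = 1e-3                                       # 1 ms bins
t = np.arange(0.0, 1.0, dt)
rate = 20.0 + 15.0 * np.sin(2 * np.pi * 2 * t)  # modulated rate, always > 0
spikes = rng.random(t.size) < rate * dt         # Bernoulli approximation
spike_times = t[spikes]                         # the formal spike train
print(f"{spike_times.size} spikes; first few at {spike_times[:5]} s")
```

Note that the neuron's continuous-time dynamics are discarded entirely here; only the spike times survive, which is precisely the detail the reduction-based alternative tries to retain.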

We too take simplification to be a requirement for understanding adaptive behavior in animals. Thus, to us, the question is not whether we should simplify, but how. Ideally one would like to be able to simplify at a higher level without making any unwarranted assumptions. In this thesis we will argue that, instead of trying to simplify model components individually, we should try to simplify network connections as a whole.


1.2.3 Attainable Sub-Goals

To attain our ultimate goal we may need to set subgoals: find the simplest general network level description that is still biologically valid. In mathematical analysis it is natural to consider one-dimensional maps and systems first before one moves on to consider vector mappings and multidimensional systems. Similarly, it seems reasonable to first consider a single one-directional connection in the simplest network of all, a two-cell feed-forward network: one input cell and one output cell.

What we are interested in at a network level are simple functional descriptions of network connections and neuronal activity. In contrast, the activities measured seem hopelessly complex. What we must realize, however, is that some of this complexity may be due to a simple requirement for making connections: the signal must travel from A to B. Thus, if we consider the complex voltage traces as an intermediate signal representation in a transport domain, then the 'actual' signal transformation into output may be far simpler than one is led to believe. So what is it we propose to do?

1.3 Problem Statements

In order for us to begin to understand the functional role of neuronal connections, we consider a simple task first. Perhaps the simplest task for a neuron-synapse pair, or neuronal connection, is to reproduce, at its output, an input signal from another neuron. Good models of excitability exist; in contrast, synaptic mechanisms are still largely unknown. So, given a model of excitability, what should the synaptic transfer be to accomplish our simple task?

This is an inverse problem. Consider the following diagram:

             excitability model          synaptic system? (the inverse)
    input ─────────────────────▶ transport ─────────────────────▶ output
      │                                                              ▲
      └──────────────────────── functional? ─────────────────────────┘

where the input and the output are of the same type. Thus, we will only consider 'complete' signal paths: from neurotransmitter to neurotransmitter, from conductance to conductance or from voltage to voltage.

If the neuronal connection, the input-output functional in the diagram, is to reproduce an input signal at its output, then the functional is the identity map. Hence, in this simple transmission task, the unknown model of synaptic transmission should equal the systems-inverse of the excitability model.
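To see what such a systems-inverse looks like in the simplest possible case, consider the following sketch. It uses a plain leaky integrator as a stand-in 'excitability model' (an assumption for illustration only; the thesis treats much richer models in sections 6 and 7) and follows it by its exact inverse, so the series connection approximates the identity map:

```python
import numpy as np

# Stand-in neuron: leaky integrator  dv/dt = -a*v + u  with output y = v.
# Its exact inverse recovers the input as  u_hat = dy/dt + a*y.
a, dt = 1.0, 1e-3
t = np.arange(0.0, 2.0, dt)
u = np.sin(2 * np.pi * t)                 # smooth test input

v = np.zeros_like(t)                      # forward Euler simulation
for k in range(t.size - 1):
    v[k + 1] = v[k] + dt * (-a * v[k] + u[k])

u_hat = np.gradient(v, dt) + a * v        # inverse: differentiate, add a*v
err = np.max(np.abs(u_hat[10:-10] - u[10:-10]))
print(f"max reconstruction error: {err:.4f}")  # small discretization error
```

Note that the inverse needs a derivative of its input; for higher order neuron models this is exactly what makes a proper state-space realization of the inverse problematic, as discussed in section 7.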

Two questions immediately arise:

1. can an input-output model of excitability be inverted? And, if so,

2. can such an inverse-model be realized by a biologically plausible system?


These are some of the questions that will be investigated in later chapters.

Of course, we do not really expect neurons to implement inverse systems exactly. However, we do not have to restrict ourselves to exact transmission, i.e. to the functional identity map; we may consider other hypotheses for the functional input-output map. In fact, this is where the potential power of this approach lies.

The idea is to use the standard method of science: postulate a hypothesis and verify or falsify its validity. Thus, in the case of a functional neuronal connection:

1. We make an educated guess, the hypothesis, of what the input-output functional might be, and we formalize this hypothesis in the form of an input-output dynamical system.

2. We derive the inverse of the excitability-model.

3. We connect the two in series to obtain a verifiable model of synaptic transfer.

4. We verify or falsify this model of transfer against measurements. If the model is falsified, we update our hypothesis, i.e. we return to step 1.

As one iterates this model-prediction loop, the key is to keep the hypothesis of the input-output functional as simple as possible. Thus, by iteratively updating our hypothesis, it may very well be that we can find a simple functional description that is still biologically valid.

This approach, at least for the squid giant synapse, will actually lead us to a convincing model of synaptic transfer from a simple functional at the network level!

1.3.1 Summary

In sum, we will investigate the following:

1. Can neuron models be inverted?

2. Can such inverses be realized by a state-space dynamical system?

3. Can we find a simple input-output functional that agrees with measurements?

4. Can this functional be realized by biologically plausible mechanisms?

Let us briefly consider how answers to these questions could help neuroscientists or scientists in AI, robotics and control.


1.4 Motivation

1.4.1 Use to Neuroscience

A simple input-output functional at the network level would greatly facilitate analysis. It would provide an invaluable insight into neuronal function and would eventually lead to a better understanding of the functional organization of nervous systems. By starting with a hypothesis at the network level one may force such a simple higher level interpretation right from the start.

To avoid unconstrained speculation, models need to be compared with data. Models with too many variables and parameters can often be made to fit almost any data. Such models may be too general to provide any real insight. A hypothesis at a network level of abstraction reduces the degrees of freedom and hence reduces the probability of such overfitting. A biological realization may allow such a network level hypothesis to be verified against measurements made at a more detailed level of abstraction, thereby constraining the model even further.

Synaptic mechanisms are still largely unknown. The ability to circumvent such mechanisms without ignoring their effect is extremely useful. In addition, a nice consequence of the proposed method is that the resulting model of synaptic transmission is matched to the neuron model both in its complexity and in its parameters. Hence, a simple neuron model results in a simple synapse model, and a complex neuron model in a complex synapse model. This makes modeling choices less arbitrary. Furthermore, parameters related to function are clearly separated from parameters related to physiology.

The direct transmission of essentially unprocessed information is expected to play a role in at least some parts of the nervous system. Hence, finding the inverse of a neuron model and its realization is a suitable, modest sub-problem to start out with.

In theoretical neuroscience the so-called neural code is a topic of vigorous debate [4], [10]. The action potential or spike is often thought of as an elementary unit of neuronal signal transmission [14] or as a basic element of a neural alphabet [10]. Hence, action potentials are treated as a sequence of stereotypical events represented by their spike times alone [4], [10], [14]. The debate is on whether the code, used to encode a stimulus or input, is a rate code or a timing code.

The proposed method offers an alternative to this predominant view of spike times as a neural code. From this alternative perspective, as in the neural engineering framework proposed in [10], the debate on the neural code becomes irrelevant and thus, as in [10], the method transcends concerns about whether a timing or a rate code is used.¹ Instead, the focus turns to the higher level functional that is realized by the neuronal connection.

¹The proposed method was in fact inspired by some of the ideas in [10]. In particular, we too make a clear distinction between representation and transformation. Unlike the neural engineering framework, however, we do not reduce neuronal signals to a series of spike times and we do not use statistical methods. Instead, we process continuous signals using deterministic inverse systems.


1.4.2 Use to AI, Robotics and Control

In applications the most obvious and most important profit to be gained will be indirect. If, eventually, we are able to find a biologically supported input-output functional, then this will lead to a better understanding of the function and architecture of nervous systems. This understanding will allow us to apply its principles in robotics and other fields. There are, however, also some more direct benefits.

Artificial feed-forward neural networks such as the multilayer perceptron are well known for their capacity to approximate functions to any degree of accuracy, see [3], [21], [13] and appendix A.2.1. Such feed-forward networks have no memory; their output is a direct mapping of their input. Hence, they cannot produce dynamic output to static input to form pacemaker or central pattern generator circuits such as those found in nature for locomotion, heart rate control and breathing. Problems in robust control, however, such as system identification, process modeling, state estimation and trajectory tracking, are of a dynamic nature. It will come as no surprise that, in the context of such dynamic problems, static mapping networks have a slow learning rate [37].

The input-output functional of an artificial neuron is usually a bounded nonlinearity, such as the logistic function. Such a static function can be preceded by the identity map in the form of a dynamic neuron model and its inverse, as in the diagram. The resulting network would have all the representative power of the classical network and more, because even a slight parameter change away from the inverse could introduce dynamics. (Of course, a new learning rule would have to be derived to take advantage of this power.)

There have been several approaches to introducing dynamics in neural networks. In [41] a short review is given and, along with feedback connections at the network level, a case is made for introducing dynamics at the neuron level. It is shown that, in a process modeling task, this can lead to a reduction in parameters, while improving performance.

1.4.3 Summary

In short, finding the inverse allows neuroscientists to test hypotheses at a higher level of abstraction, while engineers can 'explore' realistic dynamics from a familiar 'starting point', thus closing the gap between neuroscientists and engineers. We stay close to nature because eventually we would like to learn from the higher level architecture of nervous systems. For this we need to understand its basic building blocks.

1.5 Brief Overview

Before presenting the main contributions of this thesis, it will be necessary to provide some background first. We begin by reviewing some neurophysiology in section 2. This knowledge is then used in section 3 to build mathematical neuron models. However, such physiologically detailed models are usually not readily amenable to analysis. Hence, these are often reduced to a simpler form. Some of these reductions are discussed in section 4. In section 5 their behavior near certain critical regimes is classified with the aid of bifurcation theory. Thus far, the treatment is not really new.

Next, we turn to systems inversion in section 6, a topic not frequently encountered in the context of single neuron dynamics. Finally, we arrive at the main topic of this thesis in section 7: the use of inverse systems in models of synaptic transmission. It is shown that a general class of neuron models is already in a particularly convenient form for systems inversion. Technically, these inverse-neuron models do not allow for a proper state-space realization. However, a realization of a satisfactory approximate inverse is derived that can recover a sufficiently smooth input with arbitrary precision.

It is shown how a theoretically derived model of synaptic transfer can be transformed into a biologically plausible form without sacrificing performance. This approach actually leads to a qualitative reproduction of post-synaptic currents at the squid giant synapse, from a simple hypothesis at a higher input-output level of abstraction.


2 Brief Introduction to Neurophysiology

(In fulfillment of the requirements for the course Cellular Neurophysiology. Lecturer: Prof.dr. S. A. van Gils, Department of Applied Mathematics, University of Twente.)

Most of the material in this section, although not all, can be found in any standard introductory text on neurobiology, such as the excellent and concise [38].

2.1 Neurons

Nervous systems consist of individual nerve cells, neurons. These exchange signals with one another at points of contact called synapses. Although almost all cells have neuron-like properties, such as spatial conductance of electrical potentials, neurons are unique in that they specialize in information processing. Their highly branched structure, electrically excitable membrane and synaptic contacts allow them to communicate over long distances [38].

Neurons are anatomically diverse. Some consist of little more than a spherical cell, while others have thin processes many times longer than the diameter of their cell body. Still others have tree-like or otherwise highly branched structures. Despite this diversity most neurons have some structural properties in common. They all have a cell body or soma, which contains the nucleus and other important organelles (such as the Golgi apparatus, endoplasmic reticulum, etc.). Furthermore, various processes usually project either from the cell body itself or from a single branch connected to it.

Many neurons have a single long process that makes connections in another distant region. Such a process is called an axon and typically propagates electrical signals actively in the form of so-called action potentials. The other processes are usually shorter and are called dendrites; these usually conduct signals passively, attenuating with distance. Cells without axons are called amacrine cells, and their dendritic processes are also called amacrine processes. The term neurite is also used for nonspecific processes.

In the classical view neurons are polarized. That is, inputs are received at the dendrites, combined at the soma and converted into a neuronal output signal that is propagated along the axon to the output terminal, see figure 1. This is not always true: signals can be transferred at any point of contact between any parts of two cells. Remote cells can even be affected by transmitter substances released into the blood stream.

Figure 1: A schematic representation of the classic polarized neuron, with labeled axons and dendrites.

As information processing units, neurons must perform additional functions apart from the metabolic functions any cell must fulfill. To maintain its structure and to provide the necessary amount of transmitter substances, a neuron must synthesize large quantities of macromolecules. To bring synthesized molecules from the soma to distant regions, and degraded products back for reprocessing, transport systems have developed [38].

2.2 The Cell Membrane

2.2.1 Lipid Bilayers

Signal processing by nerve cells is made possible by the excitable properties of the cell membrane. Cell membranes are largely composed of phospholipid molecules. These consist of long fatty hydrophobic tails with a charged hydrophilic head group. Phospholipids tend to self-organize into lipid bilayers with their hydrophilic heads oriented towards the surrounding water and their fatty hydrophobic tails towards the water-free interior. If the bilayer forms a closed surface, the energetically most favorable form is a sphere [40].

Cell membranes also contain several membrane proteins. We can divide these into:

• channels that allow for the passage of ions and other charged molecules through the membrane, see figure 2,

• pumps to establish and maintain concentration gradients across the membrane,

• receptors for recognizing signaling substances,

• enzymes that facilitate or catalyze reactions, and

• structural proteins that contribute to maintenance and the formation of connections.

These are not necessarily mutually exclusive.

Figure 2: A schematic representation of a lipid bilayer with ion channels, from [17].

It is the lipid bilayer, together with its embedded ion pumps and channels, that is responsible for the electrical properties of the membrane. An equivalent electrical circuit will lead us to understand the electrical excitability of cells, as we will see in chapters 3, 4 and 5.

For cells at rest there is a difference in electrical potential across the membrane: the inside is negative relative to the outside. It is customary to choose the outside as 0 mV. The difference is usually between −50 and −80 mV and is called the resting potential. To understand this difference we need to understand:

• the differences in ion concentrations between the inside and the outside of the cell, and

• the selective permeability of the membrane for different ion types.

2.2.2 Ion Pumps

Differences in ion concentrations between the inside and the outside of the cell are established and maintained by ion pumps. These are membrane-bound proteins that selectively pump ions across the membrane from one side to the other.

We can divide ions into positively charged ions, cations, and negatively charged ions, anions, and into organic and inorganic. Of the inorganic ions the most important cations are sodium (Na+), potassium (K+), calcium (Ca2+) and magnesium (Mg2+), and the most important anions are chloride (Cl−) and hydrogen carbonate, also called bicarbonate (HCO3−). Within the cell one also finds various organic anions, such as negatively charged amino acids and polypeptides; these are often collectively labeled A−. The K+ concentration inside the cell is higher than outside, while for Na+ and Cl− the concentrations inside are lower than outside.

An important pump is the sodium-potassium exchange pump (Na+-K+ ATPase). This pump establishes the opposing concentration gradients for K+ and Na+ using metabolic energy obtained from hydrolysis of ATP. Up to 70% of the energy consumption of nerve cells is used for driving these pumps. For three Na+ ions pumped out of the cell, two K+ ions are pumped in. Other pumps are responsible for the transport of other ions. The resulting Na+ gradient can be used as a source of energy to transport other ions against their concentration gradient. This cotransport requires no ATP.

2.2.3 Selective Permeability

The ion permeability of lipid bilayers, or membranes without ion channels, is negligible. Ions have a strong tendency to associate with water, called hydration. This prevents them from entering the hydrophobic fatty or lipid interior of the bilayer.

Permeability increases by several orders of magnitude when membrane proteins called ion channels are present. These consist of an assembly of polypeptide subunits. Due to the diameter of the pore and electrostatic barriers resulting from charged areas of the polypeptides within, most ion channels are selective for different ions. None are completely selective, but all facilitate a high flow rate of specific ions [38], [40].

Some ion channels belong to a class of transmembrane proteins with several quasi-stable configurations. Conformational changes are of a stochastic nature and occur under the influence of thermal molecular motion. Which configuration is energetically the most favorable may depend on several factors: potential changes across the membrane, extracellular transmitter substances, intracellular ion concentrations and intracellular messenger substances. These are not necessarily mutually exclusive. Hence, we may find voltage-gated channels, which open and close in a voltage-dependent manner, transmitter-gated channels, with binding sites for a specific neurotransmitter substance, and channels that open or close under the influence of some intracellular messenger substance [38].
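As a toy illustration of such stochastic conformational changes, the sketch below simulates a single channel hopping between just two states, closed and open, with fixed transition rates. The rates and time step are assumed for illustration; real channels have voltage- or transmitter-dependent rates and usually more than two states.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state channel as a discrete-time Markov chain:
#   closed --alpha--> open,   open --beta--> closed.
alpha, beta = 50.0, 200.0            # assumed transition rates, 1/s
dt, steps = 1e-5, 200_000            # time step chosen so rate*dt << 1
state = np.zeros(steps, dtype=int)   # 0 = closed, 1 = open
for k in range(steps - 1):
    p_flip = (alpha if state[k] == 0 else beta) * dt
    state[k + 1] = 1 - state[k] if rng.random() < p_flip else state[k]

# Long-run fraction of time open approaches alpha/(alpha + beta) = 0.2.
print("fraction open:", state.mean())
```

Averaged over many such channels, this open fraction is what appears as a graded, macroscopic conductance in the kinetic schemes of section 3.2.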

Of the voltage-gated channels, the Na+ channel and K+ channel have played an important role in our current understanding of electrical excitability. Of the transmitter-gated channels, the nicotinic acetylcholine receptor channel has been studied the most. It is found in the neuromuscular synapse of vertebrates and needs two transmitter molecules (ACh) to open [38], [27].


2.3 Synaptic Transmission

2.3.1 Synapses

Signal transfer between neurons takes place at points of contact called synapses. At these sites two cells are separated by a thin synaptic cleft only. Typically between 100 and 10000 such synaptic contacts are made by a neuron. One distinguishes between electrical synapses and chemical synapses. Although chemical synapses are more numerous, both are found in almost all nervous systems studied.

At electrical synapses the cytoplasm of both cells is directly connected through molecular gap junctions. At these synapses transfer is bidirectional and fast. At chemical synapses transfer can be both bidirectional and unidirectional. Most of what is known about chemical synapses, however, comes from two preparations in particular: the squid giant synapse and the vertebrate neuromuscular junction [27]. Both are unidirectional.

At chemical synapses a change in potential at the presynaptic terminal causes a transmitter substance to be released into the synaptic cleft. The transmitter diffuses across the cleft and binds to receptor sites in the postsynaptic membrane. This causes a change in membrane conductance at the receiving postsynaptic side. All such synaptic inputs combined determine the electrical response of the postsynaptic cell.

2.3.2 Neurotransmitters

Neurotransmitters, also called ligands, can be divided into two groups. The classical neurotransmitters are small charged neuroactive molecules, mainly amines. These are synthesized in short chain reactions, both in the soma and in the terminal. At least eight of these transmitters are known. The first to be discovered, acetylcholine (ACh), is the only one not synthesized from an amino acid.

The transmitters in the other group, the neuroactive peptides, consist of chains of approximately 2 to 50 amino acids. Currently about 100 of these are known. (Some estimate there to be thousands.) A neuron may release a combination of transmitters at its synapses, but whatever combination is used, it seems to be used at all its synapses [38].

2.3.3 Quantal Release

It is generally thought that release occurs in quanta, packets of several thousands of transmitter molecules. The release of such a quantum leads to a unitary miniature postsynaptic potential (mPSP) or, in the case of the neuromuscular synapse, to a miniature endplate potential (mEPP). Thus, although normal postsynaptic potentials may appear graded, it is thought that they consist of many unitary potentials.

This is based on the observation that, at the neuromuscular synapse of vertebrates, spontaneous miniature potentials of about 0.4 mV occur randomly at low frequency in the absence of presynaptic stimulation. Furthermore, when the extracellular Ca2+ concentration is reduced, evoked potentials fluctuate in a stepwise manner. Statistical analysis shows these evoked potentials to consist of multiples of unitary potentials. Thus, the reduced Ca2+ concentration does not affect the size of the quanta, but only reduces their probability of release [38].

At the neuromuscular synapse of vertebrates about 200 quanta are released per action potential. In most central nervous system synapses and the neuromuscular synapses of many invertebrates only about 1 to 10 quanta are released per action potential [38]. There have also been reports of non-quantal release, apparently even massive tonic release, see [48] for a review. Furthermore, sub-miniature potentials corresponding to subquanta have also been reported, see [47] for a critical review.
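The quantal picture is easy to state numerically. In the sketch below the ~0.4 mV unitary size and the normal quantal content of ~200 follow the numbers above, while the reduced-Ca2+ content is assumed; each trial releases a Poisson-distributed number of quanta, so evoked amplitudes fall in steps of the unitary size:

```python
import numpy as np

rng = np.random.default_rng(2)

quantal_size_mV = 0.4            # unitary miniature potential
for m in (200.0, 2.0):           # mean quantal content: normal vs. low Ca2+
    n_quanta = rng.poisson(m, size=5)        # quanta released in 5 trials
    amplitudes = n_quanta * quantal_size_mV  # stepwise evoked potentials
    print(f"mean content {m}: quanta {n_quanta}, amplitudes {amplitudes} mV")
```

With a low mean content the stepwise fluctuation between 0, 0.4, 0.8, ... mV is directly visible, which is what the statistical analysis mentioned above exploits.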

2.3.4 Hypothetical Release Mechanisms

Vesicular Hypothesis   The presynaptic terminal has certain features that are common to secretory cells. For instance, a high density of vesicles may be observed. Thus the transmitter quanta have a morphological correlate, the synaptic vesicle. Hence, it is assumed that each vesicle contains transmitter corresponding to one quantum of about 1000 transmitter molecules.

At directional synapses release appears to be restricted to a region opposite to the postsynaptic site. Here vesicles are concentrated near special presynaptic sites called active zones. These consist of long round structures near the presynaptic membrane and are associated with a series of intra-membrane particles. Vesicle fusion does indeed occur at these active zones, temporarily leaving pocket-shaped depressions. In other, non-directional synapses vesicle fusion may occur all over the presynaptic terminal.

Figure 3: A schematic representation of the classic vesicular hypothesis. Upon arrival of an action potential, voltage-dependent channels open, Ca2+ enters the terminal and triggers the fusion of transmitter-filled vesicles, thereby releasing transmitter into the synaptic cleft.

Transmitter release is generally thought to follow the following sequence of events [27], see figure 3:


1. Calcium influx: Upon arrival of an action potential at the presynaptic terminal, voltage-gated Ca2+ channels open and Ca2+ enters the terminal.

2. Vesicle fusion: The increase in Ca2+ concentration within the terminal triggers a sequence of events leading to vesicle fusion. The vesicles contain neurotransmitter and release their contents into the synaptic cleft (exocytosis).

3. Transmitter diffusion: The transmitter diffuses across the cleft.

4. Receptor binding: The neurotransmitter binds either to receptors in the postsynaptic membrane or to ion channels directly.

5. Ion channel gating: Ion channels open either due to direct binding of neurotransmitter or due to a slower intracellular second-messenger signaling pathway triggered by binding to receptors.

6. Vesicle recycling: The patch of fused vesicle membrane pinches off again to reform vesicles (endocytosis).

By far the majority of researchers adhere to this vesicular hypothesis of synaptic transmission. In fact, it is often presented as a truth, whereas it cannot claim to be more than a hypothesis. It was initially proposed by del Castillo and Katz as a way to deal with the quantal nature of transmitter release [44].

Although there is much evidence to support the vesicular hypothesis, there are also many observations that cannot be explained with the vesicle hypothesis [44]. To account for these observations, alternative hypotheses have been proposed in [44] and [9].

Vesigate Hypothesis   One of the main objections to the vesicular hypothesis in both [44] and [9] is that, for cholinergic synapses, there is good reason to believe that ACh is released preferentially from the cytoplasmic compartment. Furthermore, the vesicular hypothesis cannot account for sub-miniature potentials. In some synapses the number of released quanta already exceeds by far the number of readily releasable vesicles, let alone the number of vesicles needed for a corresponding number of subquanta.

To account for these and other incompatibilities, an alternative is suggested both in [44] and [9]. The proposals are very similar. A presynaptic membrane-bound structure composed of several subunits is responsible for the release of cytoplasmic ACh. The structure is termed the vesigate in [44] and should be located at the active zone.

The structure has a number of ACh binding sites equivalent to one quantum. In order for a quantum to be releasable, the structure should be fully loaded with ACh from the cytoplasmic compartment. Synchronized activation of all subunits releases a full quantum. Non-synchronous activation of subunits may result in the release of subquanta.


Figure 4: A schematic representation of the vesigate hypothesis. A vesigate consists of several subunits. Only when fully loaded with cytoplasmic ACh can a quantum of transmitter be released. The release is triggered by increased Ca2+ due to the opening of voltage-gated channels. Sub-miniature potentials may result from the sporadic activation of subunits. These may be isolated, as in the figure, or nonsynchronously activated.

The translocation of ACh across the membrane may involve the membrane protein mediatophore and is triggered by Ca2+. At rest, spontaneous miniature potentials occur due to occasional activation. Subunits may also activate sporadically at rest, generating sub-miniature potentials.

In [9] it is suggested that activation is synchronized by a hypothetical molecule, and a cartoon is provided. In [44] it is suggested to model the vesigate using Michaelis-Menten kinetics.
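As a minimal sketch of what such a Michaelis-Menten description could look like (parameter values assumed; this is not the worked-out model of [44]):

```python
def michaelis_menten_rate(s, v_max=1.0, k_m=0.5):
    """Rate that grows with substrate concentration s but saturates at
    v_max, reaching half of v_max at s = k_m (assumed units)."""
    return v_max * s / (k_m + s)

# Loading/release rate saturates as cytoplasmic ACh increases:
for s in (0.1, 0.5, 5.0):
    print(s, round(michaelis_menten_rate(s), 3))
```

The saturation captures the idea that a vesigate can hold, and therefore release, at most one quantum's worth of ACh at a time.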

The above hypotheses try to account for the quantal nature of transmitter release. However, there have also been reports of non-quantal release [48]. Furthermore, it has even been argued that mechanisms of exocytosis are neurotransmitter specific, see [33] for a review. In short, the mechanisms involved in the release of neurotransmitters are still largely unknown. How are we to make any reasonable modeling choices in light of all this uncertainty?

2.4 Primitive Nervous Systems

It seems natural to start our investigation of nervous system architectures with the evolutionarily oldest, most primitive organisms and then move on to consider more recent and complex organisms. Unfortunately, soft tissue like the nervous system leaves no fossil record behind. Hence, comparative biologists often turn to modern descendants of early animals, preferably so-called 'living fossils' [43].

It would go too far here to discuss the diversity of organizational plans that can be found throughout the animal kingdom and its evolution. Hence, we will only discuss the most primitive nervous systems, the 'skin brains'² of cnidaria such as sea anemones and jellyfish. This will nevertheless add an extra argument to our motivation, as we will see.

2.4.1 Cnidaria

Like all animals except sponges, Cnidaria, also called Coelenterates, have a three-layered embryo: an ectoderm, an endoderm and a primitive mesoderm. Their radially symmetric bodies consist of an epithelial-like outer skin or ectoderm with specialized sensory cells facing the environment, and an epithelial-like endoderm with nutritive, secretory and muscle-like cells lining a gut cavity. The motoneurons are uniformly distributed throughout the body in a two-dimensional 'diffuse' nerve net derived from the ectoderm. These interact with each other by way of reciprocal synapses and graded potentials that get weaker the farther they spread (amacrine processes). Sensory signals from chemoreceptors, photoreceptors and tactile receptors are passed on to motoneurons and contractile effector cells in a direct reflex-like manner.³

Figure 5: Schematic representations in order of complexity: from sponges, to sea anemones, to jellyfish. Sponges do not have a nervous system. Water flow is regulated by contractile effector cells called myocytes. These have a relatively slow and sustained response and require direct stimulation. Effector cells of sea anemones are mediated by sensorimotor neurons. Effector cells in jellyfish are mediated by a diffuse nerve net of motoneurons. After [43], also see [35].

In figure 5 schematic representations are suggestively depicted in order of complexity: from direct stimulation of independent effectors in sponges, to sensorimotor neuron mediated effectors in sea anemones, to the nerve net in jellyfish.

The next step would introduce a third layer of interneurons. These are neither sensory nor motor neurons; on connectional grounds they lie in between.

²This descriptive term seems to have been introduced first in [19].
³Such symmetrical two-dimensional 'diffuse' nerve nets may form almost ideal systems for mathematical study, since spatiotemporal activity patterns may be directly related to outward behavioral responses.


Such interneurons may be found in the nerve rings and ganglia of the more highly evolved cnidaria. Interneurons in principle allow for excitatory-to-inhibitory 'sign' switching and recurrent feedback connections. Central pattern generator circuits may thus become possible [43].

2.5 Concluding Remarks

To understand nervous systems we would like to know the functional role neurons play within them. Evolutionarily speaking, we may expect that cells were communicating long before processes like axons and dendrites appeared. In fact, communication between single cells is considered to be a prerequisite for metazoic organization [38]. Thus, if long distance communication developed gradually from early 'direct-neighbor' communication, it may well have retained some of its functions.

The fundamental morphology, physiology and chemistry of neurons and their mode of contact has indeed remained remarkably constant throughout evolution. Furthermore, in the most primitive nervous systems, the 'skin brains' of cnidaria such as sea anemones and jellyfish, the path from receptors to contractile muscle-like effector cells is fairly direct. What has changed dramatically, however, is the organization of these constituent parts into more and more complex nervous systems [43].

Nevertheless, given the uncertainty with respect to presynaptic mechanisms and the direct cell-to-cell communication in the early stages of evolution, our proposed approach seems to be a reasonable one. That is, it seems reasonable to assume that, if called for, cells should be able to reliably transfer one signal to the next approximately unaltered. In modeling, using an inverse neuron model to do so circumvents the uncertainty surrounding presynaptic release.

In the next section we will discuss how knowledge of physiology is used to build mathematical models.


3 Modeling

In partial fulfillment of the requirements for the course: Neuron Modeling. Lecturer: Prof.dr. S. A. van Gils, Department of Applied Mathematics, University of Twente.

In this section we will discuss how knowledge of neurophysiology is used to build mathematical models. We will restrict ourselves mainly to isopotential single compartment models, that is, we will neglect variation in membrane potential along spatial dimensions.

First, we will introduce an equivalent circuit representation of the membrane. Next, we will introduce chemical kinetics as a formalism for describing the time-dependent properties of ion channel conductances. This time-dependence may be due to voltage-dependent or transmitter-dependent opening and closing of ion channels. Voltage-dependent conductances lead to the Hodgkin-Huxley formalism of excitability. Transmitter-dependent conductances allow for external inputs.

3.1 Conductance Based Modeling

Conventionally, the electrical properties of the excitable membrane are represented by an equivalent circuit. Ion specific so-called reversal potentials are represented by batteries, the ion channel conductances by variable resistors, and the phospholipid bilayer is represented by a capacitor, see figure 6. We will first introduce these individual circuit components, then we put them together in the equivalent circuit of the membrane. We start with membrane capacitance.

3.1.1 Membrane Capacitance

As one may recall, lipid bilayers are highly impermeable to ions, see section 2.2.3. Thus, a patch of lipid bilayer serves as a thin electrically isolating layer. It separates two conducting fluids and therefore functions as a capacitor. The standard equation for a capacitor is

Q = CV ,


where Q is the charge in coulombs, V is the electric potential in volts and C is the capacitance in farads [38], [4].

When a voltage is applied to a capacitor, it separates and stores charges. At a constant voltage the current through an ideal capacitor is zero. Only when voltage is varied does stored charge change and current flow to recharge or discharge the capacitor [40]. We can express this capacitive current as

IC ≡ dQ/dt = C dV/dt .

In words: to change the potential of a patch of membrane with capacitance C at a rate dV/dt, an amount C dV/dt of current is needed [4].

Capacitance depends on: the area A of the two conductors, the insulator material, and the distance d between them. In particular, the capacitance C of a cell or cell-compartment is proportional to the surface area A of its cell membrane. It is customary to use units that are independent of the particular dimensions of a cell or compartment. Hence, quantities are expressed per unit area of membrane. The capacitance per unit area of cell membrane is called the specific membrane capacitance and is approximately the same, C ≈ 1 µF/cm², for all neurons [4].

When a voltage is applied to a capacitor, the charges that accumulate on both sides of the insulator lead to an electric field; the smaller the distance d, the stronger the field. The cell membrane is very thin so the field within is very strong. It plays an important role both in ion movement through channels and in the voltage-dependent opening and closing of channels [40].

Next we will discuss the reason for the appearance of batteries in the equivalent circuit.

3.1.2 The Nernst Equilibrium Potential

Suppose channels in the membrane open that are specific to some ion i. Then a current may result due to a transmembrane difference in concentration, potential or both. We can think of the current as consisting of two components, a conduction component Ic, and a diffusion component Id. The current stops when the two oppose and cancel each other i.e. when

Ic + Id = 0 .

The membrane potential Ei at which this occurs is called the Nernst equilibrium potential or reversal potential for ion i. It depends on the temperature, the ion charge and the concentrations on both sides of the membrane, as expressed by


the Nernst equation4

Ei = Vin − Vout = α ln([C]out/[C]in) , with α = kT/q . (3.2)

Here [C] is the concentration of ion i, T is the temperature, q is the ion charge and k is the Boltzmann constant [38], [40], [26], [4]. In the equivalent circuit these Nernst equilibria are represented by batteries, see figure 6.
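To make (3.2) concrete, here is a minimal Python sketch of the Nernst potential; the squid-axon-like concentrations are illustrative assumptions, not values taken from this thesis.

    from math import log

    K_BOLTZMANN = 1.381e-23   # Boltzmann constant k (J/K)
    Q_ELEMENTARY = 1.602e-19  # elementary charge (C)

    def nernst(c_out, c_in, z=1, T=293.0):
        # Ei = (kT/q) ln([C]out/[C]in), with q = z times the elementary charge, as in (3.2)
        alpha = K_BOLTZMANN * T / (z * Q_ELEMENTARY)  # thermal voltage kT/q, ~25 mV
        return alpha * log(c_out / c_in)

    # Illustrative squid-axon-like concentrations (mM):
    print("E_K  = %5.1f mV" % (1e3 * nernst(20.0, 400.0)))   # about -76 mV
    print("E_Na = %5.1f mV" % (1e3 * nernst(440.0, 50.0)))   # about +55 mV

Note how the maintained concentration differences alone, through (3.2), already produce the ordering of equilibrium potentials discussed next.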

What is important to note for our purposes is that in general the equilibrium potential for one ion species differs from the potential for another. Since the temperature is the same for all ions, this is mainly due to maintained differences in concentration and differences in ion charge. For the rest potential of a cell we typically have

EK < ECl < Vrest < ENa < ECa ,

see [26]. This allows the cell to 'steer' its potential by opening ion channels as we will see shortly.

Now let us discuss the remaining components of the circuit in figure 6, the resistors.

3.1.3 Membrane Currents and Conductances

When the membrane potential V equals the reversal potential Ei for some ion i, the ion current Ii per unit area of membrane is zero by definition. If the potential deviates from the reversal potential, then Ii is proportional to the

4The Nernst equation can be obtained from the Nernst-Planck equation (NPE) for electrodiffusion. A potential ∆V = Vin − Vout across a membrane of thickness ∆x results in an electric field within. This field exerts a force on the charged ions, adding a small drift velocity to their random thermal motion. So at any point x along the ion channel the conduction component Ic is proportional to the electric field dV/dx, the ion charge q, and the concentration [C], while the diffusion component Id is proportional to the ion charge and the concentration gradient, that is:

Ic ∝ q [C] dV/dx , and Id ∝ q d[C]/dx .

So for the total current I = Ic + Id = 0 at equilibrium we have:

dV/dx + α (1/[C]) d[C]/dx = 0 , (NPE at equilibrium)

for some α independent of x. Integrating over ∆x gives us the first part of the Nernst equation (3.2) for the equilibrium potential. In the NPE the dependence of α on temperature T and ion charge q is obtained through Einstein's relation. Alternatively, the Nernst equation can be obtained through Boltzmann's law. At thermal equilibrium, the relative probability:

Pout/Pin = exp(−∆U/kT) ≈ [C]out/[C]in , (3.1)

of finding an ion on either side of the membrane depends on the difference in potential energy ∆U = q(Vout − Vin) between ions on the two sides. Here k is the Boltzmann constant. (As with the distribution of molecules in the atmosphere, there are fewer ions where their potential energy is higher.) A comparison of (3.2) with (3.1) gives α = kT/q , see [17], [40] and [27] for a more detailed discussion.


difference of potentials V − Ei , also called the driving force. Thus, we have:

Ii = gi(V − Ei) , (3.3)

where, for ion i, the term gi ≥ 0 represents the membrane conductance per unit area of membrane, called the specific membrane conductance5. In the equivalent circuit these conductances gi = 1/Ri are represented by resistors Ri , see figure 6, see [26], [4].

The conductance term gi in (3.3) can be expressed as

gi = ḡi pi ,

where 0 ≤ pi ≤ 1 is the fraction of channels in the open and conducting state and ḡi is the maximal conductance of the population of channels [26], [4].

For large populations, the fraction of channels in the open state tends to equal the probability of finding a channel in an open state. This probability can depend on the potential across the membrane, intracellular messenger substances, intracellular ion concentrations, extracellular neurotransmitters, and so on. Hence, pi will vary with time [4].

Currents that remain relatively constant, such as those resulting from ion pumps, are termed passive, linear, or ohmic; these are usually collected in one leakage term IL = gL(V − EL), see [26], [4].

3.1.4 Summary

In sum, we can express the capacitive current per unit area of membrane as

IC = C dV/dt , (3.4)

where C is the specific membrane capacitance, see section 3.1.1. Furthermore, for a large population of channels of type i, the net ion current per unit area of membrane can be described by

Ii(t) = ḡi pi(t)(V(t) − Ei) , i ∈ {Na+, K+, Ca2+, Cl−, . . .} , (3.5)

where pi(t) is the fraction of channels in the open state, ḡi is the maximal conductance of the population and Ei is the reversal potential. We are now ready to consider the full circuit.

3.1.5 The Equivalent Circuit

The electrical properties of membranes can be represented by the circuit depicted in figure 6. Charge is neither created nor destroyed, so by Kirchhoff's law, the total current through a unit area of membrane equals the sum of the

5By convention current towards ground (0 mV) is defined as positive. Hodgkin and Huxleychose the absolute potential inside the cell at rest to be zero. Today the outside of the cell ischosen as ground and current due to positive ions leaving the cell is defined as positive.


capacitive current (3.4) and all the ionic currents (3.5). Hence, for the total current I through a unit area of membrane we can write

I = C V̇ + INa + ICa + IK + ICl + . . . ,

or alternatively

C V̇ = I − ∑i gi(V − Ei) . (3.6)

If there are no additional currents such as experimentally injected currents, then I = 0, see [26].
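As a sketch of how (3.6) is integrated in practice, the following Python fragment steps a membrane with fixed (non-gated) conductances forward in time; the conductance and reversal values are illustrative assumptions.

    def membrane_step(V, I, channels, C=1.0, dt=0.01):
        # One forward-Euler step of (3.6): C dV/dt = I - sum_i gi (V - Ei).
        # Units: uF/cm^2, uA/cm^2, mS/cm^2, mV and ms.
        dVdt = (I - sum(g * (V - E) for g, E in channels)) / C
        return V + dt * dVdt

    channels = [(1.0, -75.0), (0.05, 55.0)]  # (gi, Ei): a K-like and a Na-like conductance
    V = -70.0
    for _ in range(2000):                    # 20 ms in 0.01 ms steps, with I = 0
        V = membrane_step(V, I=0.0, channels=channels)
    print("V settles near %.1f mV" % V)

With constant conductances the voltage simply relaxes to the steady state of (3.6), which is the resting potential discussed next.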

Figure 6: An equivalent circuit model of the excitable cell membrane. Ion specific so-called reversal potentials Ei (or Nernst equilibria) are represented by batteries, the ion channel conductances gi = 1/Ri by variable resistors, and the phospholipid bilayer is represented by a capacitor C. The membrane potential is the difference in electrical potentials V = Vin − Vout between the inside and the outside of the cell, after [18], [40] and [27].

3.1.6 Resting Potential

Membranes contain many different types of ion channels. The membrane potential at which all currents cancel is called the resting potential and is given by

Vrest = (∑i gi Ei) / (∑i gi) .

It is the steady state value of (3.6) when I = 0, that is, the value for which V̇ = 0 when there are no experimentally injected currents [26].

Note that if all conductances are zero, except for the conductance of one ion type i, then we have Vrest = Ei. Since the reversal potential for one ion species differs from that of another, this allows the cell to 'steer' its potential by opening or closing channels.
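The formula for Vrest is easily checked numerically; with the same illustrative (gi, Ei) pairs as in the sketch above, it returns the value the integration settled to.

    def v_rest(channels):
        # Vrest = sum_i gi Ei / sum_i gi, the steady state of (3.6) for I = 0
        return sum(g * E for g, E in channels) / sum(g for g, _ in channels)

    print(v_rest([(1.0, -75.0), (0.05, 55.0)]))  # ~ -68.8 mV

Increasing the Na-like conductance pulls this weighted mean toward ENa, which is the 'steering' mentioned above.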


Recall that the rest potential is negative relative to the outside and that it is usually bounded by:

EK < ECl < Vrest < ENa < ECa ,

with EK and ECl on the hyperpolarizing side and ENa and ECa on the depolarizing side.

Opening of potassium and chloride channels leads to an increase in conductances gK and gCl. This results in a further polarization, or hyperpolarization, of the membrane potential, i.e. making it even more negative.

Opening of sodium and calcium channels leads to an increase in conductances gNa and gCa. This results in a depolarization of the membrane potential, i.e. in an increase of membrane potential.

Synaptic currents that have a hyperpolarizing effect are usually called inhibitory, while those that have a depolarizing effect are usually called excitatory [4], [40]. However, neurons can be excited, i.e. made to fire, by hyperpolarizing currents and can be inhibited, i.e. stopped from firing, by depolarizing currents [26], see figure 12.

While transmitter-dependent conductances allow for external inputs, voltage-dependent conductances lead to feedback and the excitable properties of cells. In the following section we will introduce a formalism for describing both the voltage-dependence and the transmitter-dependence of ion channel conductances.

3.2 Kinetic Schemes

In this section we will discuss the time-dependent properties of ion channel conductances. This time-dependence may be due to voltage- or transmitter-dependent opening and closing of ion channels; both can be described in a unified formalism using state diagrams and equations analogous to those used in chemical reaction kinetics. First, we present the Hodgkin-Huxley formalism for modeling voltage-dependent conductances, then the more general formalism is reviewed.

3.2.1 Voltage-Dependent Channels and Conductances

Independent Subprocesses Hodgkin and Huxley empirically found equations of the form

gi = ḡi m^a h^b (3.7a)
ṁ = αm(V)(1 − m) − βm(V) m (3.7b)
ḣ = αh(V)(1 − h) − βh(V) h (3.7c)

to describe the voltage dependent conductance gi(t) in (3.3) for some ion i. Here ḡi is a constant for the maximal conductance per unit area of membrane and m and h are dimensionless variables which can vary between 0 and 1. The α's and β's are voltage-dependent rate coefficients. One variable, m (or n in case


of potassium conductance), represents activation of conductance and the other, h, represents inactivation [18], [26].

The equations (3.7b) and (3.7c) have the typical form associated with first order kinetic schemes used to describe chemical reactions. Each subprocess changes state according to a first order kinetic scheme. There are two types of voltage-dependent subprocesses:

M1 ⇄ M2 , with forward rate αm(V) and backward rate βm(V) ,
H1 ⇄ H2 , with forward rate αh(V) and backward rate βh(V) ,

where M can be in one of two states M1 and M2. Hence, if m represents the fraction in state M2, then 1 − m represents the fraction in state M1. The same holds for H, see [6].

The expression (3.7a) for the conductance indicates that several such independent subprocesses are required to open or close a channel: a independent but identical activating processes represented by m and b independent identical inactivating processes represented by h. The fraction of channels in the open conducting state is thus given by

p = m^a h^b .

Channels that do not have inactivation variables (b = 0) result in persistent currents, channels that do inactivate result in transient currents, and channels that do not have activation variables (a = 0) result in hyperpolarization activated currents, see [26], [4], and [6].

Hodgkin and Huxley also provided a hypothetical physical interpretation of equations (3.7). It was suggested that the voltage-dependence of conductances is due to the effect of the electric field on membrane molecules with a charge or polarized charge distribution. At the time however, the composition of the excitable membrane was unknown [18]. These 'particles' were later called gates, see [6], [17] and [4]. It is now understood that conformational changes of channel proteins give rise to voltage dependent conductances [6].

Conformations of the Entire Channel In the more general Markov kinetics approach independent identical subprocesses are not assumed. States in the state diagram represent conformations of the entire protein. Thus in the Markov approach energetically favorable conformations or foldings of a single channel protein are represented by a number of states S1, . . . , Sn with associated probabilities and transition probabilities [6].

For a large number of identical proteins we can consider the fraction of channels in a certain state and their transition rates instead. We can then represent these transitions by a kinetic scheme

Si ⇄ Sj , with forward rate rij and backward rate rji ,


analogous to that of a chemical reaction. The similarity with chemical reactions will be even more apparent in our treatment of transmitter-dependent channels.

The associated evolution equation for the above scheme is given by

dsi/dt = ∑j rji sj − si ∑j rij , (3.8)

where si represents the fraction of channels in state Si, that is ∑i si = 1.

In general the rates r can depend on trans-membrane potential, extracellular neurotransmitter concentration or the concentration of an intracellular agent.

The Hodgkin-Huxley formalism is a subclass of the Markov representation, that is, for any Hodgkin-Huxley scheme an equivalent Markov model can be given [6].
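As a sketch, the evolution equation (3.8) can be integrated for an arbitrary state diagram by collecting the rates rij in a matrix; the two-state rates below are made-up values for illustration.

    import numpy as np

    def kinetics_step(s, R, dt=0.01):
        # Euler step of (3.8): ds_i/dt = sum_j r_ji s_j - s_i sum_j r_ij.
        # s[i]: fraction of channels in state S_i; R[i, j]: rate of S_i -> S_j.
        inflow = R.T @ s               # sum_j r_ji s_j
        outflow = s * R.sum(axis=1)    # s_i sum_j r_ij
        return s + dt * (inflow - outflow)

    # Two-state closed/open scheme with assumed rates (per ms):
    R = np.array([[0.0, 0.5],   # closed -> open
                  [0.2, 0.0]])  # open -> closed
    s = np.array([1.0, 0.0])    # start with all channels closed
    for _ in range(2000):
        s = kinetics_step(s, R)
    print(s)  # approaches [0.29, 0.71]: open fraction 0.5/(0.5 + 0.2)

Because (3.8) conserves ∑i si, the vector s remains a proper set of fractions at every step.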

Voltage-Dependent Transition Rates For the voltage dependent transition rates r (the α's and β's in (3.7)) many forms are possible. According to the theory of reaction rates, the transition rate from one state to another depends exponentially on the free energy required to overcome the energy barrier between the two states, also called the activation energy. Hence,

r(V) ∝ exp(−U(V)/RT) ,

where R is the gas constant, T is the absolute temperature and U(V) is the unknown voltage dependent activation energy6. The unknown function U(V) can be expressed by a Taylor expansion, resulting in a general form,

r(V) = exp[−(c0 + c1 V + c2 V² + . . .)/RT] , (3.9)

for the transition rate functions, see [6].

3.2.2 The Hodgkin-Huxley Model

The Hodgkin-Huxley model [18] is one of the most important models in neuroscience. Not only does it capture the excitable properties of the squid giant axon, it also provides a general formalism for other models.

Conventions have changed since the model was first introduced. Today the positive current direction is usually chosen from the inside to the outside of the cell and the outside is chosen as 0 mV. Hodgkin and Huxley chose the resting potential of the cell to be zero and chose the positive current in the opposite direction. Hence, apart from a voltage shift the potential has changed sign.

The model consists of a voltage equation of the form (3.6) with two ion-specific currents IK and INa, and one passive leak current IL. The conductances

6This relation, written out in equation form with a (pre-exponential) factor of proportionality, seems to be best known as the Arrhenius equation.


gK and gNa are of the form (3.7). With today's sign convention and the rest potential chosen zero, the equations read

C V̇ = I − ḡK n⁴ (V − EK) − ḡNa m³ h (V − ENa) − gL(V − EL) (3.10)
ṅ = αn(V)(1 − n) − βn(V) n (3.11)
ṁ = αm(V)(1 − m) − βm(V) m (3.12)
ḣ = αh(V)(1 − h) − βh(V) h , (3.13)

where the three terms in (3.10) are IK, INa and IL respectively, and

αn(V) = 0.01 (10 − V) / (exp((10 − V)/10) − 1) (3.14)
βn(V) = 0.125 exp(−V/80) (3.15)
αm(V) = 0.1 (25 − V) / (exp((25 − V)/10) − 1) (3.16)
βm(V) = 4 exp(−V/18) (3.17)
αh(V) = 0.07 exp(−V/20) (3.18)
βh(V) = 1 / (exp((30 − V)/10) + 1) (3.19)

and

EK = −12 mV , ḡK = 36 mS/cm² ,
ENa = 115 mV , ḡNa = 120 mS/cm² ,
EL = 10.613 mV , gL = 0.3 mS/cm² ,

see [18], [26] and [40]. The value of EL is chosen such that the resting potential Vrest = 0.

The particular forms of the functions α(V) and β(V) describing the transition rates were chosen for two reasons. First, they were among the simplest to fit the experimental results and, secondly, some resemble the equation derived by Goldman (1943) for the movements of a charged particle in a constant field [18]. One may also compare these forms with the general form (3.9).
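The equations (3.10)-(3.19) are enough to simulate the model. The fragment below is a minimal forward-Euler sketch in the convention above (rest at 0 mV); the step size and initial gating values are assumptions chosen for illustration.

    import math

    def hh_rates(V):
        # Rate functions (3.14)-(3.19); the removable singularities at V = 10 and
        # V = 25 are not handled, since float trajectories essentially never hit
        # them exactly.
        an = 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
        bn = 0.125 * math.exp(-V / 80)
        am = 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
        bm = 4 * math.exp(-V / 18)
        ah = 0.07 * math.exp(-V / 20)
        bh = 1 / (math.exp((30 - V) / 10) + 1)
        return an, bn, am, bm, ah, bh

    def hh_step(V, n, m, h, I, dt=0.01):
        # One Euler step of (3.10)-(3.13) with C = 1 uF/cm^2
        an, bn, am, bm, ah, bh = hh_rates(V)
        dV = I - 36 * n**4 * (V + 12) - 120 * m**3 * h * (V - 115) - 0.3 * (V - 10.613)
        return (V + dt * dV,
                n + dt * (an * (1 - n) - bn * n),
                m + dt * (am * (1 - m) - bm * m),
                h + dt * (ah * (1 - h) - bh * h))

    V, n, m, h = 0.0, 0.32, 0.05, 0.60   # approximately the resting gating values
    peak = V
    for _ in range(3000):                # 30 ms with a sustained 10 uA/cm^2 input
        V, n, m, h = hh_step(V, n, m, h, I=10.0)
        peak = max(peak, V)
    print("peak depolarization: %.0f mV" % peak)  # an action potential of ~100 mV

The same loop with a much smaller input current produces only a small subthreshold response, illustrating the excitability analyzed in the following chapters.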

To provide some insight into the voltage-dependence of conductances, it is convenient to introduce so called steady-state activation functions and their time constants.

Activation Functions and Time Constants For fixed V the variable n approaches the steady-state value

n∞(V) = αn(V) / (αn(V) + βn(V)) ∈ [0, 1] (3.20)


exponentially. Hence, we can write:

τn(V) dn/dt = n∞(V) − n , (3.21)

where

τn(V) = 1 / (αn(V) + βn(V)) > 0 (3.22)

is the voltage-dependent time constant. The same holds for the variables m and h.

These steady state (in)activation functions n∞(V), m∞(V) and h∞(V) and their voltage-dependent time constants τn(V), τm(V) and τh(V) are depicted in figure 7, see [4] and [26]. The steady-state functions can be approximated by Boltzmann functions, and their time constants by Gaussian functions [26]. We will do so in chapter 4.
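The change of description from the rates (α, β) to the pair (x∞, τ) in (3.20)-(3.22) is a one-liner; the sketch below reuses αn and βn from (3.14)-(3.15).

    import math

    def steady_state_and_tau(alpha, beta):
        # (3.20) and (3.22): x_inf = alpha/(alpha + beta), tau = 1/(alpha + beta)
        def x_inf(V): return alpha(V) / (alpha(V) + beta(V))
        def tau(V):   return 1.0 / (alpha(V) + beta(V))
        return x_inf, tau

    alpha_n = lambda V: 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
    beta_n  = lambda V: 0.125 * math.exp(-V / 80)
    n_inf, tau_n = steady_state_and_tau(alpha_n, beta_n)
    print(n_inf(0.0), tau_n(0.0))   # ~0.32 and ~5.5 ms at the resting potential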

Figure 7: The steady state (in)activation functions (left) and their voltage-dependent time constants (right), from [26].

Persistent and Transient Currents in the HH-Model In the Hodgkin-Huxley model the voltage-dependence of the conductance gK is determined by an activation variable n only. Hence, the associated current IK is a persistent current. From the steady-state activation function n∞(V) depicted in figure 7 we can deduce that the conductance gK ∝ n⁴ increases monotonically with voltage, see [26] and [4].

The voltage-dependence of the conductance gNa is determined by one activation variable m and one inactivation variable h. Hence, the associated current INa is a transient current. The steady-state (in)activation functions m∞(V) and h∞(V) depicted in figure 7 have opposite voltage dependences, i.e. m and h respectively increase and decrease with V. An increase in voltage first leads to an increase in the conductance gNa ∝ m³h and a further increase of voltage leads to a decrease in conductance, see [26] and [4].

The Hodgkin-Huxley model has no hyperpolarization-activated currents; there are no conductances that depend on an inactivation variable only. We


will see examples of models with hyperpolarization-activated currents, such as the K+ inward rectifier current IKir, in chapter 4. Such currents are also sometimes called h-currents and denoted by Ih, see [26].

Although the present material is enough to run a simulation of the Hodgkin-Huxley model, it does not immediately lead to an intuitive understanding of its excitable properties. The Hodgkin-Huxley model is a four dimensional system; to gain insight into its excitability it is desirable to reduce its dimension. We will do so in chapter 4. First however, we extend the current framework to allow for transmitter-dependent conductances.

3.2.3 Transmitter-Dependent Channels and Conductances

Up until now, we only considered neurons without inputs or with experimentally injected input currents. In reality however, neurons receive inputs from other neurons. When an action potential arrives at the synaptic terminal it can trigger the release of neurotransmitter into the synaptic cleft. The transmitter may bind to receptors in the postsynaptic membrane. This can lead to the opening or closing of ion channels which will alter the membrane conductance of the postsynaptic neuron.

In this section we will extend the kinetic formalism to transmitter-dependent postsynaptic channel conductances. Furthermore we will review some often used simplifications.

A Simple Scheme In a simple two state model of a postsynaptic receptor channel, n transmitter molecules T bind to the channel directly according to the following scheme

C + nT ⇄ O , with opening rate r2 and closing rate r1 ,

where C denotes the closed state and O denotes the open state [7], [4]. Alternatively the same kinetics can be represented by the slightly less informative scheme

C ⇄ O , with opening rate r2([T]) and closing rate r1 .

The latter emphasises the equivalence with voltage-dependent schemes [6]. Dependence on transmitter concentration [T] is usually simpler than voltage dependence. According to the empirical law of mass action, the opening rate is proportional to [T]^n, as is the probability that n transmitter molecules are within binding range of a receptor channel. Hence, the open state probability p changes according to

dp/dt = r2 [T]^n (1 − p) − r1 p . (3.23)

In some cases it is necessary to add extra states in order to model the time-dependent properties more accurately, see [6], [7] and [4].
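A minimal sketch of (3.23): the open fraction of a transmitter-gated channel in response to a brief square pulse of transmitter. The rate constants and pulse shape are assumptions for illustration only.

    def receptor_step(p, T_conc, r1, r2, n=1, dt=0.01):
        # Euler step of (3.23): dp/dt = r2 [T]^n (1 - p) - r1 p
        return p + dt * (r2 * T_conc**n * (1 - p) - r1 * p)

    p, peak = 0.0, 0.0
    for k in range(1000):                 # 10 ms in 0.01 ms steps
        T = 1.0 if k < 100 else 0.0       # [T] = 1 (a.u.) during the first 1 ms
        p = receptor_step(p, T, r1=0.2, r2=2.0)
        peak = max(peak, p)
    print("peak open fraction: %.2f" % peak)  # rises during the pulse, then decays at rate r1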


For a large population of channels we can take p to be the fraction of channels in the open state and 1 − p the fraction in the closed state. The postsynaptic conductance is given by g(t) = ḡ p(t), where ḡ represents the maximal conductance. In chapter 7 we will incorporate this simple scheme of postsynaptic conductance in a hypothetical model of synaptic transmission.

Characteristic Time-Course: A Simplification Action potentials or spikes are typically treated as identical stereotyped events characterized by their spike times tj [4]. For simplicity it is often assumed that the arrival of a presynaptic action potential evokes a postsynaptic conductance with a fixed characteristic time-course.

The conductance following the arrival of a single action potential at time t = 0, that is the conductance for t ≥ 0, is often taken to be either an exponential function

g(t) = ḡ a e^{−at} ,

an alpha function

g(t) = ḡ a² t e^{−at} , (3.24)

or a difference of exponentials sometimes called a double exponential

g(t) = ḡ (e^{−at} − e^{−bt}) / (1/a − 1/b) .

Note that all have unit area. Sometimes these functions are scaled such that their maximum value equals one.

These functions are plotted in figure 8 to illustrate the typical time-course of the conductance following an action potential. The double exponential is the more general of the three, since it reduces to the alpha function for b → a and to the exponential function for b → ∞. The parameters a and b allow us to independently characterize the decay time and the rise time respectively.

For simplicity the conductance following a series of spikes at times tj is often assumed to sum linearly. Hence, if we take

p(t) = (e^{−at} − e^{−bt}) / (1/a − 1/b) for t ≥ 0 , and p(t) = 0 for t < 0 ,

then we can write

g(t) = ḡ ∑j p(t − tj) ,

see [11] and [4].
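The characteristic time-courses and the linear summation over spike times are direct to code; the parameter values below are assumptions for illustration.

    import math

    def p_alpha(t, a):
        # alpha function kernel (3.24), unit area
        return a * a * t * math.exp(-a * t) if t >= 0 else 0.0

    def p_double_exp(t, a, b):
        # difference of exponentials, unit area; decay rate a, rise rate b > a
        if t < 0:
            return 0.0
        return (math.exp(-a * t) - math.exp(-b * t)) / (1.0 / a - 1.0 / b)

    def g_syn(t, spike_times, g_bar=1.0, a=1.0, b=5.0):
        # linear summation: g(t) = g_bar * sum_j p(t - tj)
        return g_bar * sum(p_double_exp(t - tj, a, b) for tj in spike_times)

    print(g_syn(2.5, [0.0, 2.0]))  # conductance 2.5 ms after the first of two spikes

Taking b large recovers the exponential kernel and b → a the alpha kernel, so one routine covers all three simplifications.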

A Second Order Scheme Consider the slightly more complex second order kinetic scheme

C1 ⇄ C2 , with forward rate r1([T]) and backward rate r2 ,
C2 → O with rate r3 , and O → C1 with rate r4 ,


Figure 8: The exponential function (solid), the alpha function (solid) and the double exponential (dashed) for a = 1 and various values of b.

where both C1 and C2 denote closed states of the receptor channel and O denotes the open state. This scheme can generate an alpha function response (3.24) under the following conditions.

(i) The transmitter concentration [T] at the arrival time t0 of an action potential is modeled as a delta pulse δ(t − t0).

(ii) The fraction of channels in the closed state C1 is always in excess and can be considered constant and ∼ 1.

The kinetic equations are then given by

dq/dt = r1 δ(t − t0) − (r2 + r3) q (3.25a)
dp/dt = r3 q − r4 p , (3.25b)

where q and p are the fractions of channels in state C2 and O respectively. In the limit r4 → (r2 + r3), these equations give rise to the solution

p(t) = r1 r3 (t − t0) e^{−(t−t0)(r2+r3)} (3.26)

which is equivalent to the alpha function, see [6].

The foregoing illustrates the generality of the kinetic approach in conjunction with the equivalent circuit representation.


3.3 Other Considerations

3.3.1 Spatial Structure

Up until now we have considered the cell as a single isopotential compartment. In general however, the potential of a cell may vary considerably along its branches. Here, we briefly mention some ways to include spatial structure.

Compartmental Modeling To better capture this spatio-temporal activity of cells, one may divide the cell into multiple coupled isopotential compartments. For instance, one can treat a section of dendrite or axon as a cylindrical compartment, see [4] and [27] among others.

Dendrites, Axons and PDEs A cylindrical section of axon or dendrite can be divided into n identical coupled compartments. Taking the limit of n → ∞ results in a partial-differential equation. For passive dendritic processes or amacrine processes this PDE is often taken to be the linear diffusion equation. It was shown by Wilfred Rall (1959) that under some conditions it is even possible to reduce a complete dendritic tree to this linear PDE. One of the conditions is a certain branching ratio for the diameters of the parent and the child processes, see [27], [40] and [4] among others.

For active processes, such as axons, a nonlinear reaction-diffusion equation is more appropriate. These are used to study propagating nerve impulses or action potentials as traveling waves, see [40] and [32].

3.3.2 Models of Transmitter Release

Although the principles of excitability are fairly well understood and generally lead to good models, this is not the case for synaptic transmission at chemical synapses. Many questions remain unanswered, especially concerning the presynaptic processes leading to release, and mechanisms of release are still debated, see chapter 2. For the sake of completeness and without going into details, we mention here two models of synaptic transmission that seem to allow for both a vesicular and nonvesicular interpretation.

In [46] the total amount of resources involved in synaptic transmission is partitioned into three states: effective, inactive, and recovered. This partitioning allows for a formulation in terms of kinetic equations of the form (3.8). The arrival of an action potential is modeled as a Dirac delta pulse, as in the kinetic equations (3.25). Since the descriptive terms for the three states are very general, the model allows for multiple biophysical interpretations.

The stochastic compartmental model in [34], called a double barrier synapse, also consists of three pools. These vesicular, cytoplasmic and external pools are arranged in a chain and are separated by membrane barriers. Transfers between compartments are prescribed by probabilities and action potentials are again modeled as Dirac delta pulses.


The lack of knowledge of biophysical mechanisms is a major problem when modeling synaptic transmission. The approach we introduce in chapter 7 will allow us to circumvent this problem.

3.4 Concluding Remarks

In this section we reviewed a satisfactory formalism for modeling the excitable properties of cells: the equivalent circuit representation together with kinetic equations for channel conductances. The kinetic formalism also allows for transmitter-dependent conductances. At this point however, we are still far from understanding excitability and its role in networks and behavior. In particular, apart from the crude delta pulse representation in (3.25), we are still missing a model of transmitter release.

We will introduce a method for deriving models of transmission or release in chapter 7. Before we do, however, we will first review some methods for reducing the dimension of excitable neuron models. This will facilitate analysis and will lead to a better understanding of excitability.


4 Reductions

In partial fulfillment of the requirements for the course: Neuron Modeling. Lecturer: Prof.dr. S. A. van Gils, Department of Applied Mathematics, University of Twente.

There are many reasons why we may want to simplify conductance-based models such as the Hodgkin-Huxley model; primary among these is to gain insight. Eventually we want to get to an understanding of the functional role of neurons within the nervous system. The reduction of a high-dimensional model to a lower dimensional one facilitates analysis.

Second, even relatively simple nervous systems already contain many thousands of neurons; hence, we may want to reduce the computational cost of a model for simulation purposes. The challenge is to reduce or simplify the model without losing the essential characteristics of its behavior.

In this section we closely follow chapter 5 in [26]. We briefly review how planar models can be obtained from higher-dimensional conductance-based models and how planar models can be constructed from scratch. Despite the different approaches the reduced models all have some properties in common. These are exploited in Izhikevich's computationally efficient simple model.

4.1 Minimal Models

First, we consider models that are able to generate action potentials with a minimal set of currents, so called minimal models. There are a few dozen known currents, any combination of which can result in interesting nonlinear behavior.

4.1.1 A Top-Down Approach

Consider a conductance based model with a limit cycle attractor7. Remove currents or (in)activation variables from the system until we arrive at a model that has:

• a limit cycle attractor for some parameter values, but

7A trajectory that forms a closed loop is called a periodic trajectory or a periodic orbit; if the periodic orbit attracts nearby orbits it is called a limit cycle attractor.


• only equilibrium attractors if one of the remaining currents or (in)activation variables is removed.

Such a model is said to be minimal or irreducible for spiking. The Hodgkin-Huxley model can be reduced to one of at least two minimal models and hence is not minimal for spiking itself.

Stripping down a complicated model to a minimal one is a top-down approach. The number of complicated models with the known currents alone is enormous, so to do this for all such models is impractical. Instead a bottom-up approach is suggested in [26].

4.1.2 A Bottom-Up Approach

A hypothetical minimal model can be constructed by combining one fast amplifying variable a with one slower resonant variable r, also called a recovery variable.

Consider a Hodgkin-Huxley-type model of the form:

C V̇ = I − IL − Ii(V, a, r) = I − gL(V − EL) − ḡi a^k r^l (V − Ei) (4.1a)
ȧ = {a∞(V) − a}/τa(V) (4.1b)
ṙ = {r∞(V) − r}/τr(V) , (4.1c)

near rest (V, a, r) = (0, 0, 0), where a∞(V) and r∞(V) are steady-state (in)activation functions with time constants τa(V) and τr(V) respectively, see section 3.2.2. The variable a is amplifying if it amplifies voltage changes dV through positive feedback, i.e. if for the associated current Ii(V, a, r) we have:

−(∂Ii/∂a) a′∞(V) > 0 ,

or equivalently (since ḡi, a and r are all positive) if we have:

a′∞(V)(Ei − V) > 0 .

Similarly the variable r is resonant if it resists voltage changes dV through negative feedback, i.e. if (by the same reasoning) we have:

r′∞(V)(Ei − V) < 0 .

Whether a variable is amplifying or resonant is thus determined by the derivative of its steady state (in)activation function and the driving force V − Ei of its associated current near rest.

The pair of variables (a, r) may either describe the activation and inactivation properties of one and the same current as in (4.1) or else that of two


different currents in a model of the form:

C V̇ = I − IL − Ii(V, a) − Ij(V, r)
    = I − gL(V − EL) − ḡi a^k (V − Ei) − ḡj r^l (V − Ej) (4.2a)
ȧ = {a∞(V) − a}/τa(V) (4.2b)
ṙ = {r∞(V) − r}/τr(V) . (4.2c)

The first form (4.1) essentially allows for two different possibilities, Ei < Vrest and Ei > Vrest, while in the second form (4.2) the two reversal potentials Ei and Ej allow for four different possibilities relative to Vrest. Thus, there is a total of six essentially different minimal models.

To be more specific, six explicit representatives can be obtained by taking into account the 'standard' (in)activation functions in figure 7, and the usual reversal potentials with their depolarizing and hyperpolarizing effects8:

EK < ECl < Vrest < Eh < ENa < ECa ,

with EK and ECl on the hyperpolarizing side and Eh, ENa and ECa on the depolarizing side.

For example in [26] the following amplifying and resonant variables are used.

• The fast amplifying variable a is taken to be either the activation variable m for the depolarizing current INa, or the inactivation variable h for the hyperpolarizing current IK9. These amplify voltage changes through positive feedback.

• The slower resonant variable r is taken to be either the inactivation variable h for a depolarizing current INa or Ih, or the activation variable n or m for the hyperpolarizing current IK. These resist voltage changes through negative feedback.

The resulting models are summarized in table 1.

To reduce the resulting 3D models to planar systems, the relatively fast amplifying variable a is treated as instantaneous and replaced by its steady state value a∞(V), see [26] for more details. Let us consider one of these models, the persistent sodium plus potassium model, more closely.

8Here Eh stands for the reversal potential of a hyperpolarization activated h-current Ih, see section 3.2.2. Note that a hyperpolarization activated current can be depolarizing.

9This results in the hyperpolarization activated current mentioned in section 3.2.2, the K+ inwardly rectifying current IKir.


The INa,p + IK-model (the persistent sodium plus potassium model):
  minimal model:  C V̇ = I − gL(V − EL) − ḡNa m (V − ENa) − ḡK n (V − EK) ,
                  ṁ = (m∞(V) − m)/τm(V) , ṅ = (n∞(V) − n)/τn(V)
  2D reduction:   C V̇ = I − gL(V − EL) − ḡNa m∞(V)(V − ENa) − ḡK n (V − EK) ,
                  ṅ = (n∞(V) − n)/τn(V) , with m = m∞(V)
  nullclines:     n = [I − gL(V − EL) − ḡNa m∞(V)(V − ENa)] / [ḡK(V − EK)] and n = n∞(V)

The INa,t-model (the transient sodium model):
  minimal model:  C V̇ = I − gL(V − EL) − ḡNa m³ h (V − ENa) ,
                  ṁ = (m∞(V) − m)/τm(V) , ḣ = (h∞(V) − h)/τh(V)
  2D reduction:   C V̇ = I − gL(V − EL) − ḡNa m∞³(V) h (V − ENa) ,
                  ḣ = (h∞(V) − h)/τh(V) , with m = m∞(V)
  nullclines:     h = [I − gL(V − EL)] / [ḡNa m∞³(V)(V − ENa)] and h = h∞(V)

The INa,p + Ih-model (the persistent sodium plus h-current model):
  minimal model:  C V̇ = I − gL(V − EL) − ḡNa m (V − ENa) − ḡh h (V − Eh) ,
                  ṁ = (m∞(V) − m)/τm(V) , ḣ = (h∞(V) − h)/τh(V)
  2D reduction:   C V̇ = I − gL(V − EL) − ḡNa m∞(V)(V − ENa) − ḡh h (V − Eh) ,
                  ḣ = (h∞(V) − h)/τh(V) , with m = m∞(V)
  nullclines:     h = [I − gL(V − EL) − ḡNa m∞(V)(V − ENa)] / [ḡh(V − Eh)] and h = h∞(V)

The Ih + IKir-model (the h-current plus inwardly rectifying potassium model):
  minimal model:  C V̇ = I − gL(V − EL) − ḡKir hKir (V − EK) − ḡh h (V − Eh) ,
                  ḣKir = (hKir,∞(V) − hKir)/τKir(V) , ḣ = (h∞(V) − h)/τh(V)
  2D reduction:   C V̇ = I − gL(V − EL) − ḡKir hKir,∞(V)(V − EK) − ḡh h (V − Eh) ,
                  ḣ = (h∞(V) − h)/τh(V) , with hKir = hKir,∞(V)
  nullclines:     h = [I − gL(V − EL) − ḡKir hKir,∞(V)(V − EK)] / [ḡh(V − Eh)] and h = h∞(V)

The IK + IKir-model (the persistent plus inwardly rectifying potassium model):
  minimal model:  C V̇ = I − ḡKir h (V − EK) − ḡK n (V − EK) ,
                  ḣ = (h∞(V) − h)/τh(V) , ṅ = (n∞(V) − n)/τn(V)
  2D reduction:   C V̇ = I − ḡKir h∞(V)(V − EK) − ḡK n (V − EK) ,
                  ṅ = (n∞(V) − n)/τn(V) , with h = h∞(V)
  nullclines:     n = I/{ḡK(V − EK)} − ḡKir h∞(V)/ḡK and n = n∞(V)

The IA-model (the transient potassium model or A-current model):
  minimal model:  C V̇ = I − gL(V − EL) − ḡA m h (V − EK) ,
                  ṁ = (m∞(V) − m)/τm(V) , ḣ = (h∞(V) − h)/τh(V)
  2D reduction:   C V̇ = I − gL(V − EL) − ḡA m h∞(V)(V − EK) ,
                  ṁ = (m∞(V) − m)/τm(V) , with h = h∞(V)
  nullclines:     m = [I − gL(V − EL)] / [ḡA h∞(V)(V − EK)] and m = m∞(V)

Table 1: Minimal Models and Their Planar Reductions.


4.1.3 The Persistent Sodium Plus Potassium Model

The INa,p + IK-model, pronounced persistent sodium plus potassium model, is given by the equations

C V̇ = I − IL − INa,p − IK = I − gL(V − EL) − ḡNa m (V − ENa) − ḡK n (V − EK)
ṁ = {m∞(V) − m}/τm(V)
ṅ = {n∞(V) − n}/τn(V) .

It consists of a fast Na+ current and a slower K+ current and is equivalent to the Morris-Lecar ICa + IK-model for describing voltage oscillations in the barnacle giant muscle fiber.

Note that this model can be obtained either via a bottom-up approach or, alternatively, via a top-down approach by eliminating the inactivation variable h from the Hodgkin-Huxley model and reducing the powers of m and n. By eliminating h, the Na+ current is turned from a transient current into a persistent current.

Based on observations the Na+ activation variable m(t) reaches its asymptotic value m∞(V) almost instantaneously relative to the voltage variable V(t). Substituting m = m∞(V) into the voltage equation we can reduce the three-dimensional system to the planar system:

C V̇ = I − gL(V − EL) − ḡNa m∞(V)(V − ENa) − ḡK n (V − EK) (4.3a)
ṅ = {n∞(V) − n}/τn(V) , (4.3b)

where the second term is IL, the third is the now instantaneous INa,p and the last is IK.

We already mentioned in section 3.2.2 that the steady-state (in)activation functions in figure 7 can be approximated by Boltzmann functions:

m∞(V) = 1 / (1 + exp{(V1/2 − V)/k}) , (4.4)

and their unimodal voltage-dependent time-constants by Gaussian functions:

τ(V) = Cbase + Camp exp(−(Vmax − V)²/σ²) .

See figures 9 and 7 for a comparison.

The resulting reduced planar INa,p + IK-model (4.3) is used throughout [26] to illustrate dynamical properties that are typical for single-compartment models. We will consider some of these properties in the next chapter. For now we will content ourselves with a few representative trajectories and corresponding voltage traces in figure 10. Note the solid N-shaped curve and the dashed S-shaped or sigmoidal curve termed V-nullcline and n-nullcline. These are the set of points in state space where V̇ = 0 and ṅ = 0 respectively. At their intersection points we find equilibria. Other minimal models have similar nullclines.


Figure 9: The steady-state (in)activation functions in figure 7 can be approximated by Boltzmann functions (left) and their voltage-dependent time-constants by Gaussian functions (right), from [26].

Figure 10: A few trajectories and voltage traces summarizing the dynamic repertoire of the INa,p + IK-model. Other minimal models have similar nullclines and a similar dynamic repertoire, from [26].


4.2 Approximate Invariants

4.2.1 The FitzHugh Model

Consider the Hodgkin-Huxley model with its original parameters

C V̇ = I − ḡK n⁴ (V − EK) − ḡNa m³ h (V − ENa) − gL(V − EL) (4.5)
ṅ = (n∞(V) − n)/τn(V) (4.6)
ṁ = (m∞(V) − m)/τm(V) (4.7)
ḣ = (h∞(V) − h)/τh(V) , (4.8)

where the three terms in the voltage equation are IK, INa and IL respectively.

It was observed by FitzHugh in [12] that

n(t) + h(t) ≈ 0.85 , (4.9)

which can be shown by plotting h against n in the (n, h)-plane.

FitzHugh used this line as a w-axis on which he projected the (n, h) coordinates using w = 0.5(n − h). Similarly he projected the faster (V, m)-coordinates onto a u-axis using u = V − 36m, thus projecting the four dimensional state (V, m, n, h) and its orbits onto a two dimensional (u, w)-plane. This allowed him to draw an analogy between the projected behavior of the Hodgkin-Huxley model and the behavior of his modified version of van der Pol's equation for describing nonlinear relaxation oscillators

u̇ = u(a − u)(u − 1) − w + I (4.10a)
ẇ = bu − cw . (4.10b)
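A short Euler sketch of (4.10) illustrates its threshold behavior; the parameter values a, b, c and the initial conditions are assumptions chosen for illustration.

    def fitzhugh_step(u, w, I=0.0, a=0.25, b=0.01, c=0.02, dt=0.05):
        # Euler step of (4.10)
        du = u * (a - u) * (u - 1.0) - w + I
        dw = b * u - c * w
        return u + dt * du, w + dt * dw

    def peak_response(u0, steps=8000):
        # relax from (u0, 0) with no input and record the largest u reached
        u, w, peak = u0, 0.0, u0
        for _ in range(steps):
            u, w = fitzhugh_step(u, w)
            peak = max(peak, u)
        return peak

    print("u0 = 0.20 -> peak %.2f" % peak_response(0.20))  # below a: decays back to rest
    print("u0 = 0.30 -> peak %.2f" % peak_response(0.30))  # above a: a full excursion toward u ~ 1

The all-or-none response around the threshold a is the projected analogue of the excitability of the full Hodgkin-Huxley model.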

4.2.2 A Quantitative Reduction

An approximate invariant such as (4.9) can also be used to reduce the state of the HH-model explicitly. Such an approach is taken in [26] following Krinskii and Kokoz. The HH-model is reduced from four dimensions to three using the relationship h ≈ 0.89 − 1.1n.

The model can then be reduced further to two dimensions by assuming that the very fast activation kinetics for the Na+-current is instantaneous, i.e. m = m∞(V), resulting in:

C V̇ = I − ḡK n⁴ (V − EK) − ḡNa m∞³(V)(0.89 − 1.1n)(V − ENa) − gL(V − EL)
ṅ = (n∞(V) − n)/τn(V) ,

where the second term is IK, the third is the now instantaneous INa and the last is IL.

Like the voltage nullclines of the minimal models and the cubic of the FitzHugh model, this model too has an N-shaped voltage nullcline.

In contrast to the previous approaches this approach not only retains qualitative behavior of the original model, but also some of the quantitative behavior.


This inspired a systematic approach for reducing conductance-based models in [29]. The approach is aimed at reducing the dimension of the system, while preserving the bifurcation structure and some quantitative behavior, see [26].

4.3 The Izhikevich Simple Model

Except for the FitzHugh model, all the reduced models discussed so far have

1. a fast voltage variable V with an N-shaped nullcline, and

2. a slower recovery variable u with a sigmoidal or S-shaped nullcline.

The recovery variable of the FitzHugh model has an even simpler straight-line nullcline.

Whether the neuron fires or not depends on the local vector field around the resting state. The resting state corresponds to the equilibrium located at the lower left intersection of the nullclines, near the local minimum of the voltage nullcline, see figure 11.

Figure 11: Left: a typical N-shaped voltage nullcline (solid) together with a sigmoidal recovery nullcline (dashed), in the plane of membrane potential V (mV) versus recovery variable u. Right: the same zoomed-in on the resting state, from [26].

Let (Vmin, umin) be the location of this local minimum. If we approximate the fast nullcline by the parabola

u = umin + p(V − Vmin)² , (4.11)

with scaling coefficient p > 0, and the slow nullcline by the straight line

u = s(V − V0) , (4.12)

with slope s and V-axis intercept V0, then we can approximate the dynamics near the resting state with the system

V̇ = τf (p(V − Vmin)² − (u − umin)) (4.13)
u̇ = τs (s(V − V0) − u) . (4.14)


The parameters τf and τs describe the fast and slow time scales respectively. These equations allow the variable V to escape to infinity in finite time. To model the downstroke of an action potential the system state is reset when the voltage variable reaches some maximum value Vmax, i.e.

(V, u) ← (Vreset, u + ureset) , when V = Vmax .

After appropriate rescalings of the variables this simple model can be transformed into the equivalent form:

v̇ = v² − u + I (4.15)
u̇ = a(bv − u) , (4.16)

with the reset: if v ≥ 1, then v ← c and u ← u + d. This form has only four dimensionless parameters. The derivation of the particular form of the local vector-field is based on bifurcation-theory and normal form reduction [24].
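The reset makes the model trivial to integrate. The following sketch uses illustrative dimensionless parameter values (assumptions, not the values of [24]) for which no equilibrium exists, so the model spikes tonically.

    def izh_step(v, u, I, a=0.03, b=0.2, c=-0.1, d=0.02, dt=0.001):
        # Euler step of (4.15)-(4.16) with the reset rule
        v_new = v + dt * (v * v - u + I)
        u_new = u + dt * a * (b * v - u)
        if v_new >= 1.0:                  # spike reached the peak value
            return c, u_new + d, True     # reset v, increment u
        return v_new, u_new, False

    v, u, n_spikes = -0.1, -0.02, 0
    for _ in range(200000):               # 200 dimensionless time units, I = 0.02
        v, u, spiked = izh_step(v, u, I=0.02)
        n_spikes += spiked
    print("number of spikes:", n_spikes)

For I > b²/4 the parabola v̇ = 0 and the line u̇ = 0 do not intersect, so the trajectory cannot settle at an equilibrium and must keep cycling through the reset, which corresponds to the tonic spiking regime of figure 12(A).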

The rich behavioral voltage responses v of this model to various types of input currents I are summarized in figure 12, together with their descriptive, biologically phenomenological names. An even richer set of responses can be obtained with the similar quartic model in [45], which allows for self-sustained subreset oscillations.

An important advantage of these models is that they combine biologically plausible dynamics with the computational efficiency of the integrate-and-fire model. The truncated Taylor approximations of the local vector field are quickly evaluated as opposed to functions that require iteration. Furthermore, the high curvature at the sharp peak of a neuronal spike normally requires a small integration time step in numerical simulations, whereas the reset allows for larger time-steps.

4.4 Concluding Remarks

We have briefly discussed how high dimensional conductance-based models can be reduced to useful planar models. These 2D-models all have similar nullclines and hence can have very similar behaviors. On the other hand, changing only one parameter in such a nonlinear model may result in a dramatic change of behavior. This is true in particular for models that are near a so called bifurcation.

In the following section bifurcation-theory is introduced to explain some of the dynamical properties that are found in neuron models. The INa,p + IK-model and the Izhikevich simple model are used to illustrate these properties. The FitzHugh model will return in a later chapter when we use it to reproduce measurements made at the squid giant synapse.


Figure 12: Voltage responses of the Izhikevich simple model (top traces) to input currents (lower traces) for various parameter values: (A) tonic spiking, (B) phasic spiking, (C) tonic bursting, (D) phasic bursting, (E) mixed mode, (F) spike frequency adaptation, (G) Class 1 excitable, (H) Class 2 excitable, (I) spike latency, (J) subthreshold oscillations, (K) resonator, (L) integrator, (M) rebound spike, (N) rebound burst, (O) threshold variability, (P) bistability, (Q) depolarizing after-potential (DAP), (R) accommodation, (S) inhibition-induced spiking, (T) inhibition-induced bursting. The horizontal black bar represents a time interval of 20 ms. (Electronic version of the figure and reproduction permissions are freely available at www.izhikevich.com)


5 Bifurcations in Neurodynamics

In fulfillment of the requirements for the course: Bifurcations in Neurodynamics. Lecturer: Prof.dr. A. Doelman, Mathematical Institute, Leiden University.

We have briefly discussed how high dimensional conductance-based models can be reduced to planar models. A natural question arises: how do we know we have not lost some essential behavior in the process? In this context bifurcation theory may prove to be useful. It formally addresses so called qualitative changes in behavior. Furthermore, it gives us an indication of which qualitative changes are most likely to occur in practice.

We are particularly interested in the transition from rest to periodic activity and the transition from periodic activity back to rest, when only one parameter, such as the input current, is varied. It turns out that there are only four likely ways for an equilibrium point, such as a resting potential, to disappear or lose its stability. If we restrict ourselves to planar models, then similarly, there are only four likely ways for a periodic orbit to make a transition back to rest. Four combinations are of particular interest to us as we will see shortly.

5.1 Preliminaries: Basic Concepts

As already mentioned, bifurcations express so called qualitative changes in behavior. Let us first make more precise what is meant by such a qualitative change in behavior. Our exposition will be cursory and informal; for a comprehensive background the reader is referred to [15] and [32]. The reader who is familiar with bifurcation theory may skip to the summary 5.4 and read on from there.

5.1.1 Dynamical Systems, Flows and Orbits

We will refer to a system of ordinary differential equations of the form

ẋ = f(x) , x ∈ Rn (5.1)

as a dynamical system. Furthermore we will assume that the so called vector field given by the map f : Rn → Rn is sufficiently smooth for our purposes.


A function x(t) on an interval I is called a solution of (5.1) if it satisfies ẋ(t) = f(x(t)) for all t ∈ I. For each x0 ∈ Rn (and for f sufficiently smooth) there is a unique solution starting at x0 at time t0 defined for all t in some neighborhood I of t0 [15]. Such a solution is called a specific solution through x0 at t0. To emphasize this dependence on an initial condition we may write x(t) = ϕ(t, x0) where ϕ(t0, x0) = x0. The family ϕ(t, x) of all such specific solutions is a function (given implicitly) of two variables called the flow of ẋ = f(x). If we fix x then the map ϕ(·, x) : I → Rn defines a solution curve, trajectory or orbit through x [15], [20].

5.1.2 Qualitative Equivalence

Two dynamical systems

ẋ = f(x) , x ∈ Rn , and ẏ = g(y) , y ∈ Rn

are said to be topologically equivalent if there is a continuous map with continuous inverse, i.e. a homeomorphism h : Rn → Rn, mapping orbits of one system onto orbits of the other, preserving the direction of time. When the mapping is defined only locally, the systems are said to be locally topologically equivalent. Systems are said to have (locally) qualitatively similar behavior if they are (locally) topologically equivalent.

If a system is topologically equivalent to any perturbation ẋ = f(x) + εη(x) with ε sufficiently small and η(x) smooth, then it is said to be structurally stable [20], [32].

5.1.3 Bifurcations: Qualitative Changes in Behavior

We are now in a position to explain what we mean by a bifurcation. Consider a parameterized system

ẋ = f(x, µ) , x ∈ Rn , µ ∈ Rm .

A parameter value µ0 is said to be regular or nonbifurcational if there is an open neighborhood U of µ0 such that any system ẋ = f(x, µ) with µ ∈ U is topologically equivalent to ẋ = f(x, µ0). The behavior of the system is thus qualitatively similar for all µ in a neighborhood of µ0.

The bifurcation set is the complement of the set of regular values; thus a system is at a bifurcation point µ = µb if any neighborhood of µb contains some µ for which the system has qualitatively different behavior. A system or vector field at a bifurcation is not structurally stable. The converse is not true since bifurcations depend on the parameterization of the vector field f whereas structural stability does not [20].

It turns out that certain qualitative changes are more likely to occur than others, because the conditions for such so called generic bifurcations involve only one exact strict equality. Such bifurcations are said to be of codimension one, see [15] and [26]. In multidimensional systems a complete list of all such codimension-1 or one-parameter bifurcations is unknown [32]. For two dimensional systems however such a list is known and is presented in [15].


5.1.4 Near Equilibria

In determining the behavior of a system one often starts by studying the stability of its equilibria, i.e. the points x for which f(x) = 0. Explicit analytic solutions may sometimes be known; however, since even relatively low order polynomials do not have explicit solutions, numerical methods are often unavoidable. Some equilibria may be found numerically by initializing the system at a state within their domain of attraction [32].

Hyperbolic Equilibria and Linearization An equilibrium point x0 is said to be hyperbolic if all the eigenvalues of the Jacobian matrix L = Df(x0) have nonzero real parts. It is nonhyperbolic if at least one eigenvalue has zero real part.

The Grobman-Hartman theorem states that, near a hyperbolic equilibrium point, the dynamical system ẋ = f(x) is locally topologically equivalent to its linearization ẋ = Lx. In this case the stability of the equilibrium point is completely determined by the eigenvalues of the Jacobian and is preserved under small perturbations of the vector field. If all eigenvalues have negative real part, the equilibrium is said to be locally hyperbolically stable, see [15], [26] and [20].

Figure 13: Equilibria types according to trace and determinant. (From [26])

The Planar Case Let us focus on planar systems. The eigenvalues locally determine the geometry of the vector field near equilibrium. One can differentiate between three types of equilibria: nodes, saddles and foci, see figure 13 and the short classification sketch after the following list.

• Node At a node the eigenvalues are real and of the same sign. Hence, they satisfy λ1λ2 = detL > 0. The node is stable when both are negative (trL < 0) and unstable when both are positive (trL > 0).

• Saddle At a saddle the eigenvalues are real and of opposite signs. Hence, they satisfy λ1λ2 = detL < 0. Since one eigenvalue is always positive, saddles are always unstable.


• Focus At a focus the eigenvalues are complex-conjugate, so λ1λ2 = detL > 0. The focus is stable when the eigenvalues have negative real parts (trL < 0) and unstable when they have positive real parts (trL > 0).
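The trace-determinant classification above translates directly into a small routine. This is a minimal sketch: it assumes the equilibrium is hyperbolic, and uses the sign of (trL)² − 4 detL to separate real eigenvalues (nodes) from complex-conjugate pairs (foci).

```python
import numpy as np

def classify_equilibrium(L):
    """Classify a hyperbolic planar equilibrium from its Jacobian L (2x2)."""
    tr, det = np.trace(L), np.linalg.det(L)
    if det < 0:
        return "saddle (always unstable)"
    stability = "stable" if tr < 0 else "unstable"
    # real eigenvalues of the same sign vs. a complex-conjugate pair
    kind = "node" if tr**2 - 4*det >= 0 else "focus"
    return f"{stability} {kind}"

print(classify_equilibrium(np.array([[-1.0, 0.0], [0.5, -1.0]])))  # stable node
print(classify_equilibrium(np.array([[0.1, -1.0], [1.0, 0.1]])))   # unstable focus
```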

Nonhyperbolic Equilibria and Center Manifold Reduction Nonhyperbolic equilibria often correspond to bifurcations in the system. Although behavior can be complicated near such equilibria, it is often possible to simplify analysis by locally reducing the dimension of a system. One then essentially restricts the dynamics to an (attracting) invariant manifold or surface called a center manifold [20]. We will not discuss center manifold reduction any further; instead we will focus on planar systems.

5.2 Local Bifurcations

5.2.1 Saddle-Node Bifurcation

Consider the one dimensional system

ẋ = a + x² . (5.2)

For a < 0 it has two equilibria, a stable one at x = −√(−a) and an unstable one at x = √(−a). It has one equilibrium x = 0 for a = 0, and no equilibria for a > 0. Clearly the system undergoes a qualitative change as the parameter a crosses the critical value a = 0. This system considered at a = 0 is the prototypical example of a system at a saddle-node or fold bifurcation [20].
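A quick sweep over the parameter a makes the fold concrete: below a = 0 the pair of equilibria ±√(−a) exists, and above a = 0 it is gone. The values of a below are arbitrary illustrative choices.

```python
import numpy as np

# ẋ = a + x² : equilibria satisfy x² = -a
for a in [-1.0, -0.25, 0.0, 0.25]:
    if a <= 0:
        x_stable, x_unstable = -np.sqrt(-a), np.sqrt(-a)
        print(f"a = {a:+.2f}: stable x = {x_stable:+.3f}, "
              f"unstable x = {x_unstable:+.3f}")
    else:
        print(f"a = {a:+.2f}: no equilibria (past the fold)")
```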

Such prototypical examples in polynomial form are derived using normal form theory and are called normal forms. We will not discuss normal form theory here; instead we will turn to the situation on the plane.

In two dimensions an analogue of the one dimensional normal form (5.2) may occur on a one dimensional attracting invariant (center) manifold or curve. In this case the local flow looks qualitatively similar to the flow depicted in the top sequence of figure 14. The case where the invariant curve forms a loop is called the saddle-node on a limit cycle bifurcation. This case is depicted in the lower sequence of figure 14. In general a high dimensional system may be at a saddle-node bifurcation when one of the eigenvalues of the Jacobian at equilibrium becomes zero and the equilibrium thus becomes nonhyperbolic.

Consider the planar system

ẋ = f(x, y, µ) (5.3a)
ẏ = −y + g(x, y, µ) , (5.3b)

with

f(0, 0, 0) = g(0, 0, 0) = 0 , and [fx fy; gx gy](0, 0, 0) = [0 0; 0 0] , (5.4)

that is, for µ = 0, the point (x, y) = (0, 0) is an equilibrium and f and g contribute no linear parts. Furthermore, the linear part is in so-called Jordan normal form and we can write:

[ẋ; ẏ] = [0 0; 0 −1][x; y] + [f; g](x, y, 0) . (5.5)

Figure 14: Saddle-node bifurcations (From [26])

A planar vector field whose linearization has one zero eigenvalue, λ1 = 0, and one negative eigenvalue, λ2 < 0, at equilibrium can always be put in the form (5.5) with a shift of origin, a linear change of coordinates and a rescaling of time t, see [15].

Such systems in the form (5.3) are at a saddle-node bifurcation for µ = 0 if10

∂²f/∂x² (0, 0, 0) ≠ 0 , (nondegeneracy condition)

and

∂f/∂µ (0, 0, 0) ≠ 0 . (transversality condition)

The saddle-node bifurcation is a so-called codimension-1 bifurcation because only one condition, the zero eigenvalue condition, involves strict equality. The other two conditions, the transversality condition and the nondegeneracy condition, involve inequalities only.

Example 5.1. Consider the following so-called product system:

ẋ = x² + µ
ẏ = −y ,

10 The proof involves the implicit function theorem, Taylor approximations and a so-called bifurcation function, see [15] for more details.


with an equilibrium point (x, y) = (0, 0) for µ = 0. This system is of the form (5.3) with f(x, y, µ) = x² + µ and g(x, y, µ) = 0. Since fxx(0, 0, 0) ≠ 0 and fµ(0, 0, 0) ≠ 0, the system undergoes a saddle-node bifurcation as the parameter µ crosses the critical value µ = 0. All orbits converge to the x-axis where the essential dynamics take place. The x-axis is an attracting invariant (center) manifold. Restricting the system to the x-axis reduces it to the one dimensional normal form (5.2) above.

5.2.2 Poincare-Andronov-Hopf Bifurcation

Consider the following planar system in polar coordinates around an equilibrium point

ṙ = r(c(b) + ar²) (5.6a)
ϕ̇ = ω(b) + dr² . (5.6b)

The variable r ≥ 0 is the radius and ϕ the angle. The equation for r, which should only be considered for r ≥ 0, is decoupled from the equation for ϕ. Let us focus on this equation first.

The function c(b) with c(0) = 0 and c′(0) ≠ 0 determines the stability of the equilibrium at r = 0. This equilibrium is stable for c < 0, and unstable for c > 0. Clearly the system undergoes a qualitative change as c crosses the critical value c = 0. Furthermore, for a < 0 there is an additional stable equilibrium point at r = √(−c/a) when c > 0. Similarly, for a > 0 there is an additional unstable equilibrium point at r = √(−c/a) when c < 0. In the full system these correspond to stable and unstable limit cycles respectively.

In the full system the function ω(b) with ω(0) ≠ 0 and the parameter d determine the frequency of damped or sustained oscillations around the equilibrium point and its dependence on the amplitude or radius r, see [26] and [32].

This is a normal form for the Poincare-Andronov-Hopf bifurcation or, more briefly, the Hopf bifurcation. We are interested in how the stable equilibrium point at r = 0 for c < 0 loses its stability as c crosses the critical value c = 0 and becomes positive. There are two cases to consider; a small numerical illustration follows the list below.

• The supercritical Hopf bifurcation corresponds to the case a < 0. As c crosses the critical value c = 0, a stable limit cycle appears from a stable equilibrium point and the equilibrium point loses its stability, see the top sequence in figure 15.

• The subcritical Hopf bifurcation corresponds to the case a > 0. As c crosses the critical value c = 0, the stable equilibrium point is approached by an unstable limit cycle, the two coalesce, the equilibrium loses its stability and orbits spiral outwards (in this case to infinity), see the bottom sequence in figure 15.
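The radial equation (5.6a) can be integrated directly to see the supercritical case at work: for c > 0 the orbit settles on the predicted cycle radius √(−c/a). The specific values of c below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

a = -1.0  # a < 0: supercritical case

def r_dot(t, r, c):
    # radial part of the Hopf normal form (5.6a)
    return r * (c + a * r**2)

for c in [-0.5, 0.25, 1.0]:
    sol = solve_ivp(r_dot, (0.0, 200.0), [0.1], args=(c,))
    r_final = sol.y[0, -1]
    predicted = np.sqrt(-c / a) if c > 0 else 0.0
    print(f"c = {c:+.2f}: r(T) = {r_final:.4f}, "
          f"predicted cycle radius = {predicted:.4f}")
```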

A high dimensional system may be at a Hopf bifurcation if the Jacobian L at one of its equilibria has a pair of purely imaginary eigenvalues.


Figure 15: Hopf bifurcations (From [26])

Consider the two dimensional parameterized system

ẋ = c(b)x + ω(b)y + f(x, y, b) (5.7a)
ẏ = −ω(b)x + c(b)y + g(x, y, b) . (5.7b)

Let

c(0) = 0 , ω(0) ≠ 0 , f(0, 0, b) = g(0, 0, b) = 0 . (5.8)

Furthermore, for sufficiently small b, let

[fx fy; gx gy](0, 0, b) = [0 0; 0 0] . (5.9)

Thus, for b near zero f and g contribute no linear parts and (x, y) = (0, 0) is an equilibrium point with a pair of complex eigenvalues c(b) ± iω(b).

For b = 0 the linear part of the system is of the form

[ẋ; ẏ] = [0 ω(0); −ω(0) 0][x; y] + [f; g](x, y, 0) . (5.10)

A planar vector field whose Jacobian has a pair of purely imaginary eigenvalues at some equilibrium point can always be put in the form (5.10) with a shift of origin and a linear change of coordinates.

The system (5.7) is at a Hopf bifurcation if the following holds.

• Nonhyperbolicity. At equilibrium the eigenvalues of the Jacobian L are purely imaginary, ±iω ∈ C, with ω ≠ 0. That is, b = 0 or, equivalently, trL = 0 and detL = ω² > 0.

• Transversality. The real part of the eigenvalues, c(b), must satisfy c′(0) ≠ 0 at b = 0.


• Non-degeneracy. The parameter a in the normal form (5.6) must be nonzero. In terms of f and g this reads:

a = (1/16)[fxxx + fxyy + gxxy + gyyy]
  + (1/(16ω))[fxy(fxx + fyy) − gxy(gxx + gyy) − fxx gxx + fyy gyy] ≠ 0 ,

see [26] and [15] for more details.

Example 5.2. Consider the system:

ẋ = bx − y − x(x² + y²)
ẏ = x + by − y(x² + y²)
ż = −z ,

with an equilibrium point (x, y, z) = (0, 0, 0) for b = 0. The equation for the z coordinate is decoupled from the other equations. All orbits converge to the (x, y)-plane, which is an attracting invariant (center) manifold where the essential dynamics take place. Restricting the system to this plane reduces it to a two dimensional system of the form (5.7) with:

c(b) = b ,
ω(b) = −1 ,
f(x, y, b) = −x(x² + y²) , and
g(x, y, b) = −y(x² + y²) .

At b = 0 its equilibrium point (x, y) = (0, 0) becomes nonhyperbolic: trL = 2b = 0 and detL = ω² = 1 > 0. Since the transversality condition c′(0) = 1 ≠ 0 and the nondegeneracy condition a = −1 ≠ 0 (indeed, in polar coordinates the radial equation reads ṙ = r(b − r²)) are both satisfied, the system undergoes a Hopf bifurcation as b crosses the critical value b = 0. In this case the Hopf bifurcation is supercritical, since a < 0, see [32].
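A direct simulation of the planar part of Example 5.2 confirms this picture: for b > 0 orbits settle onto a stable limit cycle of radius √b, in agreement with ṙ = r(b − r²). The parameter values and tolerances below are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hopf_example(t, s, b):
    x, y = s
    r2 = x**2 + y**2
    return [b*x - y - x*r2, x + b*y - y*r2]

for b in [-0.1, 0.25, 1.0]:
    sol = solve_ivp(hopf_example, (0.0, 300.0), [0.01, 0.0], args=(b,),
                    rtol=1e-8, atol=1e-10)
    r_final = np.hypot(*sol.y[:, -1])
    print(f"b = {b:+.2f}: final radius = {r_final:.4f}, "
          f"expected = {np.sqrt(max(b, 0.0)):.4f}")
```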

5.3 Global Bifurcations and Limit Cycles

Locating limit cycles is an even more complicated problem than finding equilibria. Some limit cycles may be found numerically by initializing the system at a state within their domain of attraction [32].

5.3.1 Fold Bifurcation of Cycles

As a parameter varies, a stable limit cycle may be approached by an unstable one; in this case both may disappear via a fold bifurcation of limit cycles, see figure 16. The limit cycles coalesce and annihilate each other in a manner analogous to the saddle-node or fold bifurcation of equilibria. In fact the so-called Poincare map associated with the limit cycle(s) is at a fold bifurcation.


(We will not discuss Poincare maps here.) The periodic orbit at the point of annihilation is stable from one side and unstable from the other and is called the fold limit cycle. Viewing the sequence in the other direction corresponds to the appearance of a stable and an unstable limit cycle. The fold limit cycle bifurcation by itself cannot account for a transition from equilibrium to periodic behavior.

Figure 16: Fold limit cycle bifurcation (From [26])

5.3.2 Homoclinic Orbit Bifurcation

The two orbits associated with the stable and unstable direction of a saddle are called the stable manifold and the unstable manifold respectively. These can join up to form a homoclinic loop as in the middle of the sequence depicted in figure 17. Typically, however, these manifolds miss each other and one goes inside the other.

When the two form a homoclinic loop the system may be at a homoclinic orbit bifurcation. As a parameter crosses its critical bifurcation value, the stable and unstable manifolds switch their relative inside and outside positions. As a consequence a limit cycle may either appear or disappear. In the figure the sequence from left to right corresponds to the disappearance of a limit cycle. The cycle first becomes an orbit homoclinic to the saddle equilibrium, its period becomes infinite and then the limit cycle disappears altogether. The sequence from right to left corresponds to the appearance of a limit cycle. Note that the saddle persists.

The eigenvalues λ1 and λ2 of a saddle are real and have opposite sign. At a homoclinic orbit bifurcation the sign of the saddle quantity or saddle value, λ1 + λ2, distinguishes between two cases11.

• The case λ1 + λ2 < 0 corresponds to the appearance or disappearance of a stable limit cycle as in the figure and is termed supercritical.

• The case λ1 + λ2 > 0 corresponds to the appearance or disappearance of an unstable limit cycle (not shown) and is called subcritical.

11 The Andronov-Leontovich theorem establishes the conditions for the existence and stability of a limit cycle in the presence of a homoclinic orbit, see [32] for more details.


We restrict ourselves to the supercritical case because it seems to be more common in neuron models. The supercritical homoclinic orbit bifurcation by itself, however, does not explain the transition from resting to periodic behavior.

Figure 17: The supercritical (saddle) homoclinic orbit bifurcation in the plane, from [26]

5.4 Summary of One-Parameter Bifurcations

From the qualitative point of view, our main interest in neuron behavior is the transition from rest to periodic spiking behavior and back. Hence, the order of exposition has been somewhat unorthodox.

5.4.1 One-Parameter Bifurcations of Equilibria

In sum, if only one parameter is varied, then an equilibrium can:

• disappear via a saddle-node bifurcation which can be

– on a limit cycle, or

– off a limit cycle,

• and lose its stability via a Hopf bifurcation which can be

– supercritical, or

– subcritical.

5.4.2 Planar One-Parameter Bifurcations of Periodic Orbits

If we restrict ourselves to one-parameter bifurcations in planar systems, then similarly, a periodic orbit can disappear via

• a fold bifurcation of cycles, and

• a homoclinic orbit bifurcation.

Furthermore, it can shrink to a point via

• a supercritical Hopf bifurcation,

and can be ‘cut’ by

• a saddle-node on a limit cycle bifurcation.


5.5 Examples in Neuron Models

We are now ready to consider some of the excitable properties of neurons in terms of bifurcation theory, such as the transition from a quiescent state to that of a repetitive firing state.

Two of the bifurcations discussed are special in that they both account for the transition from rest to periodic behavior as well as for the transition from periodic behavior back to rest. These are the supercritical Hopf bifurcation and the saddle-node on a limit cycle bifurcation. The other bifurcations only account for one transition and need to be paired with another bifurcation to account for both transitions. In typical planar neuron models at least, a particular pairing seems natural, as we will see.

For instance, the homoclinic orbit bifurcation accounts for the disappearance of a periodic orbit and requires the existence of a saddle. However, it does not account for the return to equilibrium. Thus we have a saddle and we need a node. Hence, this bifurcation could potentially be paired with the saddle and node required for the saddle-node bifurcation, which would then account for the disappearance of an equilibrium point. The bifurcation from rest to periodic behavior would occur at a parameter value different from the value for the bifurcation from periodic behavior back to rest. Such a phenomenon is called hysteresis, see [15].

Similarly the fold bifurcation of cycles accounts for the disappearance of a periodic orbit, but not necessarily for the return to equilibrium. This loss of periodic behavior requires the existence of an unstable periodic orbit. The subcritical Hopf bifurcation provides the necessary unstable limit cycle, the stable node and a way for the node to lose its stability. Hence, these bifurcations too may potentially be paired into one hysteresis phenomenon.

Thus, four types of behavior are of particular interest to us:

• the saddle-node on a limit cycle bifurcation,

• the supercritical Hopf bifurcation,

• a hysteresis behavior combining a subcritical Hopf bifurcation with a fold of cycles bifurcation, and

• a hysteresis behavior combining a saddle-node bifurcation with a homoclinic orbit bifurcation.

These behaviors can indeed be observed in the INa,p + IK-model as we discuss below.

5.5.1 Bifurcations in the INa,p + IK-Model

In the previous chapter we showed that many models can be reduced to planar models with N-shaped voltage nullcline and sigmoidal shaped recovery nullcline. The INa,p + IK-model is a prototypical example of such a model. All the behaviors mentioned above can be observed in its planar reduction:


C V̇ = I − IL − INa,p − IK = I − gL(V − EL) − gNa m∞(V)(V − ENa) − gK n(V − EK)
ṅ = {n∞(V) − n}/τ(V) ,

with leak current IL, instantaneous persistent sodium current INa,p and potassium current IK, and where the steady state activation functions m∞(V) and n∞(V) are of the form:

m∞(V) = 1 / (1 + exp{(V1/2 − V)/k}) ,

and τ(V) = constant for simplicity. Recall from table 1 on page 45 that the nullclines of the reduced INa,p + IK-model do not depend on the time constant τ, see [26] for more details.
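For concreteness, the reduced model can be simulated directly. The sketch below uses the high-threshold parameter set quoted in the caption of figure 18 below and a step current above the saddle-node value I ≈ 4.51; the chosen input I = 10, initial state and step size are assumptions for illustration, not a calibrated reproduction of [26].

```python
import numpy as np
from scipy.integrate import solve_ivp

# parameter set from the figure 18 caption (high-threshold I_Na,p + I_K model)
C, E_L, g_L = 1.0, -80.0, 8.0
g_Na, g_K = 20.0, 10.0
E_Na, E_K = 60.0, -90.0
tau = 0.16

def sigmoid(V, V_half, k):
    return 1.0 / (1.0 + np.exp((V_half - V) / k))

def rhs(t, s, I):
    V, n = s
    m_inf = sigmoid(V, -20.0, 15.0)   # instantaneous activation
    n_inf = sigmoid(V, -25.0, 5.0)
    dV = (I - g_L*(V - E_L) - g_Na*m_inf*(V - E_Na) - g_K*n*(V - E_K)) / C
    dn = (n_inf - n) / tau
    return [dV, dn]

sol = solve_ivp(rhs, (0.0, 100.0), [-65.0, 0.0], args=(10.0,), max_step=0.05)
print("V range during simulation:", sol.y[0].min(), sol.y[0].max())
```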

Saddle-Node/Homoclinic Hysteresis The saddle-node and homoclinic orbit hysteresis pair discussed above can be observed in the model with parameters as in figure 18. The saddle-node bifurcation occurs at the bifurcation value I ≈ 4.51, and for τ(V) = 0.16 the homoclinic orbit bifurcation occurs at I ≈ 3.08.

Figure 18: The geometry of a bistable neuron model in response to a transient (input) current. The resting state disappears via a saddle-node (fold) bifurcation. The limit cycle attractor disappears via a homoclinic orbit bifurcation. Such hysteresis behavior may be observed when the model is near a codimension-2 saddle-node homoclinic orbit bifurcation. It can be observed in the reduced 2D INa,p + IK-model with the following parameters: C = 1, EL = −80 mV, gL = 8, gNa = 20, gK = 10, m∞(V) has V1/2 = −20 and k = 15, n∞(V) has V1/2 = −25 and k = 5, τ(V) < 0.17, ENa = 60 mV, EK = −90 mV. The saddle-node bifurcation occurs at I ≈ 4.51, the codimension-2 saddle-node homoclinic orbit bifurcation at (I, τ) ≈ (4.51, 0.17). For τ(V) = 0.16 the homoclinic orbit bifurcation occurs at I ≈ 3.08, from [26].

Saddle-Node on a Limit Cycle Bifurcation If instead of τ(V) < 0.17 we choose τ(V) > 0.17, the saddle-node bifurcation will occur on a limit cycle, for the same bifurcation value, I ≈ 4.51, and the same nullclines.

Subcritical Hopf/Fold of Cycles Hysteresis The subcritical Hopf and fold of cycles hysteresis pair discussed above can be observed for parameters as in figure 19. The subcritical Hopf bifurcation occurs at the bifurcation value I ≈ 48.75. The fold limit cycle bifurcation occurs at I ≈ 42.18.

Figure 19: The geometry of a bistable neuron model in response to a transient (input) current. The resting state loses its stability to a stable limit cycle via a subcritical Andronov-Hopf bifurcation. The limit cycle attractor disappears via a fold of cycles bifurcation. Such hysteresis behavior may be observed when the model is near a codimension-2 Bautin bifurcation. It can be observed in the INa,p + IK-model with parameters: gL = 1, gNa = gK = 4, m∞(V) has V1/2 = −30 mV and k = 7, n∞(V) has V1/2 = −45 and k = 5, EL = −78 mV, C = 1, τ(V) = 1, ENa = 60 mV, EK = −90 mV. From [26].

Supercritical Hopf Bifurcation The supercritical Hopf bifurcation can be observed at the bifurcation value I ≈ 14.66 with parameters C = 1, EL = −78 mV, gL = 8, gNa = 20, gK = 10, m∞(V) with V1/2 = −20 and k = 15, n∞(V) with V1/2 = −45 and k = 5, τ(V) = 1, ENa = 60 mV and EK = −90 mV.

5.5.2 Bifurcations in the Izhikevich Simple Model

Consider the Izhikevich simple model:

v̇ = v² − u + I        if v ≥ 1 , then (5.11)
u̇ = a(bv − u)         v ← c , u ← u + d . (5.12)

For a > 0 the ‘sub-reset’ part of the system undergoes

• a saddle-node bifurcation when b² = 4I,

• an Andronov-Hopf bifurcation when a < b and a² − 2ab + 4I = 0, and

• a codimension-2 Bogdanov-Takens bifurcation when a = b = 2√I.

The Andronov-Hopf bifurcation is always subcritical, see [26] and [45] for more details. Thus, a possibly important reversible codimension-1 bifurcation, the supercritical Hopf bifurcation for switching from rest to spiking and back, is missing in the Izhikevich model12.

Including the reset adds new possibilities. For b = d = u(0) = 0 all orbits remain on the attracting invariant v-axis and the model reverts to the quadratic integrate-and-fire model

v̇ = v² + I        if v ≥ 1 , then v ← c . (5.13)

This model is considered to be the topological normal form for the codimension-2 saddle-node homoclinic orbit bifurcation, see figure 20. It can also display the saddle-node on a limit cycle bifurcation and the homoclinic orbit bifurcation.

Figure 20: A bifurcation diagram of the quadratic integrate-and-fire model v̇ = v² + b with reset: v ← vreset if v ≥ 1. This model is considered to be the topological normal form of the codimension-2 saddle-node homoclinic bifurcation, from [26].

12 In [45] a very similar model, called the quartic model, is introduced that can display self-sustained oscillations corresponding to the supercritical Hopf bifurcation.
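A reset model like (5.13) is straightforward to simulate with simple Euler stepping, since the reset is just a state overwrite. The sketch below is a minimal illustration; the reset value c, step size and current values are arbitrary choices.

```python
import numpy as np

def qif_spike_times(I, c=-0.5, v0=-0.5, dt=1e-4, T=20.0):
    """Euler simulation of v' = v**2 + I with reset v <- c when v >= 1."""
    v, t, spikes = v0, 0.0, []
    while t < T:
        v += dt * (v**2 + I)
        t += dt
        if v >= 1.0:          # threshold crossed: register a spike and reset
            spikes.append(t)
            v = c
    return spikes

# below the saddle-node (I < 0): rest; above it (I > 0): periodic firing
for I in [-0.25, 0.25]:
    s = qif_spike_times(I)
    print(f"I = {I:+.2f}: {len(s)} spikes in 20 time units")
```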


Together with the reset, it may well be that the Izhikevich simple model can display an analogue of the hysteresis behavior that combines the subcritical Hopf bifurcation with the fold of cycles bifurcation.

5.6 Concluding Remarks

As mentioned, some qualitative changes, the codimension-1 bifurcations, are more likely to occur than others. For equilibria these bifurcations remain the most likely ones in high dimensional systems. For periodic orbits, however, adding dimensions opens the possibility of other one-parameter bifurcations. Thus, when reducing the dimension of a neuron model, there are no guarantees that essential behavior has not been lost in the process. Nevertheless, before delving into more and more exotic bifurcations one may first want to focus on other important issues such as the spatial extent of a neuron or models of synaptic transmission.

Bifurcation theory and normal form theory are useful when we want to capture a large repertoire of behaviors in simple abstract models. The reason for trying to capture such a large repertoire, however, is that we do not know what is truly important. In an attempt to get to a better understanding of neuronal function, we propose to use inverse systems in the analysis of synaptic transmission. In the next section we introduce the mathematical tools related to systems inversion.


6 Nonlinear Systems Analysis

In fulfillment of the requirements for the course: Nonlinear Systems Analysis. Lecturer: Prof.dr. A. Doelman, Mathematical Institute, Leiden University.

In this section we will introduce some mathematical tools related to inverse systems. Recall from section 1 that, in an attempt to better understand neuronal function, we intend to use systems inversion in our analysis of synaptic transmission. How exactly will be explained in chapter 7.

It will come as no surprise that inverse systems play an important role in nonlinear control, where systems inversion is used to linearize systems by means of a state feedback. We will not discuss feedback linearization here, but we will introduce many of the concepts that are usually discussed in that context. For a nice overview of the material the reader is referred to chapter 3 in [16]. For a more in-depth treatment the reader is referred to the classic text [23] or, alternatively, to [39].

6.1 Inversion and Normal Form

Consider nonlinear single-input single-output systems of the form

ẋ = f(x) + g(x)u (6.1a)
y = h(x) , (6.1b)

with x ∈ Rn the state variable, u ∈ R the time-varying input, y ∈ R the output, f and g sufficiently smooth vector fields on Rn and h : Rn → R the readout or output map. Note that the system is linear with respect to the input u; such systems are said to be control affine [16]. We are interested in finding the (left) inverse for such systems, that is, given the output y, we want to reconstruct the input u that caused it.

6.1.1 A One-Dimensional State Variable

Suppose n = 1, h is invertible and differentiable and g(x) ≠ 0 for all x. In this case, an inverse for (6.1) can easily be obtained as follows. First, we rearrange equation (6.1a) to express u in terms of x

u = (ẋ − f(x)) / g(x) .

Second, we express x in terms of y

x = h⁻¹(y) , (6.2)

and finally, we work out the expression for ẋ in terms of y

ẋ = (d/dt) h⁻¹(y) = ẏ / h′(h⁻¹(y)) ,

resulting in

u = (ẏ − h′(x)f(x)) / (h′(x)g(x)) , with x = h⁻¹(y) .
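The recipe can be exercised on a toy scalar system. In the sketch below the choices f(x) = −x, g(x) = 1 and h(x) = x³ + x (so h is invertible and h′ > 0) are hypothetical; the output derivative ẏ is approximated by finite differences, which already hints at the measurement issue raised in the remark below.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

f = lambda x: -x           # hypothetical drift
g = lambda x: 1.0          # hypothetical input gain
h = lambda x: x**3 + x     # invertible readout, h'(x) = 3x**2 + 1 > 0
u_true = lambda t: np.sin(t)

# forward simulation: x' = f(x) + g(x) u(t), y = h(x)
sol = solve_ivp(lambda t, x: f(x) + g(x)*u_true(t), (0, 10), [0.0],
                dense_output=True, rtol=1e-9)
t = np.linspace(0.05, 9.95, 500)
y = h(sol.sol(t)[0])

# inversion: x = h^{-1}(y) by root finding, then u = (y' - h'(x)f(x)) / (h'(x)g(x))
x_rec = np.array([brentq(lambda x, yy=yy: h(x) - yy, -10.0, 10.0) for yy in y])
y_dot = np.gradient(y, t)                  # finite-difference derivative of y
h_prime = 3*x_rec**2 + 1
u_rec = (y_dot - h_prime*f(x_rec)) / (h_prime*g(x_rec))

print("max reconstruction error:", np.max(np.abs(u_rec - u_true(t))))
```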

Remark 6.1. Note that the inverse system requires knowledge of the derivative of the original output or a way to measure the derivative from the output. This poses a problem when one demands the inverse to have a proper ’physical’ state-space realization. We will deal with this problem later.

The idea above can be extended to more general systems of a particular form, which we will discuss next.

6.1.2 Byrnes-Isidori Normal Form

A single-input single-output (SISO) system is said to be in Byrnes-Isidori normal form if it is of the form

ż1 = z2
ż2 = z3
...
żr−1 = zr
żr = b(z) + a(z)u
żr+1 = qr+1(z)
...
żn = qn(z)
y = z1 ,     (6.3)

where z = (z1, . . . , zn) and a(z) ≠ 0. Such a system is said to have relative degree r, see [23] and [39]. We will return to the notion of relative degree shortly. It will sometimes be convenient to use the following notation in normal form:

ξ = (z1, . . . , zr)ᵀ , together with η = (zr+1, . . . , zn)ᵀ . (6.4)

Figure 21 shows a block diagram of the normal form using this notation.


Figure 21: Block diagram of the normal form, after [23].

6.1.3 Normal Form Inversion

For systems in normal form (6.3) an inverse is given by:

ξ̇1 = ξ2
ξ̇2 = ξ3
...
ξ̇r−1 = ξr
ξ̇r = y(r)
η̇ = q(ξ, η)
u = [y(r) − b(ξ, η)] / a(ξ, η)     (6.5)

where now u is the output and z = (ξ, η) is the n-dimensional state variable. The system can be viewed either as taking the rth order temporal derivative y(r) of y as input or as a system utilizing an rth order differentiator or differential operator13. However, note that we have

ξ = (y, ẏ, . . . , y(r−1))ᵀ . (6.6)

Thus, since y is given, the system can be given more compactly as

η̇ = q(y, ẏ, . . . , y(r−1), η) (6.7a)
u = [y(r) − b(y, ẏ, . . . , y(r−1), η)] / a(y, ẏ, . . . , y(r−1), η) , (6.7b)


Figure 22: Block diagram of the inverse system, after [39].

or, using the shorthand (6.6), as

η̇ = q(ξ, η) (6.8a)
u = [y(r) − b(ξ, η)] / a(ξ, η) , (6.8b)

where the state z is now reduced to the (n − r)-dimensional state variable η.

The output u(t) of the inverse system depends on the solution η(t) of the forced subsystem (6.8a). In particular, it depends on the initial condition η(0). Hence, we desire these so-called internal dynamics14, the η-dynamics for given ξ, to be exponentially stable, so that solutions with different initial conditions all converge to the same solution. A block diagram of the inverse system (6.8) is depicted in figure 22.

13 In this form the inverse is called a Hirschorn inverse [16].

14 In the special case when the forcing satisfies ξ(t) ≡ 0 these internal η-dynamics are called zero dynamics.

Clearly the normal form is a particularly convenient form when one is interested in the inverse of a system. A question then arises. Can we put more general systems of the form (6.1) into normal form? The answer is: under certain conditions, yes, as we will see shortly.

Note that (6.6) already suggests a way to convert to normal form. To deal with the repeated derivatives in (6.6), it will be convenient to introduce some notation first.

6.2 Conversion to Normal Form

When converting to normal form, it is customary to make use of Lie derivative notation.

6.2.1 Lie Derivative Notation

By the chain rule of differentiation, the temporal derivative of some function h along the trajectories x(t) of a system ẋ = f(x) is given by


(d/dt) h(x(t)) = (∂h/∂x) f(x) = Σi (∂h/∂xi) fi(x) , (6.9)

where

∂h/∂x = [∂h/∂x1  ∂h/∂x2  · · ·  ∂h/∂xn] (6.10)

denotes the Jacobian matrix of the scalar function h. Thus, the temporal derivative is equal to the rate of change in the direction of f(x). To emphasize this dependence on the vector field, we will use the Lie derivative notation

Lf h(x) := (∂h/∂x) f(x) . (6.11)

We say that Lf h is the derivative of h along the vector field f. Thus, first taking the derivative of h along f and then along the vector field g gives

Lg Lf h(x) = (∂(Lf h)/∂x) g(x) . (6.12)

Furthermore, when dealing with repeated derivatives the Lie derivative allows for a convenient notation. If h is differentiated k times along the vector field f one writes:

Lf^k h(x) := (∂(Lf^{k−1} h)/∂x) f(x) , (6.13)

where Lf^0 h(x) := h(x), see [23].
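Repeated Lie derivatives are mechanical to compute, which makes them a natural target for a computer algebra system. The sketch below defines Lf h symbolically with sympy; the particular f, g and h are taken from Example 6.1 below, so the printed Lg h should be p − x1.

```python
import sympy as sp

x1, x2, a, b, c, p = sp.symbols('x1 x2 a b c p')
x = sp.Matrix([x1, x2])

# modified FitzHugh model with conductance input (see Example 6.1 below)
f = sp.Matrix([-x1*(x1 - a)*(x1 - 1) - x2, b*x1 - c*x2])
g = sp.Matrix([p - x1, 0])
h = x1

def lie(vec, fun):
    """Lie derivative of the scalar fun along the vector field vec."""
    return (sp.Matrix([fun]).jacobian(x) * vec)[0, 0]

print("L_g h     =", sp.simplify(lie(g, h)))            # p - x1
print("L_f h     =", sp.simplify(lie(f, h)))
print("L_g L_f h =", sp.simplify(lie(g, lie(f, h))))
```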

6.2.2 Relative Degree

We already noted that (6.6) suggests a way to convert to normal form. Thus, to transform a system of the form (6.1) into Byrnes-Isidori normal form (6.3), we will look for a state transformation z = φ(x) such that

φ1(x) = y
φ2(x) = ẏ
...
φr(x) = y(r−1) .     (6.14)

The chain rule of differentiation allows us to express these derivatives in terms of the original state variable x until we arrive at an rth order derivative of y that depends explicitly on the input u. We do this locally around some point x0 = x(t0) along the trajectory; we will deal with the remaining components φr+1(x), . . . , φn(x) later.

In order to express these derivatives of the output y in terms of the readout map h and the vector fields f and g, we will make use of the Lie derivative notation introduced earlier.


Since y = h(x) and ẋ = f(x) + g(x)u, the first derivative is quickly found

ẏ = (∂h/∂x)[f(x) + g(x)u] = Lf h(x) + Lg h(x) u .

Suppose Lg h(x) = 0, then ẏ = Lf h(x) does not explicitly depend on u. We can repeat this process of differentiation again and again

ÿ = (∂Lf h(x)/∂x)[f(x) + g(x)u] = Lf² h(x) + Lg Lf h(x) u ,  with Lg Lf h(x) = 0 ,
...
y(r−1) = (∂Lf^{r−2} h(x)/∂x)[f(x) + g(x)u] = Lf^{r−1} h(x) + Lg Lf^{r−2} h(x) u ,  with Lg Lf^{r−2} h(x) = 0 ,

until we arrive at some rth order derivative

y(r) = (∂Lf^{r−1} h(x)/∂x)[f(x) + g(x)u] = Lf^r h(x) + Lg Lf^{r−1} h(x) u

of y that does explicitly depend on the input u, that is, for which we have Lg Lf^{r−1} h(x) ≠ 0. We are now ready to state the notion of relative degree.

A single-input single-output system of the form (6.1) is said to have relative degree r at a point x0 if

(i) Lg Lf^k h(x) ≡ 0 for all x in a neighborhood of x0 and all k < r − 1 ,

(ii) Lg Lf^{r−1} h(x0) ≠ 0 .

The relative degree can be interpreted as the number of times one needs to differentiate the output y(t) at time t0 in order for the input u to appear explicitly. A very important property of the relative degree is that it is invariant under state transformations, see [23] and [39].

If Lg Lf^{r−1} h(x0) ≠ 0, then this holds for an open neighborhood of x0. However, some points x0 may lie on the level set:

S = {x | Lg Lf^{r−1} h(x) = 0} ,

while Lg Lf^{r−1} h(x) ≠ 0 for x near x0. At these points x0 the relative degree is not defined, see [23], [39] and examples 6.1 and 6.2.

Note that, mathematically speaking, it is generically unlikely that Lg Lf^k h(x) ≡ 0 on an open neighborhood. However, in practice this occurs naturally when systems are connected in series. For example, in kinetic schemes of chain reactions when the input substance is at one end of the chain and the output substance at the other. Or when a cylindrical section of dendrite or axon is divided into n identical compartments of relative degree 1, in which case the relative degree will be n.


6.2.3 State Transformation

We now return to the desired local coordinate transformation z = φ(x) suggested in (6.14). For a system of relative degree r ≤ n, we can now express the first r components, given by the output y and its first r − 1 derivatives, as

φ1(x) = h(x)
φ2(x) = Lf h(x)
...
φr(x) = Lf^{r−1} h(x) .     (6.15)

What remains is to choose the other n − r components φr+1(x), . . . , φn(x). To this end, note that the evolution of the (n − r)-dimensional state component η in the normal form (6.3) does not explicitly depend on the input u. Therefore, we choose the remaining n − r components φi such that

Lg φi(x) = (∂φi/∂x) g(x) = 0 . (6.16)

For systems of well-defined relative degree r one can show that this can always be done in such a way that the mapping φ(x) has a nonsingular Jacobi matrix at x0, see [23] for a proof. Hence, the resulting coordinate transformation

z = [ξ; η] = φ(x) = (h(x), Lf h(x), . . . , Lf^{r−1} h(x), φr+1(x), . . . , φn(x))ᵀ (6.17)

transforms a system of the form (6.1) into the Byrnes-Isidori normal form (6.3).

Remark 6.2. Note that even though a solution to the system of n − r partial differential equations (6.16) is guaranteed to exist, it may still be difficult to find such a solution in explicit form.

Example 6.1 (FitzHugh Model with Conductance Input). Consider the following modified FitzHugh model:

ẋ1 = −x1(x1 − a)(x1 − 1) − x2 + u(p − x1)        (the last term being −Isyn)
ẋ2 = bx1 − cx2
y = x1 ,

where we take the output y = x1 to be the membrane potential, see (4.10) on page 48 for the original model. The usual injected input current has been replaced by a synaptic current Isyn with driving force p − x1 and reversal potential p. Hence, the input u is a conductance input.


In terms of the general form (6.1) we have

ẋ = f(x) + g(x)u ,  with  f(x) = [−x1(x1 − a)(x1 − 1) − x2; bx1 − cx2] ,  g(x) = [p − x1; 0] ,
y = h(x) = x1 ,

and therefore:

∂h/∂x = [1 0] ,
Lf h(x) = (∂h/∂x) f(x) = −x1(x1 − a)(x1 − 1) − x2 ,
Lg h(x) = (∂h/∂x) g(x) = p − x1 .

Note that Lg h(x) ≠ 0 if x1 ≠ p, so the system has relative degree 1 except at those points where y = x1 = p, where it is undefined. The system is already in the normal form (6.3). The inverse, in the compact form (6.7), is given by:

η̇ = by − cη
u = {ẏ + y(y − a)(y − 1) + η} / (p − y) ,

and has exponentially stable internal η-dynamics for c > 0.

At this point we have not yet dealt with the problematic input derivative ẏ, see remark 6.1. We will discuss this in more depth in section 6.3.
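Example 6.1 can be checked numerically along the lines of the one-dimensional case: simulate the forward model with a known conductance input, then run the inverse, with a finite-difference approximation standing in for the differentiator. All parameter values below are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, p = 0.1, 0.01, 0.02, 1.2            # hypothetical parameters
u_true = lambda t: 0.05*(1 + np.sin(0.2*t))  # known conductance input

def forward(t, s):
    x1, x2 = s
    return [-x1*(x1 - a)*(x1 - 1) - x2 + u_true(t)*(p - x1), b*x1 - c*x2]

sol = solve_ivp(forward, (0, 200), [0.0, 0.0], dense_output=True, rtol=1e-9)
t = np.linspace(0, 200, 20000)
y = sol.sol(t)[0]
y_dot = np.gradient(y, t)                    # stand-in for the differentiator

# inverse system: eta' = b y - c eta, u = (y' + y(y-a)(y-1) + eta)/(p - y)
eta_sol = solve_ivp(lambda tt, e: [b*np.interp(tt, t, y) - c*e[0]],
                    (0, 200), [0.0], dense_output=True, rtol=1e-9)
eta = eta_sol.sol(t)[0]
u_rec = (y_dot + y*(y - a)*(y - 1) + eta) / (p - y)

print("max |u_rec - u_true|:", np.max(np.abs(u_rec - u_true(t))))
```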

Example 6.2 (FitzHugh Model with Transmitter Input). Let us consider the same model as in the previous example, except with a transmitter-dependent conductance. Furthermore, let us take a = 0 for simplicity. The model is given by:

ẋ1 = −x1²(x1 − 1) − x2 + x3(p − x1)        (the last term being −Isyn)
ẋ2 = bx1 − cx2
ẋ3 = qu(1 − x3) − rx3
y = x1 ,

where the synaptic current Isyn = −x3(p − x1) now has a conductance 0 ≤ x3 < 1 that depends on the transmitter input u. This conductance evolves according to (3.23) on page 37, with q > 0 and r > 0.


In terms of the general form (6.1) we have

ẋ = f(x) + g(x)u ,  with
f(x) = [−x1²(x1 − 1) − x2 + x3(p − x1); bx1 − cx2; −rx3] ,
g(x) = [0; 0; q(1 − x3)] ,
y = h(x) = x1 ,

and therefore

and therefore

∂h

∂x=[

1 0 0], (6.18)

Lgh(x) =∂h

∂xg(x) = 0, (6.19)

Lfh(x) =∂h

∂xf(x) = −x2

1(x1 − 1)− x2 + x3(p− x1) , (6.20)

∂Lfh(x)

∂x=[

(−3x21 + 2x1 − x3) −1 (p− x1)

], (6.21)

LgLfh(x) =∂Lfh(x)

∂xg(x) = q(p− x1)(1− x3) . (6.22)

Note that LgLfh(x) 6= 0 as long as x3 6= 1 and x1 6= p, so the system hasrelative degree 2 except at those points where x3 = 1 or y = x1 = p where it isundefined. Furthermore, we have

L2fh(x) =

∂Lfh(x)

∂xf(x)

= (−3x21 + 2x1 − x3){−x2

1(x1 − 1)− x2 + x3(p− x1)}− (bx1 − cx2)− rx3(p− x1) . (6.23)

To find the normal form, we take

z1 = φ1(x) = y = h(x) = x1 ,
z2 = φ2(x) = ẏ = Lf h(x) = −x1²(x1 − 1) − x2 + x3(p − x1) ,

and look for a function φ3(x) that satisfies the PDE:

Lg φ3(x) = (∂φ3/∂x) g(x) = (∂φ3/∂x3) q(1 − x3) = 0 .

This condition is satisfied by any function φ3(x1, x2) that does not depend on x3. In this case the Jacobian matrix of the transformation z = Φ(x) is given by

∂Φ/∂x = [1 0 0; (−3x1² + 2x1 − x3) −1 (p − x1); ∂φ3/∂x1 ∂φ3/∂x2 0] ,

with det Φ′ = −(∂φ3/∂x2)(p − x1). For x1 ≠ p the matrix is nonsingular if we choose:

z3 = φ3(x) = x2 . (6.24)

The inverse transformation x = Φ⁻¹(z) is then given by

x1 = z1 (6.25a)
x2 = z3 (6.25b)
x3 = {z2 + z1²(z1 − 1) + z3} / (p − z1) . (6.25c)

For x3 ≠ 1 and y = x1 ≠ p we can now write the system in its canonical or normal form:

ż1 = z2 (= ẏ)
ż2 = β(z) + α(z)u (= ÿ)
ż3 = bz1 − cz3
y = z1 ,

where α(z) = Lg Lf h(Φ⁻¹(z)) and β(z) = Lf² h(Φ⁻¹(z)) are now given by (6.22) and (6.23) together with (6.25). Hence, according to (6.7), the inverse system is given by:

ż3 = by − cz3
u = [ÿ − β(y, ẏ, z3)] / α(y, ẏ, z3) ,

and has exponentially stable internal η-dynamics for c > 0. Note the input derivatives ẏ and ÿ. We will discuss the realizability of systems with input derivatives in section 6.3.

6.2.4 More General Nonlinear Systems

In [42] some of the notions discussed here are generalized to (nonlinear) single-input single-output systems of the form:

ẋ = X(x, u) (6.26a)
y = h(x) . (6.26b)

We will only need the generalized notion of relative degree. A system of the form (6.26) is said to have relative degree r at a point x0 if

(i) L_{∂X/∂u} L_X^k h(x) = 0 for all x in a neighborhood of x0 and all k < r − 1 ,

(ii) L_{∂X/∂u} L_X^{r−1} h(x0) ≠ 0 .

For X(x, u) = f(x) + g(x)u we have ∂X/∂u = g(x) and this definition coincides with the one given earlier. The relative degree r is the number of times one has to differentiate the output y(t) in order for the obtained expression to explicitly depend on the input u [42].


6.3 Nonlinear Realization Theory

When looking for a state-space realization of the inverse system, we are faced with a problem, as already mentioned in remark 6.1. The inverse system depends on derivatives of its input. The problem is that these temporal derivatives are not directly available. At any one time t we are given only y(t). In order to calculate a temporal derivative:

ẏ(t) = lim_{h→0} [y(t) − y(t − h)] / h , (6.27)

we need the values of y at two times that are infinitesimally close together. Is there a way to accurately measure these derivatives?

We could try to approximate the derivative ẏ by introducing a delay or memory state to make y(t − h) available; however, then, for small 0 < h << 1, any error in measurements of y will be amplified by a large value 1/h when one tries to approximate ẏ. Hence, this solution will often not be satisfactory.

When can a system in which input derivatives appear explicitly be given an equivalent classical state-space representation in which input derivatives are no longer present? This is one of the topics of realization theory.

6.3.1 The Realization Problem

The starting point of the realization problem is a system in the form of an input-output differential equation:

y(n) = ϕ(y, ẏ, . . . , y(n−1), u, u̇, . . . , u(s)) , (6.28)

with s ≤ n, called the external differential form of the system. Here y ∈ R is the output, u ∈ R is the input and ϕ depends explicitly on u(s), that is, ∂ϕ/∂u(s) ≢ 0.

The challenge is to find, if possible, a state variable x ∈ Rn such that the system can be reexpressed in the form:

ẋ = f(x, u) (6.29a)
y = h(x, u) , (6.29b)

called the classical state-space realization of (6.28), see [30] and [16]. Note that input derivatives do not appear in the classical state-space realization.

Although the realization problem starts with systems in external differential form, systems do not always come in this form. It is instructive to consider how this form can be obtained from systems in state-space form and how this relates to relative degree.

6.3.2 Brief Intermezzo: The State Elimination Problem

The opposite problem, obtaining the external differential form (6.28) of a system in state-space form (6.29) by eliminating the state variable x, is the subject of elimination theory.


As in the case of conversion to Byrnes-Isidori normal form, one considers a state transformation constructed from repeated derivatives of the output. In this case, however, one continues past the rth order derivative until one arrives at the nth order derivative, where n is the dimension of the system and r is the relative degree. Unlike (6.17), this state transformation is thus not only expressed in terms of the original state x, but also in terms of the input u and its derivatives. This results in the normal form:

ż1 = z2
...
żn−1 = zn
żn = ϕ(z, u, u̇, . . . , u(n−r))
y = z1 ,     (6.30)

called the Fliess canonical form or observability canonical form. This gives us the external differential form

y(n) = ϕ(y, ẏ, . . . , y(n−1), u, u̇, . . . , u(n−r)) , (6.31)

corresponding to (6.28) with

s = n − r , (6.32)

see [16] and the references therein.

The relative degree r is always non-negative15, so for realizable systems we have that s = n − r ≤ n. This clarifies why, in the realization problem above, one only considers external differential forms in which the order s of temporal derivatives of the input u does not exceed the order n of derivatives of the output y.

In inverse systems the roles of input and output are reversed. Hence, an inverse system in external differential form:

u(s) = ϕ(u, u̇, . . . , u(s−1), y, ẏ, . . . , y(n)) , (6.33)

can only be realizable if n ≤ s. This suggests that, for systems in external differential form, one can only hope to find a realization of both the original system and its inverse if s = n. Of course this can only be the case when r = 0, that is, for systems with ‘relative degree’ zero16, where the output y = h(x, u) of the desired realization explicitly depends on the input u. In this case we locally have ∂h/∂u ≠ 0.

If the output is locally invertible in u, we may try to obtain the inverse system from the state-space form by rearranging the output equation u = h_x⁻¹(y) and plugging it into the state equation.

15 Recall that the relative degree is the number of times one needs to differentiate the output for the input to appear explicitly.

16 We did not define the notion of relative degree for systems with outputs that explicitly depend on the input.


Example 6.3. Consider the system:

ẋ = −x³ + x + u
y = x + u .

The external differential form can be obtained by differentiation of the output:

ẏ = ẋ + u̇
  = −x³ + x + u + u̇
  = −(y − u)³ + (y − u) + u + u̇ .

The output y explicitly depends on the input u and in the external form the order of derivatives of the output indeed equals the order of derivatives of the input. The state-space realization of the inverse system:

ẋ = −x³ + y
u = y − x ,

can be obtained from the state-space form by rearranging the output equation u = h_x⁻¹(y) = y − x and plugging it into the state equation. (This inverse system is asymptotically stable.)
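Example 6.3 can be verified directly: feed a known input through the original system, pass the output through the inverse realization started from a deliberately wrong initial state, and watch the reconstructed input converge to the true one. A minimal sketch, with an arbitrary test input:

```python
import numpy as np
from scipy.integrate import solve_ivp

u_true = lambda t: np.sin(t)

# original system: x' = -x**3 + x + u, y = x + u
fwd = solve_ivp(lambda t, x: -x**3 + x + u_true(t), (0, 30), [0.5],
                dense_output=True, rtol=1e-9)
t = np.linspace(0, 30, 1000)
y = fwd.sol(t)[0] + u_true(t)

# inverse realization: x' = -x**3 + y, u = y - x, with a wrong initial state
inv = solve_ivp(lambda tt, x: -x**3 + np.interp(tt, t, y), (0, 30), [-1.0],
                dense_output=True, rtol=1e-9)
u_rec = y - inv.sol(t)[0]

err = np.abs(u_rec - u_true(t))
print("error at t=1:", err[t > 1][0], " error at t=30:", err[-1])
```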

Let us now return to the realization problem.

6.3.3 Extended State Space

Using so-called generalized coordinates (z, w), where z = (y, ẏ, . . . , y(n−1)) and w = (u, u̇, . . . , u(s)), one can express the system (6.28) in generalized or extended state space as follows:

ż1 = z2
...
żn−1 = zn
żn = ϕ(z, w)
ẇ1 = w2
...
ẇs = ws+1
ẇs+1 = v     (6.34)

where the (s+1)th order derivative v = u(s+1) is treated as the input. Note the similarity with the observability canonical form (6.30). Thus, given an input-output differential equation (6.28), one can immediately write down the extended state-space realization.

To find a classical state-space realization (6.29), i.e. a realization that does not involve derivatives of the input, one tries to find a generalized state transformation (x, w) = (ψ(z, w), w) such that in the new coordinates the evolution of


the first n coordinates no longer explicitly depends on derivatives of the input. The remaining s + 1 coordinates w corresponding to the input and its derivatives are left unchanged. It is not always possible to find such a transformation, see [30] and [5].

Several different realizability conditions for the existence of such a transformation have been proposed. In [30] these are reviewed and their equivalence is established. In section A.1 of the appendix we briefly review the constructive approach independently proposed by Glad (1988) and by Crouch and Lamnabhi-Lagarrigue (1988). In chapter 7 we will use this constructive method together with systems inversion to obtain biologically motivated realizations of functional neuronal connections.

6.4 Concluding Remarks

In this section we have introduced the Byrnes-Isidori normal form and its use in the context of nonlinear systems inversion. As we will see, most neuron models are already in this convenient normal form. Models with well-defined relative degree that are not in normal form can be brought into this form if one can find a solution to the system of PDEs (6.16).

When one is interested in state-space realizations, however, the appearance of the input derivatives in the resulting inverse system poses a problem. Approximate realizations of inverse systems and biologically motivated realizations of hypothetical synapse models are the topic of the next section.


7 A New Analysis of Synaptic Transmission

So far we have discussed physiology, modeling, model reductions, bifurcations, and, in the previous chapter, systems inversion. In this chapter we will finally be able to show the benefits of considering inverse neuron models in the context of synaptic transmission and in moving to a higher level of abstraction.

Recall that our ultimate goal is to understand the principles of adaptive behavior in animals. Thus we are keen to learn how the behavior of an animal arises from the behavior of its neurons. However, even if we restrict ourselves to the reduced neuron models, the level of detail still seems inappropriate when one considers the outward behavior of an animal. For this, it seems, we need to move to a higher level of abstraction. In fact we will take simplification to be a requirement for understanding animal behavior. Thus, to us, the question is not whether we should simplify or not, but rather: How?

Before we introduce the method we propose, let us briefly consider the type of level we aim for.

7.1 The Network Level of Abstraction

In the appendix some popular network-level abstractions are summarized: the spiking-type networks, and the artificial neural networks such as the multilayer perceptron and the Hopfield-type recurrent networks. Considering the fact that many challenges are shared by robots and animals alike, the level of abstraction shared among artificial neural networks used in nonlinear control seems the most appropriate.

In [22] a nice survey of artificial neural networks in the language and notation of systems theory can be found. It discusses nonlinear systems modeling, identification and adaptive control using neural networks and their learning rules. In [8] it is proposed that the main learning paradigms for such tunable systems, supervised, reinforcement and unsupervised learning, may potentially be associated with the cerebellum, the basal ganglia and the cerebral cortex respectively. Some theoretical brain architectures are also considered.

Many of the better-known neural networks fall within a standard unified model of neural networks consisting of computational units of the form depicted in figure 23. Such a unit consists of a weighted summer, a linear single-input single-output dynamical system and a nonlinear output map, see [22]. There is one important observation we should make: these units allow for the output of other units to serve as input!

The relative success of the artificial neural networks such as the Hopfield-type networks and the perceptron lies in their conceptual simplicity. These abstract networks are amenable to analysis and hence facilitate the derivation of learning rules.

In contrast, the neuron models we have discussed so far, even the reduced ones, seem hopelessly complex when it comes to incorporating these models into networks with learning capabilities. Furthermore, there is still something missing. At a higher network level of abstraction one not only needs to know how output is generated from inputs, but also how this output can serve as input to other cells. So how do we get to such a level?

Figure 23: A general artificial neuron. External inputs ui and outputs yj from other neurons are weighted and summed together with a constant ρi. The result vi is passed to a single-input single-output linear system L. The filtered signal xi serves as the input for the final static nonlinear map ϕ, resulting in the output yi. After [22].
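The standard unit of figure 23 is straightforward to express in code. The sketch below uses a first-order low-pass filter as the linear SISO block L and tanh as the static nonlinearity ϕ; both are common but hypothetical choices here, as are all numeric values. Note how the output of one unit serves directly as the input of another.

```python
import numpy as np

class GeneralUnit:
    """Weighted summer -> linear SISO filter L -> static nonlinearity phi."""
    def __init__(self, weights, rho=0.0, tau=1.0):
        self.w = np.asarray(weights, dtype=float)
        self.rho = rho        # constant bias term
        self.tau = tau        # time constant of the first-order filter
        self.x = 0.0          # filter state

    def step(self, inputs, dt):
        v = self.w @ np.asarray(inputs) + self.rho   # weighted sum
        self.x += dt * (v - self.x) / self.tau       # L: tau x' = v - x
        return np.tanh(self.x)                       # phi: static output map

# outputs of units can serve as inputs to other units
u1 = GeneralUnit([0.8, -0.3], rho=0.1)
u2 = GeneralUnit([1.5])
y1 = 0.0
for k in range(1000):
    t = k * 0.01
    y1 = u1.step([np.sin(t), np.cos(t)], dt=0.01)
    y2 = u2.step([y1], dt=0.01)
print("final outputs:", y1, y2)
```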

7.2 The Main Idea

We have models describing how a presynaptic conductance can lead to voltage changes; we even discussed models that convert a transmitter concentration into a postsynaptic conductance. However, we have not yet given a model for converting a presynaptic potential into a neurotransmitter concentration or a postsynaptic conductance. Thus, we have not yet completed the signal path needed to ‘wire up’ these models into networks.

What we need are complete input-output descriptions from voltage to voltage, from transmitter to transmitter or from conductance to conductance. Let us call such a complete description a neuronal connection model at the network level. So how do we find such a neuronal connection?

We could try to model the missing link separately. That is, we could try to find a model of transmitter release on the basis of physiological considerations and recordings as usual. However, there are at least three problems with this approach.

1. There is no general agreement on the physiological mechanisms involved in transmitter release, see chapter 2.

2. A physiological approach would not readily provide us with a simple model at the network level. In fact, it would introduce more variables and parameters.

3. Getting the parameters of a physiological model even slightly wrong may lead us to miss a simple input-output interpretation even if it exists. We


are dealing with nonlinear systems; small deviations in parameters may result in wildly different input-output behaviors at the neuronal connection level.

The first argument says that we do not yet have enough information to follow the physiological approach. The other two, however, tell us that even if we had, it would probably not be a good idea to follow this route if we are interested in a simple network-level description. Instead we assume that excitability and synaptic transfer are intimately related, and that their models and parameters should be too. The immediate question then becomes: How are they related?

Let us first consider a complete signal path from a presynaptic input conductance u to a postsynaptic output conductance y, see figure 24. We will consider other possibilities later. Note that it is easy to visualize how multiple input conductances from several neurons would add up.

Figure 24: A complete signal path from conductance to conductance. The presynaptic conductance input u is mapped to a postsynaptic conductance output y. The presynaptic potential y1 = u2 is an intermediate representation.

Let us for the sake of argument suppose that we have a good model of membrane excitability. That is, we have an input-output dynamical system Σ1 that converts a presynaptic conductance input u into a presynaptic potential y1. What we need to complete the signal path is a model of synaptic transfer, i.e. a second system Σ2 that takes a presynaptic potential u2 = y1 as input and produces a postsynaptic conductance y as output. The complete signal path is then given by the two models connected in series, ΣH = Σ2 ◦ Σ1, see figure 25.

So system ΣH represents the network level, the neuronal connection, we are trying to get to. Note that we could express the second system, the unknown system of synaptic transfer, as Σ2 = ΣH ◦ Σ1⁻¹, however:

1. We do not yet know ΣH . This is exactly the system we are trying to obtain.

2. We do not know whether Σ1 is invertible or not.

Assuming for the moment that we can invert Σ1, is there a way to overcome the first issue?

The idea is to use the standard method of science: postulate a hypothesisand verify or falsify its validity. Thus in our case:

84

Page 85: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

Figure 25: The neuronal connection as a cascade of subsystems.

1. We make an educated guess, the hypothesis, of what the functional con-nection at the network level might be, and we formalize this hypothesis inthe form of an input-ouput dynamical system ΣH .

2. We derive the inverse system Σ−11 .

3. We combine the two in a verifyable model Σ2 = ΣH ◦Σ−11 for the synaptic

transfer.

4. We verify or falsify the model Σ2 against measurements. If the model isfalsified, we update our hypothetical system ΣH , i.e. we return to step 1.

As one iterates this model-prediction loop, the key is to keep our hypothesisΣH as simple as possible. Of course, given the complexity of the activitiesmeasured, there are no guarantees that the network level can be given a simpleinterpretation. However, what we must realize is that some of this complexitymay be due to a simple requirement: the signal must travel from A to B. Thus, ifwe consider the complex voltage traces as a signal representation in a transportdomain, then the actual signal transformation may be far more simple thenexpected.

This approach, at least for the squid giant synapse, will actually lead us toa convincing model of synaptic transfer with a simple hypothetical system ΣHat the network level!

7.3 An Introductory Example

Let us first give a toy example of this method in action. Since this is a hypotheti-cal example, we will omit the fourth step, the verification against measurements.

Example 7.1 (Quadratic Integrate-and-Fire Neuron). Suppose that the ex-citable properties of nerve cells in some hypothetical animal are accurately rep-resented by the quadratic integrate-and-fire model Σ1. Recall that this modelis given by:

Σ1 :

{ξ = ξ2 + u if ξ ≥ 1 , then ξ ← c

y1 = ξ ,

(7.1a)

(7.1b)

85

Page 86: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

where y1 is the membrane potential, and u is some input. As the state reachessome maximum value ξ = 1, the state ξ is reset to c. (We may think of the inputu as the limiting case of a current εu( 1

ε − ξ) elicited by an input conductanceεu with Nernst equlibrium potential 1

ε as ε → 0, in this case the examplecorresponds to figure 24.)

To get to a network level, that is, to a sufficiently convincing system ΣH ,we still need a model Σ2 for converting the presynaptic potential y1 = u2 intoa postsynaptic input y. To this end, we first formulate our hypothesis of whatΣH might be, and then work out the verifyable implications for Σ2. Tests onΣ2 may then support or falsefy our hypothesis.

Suppose that we suspect that most cells, and two cells in particular, areprimarily involved in direct reflex-like feed-forward transmission. In this caseour first guess or hypothesis for ΣH could be that one cell passes the signalonto the other unaltered. Hence, in this hypothetical example, we consider acascade that realizes an identity mapping ΣH = Σ2 ◦ Σ1 = Id. Let us focus onthe ‘subreset’ part first.

Formal InverseWe are looking for Σ2 = Σ−1

1 . The ‘subreset’ part of Σ1 is in normal form and,by rearranging equation (7.1a) and substituting y1 for ξ, the inverse system Σ−1

1

is seen to be:

Σ−11 : u = y1 − y2

1 . (7.2)

Non-RealizabilitySystem (7.2) is in external differential form (6.33). One problem is that thetemporal derivative y1 is not directly available 17. More importantly however,since Σ1 does not have relative degree zero, we cannot expect its inverse to havea classical state-space realization, see section 6.3.2. Hence, we are forced toconsider an approximate inverse that would result in an approximate identityat the network level.

It may seem tempting to try to approximate the derivative y1 , however, weare dealing with nonlinear systems and any error, however small, may result inlarge deviations from the desired identity map ΣH at the network level. Insteadwe choose to alter our hypothesis. We cosider an approximate identity ΣH , such

that the associated system Σ2 = ΣH ◦ Σ−11 does have a state space realization.

So how do we achieve realizability?

Realizability by ApproximationRecall that the relative degree is an invariant of state transformations. Thekey is to realize that, since we are dealing with systems connected in series, therelative degree of ΣH is constrained by the relative degree of Σ1.

Note that system Σ1 has relative degree 1, we need to differentiate the inter-mediate representation y1 once to get to u. Hence, we need to differentiate the

17Recall that at any one time t we are given only y1(t) and in order to calculate the temporalderivative, we need the values of y1 at two times that are infinitsimally close together.

86

Page 87: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

output y of the full cascade ΣH at least once to get to u. Thus, we expect ΣHto have a relative degree of at least 1 too. Instead of an exact identity mappingΣH : y = u, we therefore consider the input-output system:

ΣH : εy = u− y .

This system has relative degree 1 and allows the output y to follow suffi-ciently wellbehaved continuous inputs u arbitrarily closely by taking ε > 0small enough18.

The introduction of the term εy will allow us to transform Σ2 into classicalstate space form. It will be convenient to write the system in the followingstate-space input-output form:

ΣH :

{εζ = u− ζy = ζ ,

(7.3a)

(7.3b)

where u is the input, ζ is the real-valued state variable and y is the real-valuedoutput.

Substituting u2 for y1 in (7.2) and connecting the systems Σ−11 and ΣH in

series gives us the following input-output system

Σ2 = ΣH ◦ Σ−11 :

{εζ = u2 − u2

2 − ζy = ζ ,

(7.4a)

(7.4b)

which is depicted in figure 26.

Figure 26: By using an approximation ΣH of the hypothesis with a relative

degree matched to that of Σ1, one may achieve realizability of Σ2 = ΣH ◦ Σ−11 .

Elimination of the input derivativeSystem (7.4) still depends on derivatives of the input u2. However, thanks tothe added term εζ = εy, it can now be given a proper classical state-spacerealization by introducing a new state variable ϑ = εζ − u2, with ϑ = εζ − u2

18Note that the parameter ε becomes part of the new hypothesis ΣH

and that the ε-valuethat best fits the data need not be small.

87

Page 88: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

and ζ = 1ε [ϑ+ u2] 19. This results in the following system

Σ2 = ΣH ◦ Σ−11 :

ϑ = −u2

2 −1

ε[ϑ+ u2]

y =1

ε[ϑ+ u2] .

(7.5a)

(7.5b)

In this classical state-space realization derivatives of the input u2 are no longerpresent20.

Derivation of the Reset MapAlthough the reset condition u2 = y1 = ξ ≥ 1 given in (7.1a) has not changedby introducing the state variable ϑ, a problem does arise. If the state ξ of Σ1 isinstantaneously reset at time tj , then what is the measured output y1 = u2 attime tj? We will assume that limt↑tj u2 is available at reset times tj . This willbe justified for a more general class of reset models later.

To derive an appropriate reset map for the new state variable ϑ, we observethat in order for the output y to satisfy the approximate identiy ΣH , it mustbe continuous at reset times. Thus if a reset occurs at time tj , we must have

limt↑tj

y = limt↓tj

y .

From (7.5b) and taking into account the reset value limt↓tj u2 = c and the resetcondition y1 = u2 ≥ 1 of the original system Σ1, this results in:

limt↑tj

ϑ+ 1 = limt↓tj

ϑ+ c .

Thus, the approximate inverse Σ2 = ΣH ◦Σ−11 of the full system Σ1 is given

by:

Σ2 :

ϑ = −u2

2 −1

ε[ϑ+ u2] if lim

t↑tju2 ≥ 1 , then: (7.6a)

y =1

ε[ϑ+ u2] ϑ← ϑ+ 1− c . (7.6b)

In a more realistic setting one would verify this model against measurements.

7.4 Realization of Inverse Neuron Models

Because of its value in the analysis of synaptic transmission, we will extendthe previous derivation of an approximate identity to a general class of neuronmodels below. In general however, the developed approach extends to otherchoices of ΣH , as we will see when we apply it to the squid giant synapse insection 7.6.2.

19This is an example of a generalized state transformation, see section 6.3.3. We will extendthis approach to a general class of neuron models.

20The factor 1/ε could pose a problem if noise is present and ε is small. In appendix A.3noise is taken into consideration and it is suggested that nerve cells may have developed away to overcome noise problems.

88

Page 89: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

7.4.1 A General Neuron Model

Consider single-input single-output hybrid systems of the form:

Σ1 :

ξ = β(ξ, η) + α(ξ, η)u if s(ξ, η) ≥ 0 , then: (7.7a)

η = q(ξ, η) (ξ, η)← R(ξ, η) (7.7b)

y1 = ξ , (7.7c)

with a real-valued input u(t), an n-dimensional state variable z = (ξ, η) and areal-valued output y1 = ξ. Here the zero set of the map s : Rn → R defines an(n − 1)-dimensional reset-boundary surface S = {(ξ, η) ∈ Rn|s(ξ, η) = 0} anda sub-reset domain D = {(ξ, η) ∈ Rn|s(ξ, η) ≤ 0}. Whenever a trajectory z(t)‘hits’ the reset-boundary S, the state reset map R : S → D instantaneouslyresets the state back to a new ‘initial’ value in D. If we assume that α(ξ, η) 6= 0in D, then the sub-reset part is in normal form with well defined relative degreeone.

Taking ξ to correspond to the membrane potential, this general form in-cludes all single-compartment neuron models (with injected current input ornon-shunting conductance input) discussed so far, that is; The Hodgkin-Huxley-type conductance based models, the FitzHugh model, the minimal models suchas the INa,p + IK-model, the nonlinear integrate-and -fire-type models such asthe Izhikevich simple model, and many more.

Many processes in cell signaling seem to be ‘triggered’ by trans-membranepotential, hence it seems natural to consider ξ to be the output. However,depending on what the other state variable η stand for, these may also beavailable, thus in some cases we may also consider an output y1 = (ξ, η).

Suppose we have a good model of excitability Σ1 of the form (7.7). To getto a network level ΣH , we still need a model Σ2 for converting the presynapticpotential y1 = u2 into a postsynaptic input y. We consider again a cascade thatrealizes the identity mapping ΣH = Σ2 ◦Σ1 = Id. This turns out to be a useful‘building block’ when we consider a more plausible hypothesis in section 7.6.2.As before we are looking for Σ2 = Σ−1

1 and we will focus on the ‘subreset’ partfirst.

7.4.2 Inverse of the Sub-Reset Part

We assume that the internal dynamics, given by the forced subsystem η = q(ξ, η)driven by ξ, are exponentially stable. If this is not the case we take the full statez to be available from the output, i.e. in that case we take the output to bey1 = (ξ, η).

Formal Inverse Recall from section 6.1.3 that the inverse of a system innormal form can either be given by the compact form (6.7) or by the Hirschorninverse (6.5). For the sub-reset part of the general form (7.7) the compact formcorresponds to:

89

Page 90: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

Σ−11 :

η = q(y1, η) (7.8a)

u =1

α(y1, η)(y1 − β(y1, η)) , (7.8b)

and the Hirschorn inverse is given by

Σ−11 :

ξ = v (7.9a)

η = q(ξ, η) (7.9b)

u =1

α(ξ, η)(v − β(ξ, η)) , (7.9c)

where the derivative v = y1 is treated as the input.

Non-Realizability of the Inverse As before the temporal derivative v = y1

appearing in both (7.8) and (7.9) is not directly available. Furthermore, sinceΣ1 does not have relative degree zero, we cannot expect its inverse to havea classical state-space realization, see section 6.3.2. Thus we are again forcedto consider an approximate inverse. In nonlinear systems approximating thesystem Σ−1

1 directly may lead to large deviations from the desired identity mapΣH = Σ2 ◦Σ1. We therefor consider an approximate identity ΣH , such that the

associated system Σ2 = ΣH ◦Σ−11 does have a state space realization as before.

Realizability by Approximation The relative degree of the cascade ΣH isconstrained by the relative degree of Σ1. Since Σ1 has relative degree 1, ΣH hasa relative degree of at least 1 too. Instead of an exact identity mapping for thefull cascade we therefore consider the input-output system:

ΣH : εy = u− y . (7.10)

This system has relative degree 1 and allows the output y to follow sufficientlywell-behaved continuous inputs u arbitrarily closely by taking ε > 0 smallenough.

The introduction of the term εy will allow us to transform Σ2 into classicalstate space form. It will be convenient to write the system in the followingstate-space input-output form:

ΣH = Lε,1 :

{εζ = u− ζy = ζ ,

(7.11a)

(7.11b)

where u is the input, ζ is the real-valued state variable and y is the real-valuedoutput. (We introduce the notation Lε,1 for this parameterized linear system ofrelative degree 1, because we will later reuse this system in the analysis of thesquid giant synapse.)

Substituting u2 for y1 in Σ−11 and connecting the system in series with ΣH

gives us the cascade system Σ2 depicted in figure 26. Preceding this cascade

90

Page 91: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

with the original system Σ1 would result in the external behavior (7.10) afterconvergence of the η-dynamics.

Using the Hirschorn inverse (7.9) together with (7.11), we can write thecomposite system Σ2 as

Σ2 = ΣH◦Σ−11 :

ζ

ξη

=

− 1ε

(β(ξ,η)α(ξ,η) + ζ

)0

q(ξ, η)

︸ ︷︷ ︸

f(z)

+

1εα(ξ,η)

10

︸ ︷︷ ︸

g(z)

v (7.12a)

y = ζ , (7.12b)

where, for now, v = u2 is treated as the input, z = (ζ, ξ, η) = (ζ, u2, η) is thestate and y is the output.

Elimination of the Input Derivative System (7.12) still depends on tem-poral derivatives of u2. However, thanks to the added term εζ = εy, it can nowbe given a proper classical state-space realization by introducing a change ofcoordinates. To eliminate v = u2 , we look for an elementary diffeomorphism ϑthat only changes the coordinate ζ into a new coordinate ϑ = ϑ(ζ, u2, η) suchthat in

dt=∂ϑ

∂ζζ +

∂ϑ

∂u2u2 +

∂ϑ

∂η1η1 + . . .+

∂ϑ

∂ηn−1ηn−1 ,

the u2-term cancels against the term u2/εα(u2, η) resulting from ζ. That is, welook for a solution of the PDE21

Lgϑ(z) =1

εα(u2, η)

∂ϑ

∂ζ+

∂ϑ

∂u2= 0 . (7.13)

Remark 7.1. Note that the other coordinates η remain unchanged. Thus, thereset-condition s(u2, η) = 0 too remains unchanged.

If a solution to the PDE (7.13) exists, then it has the general form:

ϑ(ζ, u2, η) = ψ

(∫1

α(u2, η)du2 − εζ , η

),

which can be obtained using the method of characteristics22. The dependenceon η is arbitrary.

21Note that a curvilinear coordinate ϑ that satisfies this PDE Lgϑ(z) = 〈∇ϑ(z), g(z)〉 = 0is everywhere perpendicular to the vector field g(z) controlled by the input v = u2 in (7.12a).This is an application of the constructive realization method reviewed in [30], see also section6.3.3 and appendix A.1. Note also the analogy with the condition (6.16) for finding the lastn−r components of the transformation into Byrnes-Isidori normal form, for an n-dimensionalsystem of relative degree r.

22As in the introductory example 7.3 this is an example of a generalized state transformation,see section 6.3.3.

91

Page 92: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

For this solution to be a proper coordinate transformation, we demand it tobe invertible in ζ. In this context it is convenient to treat η as a parameter andwrite ψ( , η) = ψη( ), so that ζ can be expressed as:

ζ =1

ε

[∫1

α(u2, η)du2 − ψ−1

η (ϑ)

].

(The subscript should not be confused with partial differentiation.)For the subreset part, i.e. for s(u2, η) < 0, we can now give a classical

state-space realization:

Σ2 =ΣH ◦Σ−11 :

ϑ =

∂ψ

∂ζ

[β(u2, η)

α(u2, η)+ ζ(u2, ϑ, η)

]+∂ψ

∂ηq(u2, η)

η = q(u2, η)

y = ζ(u2, ϑ, η) =1

ε

[∫1

α(u2, η)du2 − ψ−1

η (ϑ)

],

(7.14a)

(7.14b)

(7.14c)

of the approximate inverse23.

Classical Realization This shows that it is usually convenient to choose amonotone function ψ that is independent of η, such as ψ(x, η) = x, so that wehave:

Σ2 =ΣH ◦Σ−11 :

ϑ = β(ϑ, η, u2) =

β(u2, η)

α(u2, η)+

1

ε

[∫1

α(u2, η)du2−ϑ

]η = q(u2, η)

y = h(ϑ, η, u2) =1

ε

[∫1

α(u2, η)du2 − ϑ

].

(7.15a)

(7.15b)

(7.15c)

For s(u2, η) < 0 or models without a reset, the systems Σ1 and Σ2 , connectedin series, thus realize the dynamics:

ΣH = Σ2 ◦ Σ1 : εd

dty = u− y ,

after convergence of the internal dynamics (7.15b).

7.4.3 Derivation of the Reset Map

What remains is to derive an appropriate reset map for Σ2. In deriving an in-verse for such hybrid systems however, the reset map poses additional problems.

Integrate-and-fire-type models are idealizations that, apart from efficientsimulation, are supposed to facilitate analysis, not hamper it. Hence, these

23Recall that in the presence of noise the factor 1/ε could pose a problem if ε is small. Fornoise considerations see appendix A.3.

92

Page 93: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

models would defy their purpose of simplicity if we would allow them to com-plicate matters. Furthermore, in the context of our proposed method, suchreset-models have lost at least some of their use, since, if we finally arrive at ahypothetical system that gains experimental support, then computational effi-ciency is only important for the mapping hypothesis at the neuronal connectionlevel.

Nevertheless these models may continue to serve a purpose, and we will offera satisfactory solution to the reset-map of the inverse. Let us first introducesome notation.

Notation Suppose a trajectory (ξ(t), η(t)) ‘hits’ the surface S at time t = t0,then, as in [2], we write:

(ξ−, η−) = limt↑t0

(ξ(t), η(t)) , (7.16)

for the state immediately before the reset, and:

(ξ+, η+) = limt↓t0

(ξ(t), η(t)) = R(ξ−, η−) , (7.17)

for the state immediately after the reset R.

Assumptions In causal systems, there is a problem with detecting the resetfrom the output. As already mentioned in the introductory example, if the stateof Σ1 is instantaneously reset at time tj , then what is the measured output y1 =u2 at time tj? To alleviate this problem, we assume that u−2 = y−1 = h(ξ−, η−)is available at the reset times ti. We are justified in doing so, since:

1. the smooth case is solved and we do not expect real neurons to changestate instantaneously, and since,

2. in simulations and analysis at the network level, we would use the mappinghypothesis instead, once obtained.

Derivation Since the η coordinates have remained unchanged and the inputu−2 is given, we already know the reset conditions s(u−2 , η) ≥ 0, see remark 7.1.

Remark 7.2. Note, that even though two solutions η1(t) and η2(t) with differ-ent initial conditions converge exponentially, in general, they will hit the resetboundary at different times. Hence, if the reset conditions depend on η ex-plicitly, then the initial conditions of the inverse Σ−1

1 must match the initialconditions η(0) of the original Σ1 in order for the resets to occur at the correcttimes24.

24Recall however, that one of the main reason for the use of reset models is to allow forlarge step-sizes in simulations. In this context the reset times will usually not be differentdespite different initial conditions.

93

Page 94: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

The reset for η remains equal to the η-component Rη of R, that is:

η+ = Rη(u−2 , η−) . (7.18)

To determine the reset for ϑ, we observe that if the external behavior:

ΣH : εy = u− y ,

is to be satisfied, then the output y must be continuous at reset times, that is:

y+ − y− = h(ϑ+, η+, u+2 )− h(ϑ−, η−, u−2 ) = 0 .

We can write (u+2 , η

+) = R(u−2 , η−) and thus we choose the reset for ϑ such

that:h(ϑ+, R(u−2 , η

−)) = h(ϑ−, η−, u−2 ) .

From (7.15c) this results in the reset:

ϑ+= ϑ−+

∫1

α(u2, η)du2

∣∣∣∣(u2,η)=R(u−2 ,η

−)

−∫

1

α(u2, η)du2

∣∣∣∣(u2,η)=(u−2 ,η

−)

. (7.19)

7.4.4 The Inverse-Neuron

We can now collect the results (7.15), (7.19) and (7.18) and write down theequations for Σ2 = ΣH ◦ Σ−1

1 . For s(u2, η) < 0, the approximate inverse of thefull model (7.7) is given by:

ϑ = β(ϑ, η, u2) =β(u2, η)

α(u2, η)+

1

ε

[∫1

α(u2, η)du2 − ϑ

](7.20a)

η = q(u2, η) (7.20b)

y = h(ϑ, η, u2) =1

ε

[∫1

α(u2, η)du2 − ϑ

]. (7.20c)

If s(u−2 , η) ≥ 0, then the state is reset according to:

ϑ 7→ ϑ+

∫1

α(u2, η)du2

∣∣∣∣(u2,η)=R(u−2 ,η)

−∫

1

α(u2, η)du2

∣∣∣∣(u2,η)=(u−2 ,η)

(7.20d)

and:

η 7→ Rη(u−2 , η) . (7.20e)

The systems Σ1 and Σ2 connected in series thus realize the dynamics:

ΣH = Σ2 ◦ Σ1 : εd

dty = u− y , (7.21)

after convergence of the exponentially stable internal η-dynamics. More pre-cisely the dynamics of ΣH converge to these dynamics if one fo the following issatisfied:

94

Page 95: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

(i) either the neuron model Σ1 has no reset, or

(ii) the reset condition s(u−2 , η) ≥ 0 does not explicitly depend on η, or

(iii) the initial conditions of Σ−11 match the initial conditions η(0) of Σ1,

see remark 7.2 and its footnote.

7.5 More Examples

Example 7.2 (Izhikevich Simple Model). Consider the Izhikevich simple modelgiven by:

Σ1 :

ξ = ξ2 − η + u if ξ ≥ 1, then

η = a(bξ − η) (ξ, η)← (c, η + d)

y1 = ξ .

In terms of the general form (7.7) on page 89 we have:

β(ξ, η) = ξ2 − η ,α(ξ, η) = 1 ,

q(ξ, η) = a(bξ − η) ,

s(ξ, η) = ξ − 1 ,

R(ξ, η) =

(c

η + d

).

Thus in terms of the inverse (7.20) we get:∫1

α(u2, η)du2 = u2 , (up to a constant)

β(ϑ, η, u2) =β(u2, η)

α(u2, η)+

1

ε

[∫1

α(u2, η)du2 − ϑ

]= u2

2 − η +1

ε[u2 − ϑ] ,

h(ϑ, η, u2) =1

ε[u2 − ϑ] .

This results in the following hybrid state space realization for the approximateinverse:

95

Page 96: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

Σ2 = ΣH◦Σ−11 :

ϑ = u2

2 − η +1

ε[u2 − ϑ] if u−2 ≥ 1, then

η = a(bu2 − η) (ϑ, η)← (c− 1 + ϑ, η + d)

y =1

ε[u2 − ϑ] .

The systems Σ1 and Σ2 connected in series thus realize the dynamics:

ΣH = Σ2 ◦ Σ1 : εd

dty = u− y ,

after convergence of the internal η-dynamics. Simulations of the Izhikevichsimple model and the reconstruction of its input are shown in figure 27 and 28,see appendix A.4 for the numerical Euler schemes.

The numerical analogue of the Izhikevich simple model Σ1 : u → y1 issubjected to various input currents u. These inputs are then reconstructedfrom the voltage responses y1 = u2 using the numerical approximate inverseΣ2 : u2 → y, that is, the numerical analogue of Σ2 = ΣH ◦ Σ−1

1 . This resultsin the reconstructed currents y. In figure 27 results are shown for various ε-values. In figure 28 it is shown that for a relatively small ε-value the systemΣH mapping the input current u to the reconstructed current y indeed realizesan approximate identity. ♦

Example 7.3 (FitzHugh Model with Conductance Input). Consider again theFitzHugh model with a conductance input, see example 6.1. The equations aregiven by:

Σ1 :

ξ = −ξ(ξ − a)(ξ − 1)− η + u(p− ξ)η = bξ − cηy1 = ξ .

In terms of the general form (7.7) on page 89 we have:

β(ξ, η) = −ξ(ξ − a)(ξ − 1)− η ,α(ξ, η) = p− ξ ,q(ξ, η) = bξ − cη .

Thus in terms of the inverse (7.20) we get:∫1

α(u2, η)du2 = ln |p− u2| , (up to a constant and for p 6= u2)

β(ϑ, η, u2) =β(u2, η)

α(u2, η)+

1

ε

(∫1

α(u2, η)du2 − ϑ

)=−u2(u2 − a)(u2 − 1)− η

p− u2+

1

ε(ln |p− u2| − ϑ) ,

h(ϑ, η, u2) =1

ε(ln |p− u2| − ϑ) .

96

Page 97: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

(Q) depolarizing after-potential

(D) phasic bursting

Figure 27: Reconstruction of input currents u from voltage outputs y1 for variousε-values, other parameters as in figure 28 D and Q. The Izhikevich simple modelΣ1 : u→ y1 is subjected to different input currents u (lower green traces). Theinput currents are then reconstructed from the voltage responses y1 = u2 (topblue traces) using the approximate inverse Σ2 : u2 → y given by Σ2 = ΣH ◦Σ−1

1 .This results in the reconstructed currents y (lower red traces).

97

Page 98: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

(A) tonic spiking (B) phasic spiking (C) tonic bursting (D) phasic bursting

(E) mixed mode (F) spike freq. adapt (G) Class 1 excitable (H) Class 2 excitable

(I) spike latency (J) subthreshold osc. (K) resonator (L) integrator

(M) rebound spike (N) rebound burst (O) thresh. variability (P) bistability

(Q) DAP

Figure 28: As in figure 27 except for a relatively small ε-value. Note that thesystem ΣH mapping the input current u to the reconstructed current y indeedrealizes an approximate identity (red over green). (Adapted from Izhikevich,see [25] for the parameters of Σ1. Also see [26] and figure 12, an electronicversion of that figure and its reproduction permissions are freely available atwww.izhikevich.com.)

98

Page 99: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

This results in the following classical state-space realization for the approximateinverse:

Σ2 = ΣH◦Σ−11 :

ϑ =−u2(u2 − a)(u2 − 1)− η

p− u2+

1

ε(ln |p− u2| − ϑ)

η = bξ − cη

y =1

ε(ln |p− u2| − ϑ) .

For y1 = u2 6= p, the systems Σ1 and Σ2 connected in series thus realize thedynamics:

ΣH = Σ2 ◦ Σ1 : εd

dty = u− y ,

after convergence of the internal η-dynamics. We will use the inverse of theFitzHugh model next in a biologically motivated realization of synaptic transfer.♦

7.6 The Squid Giant Synapse: An Elaborate Example

7.6.1 Biological Background

Below we will apply our method to a particularly favorable preparation, thesquid giant synapse. To place our results in context we briefly review someneurobiology and elucidate the role of the associated giant fiber system in escapebehavior.

The Nervous System of Molluscs Cephalopods, such as squid, octopusesand cuttlefish belong to the phylum known as molluscs. Other molluscs includethe gastropods such as snails or slugs and the bivalves such as oyster, clams andmussels.

The central nervous systems of molluscs consist of about 5 interconnectedleft-right pairs of sometimes fused ganglia arranged around the gut and nearthe head in bilateral symmetry. The nervous system further consists of smallerperipheral ganglia and usually a nerve net. There is considerable variationamong species. Sensory receptors may range from simple sense cells to highlydeveloped sense organs. Their behavioral repertoir is as diverse as their nervoussystems.

The nervous system of cephalopods in particular consists of about 108 neu-rons. It has advanced integrative and adaptive capabilities and a highly devel-oped visual system [38].

Giant Fiber Systems and Startle Behavior Startle and escape responsessuch as the fast jet-propelled escape responses of squid are so called fixed actionpatterns. In contrast to the reflex, which is in general a graded function of thestimulus, fixed action patterns have all-or-nothing properties. As the releasing

99

Page 100: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

stimulus crosses a critical threshold a stereotyped behavioral pattern is gener-ally elicited with full strength. Non-cephalopod examples include sneezing andvomiting.

The release of such a fixed action pattern corresponds to a decision. Motorcircuits are activated and lead to the discharge of a coordinated motor pattern,In some cases such coordinated patterns are the product of ‘hard-wired’ circuitsand occur independently of further input [38].

The jet-propelled escape responses of squid are controlled by a so called giantfiber system. Other examples of giant fiber systems include the Mauthner cellsin fish and the medial and lateral giant fibers in crayfish. These and other giantfiber systems too have been associated with escape and startle responses [38].

Figure 29: Simplified representation of giant fiber systems, after [38].

Giant fibers involved in startle behavior can be viewed as decision-makinginterneurons. Most function according to the same principles, see the diagram infigure 29. In the simplest case preprocessed sensory information converges ontothe giant fiber and the fiber responds to escape-initiating stimuli in a threshold-type fashion. Activation of the giant fiber is often sufficient to initiate a completeescape response and sometimes necessary for particular behavioral ‘components’of the response.

Since the cells produce their effect by activating other neuronal circuits, it isthe state of these other circuits that determines the evoked behavioral pattern.This state may be effected by [38]:

• parallel feedforward motor networks,

• sensory feedback,

• descending brain signals, and

• neuromdulators such as hormones.

Measurements at the Giant Synapse In [1] measurements were made atthe squid giant synapse. Its location is indicated in figure 30, where a diagramis shown of part of the squid giant fiber system; the squid stellate ganglion.

The relation between Ca2+ entry at the presynaptic terminal and postsy-naptic currents in the giant axon was studied using voltage-clamp techniques.

100

Page 101: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

The post-synaptic membrane potential of the giant axon was held constant nearits resting potential using a two-micro-electrode clamp arrangement. The presy-naptic terminal was depolarized by brief (3-6 ms) pulses using a three-electrodevoltage-clamp. The duration of the pulses was deliberately kept short and sep-arated by at least 60 s to avoid synaptic adaptation such as depression.

The arrangement (we leave out some details) allows one to simultaneouslymeasure the postsynaptic current and the presynaptic calcium current in re-sponse to presynaptic depolarization. The original results are shown in figure31, see both [1] and [27]. For our purposes, we are mainly interested in therelationship between the presynaptic potential and the postsynaptic current.

In what follows we will use the FitzHugh charicature of the squid giant axonand a simple threshold hypothesis for escape responses to come to a modelof synaptic transmission that in essence reproduces the above measurements.Thus relating the level of neurodynamics to the level of functional transmission,possibly even to the level of animal behavior.

Figure 30: diagram of the squid stellate ganglion showing the giant axon, itspresynaptic axon and the location of the giant synapse, after [27].

101

Page 102: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

Figure 31: Postsynaptic currents (P.s.c.) in response to presynaptic depolarizingpulses (Vpre) of 6 ms (from a holding potential of -70 mV?). Also shown Ca2+-currents (ICa), from [1] see also [27].

102

Page 103: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

7.6.2 A Biological Realization of an Abstract Model

In the following example we will try to relate measurements made at the squidgiant synapse to the possible functional role of the synapse. In particular wewant to relate these measurements to the all-or-nothing decisions that followescape-initiating stimuli as they cross a certain critical threshold.

Suppose that sensory information converging onto the giant fiber systemleads to a total presynaptic conductance u. Furthermore, let us assume thata good model Σ1 exists for the voltage response y1 of the presynaptic axonto this conductance. To get to a network level, that is, to a convincing yetsimple system ΣH from presynaptic conductance to postsynaptic conductance,we still need a model Σ2 for converting the presynaptic potential u2 = y1 intoa postsynaptic conductance y.

As before, we first formulate our hypothesis of what ΣH = Σ2 ◦ Σ1 mightbe. Then, we verify the associated synapse model Σ2 = ΣH ◦ Σ−1

1 against themeasurements in figure 31. Before we do however, note that the responses tothe presynaptic depolarizing pulses in the figure are postsynaptic currents. Sohow are these currents related to the postsynaptic conductances?

In the orignal experiment the postsynaptic potential of the giant axon washeld constant at its resting potential Vr . If we assume that only postsynapticchannels of one particular type open in response to presynaptic depolarizations,or if we lump all such channels together, then, due to the constant postsynapticpotential, the postsynaptic conductance y(t) is proportional to the postsynapticcurrent I(t):

I(t) = Vr y(t) .

Hence, the conductances obtained from Σ2 can be compared directly to thepostsynaptic currents in figure 31.

In sum, we want the neuron model Σ1 to agree with the typical Hodgkin-Huxley-like responses, and at the same time we want the synaptic transfersystem Σ2 to agree with the measurements in figure 31. What should ΣH be?

To answer this question we follow the steps outlined below, we:

1. state an initial hypothesis ΣH ,

2. determine a plausible relative degree from Σ1 and a biologically motivatedbut not fully specified form of Σ2,

3. approximate the hypothesis with ΣH to agree with the relative degree,

4. derive the associated hypothetical synapse model Σ2 = ΣH ◦ Σ−11 ,

5. verify the model against measurements,

6. derive a state transformation from the hypothetical form of Σ2 into thepartially specified biological form of Σ2, and

7. state its invertibility conditions.

Let us start with the hypothesis.

103

Page 104: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

The Initial Hypothesis Motivated by the threshold-type escape responses,the S-shaped activation functions of artificial neuron models, and the fact thatthe postsynaptic conductance y must ultimately be bounded, let us take as aninitial hypothesis the bounded smooth invertible map:

ΣH : y = S(µu− ρ) , (7.26)

where µ > 0 and S is the standard logistic function

S(x) =1

1 + e−x, with S′(x) = S(x)(1− S(x)) . (7.27)

As with artificial neurons we view the inflection point at ρ/µ as a threshold, asthe input crosses the threshold u > ρ/µ the squid ‘decides’ to escape.

Note that y → 1 as u→∞ and y → 0 as u→ −∞. Hence, with this initialhypothesis ΣH , the postsynaptic output conductance satisfies 0 < y < 1, i.e.the conductance y equals the fraction of channels in the open state. The mapΣH is equivalent to the Boltzmann function (4.4), see figure 9.

Relative Degree from Biologically Motivated Form Suppose that thedynamics of the postsynaptic conductance due to transmitter in the synapticcleft are well-described by equation (3.23) on page 37. Then, the synapticmechanisms for converting a presynaptic potential y1 = u2 into a postsynapticconductance y are partially prescribed as follows:

Σ2 :

w1 = q [wn2 (1− w1)− rw1] (7.28a)

wi = Wi(w, u2) (not specified) (7.28b)

y = w1 . (7.28c)

Here, y = w1 represents the conductance due to neurotransmitter w2. The equa-tions (7.28b) for the other wi (including w2) remain unspecified. In this case,system Σ2 has a relative degree of at least 2, that is we have to differentiate theoutput y at least twice for the input u2 to appear explicitly.

Since equation (7.28a) represents one of the simplest models of transmitter-dependent conductance, let us assume that Σ2 indeed has a relative degree of atleast 2. Furthermore, we assume that the neuron model, system Σ1, has relativedegree 1. (We will state our model of choice later.) Thus, we have to differentiatethe output y once more to get to u. Consequently, since ΣH = Σ2 ◦Σ1, we needto increase the relative degree of ΣH to (at least) 3.

New Hypothesis due to Relative Degree To increase the relative degreeof ΣH to 3 we use:

104

Page 105: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

ΣH = ΣH◦Lε,3 :

εz1 = z2 − z1 (7.29a)

εz2 = z3 − z2 (7.29b)

ε ˙z3 = u− z3 (7.29c)

y = S(µz1 − ρ) (7.29d)

Where Lε,3 denotes a cascade of three systems of the form Lε,1 given in (7.11),see figure 32. Thus, our initial hypothesis has necessarily changed to a newhypothesis ΣH due to the constraint on relative degree.

Figure 32: A cascade of r linear systems of relative degree 1 results in a linearsystem of relative degree r.

Note that system (7.29) agrees with the general form of artificial units de-picted in figure 23, that is, the total summed input u is filtered by a linearsingle-input single-output system Lε,3 and then passed to a nonlinear outputmap ΣH . Furthermore, observe that it has only three parameters, a parameterρ for shifting the threshold of the output map, a parameter µ > 0 for adjustingits slope, and a parameter ε > 0 for adjusting the amount of input-smoothing.

Resulting Hypothetical model of Synaptic Transmission Recall thatwe already found a realization for (7.29c) using a general neuron model and itsinverse, see (7.21). Thus, restricting ourselves to models without a reset, we cannow write down system Σ2 in terms of the inverse and the new hypothesis:

Σ2 = ΣH◦Σ−11 :

εz1 = z2 − z1 (7.30a)

εz2 = h(z, u2)− z2 (7.30b)

zi = Zi(z, u2) (7.30c)

y = S(µz1 − ρ) , (7.30d)

where Z represent the vectorfield of the approximate inverse-neuron model andz3 = h(z, u2) is its output map. (Both Z and h are independent of z1 and z2.)Note that at this point we still need to choose our model Σ1 for the presynapticneuron.

105

Page 106: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

Verification against Measurements Let us now see if our hypothesis ΣH issupported by the measurements in figure 31. To this end, we take the FitzHughmodel to be a good charicature of the presynaptic axon. Instead of an injectedinput current however we incorporate a conductance input. This model wasconsidered in example 7.3 where we derived a realization of its approximateinverse. The model is given by the equations:

Σ1 :

x1 = −x1(x1 − a)(x1 − 1)− x2 + u(p− x1)

x2 = bx1 − cx2

y1 = x1 ,

where the output y1 = x1 represents the membrane potential, p is the Nernstequilibrium potential, p−x1 is the driving force and u is the input conductance.

As in the experiment, we subject our derived model Σ2 = ΣH ◦Lε,3◦Σ−11 to a

series of presynaptic depolarizing pulses. The model then converts these presy-naptic potentials y1 = u2 into postsynaptic conductances y. Note that Σ2 sharessome parameters with the modified FitzHugh model Σ1. These parameters, a,b, c and p, are chosen not only such that they agree with the measurements infigure 31, but also such that the FitzHugh model generates an action potentialfor certain input conductances u. Hence, these parameters are constrained bothby synapse behavior and by squid axon behavior. Furthemore, the timescaleis determined by the duration of a nerve impulse25, so in this timescale, theduration of the presynaptic depolarizing pulses is chosen to roughly agree with6 ms.

Due to the simplicity of the higher level hypothesis, it is not easy to findparameters that agree with the measurements. Nevertheless, using the numer-ical Euler-schemes in appendix A.4 for the inverse Σ−1

1 and the linear filterLε,3, we are able to qualitatively reproduce the measurements in figure 31 fromtheoretical considerations! See figure 33. Recall that due to the constant post-synaptic potential we take the postsynaptic currents (P.s.c.) in figure 31 tobe proportional to the postsynaptic conductances y in figure 33. In figure 33depolarizations from a presynaptic holding potential of -40 mV are used. Otherfigures in [1] suggest that a holding potential of -70 mV was used, although thisis not stated explicitly for figure 31. Nevertheless, in figure 34 results are shownfor a presynaptic holding potential of -70 mV.

Note the characteristic features in the results. First the postsynaptic conduc-tance y steadily grows around the ‘off’ command with each increasing depolar-izing pulse u2, then a dimple occurs around the ‘off’ command. The pre-dimplepart of y diminishes with further increase of the pulses until it dies out com-pletely. The post-dimple part also diminishes, but slower, until eventually onlya small after effect remains. Also shown, for the same parameters, are voltageresponses y1 of the modified FitzHugh model Σ1 to steps in the presynapticconductance input u.

We thus have a model ΣH from conductance to conductance with only three

25See, [27] for the time-frame of a non-propagating or space-clamped action potential.

106

Page 107: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

Figure 33: An approximation of the measurements in figure 31 as producedby our theoretical model Σ2 : u2 → y. In the original experiment the post-synaptic potential is held constant at its resting potential, thus the (lumped)postsynaptic conductance y is proportial to the postsynaptic current (P.s.c) inthe original figure. Presynaptic voltage steps u2 as in the original figure (Vpre),from a holding potential of −40mV and scaled and shifted such that the interval[−40mV, 57mV ] maps to the interval [0, 0.53], according to 0.53

Vpre−Vhold

57−Vhold. Also

shown in lower right corner are voltage responses y1 of the modified FitzHughmodel Σ1 : u → y1 to 0.0015-increments of presynaptic conductances u forthe same parameters and time-scale. Parameters of both the modified FitzHughmodel Σ1 and its inverse: a = 0.08, b = 0.0173, c = 0.08, p = 2. Parameters ofthe Hypothesis ΣH : µ = 800, ρ = 7, ε = 14. (Recall that the parameter ε is nowpart of the hypothesis and that its relative value need not be small.) Timestepfor the numerical Euler method τ = 0.1 units of time, with a timespan of 240units and a pulse duration of 80 units (6 ms), the unit of time thus correspondsto 6/80 ms.

107

Page 108: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

Figure 34: As in figure 33, except for different parameters and with depolar-izations from a holding potential of −70mV . The presynaptic depolarizationsare scaled and shifted such that the interval [−70mV, 57mV ] maps to the in-

terval [0, 0.42], according to 0.42Vpre−Vhold

57−Vhold. Parameters of the Hypothesis ΣH :

µ = 500, ρ = 20, ε = 14. Timestep for the numerical Euler method τ = 0.1units of time, with a timespan of 231 units and a pulse duration of 77 units (6ms), the unit of time thus corresponds to 6/77 ms.

108

Page 109: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

parameters, that can be made to agree with the measurements through anapproriate choice of the intermediate neuron model and its inverse. Apart fromthe initial hypothesis ΣH , the main assumptions that have led to these resultsare:

1. Both the synaptic system and the path from conductance to conductancecan be modeled by an input-output state space system.

2. The FitzHugh model is a sufficiently good model for the presynaptic axon.

3. The relative degree of the synaptic system is (at least) 2.

Note that except for its relative degree, we have not used the partially pre-scribed biological form (7.28). Hence, assuming that the synaptic system takesthis particular form is unnecessary. However, suppose that we are interestedin the time-course of the neurotransmitter concentration, is it then possibleto transform the hypothetical system into this partially prescribed form? Theanswer is yes, as we will show by deriving the transformation explicitly.

Transformation into Biological Form We will now look for a smooth statetransformation w = φ(z) with smooth inverse z = φ−1(w) = ψ(w), transformingthe hypothetical form (7.30) of system Σ2 into its partally prescribed biologicalform (7.28). In order to succeed, both forms should have the same relative degreeand the same state dimension. We already ensured that the relative degrees ofthe systems agree. If an explicit state transformation can be found then thetime course of the neurotransmitter concentration w2(t) can be obtained froma trajectory z(t) of system Σ2 in the form (7.30).

The first component of the desired transformation φ is quickly found. Equat-ing the outputs y from Σ2 given by both (7.28) and (7.30) we find

w1 = φ1(z) = S(µz1 − ρ) . (7.32)

From this we also have w1 = µS′(µz1 − ρ)z1 and therefore

q[wn2 (1− w1)− rw1] =µ

ε(z2 − z1)S′(µz1 − ρ) . (7.33)

It is convenient to choose q = µ/ε. Using (7.32) we can substite for w1 in (7.33)and express the second component w2 of φ in terms of z

w2 = φ2(z) =

{(z2 − z1)S′(µz1 − ρ) + rS(µz1 − ρ)

1− S(µz1 − ρ)

}1/n

(7.34)

=

{(z2 − z1)S(µz1 − ρ) +

rS(µz1 − ρ)

1− S(µz1 − ρ)

}1/n

, (7.35)

We have now expressed the only two coordinates w1 and w2 appearing in thepartial prescription (7.28a) in terms of z.

109

Page 110: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

Note that from (7.32) we also have

z1 = ψ1(w) =1

µ[ρ+ S−1(w1)] (7.36)

and from (7.33) and (7.36) we have

z2 = ψ2(w) =wn2 (1− w1)− rw1

w1(1− w1)+

1

µ[ρ+ S−1(w1)] (7.37)

so [w1

w2

]=

[φ1(z1)φ2(z2, z1)

], and

[z1

z2

]=

[ψ1(w1)ψ2(w1, w2)

](7.38)

Apparently we may leave the other coordinates unaltered. In fact since theother mechanisms and their equations are unknown we have some freedom inchoosing the other coordinates wi provided that we make sure that the statetransformation is invertible and sufficiently smooth. (Note that, except for thedimension, the coordinate transformation does not depend on the particularchoice of neuron model Σ1.)

Invertibility Conditions In order to check the invertibility conditions wederive the partially prescribed Jacobian

ψ′(w) =

∂1ψ1 ∂2ψ

1

∂1ψ2 ∂2ψ

2 0

0 I

,

and its determinant detψ′(w) = ∂1ψ1∂2ψ

2 − ∂1ψ2∂2ψ

1.From (7.36) we have ∂ψ1/∂w2 = 0 and

∂ψ1

∂w1=

1

µS′(S−1(w1))=

1

µw1(1− w1). (7.39)

From (7.37) we have

∂ψ2

∂w2=nwn−1

2

w1, (7.40)

and

∂ψ2

∂w1=

1

µw1(1− w1)− wn2w2

1

− r

(1− w1)2. (7.41)

This results in the partially prescribed Jacobian

ψ′(w) =

1

µw1(1−w1) 0

1µw1(1−w1) −

wn2

w21− r

(1−w1)2nwn−1

2

w1

0

0 I

,

110

Page 111: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

with determinant

detψ′(w) =nwn−1

2

µw21(1− w1)

for 0 < w1 < 1 . (7.42)

Thus, since µ 6= 0 and 0 < w1 < 126, the coordinate transformation isinvertible if either n = 1 or w2 6= 0. In biological terms this means that eitherthe number of transmitter molecules n needed to open a postsynaptic channel isonly one or else there is always some residual transmitter w2 left in the synapticcleft, that is the transmitter concentration w2 is never zero.

Given a state trajectory z(t) of the hypothetical form (7.30) of Σ2, the time-course of the neurotransmitter concentration w2(t) can now be plotted usingthe second component (7.34) of the state transformation. This can be done fordifferent numbers n of transmitter molecules needed to open a channel. Thisis exactly what is done in figure 35, where the response of the full cascade andits intermediate representations to a transient input conductance are plottedfor the same parameters as in figure 34. Note that, below a certain criticaltransmitter concentration, channels that need few molecules to open are moreresource efficient, while above it, channels that need more molecules are moreefficient.

The response of the initial hypothesis ΣH is also shown. The altered hy-pothesis is smoother and slightly delayed compared to the initial hypothesis.The delay corresponds to about 4 ms. The results can be given a tentativeinterpretation. As the total conductance u due to an escape-initiating stimulusramps up, the giant fiber system at some point u > ρ/µ ‘decides’ to escape.

Summary In this example we have shown that, at least for the squid giantsynapse, the data allows for a simple yet convincing neuronal connection modelΣH = Σ2◦Σ1 based on a simple initial hypothesis ΣH . In particular the modifiedFitzHugh model Σ1 agrees with the typical Hodgkin-Huxley-like responses of thesquid axon while the derived synapse model Σ2 = ΣH ◦ Σ−1

1 agrees with themeasurements made at the giant synapse. Of course given the simplicity of thehypothesis, it is no surprise that the agreement is not perfect. What is amazinghowever, is that we can come this close with such a simple model at the networklevel.

We also derived an explicit state transformation that transforms the derivedrealization of Σ2 into a partially prescribed biological form. Except for thedimension, this transformation does not depend on the choice of neuron modelΣ1.

The important point is that, despite the realistic dynamics, the neuronalconnection model ΣH at the network level is simpler then either of its compo-nents Σ1 and Σ2. In fact, in this particular example, our proposed method does

26Note that for r > 0 and transmitter concentrations w2 > 0 in (7.28) we have that if0 < w1(t) < 1 holds for t = 0, it holds for all t ≥ 0. The steady state of w1 is given by

w1 =wn

2r+wn

2.

111

Page 112: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

0 100 200 300 400 500 600 700 800 900 10000

0.02

0.04

0.06

0 100 200 300 400 500 600 700 800 900 1000−0.5

0

0.5

1

0 100 200 300 400 500 600 700 800 900 10000

1

2

3

0 100 200 300 400 500 600 700 800 900 10000

0.5

1

Figure 35: A transient presynaptic input conductance u (top) is converted intoa presynaptic voltage response y1 (second from above) by the modified FitzHughmodel Σ1. The presynaptic potential leads to a neurotransmitter concentrationw2(t) = φ2(z(t)) (third from above), where z(t) is the state trajectory of thehypothetical form of Σ2. The appropriate neurotransmitter concentration givenby φ2(z) also depends on the number of transmitter molecules needed to open apostsynaptic channel (blue: 1, green: 2, red: 3). The postsynaptic conductance y(bottom blue) in response to u is produced by the neuronal equivalent of ΣH , i.e.the full cascade. The response of the initial hypothesis ΣH (green) is also shown.The unit of time is approximately 6/77 ms, thus the slight delay correspondsto about 4 ms. Tentative interpretation: As the total conductance u due toan escape-initiating stimulus ramps up, the giant fiber system at some pointu > ρ/µ = 0.04 ‘decides’ to escape. Parameters: q = µ/ε and r = 0.015, otherparameters as in figure 34.

112

Page 113: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

exactly what it was meant to do. It achieves a higher level simplification thatis still supported by the data.

7.7 Concluding Remarks

Up until now we only considered signal paths from conductance to conductance.One could also consider paths from voltage to voltage or from transmitter totransmitter. The reason for our choice is two-fold.

1. With the simple kinetic model (3.23) on page 37, the transmitter-dependentconductance is the last exponentially stable system before the positivefeedback associated with excitability and spiking. This makes it thesmoothest and most ‘human-readable’ signal representation in the chain.

2. In single compartment models the conductance is the last representationbefore the signals of several neurons come together.

When one considers neurons without impulses such as amacrine cells a pathfrom voltage to voltage may be more appropriate, see appendix A.3 where theeffects of noise are also taken into account.

In this section we developed a new method for analyzing and modeling synaptictransmission. A first trial run on the squid giant synapse shows very promisingresults. As with many new methods the approach also suggests many newdirections. Some of these are indicated in the next and final section where wedraw this thesis to its conclusion.

113

Page 114: Synaptic Transmission and Inverse-Neuron Dynamics · 2020-06-10 · Synaptic Transmission and Inverse-Neuron Dynamics Thesis Artificial Intelligence Author: H.T. van der Scheer Supervisor:

8 Conclusion

In this concluding section we briefly summarize the contributions of this thesis,draw a few tentative conclusions, and suggest some future directions. Currently,the contributions are the most important. Let us start with those.

8.1 Contributions

8.1.1 A Structured Method

In this thesis we have developed a method for getting to a network level ofabstraction, while at the same time acknowledging single neuron dynamics. Itallows one to circumvent unknown synaptic mechanisms without ignoring theirdynamic effect. The method uses inverse neuron models and is based on thestandard hypothesis-and-test method of science. Given a hypothetical modelat the network level the approach provides a way to verify this model againstmeasurements at a more detailed level of abstraction. In particular, connectingthe inverse-neuron model in series with the hypothetical network level modelgives us a verifiable model of synaptic transmission.

The method makes a clear distinction between intermediate representationand actual functional transformation. Hence, it provides one with insight andfacilitates analysis. Once the functional model at the network has been obtainedand verified, the identity map, in the form of the neuron model and its inverse,can be omitted. Thus, the approach may also lead to a reduced computationalcost in network simulations.

8.1.2 A General Inverse-Neuron

We derived an inverse for the general form of a class of neuron models whichincludes most of the models one will encounter, including the: Hodgkin-Huxleymodel, FitzHugh model, Persistent sodium plus potassium model, integrate-and-fire model, Izhikevich simple model, and many more. Although the exactinverse of the general model does not have a state-space realization, a realizationof a satisfactory approximate inverse has been derived.

8.1.3 An Example Application: The Squid Giant Synapse

Some initial examples of the method in action have been provided. Notably, we applied the method to the squid giant synapse and arrived at an understanding of its role in functional transmission. It has been shown how a simple threshold hypothesis at the network level actually leads to a convincing model of synaptic transmission that is in surprisingly good agreement with the measurements made at the synapse.

These are among the more important contributions of this thesis, but what canwe conclude from the current work? In neuroscience, it is generally difficult todraw any hard conclusions. Nevertheless, let us state some conclusions that wefeel can be drawn on the basis of the present work.


8.2 Conclusions

8.2.1 Neuron and Synapse Are Formally Related

At least for single compartment models with directional synapses, the intimaterelation between neuron and synapse can be expressed formally. The relationdistinguishes between intermediate representation and actual transformation orhigher level functional. The higher level functional is mildly constrained by therealizability of its sub-systems. A state transformation allows one to transforma particular state-space realization into another input-output equivalent formsuch as a partially specified biological form.

8.2.2 Hypotheses Lead to Synapse Models

The formal relation between neuron and synapse can be exploited. Given anappropriate neuron model, one can derive a hypothetical synapse model from ahigher level input-output hypothesis. As a consequence, the resulting synapsemodel is directly related to the neuron model both in complexity and parame-ters. For the squid giant synapse at least, this approach leads to a convincingmodel of transmission from a simple higher level hypothesis. The results are insurprisingly good agreement with measurements made at the giant synapse.

8.2.3 Stimuli Are Well-Represented

How well a stimulus or input is represented by a neuronal response depends on how well it can be recovered from it. Neuron models are invertible, but their exact inverses are not realizable. However, we can get arbitrarily close with a realization of an approximate inverse, so real-valued inputs or stimuli are well-represented.

These are among the conclusions one may draw from the present work, so where do we go from here? Many questions are still unanswered and many directions remain open; below we mention just a few.

8.3 Future Directions

8.3.1 A General Neuronal Connection Model

The Izhikevich simple model nicely captures a large behavioral repertoire of ex-citable cells. Similarly, we would like to obtain a simple yet sufficiently generalmodel for the neuronal connection, the input-output functional at the networklevel. This model should both facilitate analysis at the network level, and pro-vide an intuitive insight into function. We should also strive for computationalefficiency in network simulations.

Finding such a general model goes hand in hand with designing the rightexperiments directed towards an understanding of function. For the model to besufficiently general we may need to consider mixed reciprocal synapses, that is,bidirectional synapses with both electrical transmission through gap junctionsand chemical transmission through the use of neurotransmitters.


8.3.2 Combined Excitatory and Inhibitory Inputs

Up until now we only considered input conductances for one type of channel, either excitatory or inhibitory. In reality neurons may receive inhibitory, excitatory or shunting inputs. Adding multiple input conductances of the same type, i.e. with the same reversal potential or driving force, is straightforward. Combining inputs of different types, without losing the simple higher level interpretation, is less obvious. However, many of the mathematical tools for nonlinear single-input single-output systems carry over to the multi-input multi-output case and may prove to be useful here.

Using small excitatory-inhibitory circuits, we would like to realize basic building blocks for larger network architectures. In particular, we would like to realize basic elements such as elementary functions, differential operators, oscillators, sign-inverters, rectifiers, variable delays, and so on.

8.3.3 Robustness against Noise

Real neurons operate in a noisy environment. Thermal molecular motion in-fluences the stochastic opening and closing of ion-channels and at least someneurotransmitters seem to be released in quanta in a probabilistic fashion. It isinteresting to consider how the vectorfield and output map of a dynamic neuronmodel can be chosen to reduce such noise effects and to see how this comparesto real neurons. We already provided an example for static mappings, see ap-pendix A.3. The question is whether this idea can be generalized to includedynamic models.

As voltage fluctuations propagate along neurites, signal corruption may accumulate. Hence, we may want to take spatio-temporal activity into account and consider PDEs as we study this problem.

8.3.4 Choice and Adjustment of Parameters

Even if there exists a simple higher level interpretation that can be describedby a simple model, it may still be difficult to find the right parameters. Whatis needed is a systematic way to explore parameter space.

Neurons operate under physiological constraints. Hence, the right parame-ters may be the parameters that conserve energy or the ones that reduce noise.Thus, it may be useful to take energy conservation and noise reduction into ac-count. In addition, one may want to initiate the search for the right parametersat bifurcation values of high codimension, since these values serve as organizingcenters of several qualitatively different behaviors.

Self-adjustment of the parameters is ultimately what most scientists asso-ciate with learning and adaptation. Hence, a systematic procedure for findingthe right parameters may be related to finding models or rules for learning oradaptation, whether artificial or biologically plausible.

All in all, these contributions point the way to many new possibilities, and on this happy note, we conclude this thesis.


A Appendix

A.1 Classical Realization by Generalized State Transformation

Consider the following external differential form:

y^{(n)} = ϕ(y, ẏ, …, y^{(n−1)}, u, u̇, …, u^{(s)}) .   (A.1)

Using the generalized coordinates z = (y, ẏ, …, y^{(n−1)}, u, u̇, …, u^{(s)}) ∈ R^{n+s+1}, we express the system in generalized or extended state space:

ż_1 = z_2
⋮
ż_{n−1} = z_n
ż_n = ϕ(z)
ż_{n+1} = z_{n+2}
⋮
ż_{n+s} = z_{n+s+1}
ż_{n+s+1} = v ,   (A.2)

where the (s+1)-th order derivative v = u^{(s+1)} is treated as the input.

To find a classical state-space realization:

ẋ = f(x, u)   (A.3a)
y = h(x, u) ,   (A.3b)

i.e. a realization that does not involve derivatives of the input, we look for a partial generalized state transformation x = ψ(y, ẏ, …, y^{(n−1)}, u, u̇, …, u^{(s)}) ∈ R^n for the first n coordinates z_1, …, z_n, such that in the new coordinates the evolution of the first n coordinates no longer explicitly depends on derivatives of the input. The remaining s+1 coordinates z_{n+1}, …, z_{n+s+1}, corresponding to the input and its derivatives, are left unchanged. It is not always possible to find such a transformation.

Here we briefly review the constructive approach independently proposed byGlad (1988) and by Crouch and Lamnabhi-Lagarrigue (1988) and reviewed in[30]. Our exposition differs from the one in [30] only slightly in order to allowfor an intuitive geometric interpretation.

First Step   Suppose that the highest derivative of u appears linearly, that is, ϕ is of the form

ϕ(z) = β(z) + α(z) u^{(s)} ,   (A.4)

where α and β are independent of z_{n+s+1} = u^{(s)}. In this case, instead of v = u^{(s+1)}, we can choose the input to be v = u^{(s)}.

We can then drop the last equation from (A.2), and write:


(ż_1, …, ż_{n−1}, ż_n, ż_{n+1}, …, ż_{n+s−1}, ż_{n+s})ᵀ = f(z) + g(z) v ,   (A.5)

with

f(z) = (z_2, …, z_n, β(z), z_{n+2}, …, z_{n+s}, 0)ᵀ ,
g(z) = (0, …, 0, α(z), 0, …, 0, 1)ᵀ ,

where the nonzero entries α(z) and 1 of g(z) appear in rows n and n+s respectively.

To eliminate the highest derivative from the first n equations, we look for a new coordinate z̄_n = r(z) such that its evolution does not explicitly depend on the derivative v = u^{(s)} of u, that is, such that in:

dz̄_n/dt = (∂r/∂z) ż = (∂r/∂z)[f(z) + g(z)v] = L_f r(z) + L_g r(z) v ,

we have L_g r(z) = 0. Thus we look for a solution of the PDE²⁷

L_g r(z) = (∂r/∂z) g(z) = 〈∇r(z), g(z)〉 = α(z) ∂r/∂z_n + ∂r/∂z_{n+s} = 0 .   (A.6)

Note that a curvilinear coordinate r that satisfies this PDE has a gradient that is everywhere perpendicular to the vector field component g(z) controlled by the input v = u^{(s)}.

If a solution to the PDE exists, we choose the other coordinates z̄_i = z_i for i ≠ n, and in the new coordinates the state-space system is now of the form:

dz̄_1/dt = z̄_2
⋮
dz̄_{n−2}/dt = z̄_{n−1}
dz̄_{n−1}/dt = ϕ_1(z̄_1, …, z̄_{n+s})
dz̄_n/dt = ϕ_2(z̄_1, …, z̄_{n+s})
dz̄_{n+1}/dt = z̄_{n+2}
⋮
dz̄_{n+s}/dt = z̄_{n+s+1}
dz̄_{n+s+1}/dt = v .   (A.7)

In this system the evolution of the first n coordinates depends only on the temporal derivatives of u up to order s−1, which is one differentiation less than before. If no solution to the PDE exists, then there exists no classical state-space realization.

²⁷ Note the analogy with the condition (6.16) for finding the last n − r components of the transformation into Byrnes-Isidori normal form, for an n-dimensional system of relative degree r.


Second Step   If ϕ_1 and ϕ_2 can be put in the form

ϕ_i(z̄) = β_i(z̄) + α_i(z̄) u^{(s−1)} , for i = 1, 2 ,   (A.8)

where β_i and α_i do not depend on z̄_{n+s} = u^{(s−1)}, that is, u^{(s−1)} appears linearly, then the process can be repeated. Instead of v = u^{(s+1)} or v = u^{(s)} we choose the input to be v = u^{(s−1)}, and write:

(dz̄_1/dt, …, dz̄_{n−2}/dt, dz̄_{n−1}/dt, dz̄_n/dt, dz̄_{n+1}/dt, …, dz̄_{n+s−2}/dt, dz̄_{n+s−1}/dt)ᵀ = f(z̄) + g(z̄) v ,   (A.9)

with

f(z̄) = (z̄_2, …, z̄_{n−1}, β_1(z̄), β_2(z̄), z̄_{n+2}, …, z̄_{n+s−1}, 0)ᵀ ,
g(z̄) = (0, …, 0, α_1(z̄), α_2(z̄), 0, …, 0, 1)ᵀ ,

where α_1 and α_2 appear in rows n−1 and n, and the entry 1 in the last row.

To eliminate the explicit dependence on the highest input derivative, we look for two independent coordinates

z̃_{n−1} = r_1(z̄) , and   (A.10)
z̃_n = r_2(z̄) ,   (A.11)

that both satisfy the PDE:

L_g r(z̄) = (∂r/∂z̄) g(z̄) = α_1(z̄) ∂r/∂z̄_{n−1} + α_2(z̄) ∂r/∂z̄_n + ∂r/∂z̄_{n+s−1} = 0 .   (A.12)

If two independent solutions r_1 and r_2 can be found, then choosing the other coordinates z̃_i = z̄_i for i ≠ n, n−1 results in generalized state-space equations which only depend on derivatives of u up to order s−2. If this process can be repeated at each step, then finally we arrive at a classical state-space realization.
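As a small, self-contained illustration of the first step (our own example, not taken from the main text), consider the scalar system ẏ = −y + u u̇, so that n = 1, s = 1, β = −y and α = u in (A.4). The PDE (A.6) becomes u ∂r/∂y + ∂r/∂u = 0, which is solved along characteristics by r = y − u²/2. The following sympy sketch verifies the construction:

    import sympy as sp

    y, u, x = sp.symbols('y u x')
    # Illustrative system y' = -y + u*u', i.e. beta = -y, alpha = u in (A.4).
    beta, alpha = -y, u

    # Candidate new coordinate from the characteristics of PDE (A.6):
    r = y - u**2 / 2

    # Check L_g r = alpha*dr/dy + dr/du = 0, which is (A.6) for n = 1, s = 1.
    assert sp.simplify(alpha * sp.diff(r, y) + sp.diff(r, u)) == 0

    # Evolution of the new coordinate x = r(y, u):
    #   x' = (dr/dy)*y' + (dr/du)*u' = (-y + u*u') - u*u' = -y,
    # so the input derivative u' has dropped out. Express -y in (x, u):
    xdot = (-y).subs(y, x + u**2 / 2)
    print(xdot)  # -> -x - u**2/2

The classical realization is thus ẋ = −x − u²/2 with output map y = h(x, u) = x + u²/2, which involves u but no derivatives of u.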

A.2 Brief Review of Abstract Neural Networks

A.2.1 Artificial Neural Networks

The Multilayer Feedforward Perceptron   A popular level of abstraction in applications is that of the so-called artificial neural networks (ANNs). One of the best known and most widely used artificial neural networks is the multilayer perceptron. It has a layer of input units, one or more layers of nonlinear hidden units, and a layer of linear output units. The output of each layer serves as the input for the next; there are no feedback connections, i.e. it has a strictly feedforward structure.


Figure 36: A general artificial neuron: external inputs u_i and outputs y_j from other neurons are weighted and summed together with a constant ρ_i. The result v_i is passed to a single-input single-output linear system. The filtered signal x_i serves as the input for the final static nonlinear map ϕ, resulting in the output y_i; after [22].

The output y of a hidden unit is given by

y = ϕ( Σ_j b_j u_j − ρ ) = ϕ(b · u − ρ) ,

where u is an input vector, ϕ is a nonlinear (activation) function, b is a weight vector and ρ is a threshold. It is based on the McCulloch-Pitts (1943) 'all-or-none' abstraction of a real neuron, which explicitly assumes no dependence on past output and has a step activation function ϕ with a discontinuous jump at the threshold.

For the perceptron usually a differentiable activation function such as the logistic function:

ϕ(z) = 1/(1 + e^{−z})   (A.13)

is used. To relate this back to biology, it is then often interpreted as the firing rate or firing probability of a neuron or population of neurons. However, the main reason for choosing a differentiable function is the error backpropagation learning algorithm for the adjustment of the parameters.

A multilayer perceptron with one hidden layer may be written in the form:

y1(t) = Φ(B u(t) − ρ)
y2(t) = A y1(t) ,

where B is the weight matrix from the input vector u to the activation vector y1 of the hidden units, Φ is a vector-valued activation function, and A is the weight matrix from the hidden activations y1 to the output vector y2. The properties of such feedforward neural networks are well documented. For instance, the multilayer perceptron with one hidden layer is known to have the capacity to approximate functions to any degree of accuracy, see [3], [21] and [13].
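To make the forward map concrete, the following NumPy sketch implements a one-hidden-layer perceptron; the layer sizes and the random weights are illustrative placeholders, not values from the text.

    import numpy as np

    rng = np.random.default_rng(0)

    def logistic(z):
        # Logistic activation (A.13), applied elementwise.
        return 1.0 / (1.0 + np.exp(-z))

    # Illustrative sizes: 3 inputs, 5 hidden units, 2 linear output units.
    B   = rng.normal(size=(5, 3))   # input-to-hidden weight matrix
    rho = rng.normal(size=5)        # hidden thresholds
    A   = rng.normal(size=(2, 5))   # hidden-to-output weight matrix

    def mlp(u):
        y1 = logistic(B @ u - rho)  # hidden activations: Phi(B u - rho)
        y2 = A @ y1                 # linear output layer
        return y2

    print(mlp(np.array([0.1, -0.4, 0.7])))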


Recurrent Hopfield-Type Networks   Another popular abstract neural network is the additive neural network or continuous-time Hopfield-type network:

ż_i = −z_i + ϕ( u_i + Σ_j a_{ij} z_j )   (A.14)

where z_i represents the activity of the i-th neuron, the parameters a_{ij} represent the synaptic connection strengths, u_i represents the external input to the i-th neuron from other sources such as receptors or other brain areas, and ϕ is the logistic activation function.

An alternative form for a continuous-time recurrent neural network is given by:

ẋ_i = −x_i + u_i + Σ_j a_{ij} ϕ(x_j) .

For constant input u this form can be obtained from (A.14) by setting:

x_i = u_i + Σ_j a_{ij} z_j ,   (A.15)

see [20]. The latter form corresponds to the neuron of figure 36.
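A forward-Euler simulation of (A.14) is straightforward; the sketch below integrates an illustrative 4-neuron network with small random connection strengths and a constant external input.

    import numpy as np

    rng = np.random.default_rng(1)

    def logistic(z):
        return 1.0 / (1.0 + np.exp(-z))

    n = 4
    a = rng.normal(scale=0.5, size=(n, n))   # connection strengths a_ij
    u = rng.normal(size=n)                   # constant external inputs u_i
    z = np.zeros(n)                          # initial activities
    dt = 0.01                                # Euler time step

    # Forward Euler for (A.14): z_i' = -z_i + phi(u_i + sum_j a_ij z_j).
    for _ in range(5000):
        z += dt * (-z + logistic(u + a @ z))

    print(z)  # activities after 50 time units (settled for these weights)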

A.2.2 Spiking Networks

In theoretical neuroscience action potentials are usually treated as a sequenceof stereotypical events. Even though action potentials or spikes do vary induration, amplitude and shape, such a sequence is often represented by its spiketimes tj alone [4]. Hence, the spike is often thought of as the elementary unitof neuronal signal transmission [14] or as the basic element of a neural alphabet[10].

The synaptic current in response to a spike is typically modeled as a function h(t) with a characteristic time-course, such as the exponential function, the alpha function, or the double exponential. The current in response to a spike sequence is then usually expressed by a linear sum

I(t) = Σ_j h(t − t_j) .

Since the focus is on the spike, a natural question arises: "What neural code is used by neurons in the nervous system to encode a stimulus or input in a series of spike times t_j?" It will come as no surprise that this neural code is the topic of vigorous debate in neuroscience. Is it a rate code or a temporal code? Note that by focusing on spikes one excludes neurons with graded potentials such as amacrine cells.

Even when using the same stimulus repeatedly, spike times can vary fromtrial to trial [4]. Hence, the input signal or stimulus and the spike timingresponse are typically treated stochastically and the field is heavily dominated bystatistical approaches. For the generation of a sequence of spike times differentmodeling choices are possible.


For the neural code in large scale model networks a popular choice is thepopulation rate code. We represent the spike sequence by a sum of Dirac deltafunctions

ρ(t) = Σ_j δ(t − t_j) ,

called the neural response function. Then, using a property of convolutions, the spike-triggered synaptic current I at a single synapse may be written as:

I(t) = Σ_j h(t − t_j) = ∫_{−∞}^{t} h(t − s) ρ(s) ds .

The neural response function ρ is then replaced by a time-dependent population firing rate as follows.

If, for a neuron k, we count the number of spikes in a time interval ∆t, and take the average spike-count of N identical neurons in a population of neurons, then the population rate may be written as:

u(t) = (1/(N∆t)) Σ_k ∫_t^{t+∆t} ρ_k(s) ds ,

where each integral counts the number of spikes of neuron k in the interval ∆t. The smaller we take ∆t, the larger N needs to be, see [14]. The total time-averaged synaptic current due to a population of identical neurons is then taken to be:

I(t) = ∫_{−∞}^{t} h(t − s) u(s) ds .

By the variation of constants formula, a particular choice for the synaptic kernel:

h(t) = C e^{At} B ,

allows one to write down state-space equations for I(t):

ẋ = A x + B u
I = C x .

The firing rate y(t) is assumed to follow:

τ ẏ = −y + ϕ(I) ,

where ϕ is the steady state firing rate function of the input current, also calledthe activation function. In this approach synaptic currents from several differentneuron populations sum linearly, see [4] and [10].
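The sketch below simulates this population firing-rate model for a single population with an exponential synaptic kernel h(t) = (1/t_s) e^{−t/t_s}, i.e. A = −1/t_s, B = 1, C = 1/t_s in the state-space form above; all parameter values and the input pulse are illustrative choices, not taken from the text.

    import numpy as np

    t_s, tau, dt = 5.0, 10.0, 0.1     # synaptic and rate time constants (ms)

    def phi(I):
        # Steady-state firing-rate (activation) function; logistic example.
        return 1.0 / (1.0 + np.exp(-I))

    x, y = 0.0, 0.0                   # kernel state and firing rate
    for k in range(2000):
        u = 1.0 if 50.0 <= k * dt < 150.0 else 0.0   # input rate pulse
        I = x / t_s                   # I = C x
        x += dt * (-x / t_s + u)      # x' = A x + B u
        y += dt / tau * (-y + phi(I)) # tau y' = -y + phi(I)

    print(y)  # firing rate at the end of the simulation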


A.3 Noise Considerations

Suppose that neuronal signal transmission has developed a way to overcomenoise problems, say because excitable membranes and synaptic mechanismshave co-evolved over many millions of years. (The first multicellular animalsor metazoans appeared some 600 million years ago [43].) How, in our proposedframework, can noise be reduced?

We consider a few simple examples only and leave the more general problemto future research. We ignore dynamics at first and start with a static map.

Example A.1 (Noise Reduction). Suppose that a signal u(t) is transformed by a map y = ϕ(u). Suppose further that the resulting signal y(t) is corrupted by noise δ(t), either additively, yδ = y + δ, or multiplicatively, yδ = δy. We transform the signal back using the inverse map ū = ϕ⁻¹(yδ).

Figure 37: Signal encoding and decoding.

We now want to choose a map ϕ such that the effects of the disturbance δare minimal or negligible. Let us first consider the additive case.

Additive Case: In this case we can write:

ū = ϕ⁻¹{ϕ(u) + δ} ,   (A.16)

where for δ = 0 we have ū = u. To reduce the effect of the disturbance δ on ū we require its first-order partial derivative to be small:

0 < ∂ū/∂δ = 1/ϕ′(u) = 1/ϕ′(ϕ⁻¹{ϕ(u) + δ}) = ε ≪ 1   (A.17)

around δ = 0. This results in the condition ϕ′(u) = 1/ε, which is satisfied in particular by y = ϕ(u) = (1/ε)u with u = ϕ⁻¹(y) = εy, thus:

ū = ϕ⁻¹(yδ) = ε(y + δ)   (A.18)
  = u + εδ .   (A.19)

Note that choosing ε small results in a steep derivative ϕ′(u) = 1/ε for the 'encoding' map ϕ, and a low 'graded' derivative for the 'decoding' map ϕ⁻¹. Now let us consider the multiplicative case.

Multiplicative Case: In this case we can write:

ū = ϕ⁻¹{δ ϕ(u)} ,   (A.20)


where for δ = 1 we have ū = u. To reduce the effect of the disturbance δ on ū we again require its first-order partial derivative to be small:

∂ū/∂δ = (1/ϕ′(u)) ϕ(u) = (1/ϕ′(ϕ⁻¹{δ ϕ(u)})) ϕ(u) = ε   (A.21)

around δ = 1. This results in the differential equation ε ϕ′(u) = ϕ(u), which is satisfied in particular by y = ϕ(u) = exp(u/ε) with ϕ⁻¹(y) = ε ln(y), thus:

ū = ϕ⁻¹(yδ) = ε ln(δ y)   (A.22)
  = ε ln(δ exp(u/ε))   (A.23)
  = u + ε ln(δ) .   (A.24)

Note that in this case some noise δ ≈ 1 is preferred. Choosing ε small results in extreme expansion, ϕ′(u) = (1/ε) exp(u/ε) > 1, of the signal for u > ε ln(ε).

We may expect similar or better arguments to be available in the literature on communication theory.
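A quick numerical check of both cases (a sketch; the noise levels and ε are arbitrary) confirms that the decoded signal differs from u by εδ and ε ln(δ) respectively:

    import numpy as np

    rng = np.random.default_rng(2)
    eps = 0.05
    u = rng.uniform(0.2, 1.0, size=1000)             # clean signal samples

    # Additive case: encode with phi(u) = u/eps, decode with eps*y.
    delta = rng.normal(scale=0.1, size=u.size)       # additive noise
    u_hat = eps * (u / eps + delta)                  # = u + eps*delta, (A.18)-(A.19)
    print(np.max(np.abs(u_hat - u)))                 # error shrinks with eps

    # Multiplicative case: encode with exp(u/eps), decode with eps*ln(y).
    delta = 1.0 + rng.normal(scale=0.05, size=u.size)  # noise near delta = 1
    u_hat = eps * np.log(delta * np.exp(u / eps))    # = u + eps*ln(delta), (A.22)-(A.24)
    print(np.max(np.abs(u_hat - u)))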

It is interesting to consider how the vectorfield and output map of a dynamicneuron model can be chosen to reduce noise effects and to see how this comparesto real neurons. For now, and motivated by the steep derivative and extremeexpansion of the ‘encoding’ maps in these examples, we assume that positivefeedback, divergence of orbits, and sensitive dependence on both input andinitial conditions, are desirable properties for the observable output states ofa transmitting or encoding device Σ1, but not for the states of a receiver ordecoder Σ2. We desire the internal dynamics of such inverse decoding devicesto be exponentially stable. We may associate these with the excitable propertiesof axons and the passive electrotonic properties of dendrites respectively.

This may provide an interesting insight into spike-timing variability. Without a good model of the spike generation process, trial-to-trial spike-timing variability seems unpredictable. However, if positive feedback, divergence, and sensitive dependence are indeed desirable for a transmitting device, then perhaps spike-timing variability is not as unpredictable to the receiving cell, which we assume has co-evolved an implicit inverse model of excitability.

Up until now we only considered excitable neuron models and, in that context, we only considered signal paths from presynaptic conductance to postsynaptic conductance. Let us now consider neurons without impulses. Given the foregoing discussion on noise reduction, what would the appropriate signal path be?

Example A.2 (Graded Neuron). Suppose that a graded neuron without im-pulses can be accurately modeled by the system

Σ2 : { ẋ = −x + u2(p − x) ,  y = x } ,


where y = x is the membrane potential and u2 ≥ 0 is a transmitter-induced conductance input with driving force p − x and Nernst equilibrium potential p. The input is excitatory for p > 0, inhibitory for p < 0, and shunting for p = 0. This system has relative degree 1 for p ≠ x, that is, the output y needs to be differentiated only once with respect to time for the input u2 to appear explicitly.

This system is exponentially stable. For constant input u2 the output converges to y = x = u2 p/(1 + u2). There is no positive feedback and orbits will not diverge. Hence, given the foregoing discussion on noise reduction, this system would be more like a receiving device.

We now want to find the appropriate transmitting device, that is, a synapse model Σ1 for converting a presynaptic potential u into a postsynaptic conductance y1 = u2. Hence, compared to previous examples, the neuron model and synapse model have changed places and we now consider a complete signal path from presynaptic potential u to postsynaptic potential y, see figure 38. We assume that the two cells are primarily involved in accurate information transmission.

Figure 38: A complete signal path from potential to potential. The presynapticpotential u is mapped to a postsynaptic potential y. The postsynaptic conduc-tance y1 = u2 is an intermediate representation.

Consider the cascade, ΣH = Σ2 ◦ Σ1, depicted in the diagram below.

presyn. potential ──Σ1──▶ postsyn. conductance ──Σ2──▶ postsyn. potential

with ΣH denoting the composite map directly from presynaptic potential to postsynaptic potential.

We want ΣH to realize an approximate identity mapping. Note that, since Σ2 already has relative degree 1, the full cascade ΣH has a relative degree of at least 1 too. Thus, let us consider the input-output system:

ΣH : ε ẏ = u − y ,


which has relative degree 1 and allows the output y to follow sufficiently well-behaved continuous inputs u arbitrarily closely by taking ε > 0 small enough.

We are looking for Σ1 = Σ2⁻¹ ∘ ΣH. By rearranging the neuron equations Σ2, the inverse system is seen to be:

Σ2⁻¹ : u2 = (ẏ + y)/(p − y) .

Substituting y1 for u2 and using a copy of ΣH with an internal state variable z to substitute for y, we arrive at:

Σ1 : { ε ż = u − z ,  y1 = ((1/ε)(u − z) + z)/(p − z) } .

In practice we use the smoothest signals as representations of stimuli andresults of computations. These should also be easier to relate to the smoothlimb movements that constitute the outward behavior of an animal.
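For completeness, here is a forward-Euler sketch of the full path Σ2 ∘ Σ1 of example A.2. The values of ε and p and the slowly varying input are illustrative choices; the input is kept slow so that the intermediate conductance y1 remains nonnegative.

    import numpy as np

    dt, eps, p = 1e-4, 0.02, 1.0
    z, x = 0.0, 0.0                             # states of Sigma_1 and Sigma_2
    for k in range(int(3.0 / dt)):
        t = k * dt
        u = 0.5 + 0.05 * np.sin(2 * np.pi * t)  # slow presynaptic potential
        zdot = (u - z) / eps                    # copy of Sigma_H inside Sigma_1
        y1 = (zdot + z) / (p - z)               # postsynaptic conductance u2
        z += dt * zdot
        x += dt * (-x + y1 * (p - x))           # graded neuron Sigma_2
    print(abs(x - u))  # ~ eps*|u'|: the postsynaptic potential tracks u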

A.4 Euler Implementation

Consider again the general form, (7.7) on page 89, given by the differential equations

ξ̇ = f(ξ, η) + g(ξ, η) u
η̇ = q(ξ, η)
        if s(ξ, η) < 0 ,

and by the reset map

(ξ, η) ↦ (R(ξ, η), Q(ξ, η))   if s(ξ, η) ≥ 0 .

Let the output be given by y = ξ.

A.4.1 Implementation of the Original System

We will implement the original system using a fixed-step first-order Euler method:

ξ_{t+τ} = ξ_t + τ [f(ξ_t, η_t) + g(ξ_t, η_t) u_t]
η_{t+τ} = η_t + τ q(ξ_t, η_t)
y⁻_{t+τ} = ξ_{t+τ}   or   y⁻_{t+τ} = (ξ_{t+τ}, η_{t+τ}) .

If s(ξ_{t+τ}, η_{t+τ}) ≥ 0, then the state is reset according to the reset map:

(ξ_{t+τ}, η_{t+τ}) ↦ (R(ξ_{t+τ}, η_{t+τ}), Q(ξ_{t+τ}, η_{t+τ}))


and some or all of the new state values are passed on to y⁺ according to:

y⁺_{t+τ} = ξ_{t+τ}   or   y⁺_{t+τ} = (ξ_{t+τ}, η_{t+τ}) ;

else, if the state has not been reset, the pre- and post-reset outputs are equal:

y⁺_{t+τ} = y⁻_{t+τ} .

The output is given by:

y_t = (y⁻_t, y⁺_t) .   (A.25)

The time-step τ is assumed to be sufficiently small. Note that y⁻_t and y⁺_t can only differ if a reset has occurred.²⁸

A.4.2 Implementation of the Inverse

System with Stable Internal Dynamics   If only the ξ-component of the state is passed to the output, then the numerical inverse is given by:

ξ_{t+τ} = y⁺_{t+τ}   (A.26a)
η_{t+τ} = η_t + τ q(ξ_t, η_t)   (A.26b)
u_t = (1/g(ξ_t, η_t)) [ (y⁻_{t+τ} − ξ_t)/τ − f(ξ_t, η_t) ] .   (A.26c)

If s(y⁻_{t+τ}, η_{t+τ}) ≥ 0, then

η_{t+τ} ← Q(y⁻_{t+τ}, η_{t+τ}) .

Obviously, since y_{t+τ} is only available at time t+τ, the signal u_t will be delayed by one time-step τ compared to the original signal u.

System with Fully Available State   If the state is fully available, that is, both the ξ-component and the η-component are passed to the output, then the inverse is given by:

(ξ_{t+τ}, η_{t+τ}) = y⁺_{t+τ}   (A.27a)
u_t = (1/g(ξ_t, η_t)) [ (ξ⁻_{t+τ} − ξ_t)/τ − f(ξ_t, η_t) ] ,   (A.27b)

where ξ⁻_{t+τ} is the ξ-component of y⁻_{t+τ}.

²⁸ It may be convenient to only consider reset maps that have no fixed points on the reset boundary, i.e. for which y⁻ and y⁺ = (R(ξ, η), Q(ξ, η)) are never equal at reset.


Agreement with Continuous-Time Case   In order for these numerical inverses to coincide with the continuous-time case, they still need to be followed by the low-pass filter L_{ε,1} given by equation (7.11) on page 90. Its numerical version is given by:

ζ_{t+τ} = ζ_t + (τ/ε)[u_t − ζ_t]   (A.28)

with output:

ū_t = ζ_t .   (A.29)

Example A.3 (Izhikevich Simple Model). For the Izhikevich simple model given by:

ξ̇ = ξ² − η + u        if ξ ≥ 1, then
η̇ = a(bξ − η)         (ξ, η) ← (c, η + d)
y = ξ ,

we have:

f(ξ, η) = ξ² − η
g(ξ, η) = 1
q(ξ, η) = a(bξ − η)
s(ξ, η) = ξ − 1
R(ξ, η) = c
Q(ξ, η) = η + d .

The resulting numerical implementation was used to generate figure 28.
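For reference, here is one way to code the Euler scheme of A.4.1 together with the numerical inverse (A.26) for this model; the parameter values and the constant input are illustrative, and the last line checks that the inverse recovers u up to floating-point rounding (and the one-step delay noted above).

    import numpy as np

    a, b, c, d, tau = 0.02, 0.2, -0.5, 0.2, 1e-3   # illustrative parameters
    N = 20000
    u = 0.3 * np.ones(N)                            # constant input

    f = lambda xi, eta: xi**2 - eta                 # g = 1 for this model
    q = lambda xi, eta: a * (b * xi - eta)

    # Forward pass: record pre-reset and post-reset outputs.
    xi, eta = -0.5, -0.1
    y_minus, y_plus = np.zeros(N), np.zeros(N)
    for t in range(N - 1):
        xi_new = xi + tau * (f(xi, eta) + u[t])
        eta_new = eta + tau * q(xi, eta)
        y_minus[t + 1] = xi_new
        if xi_new >= 1.0:                           # s(xi, eta) >= 0: reset
            xi_new, eta_new = c, eta_new + d        # (R, Q) = (c, eta + d)
        y_plus[t + 1] = xi_new
        xi, eta = xi_new, eta_new

    # Inverse pass (A.26): recover u from (y_minus, y_plus).
    xi_i, eta_i = -0.5, -0.1                        # same initial state
    u_rec = np.zeros(N)
    for t in range(N - 1):
        u_rec[t] = (y_minus[t + 1] - xi_i) / tau - f(xi_i, eta_i)  # (A.26c)
        eta_new = eta_i + tau * q(xi_i, eta_i)                     # (A.26b)
        if y_minus[t + 1] >= 1.0:
            eta_new = eta_new + d                   # eta <- Q(y-, eta)
        xi_i, eta_i = y_plus[t + 1], eta_new        # (A.26a)

    print(np.max(np.abs(u_rec[:-1] - u[:-1])))      # small: u is recovered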


References

[1] GJ Augustine, MP Charlton, and SJ Smith (1985) Calcium Entry and Transmitter Release at Voltage-Clamped Nerve Terminals of Squid, J. Physiol. 367:163-181.

[2] M di Bernardo, CJ Budd, AR Champneys and P Kowalczyk (2008)Piecewise-Smooth Dynamical Systems: Theory and Applications, Springer-Verlag, London.

[3] G Cybenko (1989) Approximation by Superpositions of a Sigmoidal Func-tion, Mathematics of Control, Signals, and Systems (MCSS), 2:303-314.

[4] P Dayan and LF Abbott (2001) Theoretical Neuroscience: Computationaland Mathematical Modeling of Neural Systems, MIT Press, Cambridge,Massachusetts.

[5] E Delaleau and W Respondek (1995) Lowering the Orders of Derivatives of Controls in Generalized State Space Systems, Journal of Mathematical Systems, Estimation, and Control, 5:1-27.

[6] A Destexhe, ZF Mainen and TJ Sejnowski (1994) Synthesis of Models forExcitable Membranes, Synaptic Transmission and Neuromodulation Usinga Common Kinetic Formalism, Journal of Computational Neuroscience,1:195-230.

[7] A Destexhe, ZF Mainen and TJ Sejnowski (2003) Synaptic interactions,in MA Arbib (ed.) The Handbook of Brain Theory and Neural Networks,Second Edition, MIT Press, Cambridge Massachusetts.

[8] K Doya (1999) What are the computations of the cerebellum, the basalganglia and the cerebral cortex?, Neural Networks, 12:961-974.

[9] Y Dunant (1986) On the Mechanism of Acetylcholine Release, Progress inNeurobiology 26:55-92.

[10] C Eliasmith and CH Anderson (2003) Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems, MIT Press, Cambridge, Massachusetts.

[11] B Ermentrout (1998) Neural Networks as Spatio-Temporal Pattern-Forming Systems, Rep. Prog. Phys. 61:353-430.

[12] R FitzHugh (1961) Impulses and Physiological States in Theoretical Modelsof Nerve Membrane, Biophysical Journal, 1:445-466.

[13] KI Funahashi (1989) On the Approximate Realization of Continuous Mappings by Neural Networks, Neural Networks, 2:183-192.

[14] W Gerstner and WM Kistler (2002) Spiking Neuron Models: Single Neu-rons, Populations, Plasticity, Cambridge University Press, UK.


[15] JK Hale and H Kocak (1991) Dynamics and Bifurcations, Springer-Verlag,New York.

[16] MA Henson and DE Seborg (ed.) (1997) Nonlinear Process Control, Prentice Hall, Englewood Cliffs, New Jersey.

[17] B Hille (1992) Ionic Channels of Excitable Membranes, Second Edition, Sinauer Associates, Sunderland, Massachusetts.

[18] AL Hodgkin and AF Huxley (1952) A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve, J. Physiol. 117:500-544.

[19] ND Holland (2003) Early Central Nervous System Evolution: An Era ofSkin Brains?, Nature Reviews Neuroscience, 4:617-627.

[20] FC Hoppensteadt and EM Izhikevich (1997) Weakly Connected Neural Networks, Springer-Verlag, New York.

[21] K Hornik, M Stinchcombe and H White (1989) Multilayer FeedforwardNetworks are Universal Approximators, Neural Networks, 2:359-366.

[22] KJ Hunt, D Sbarbaro, R Zbikowski and PJ Gawthrop (1992) Neural Networks for Control Systems - A Survey, Automatica, 28:1083-1112.

[23] A Isidori (1995) Nonlinear Control Systems, Third Edition, Springer-Verlag, London.

[24] EM Izhikevich (2003) Simple Model of Spiking Neurons, IEEE Transactions on Neural Networks, 14:1569-1572.

[25] EM Izhikevich (2004) Which model to use for cortical spiking neurons?,IEEE Transactions on Neural Networks, 15:1063-1070.

[26] EM Izhikevich (2007) Dynamical Systems in Neuroscience: The Geometryof Excitability and Bursting, MIT Press, Cambridge, Massachusetts.

[27] D Johnston and SMS Wu (1995) Foundations of Cellular Neurophysiology,MIT Press, Cambridge, Massachusetts.

[28] C Kambhampati, S Manchanda, A Delgado, GGR Green, K Warwick andMT Tham (1996) The Relative Order and Inverses of Recurrent Networks,Automatica, 32:117-123.

[29] TB Kepler, LF Abbott and E Marder (1992) Reduction of Conductance-Based Neuron Models, Biological Cybernetics, 66:381-387.

[30] U Kotta and T Mullari (2003) Realization of Nonlinear Systems Described by Input/Output Differential Equations: Equivalence of Different Methods, Proc. of the European Control Conference (ECC), Cambridge, UK.


[31] VY Kreinovich (1991) Arbitrary Nonlinearity Is Sufficient to Represent AllFunctions by Neural Networks: A Theorem, Neural Networks 4:381-383.

[32] YA Kuznetsov (2004) Elements of Applied Bifurcation Theory, Third Edi-tion, Springer-Verlag, New York.

[33] K Langley and NJ Grant (1997) Are Exocytosis Mechanisms Neurotrans-mitter Specific?, Neurochemistry International, 31:739-757.

[34] DS Melkonian (1993) Transient analysis of chemical synaptic transmission,Biological Cybernetics, 68:341-350.

[35] WJH Nauta and M Feirtag (1986) Fundamental Neuroanatomy, WH Free-man and Company, New York.

[36] MG Paulin (2004) Book Review - Neural Engineering:Computation, Rep-resentation and Dynamics in Neurobiological Systems, Neural Networks,17:461-463.

[37] AS Poznyak, EN Sanchez and W Yu (2001) Differential Neural Networks forRobust Nonlinear Control: Identification, State Estimation and TrajectoryTracking, World Scientific, Singapore.

[38] H Reichert (1992) Introduction to Neurobiology, George Thieme Verlag,Stuttgard, Germany.

[39] S Sastry (1999) Nonlinear Systems: Analysis, Stability, and Control,Springer, New York.

[40] A Scott (2002) Neuroscience: A Mathematical Primer, Springer-Verlag,New York.

[41] AM Shaw, FJ Doyle and JS Schwaber (1997) A Dynamic Neural NetworkApproach to Nonlinear Process Modeling, Computers and Chemical Engi-neering 21:371-385.

[42] H Sira-Ramirez (1989) Sliding regimes in general non-linear systems: arelative degree approach, Int. J. Control, 50:1487-1506.

[43] LW Swanson (2003) Brain Architecture, Oxford University Press, NewYork.

[44] L Tauc (1997) Quantal Neurotransmitter Release: Vesicular or Not Vesic-ular?, Neurophysiology, 29:219-226.

[45] J Touboul (2008) Bifurcation Analysis of a General Class of Nonlinear Integrate-And-Fire Neurons, SIAM J. Appl. Math. 68:1045-1079.

[46] MV Tsodyks and H Markram (1997) The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability, Proceedings of the National Academy of Sciences of the USA, 94:719-723.


[47] J Vautrin (1994) Vesicular or Quantal and Subquantal Transmitter Release,Physiology, 9:59-64.

[48] F Vyskocil, AI Malomouzh, EE Nikolsky (2009) Non-Quantal AcetylcholineRelease at the Neuromuscular Junction, Physiol. Res. 58:763-784.
