
PDE models of neural networks

Benoît Perthame

Introduction

Electrically active cells are characterized by an action potential:

• Hodgkin-Huxley

• FitzHugh-Nagumo

• Morris-Lecar

• Izhikevich

• Mitchell-Schaeffer
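For context, a minimal sketch of one of these models, FitzHugh-Nagumo, in its standard two-variable form (the parameter values are illustrative assumptions):

import numpy as np

# Forward-Euler integration of the FitzHugh-Nagumo action-potential model:
#   dv/dt = v - v^3/3 - w + I,   dw/dt = eps * (v + a - b*w).
a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, T = 1e-3, 100.0
v, w = -1.0, 1.0
trace = []
for _ in range(int(T / dt)):
    v += dt * (v - v**3 / 3 - w + I)
    w += dt * eps * (v + a - b * w)
    trace.append(v)
# For this input I the trajectory settles into the periodic spiking
# (action potential) regime illustrated in the figure below.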

Introduction

[Figure: solutions to the Hodgkin-Huxley model and to the FitzHugh-Nagumo model]

These models are accurate, but very expensive/difficult to use for large assemblies of neurons.

Introduction

The Wilson-Cowan model (1972) describes the firing rate N(x,t) of neuron assemblies located at position x through an integral equation

\frac{d}{dt} N(x,t) = -N(x,t) + \int w(x,y)\,\sigma\big(N(y,t)\big)\,dy + s(x,t)

Feature: multiple steady states and bifurcation theory (Bressloff-Golubitsky, Chossat-Faugeras)

• σ(·) = sigmoid

• w(x,y) = connectivity kernel (the matrix w_{ij} in discrete form)

• s = source


Aim: large-scale brain activity, visual hallucinations (Klüver, Oster, Siegel...)
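As a concrete illustration, a minimal numerical sketch of the Wilson-Cowan equation on a periodic 1D domain; the "Mexican hat" kernel, the logistic sigmoid and every parameter value are illustrative assumptions, not taken from the talk.

import numpy as np

# Forward-Euler sketch of the Wilson-Cowan rate equation
#   d/dt N(x,t) = -N(x,t) + int w(x,y) sigma(N(y,t)) dy + s(x,t)
L, M = 2 * np.pi, 200                      # domain length, grid points
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M

def sigma(u):                              # logistic sigmoid nonlinearity
    return 1.0 / (1.0 + np.exp(-10.0 * (u - 0.5)))

# "Mexican hat" connectivity: local excitation, lateral inhibition
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)                   # periodic distance
w = 5.0 * np.exp(-d**2 / 0.1) - 2.5 * np.exp(-d**2 / 1.0)

N = 0.1 * np.random.default_rng(0).random(M)   # random initial rates
s, dt, T = 0.05, 1e-3, 10.0                # input, time step, horizon
for _ in range(int(T / dt)):
    N += dt * (-N + (w @ sigma(N)) * dx + s)

Depending on the kernel amplitudes and the input s, the long-time profile N comes out flat or patterned: the numerical face of the multiple steady states and bifurcations mentioned above.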

OUTLINE OF THE LECTURE

I. Principle of Noisy Integrate and Fire model

II. The nonlinear Noisy Integrate and Fire model

III. The elapsed time approach

Leaky Integrate and Fire

The Leaky Integrate & Fire model is simpler

dV(t) = \big(-V(t) + I(t)\big)\,dt + \sigma\, dW(t), \quad V(t) < V_{Firing},

V(t^-) = V_{Firing} \implies V(t^+) = V_{reset}.

The idea was introduced by L. Lapicque (1907).

• I(t) input current

• With or without noise

• Stochastic firing
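A minimal Euler-Maruyama sketch of this stochastic model (a possible implementation; the constant input current and all numerical values are illustrative assumptions):

import numpy as np

# Euler-Maruyama simulation of the noisy Leaky Integrate & Fire neuron:
#   dV = (-V + I) dt + sigma dW,  reset V -> V_reset when V reaches V_firing.
rng = np.random.default_rng(0)
dt, T = 1e-4, 10.0
V_firing, V_reset = 2.0, 1.0
I, sigma = 1.5, 0.5                        # input current, noise level

V, spikes = 0.0, []
for k in range(int(T / dt)):
    V += (-V + I) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if V >= V_firing:                      # threshold crossed: spike and reset
        spikes.append(k * dt)
        V = V_reset

print(f"{len(spikes)} spikes, mean rate {len(spikes) / T:.2f} per unit time")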

Leaky Integrate and Fire

[Figure: solution to the LIF model]

• N. Brunel, V. Hakim, W. Gerstner and W. Kistler...

• Fit to measurements

• Qualitatively explains observations of brain activity

Leaky Integrate and Fire

Written in terms of PDEs: the probability n(v,t) of finding a neuron at the potential v satisfies

\frac{\partial n(v,t)}{\partial t} + \frac{\partial}{\partial v}\overbrace{\Big[\big(-v + I(t)\big)\, n(v,t)\Big]}^{\text{leak + external currents}} - \overbrace{a\,\frac{\partial^2 n(v,t)}{\partial v^2}}^{\text{noise}} = \overbrace{\delta(v - V_R)\, N(t)}^{\text{neurons reset}}, \quad v \le V_F,

n(V_F, t) = 0, \quad n(-\infty, t) = 0,

N(t) := -a\,\frac{\partial n(V_F,t)}{\partial v} \ge 0 \quad \text{(the total flux of neurons firing at } V_F\text{)}.

N(t) is also a Lagrange multiplier for the constraint \int_{-\infty}^{V_F} n(v,t)\,dv = 1.
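One way to discretize this boundary-value problem, shown as a sketch: an explicit finite-volume scheme with upwinded drift, centered diffusion, the absorbing condition at V_F, and the firing flux re-injected at V_R; the truncation at V_min and all numerical values are illustrative assumptions.

import numpy as np

# Explicit finite-volume sketch of the Fokker-Planck LIF equation
# (linear case, given input current I), domain truncated at V_min.
V_min, V_F, V_R = -4.0, 2.0, 1.0
a, I = 0.2, 0.0
M = 400
dv = (V_F - V_min) / M
vi = V_min + np.arange(M + 1) * dv          # cell interfaces
jR = int((V_R - V_min) / dv)                # cell containing V_R

n = np.exp(-(V_min + (np.arange(M) + 0.5) * dv + 1.0) ** 2 / 0.1)
n /= n.sum() * dv                           # initial data of mass 1

dt, T = 1e-4, 5.0
for _ in range(int(T / dt)):
    u = -vi + I                             # drift at the interfaces
    F = np.zeros(M + 1)                     # F[0] = 0: no flux at V_min
    up = np.where(u[1:M] > 0, n[:-1], n[1:])    # upwind values
    F[1:M] = u[1:M] * up - a * (n[1:] - n[:-1]) / dv
    F[M] = a * n[-1] / dv                   # firing flux N(t); uses n(V_F) = 0
    n -= dt / dv * (F[1:] - F[:-1])
    n[jR] += dt * F[M] / dv                 # reset: re-inject fired neurons

The scheme conserves the total mass exactly (the outgoing flux F[M] is re-injected at V_R), and n relaxes to the steady state P(v), in line with the desynchronization result recalled below.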

Leaky Integrate and Fire


Properties (M. Cáceres, J. Carrillo, BP) The solutions satisfy

• n \ge 0, \quad \int_{-\infty}^{V_F} n(v,t)\,dv = 1,

• for I(t) \equiv 0, n(v,t) \longrightarrow_{t \to \infty} P(v), the unique steady state of mass 1 (desynchronization),

• the convergence rate is exponential.

Leaky Integrate and Fire

The proof uses

• the Relative Entropy:

\frac{d}{dt}\int_{-\infty}^{V_F} P(v)\, H\!\Big(\frac{n(v,t)}{P(v)}\Big)\,dv \le 0, \quad \text{for } H(\cdot) \text{ convex},

• a Hardy/Poincaré inequality (Ledoux, Barthe and Roberto):

\int_{-\infty}^{V_F} P(v)\,|u(v)|^2\,dv \le C \int_{-\infty}^{V_F} P(v)\,|\nabla u(v)|^2\,dv,

for all u with \int_{-\infty}^{V_F} P(v)\,u(v)\,dv = 0 [notice P(V_F) = 0].
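As a sketch of how the two ingredients combine, take the quadratic entropy H(u) = (u-1)^2 and leave aside the additional nonpositive contributions coming from the firing and reset terms:

\frac{d}{dt}\int_{-\infty}^{V_F} P\,\Big(\frac{n}{P}-1\Big)^{2} dv \;\le\; -2a\int_{-\infty}^{V_F} P\,\Big|\partial_v\Big(\frac{n}{P}\Big)\Big|^{2} dv \;\le\; -\frac{2a}{C}\int_{-\infty}^{V_F} P\,\Big(\frac{n}{P}-1\Big)^{2} dv,

where the last step applies the Hardy/Poincaré inequality to u = n/P - 1, admissible since \int P\,u\,dv = 0 by conservation of mass; Gronwall's lemma then gives the exponential rate.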

Noisy LIF networks

For networks, the current I(t) is related to the total activity of the network:

\frac{\partial n(v,t)}{\partial t} + \frac{\partial}{\partial v}\Big[\big(-v + bN(t)\big)\, n(v,t)\Big] - a\big(N(t)\big)\,\frac{\partial^2 n(v,t)}{\partial v^2} = \delta_{V_R}(v)\, N(t), \quad v \le V_F,

n(V_F, t) = 0, \quad n(-\infty, t) = 0,

N(t) := -a\big(N(t)\big)\,\frac{\partial n(V_F,t)}{\partial v} \ge 0, \quad \text{total flux of firing neurons at } V_F.

Constitutive laws

• I(t) = bN(t)

• b = connectivity: b > 0 for excitatory neurons, b < 0 for inhibitory neurons

• a(N) = a_0 + a_1 N
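The nonlinear coupling only changes the coefficients inside the time loop of the Fokker-Planck sketch given earlier: a possible explicit treatment lags N(t) by one time step (the values of a_0, a_1, b are illustrative assumptions).

# Nonlinear variant of the earlier finite-volume loop (same grid vi, n, jR):
# drift -v + b*N, diffusion a(N) = a0 + a1*N, with N lagged by one step.
a0, a1, b = 0.2, 0.0, 1.5
N = 0.0
for _ in range(int(T / dt)):
    aN = a0 + a1 * N
    u = -vi + b * N                         # drift driven by the activity
    F = np.zeros(M + 1)
    up = np.where(u[1:M] > 0, n[:-1], n[1:])
    F[1:M] = u[1:M] * up - aN * (n[1:] - n[:-1]) / dv
    F[M] = aN * n[-1] / dv                  # N(t) = -a(N) dn/dv at V_F
    N = F[M]
    n -= dt / dv * (F[1:] - F[:-1])
    n[jR] += dt * N / dv

For b < 0 the computed N(t) stays bounded, while for b > 0 and initial data concentrated near V_F it can grow extremely fast: the discrete signature of the dichotomy stated in the two theorems below.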

Noisy LIF networks

Theorem (J. Carrillo, BP, D. Smets) Assume

• a = a0 > 0 and b < 0 (inhibitory)

• the initial data is bounded by a supersolution (in a certain sense)

Then,

• There are global solutions

• Uniformly bounded for all t > 0

Noisy LIF networks

Theorem (M. Caceres, J. Carrillo, BP)

Assume

• a ≥ a0 > 0 and b > 0

• the initial data is concentrated enough around v = VF .

Then,

• there are NO global weak solutions

• larger nonlinear diffusion does not help

Possible interpretation

• N(t) \to \rho\,\delta(t - t_{BU})

• partial synchronization (see S. Ha)


Noisy LIF networks

Numerical solution of the blow-up phenomenon

[Figures: probability density n(v); total neuronal activity N(t)]

Noisy LIF networks

Theorem (Steady states) For

• b > 0 small enough, there is a unique steady state,

• b > (V_F - V_R) and 2ab < (V_F - V_R)^2 V_R, there are at least 2 steady states,

• b > 0 large enough, there are no steady states.

[Figure: curves for b = 0.5, 1.5, 3, compared with the level 1]
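The theorem can be probed numerically. Integrating the stationary equation twice (a standard computation, whose result is taken here as the assumption of this sketch) gives, up to the choice of N,

P(v) = \frac{N}{a}\, e^{-(v-bN)^2/2a} \int_{\max(v,V_R)}^{V_F} e^{(w-bN)^2/2a}\,dw,

so steady states correspond to the activities N at which the total mass of P equals 1; scanning N then counts them.

import numpy as np

# Count steady states: roots of mass(N) = 1 for several connectivities b.
V_F, V_R, a, V_min = 2.0, 1.0, 1.0, -8.0    # V_min truncates v = -infinity

def mass(N, b, K=2000):
    v = np.linspace(V_min, V_F, K)
    w = np.linspace(V_R, V_F, K)
    dw = w[1] - w[0]
    ew = np.exp((w - b * N) ** 2 / (2 * a))
    C = np.concatenate(([0.0], np.cumsum(0.5 * (ew[1:] + ew[:-1]) * dw)))
    G = np.where(v <= V_R, C[-1], C[-1] - np.interp(v, w, C))  # inner integral
    p = (N / a) * np.exp(-(v - b * N) ** 2 / (2 * a)) * G
    return np.sum(0.5 * (p[1:] + p[:-1])) * (v[1] - v[0])      # trapezoid

for b in (0.5, 1.5, 3.0):
    Ns = np.linspace(1e-3, 5.0, 400)
    f = np.array([mass(N, b) for N in Ns]) - 1.0
    roots = int(np.sum(np.sign(f[1:]) != np.sign(f[:-1])))
    print(f"b = {b}: {roots} sign change(s) of mass(N) - 1")

The number of sign changes over the scanned window approximates the number of steady states, mirroring the regimes of the theorem (the exact counts depend on a, V_R, V_F and the scanned window).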

Noisy LIF networks

Similarity with a Keller-Segel type model, by V. Calvez and R. Voituriez, for microtubule arrangements on the membrane:

\frac{\partial n(z,t)}{\partial t} - \frac{\partial}{\partial z}\big[\mu(t)\, n(z,t)\big] - \frac{\partial^2 n(z,t)}{\partial z^2} = 0, \quad z \ge 0,

\frac{\partial}{\partial z} n(0,t) + \mu(t)\, n(0,t) = 0,

\frac{d\mu(t)}{dt} = n(0,t) - \frac{\mu(t)}{L}.

• Blow-up for large mass

• Smooth solutions for small mass (and stable steady state)

Elapsed time structured model

K. Pakdaman, J. Champagnat and J.-F. Vibert have proposed to structure the population by the time elapsed since discharge rather than by potential, which is a possible coding of neuronal information.

Elapsed time structured model

• s represents the time elapsed since the last discharge

• n(s, t) probability of finding a neuron in ’state’ s at time t

• p(s,N) ≤ 1 represents the firing rate of neurons in the 'state' s

• N(t) = activity of the network + external signaling

\frac{\partial n(s,t)}{\partial t} + \overbrace{\frac{\partial n(s,t)}{\partial s}}^{\text{elapsed time advances}} + \overbrace{p\big(s, bN(t)\big)\, n(s,t)}^{\text{firing neurons}} = 0,

n(s = 0, t) = \underbrace{\int_0^{+\infty} p\big(s, bN(t)\big)\, n(s,t)\,ds}_{\text{neurons reset}},

This model always satisfies \int_0^{+\infty} n(s,t)\,ds = 1.

Elapsed time structured model

• same notation as above; being given a total activity N

\frac{\partial n(s,t)}{\partial t} + \frac{\partial n(s,t)}{\partial s} + p\big(s, bN(t)\big)\, n(s,t) = 0,

n(s = 0, t) = \int_0^{+\infty} p\big(s, bN(t)\big)\, n(s,t)\,ds,

N(t) := n(s = 0, t)

• b > 0 connectivity of the network

• excitatory neurons are represented by \frac{\partial p(s,N)}{\partial N} > 0

Elapsed time structured model

[Figure: the function s \mapsto p(s,N) for three activity values N_1, N_2, N_3 (refractory state + fast transition)]
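A minimal upwind sketch of this elapsed-time model. Choosing dt = ds makes the age transport exact; the precise shape of p, a smoothed step whose threshold decreases with the activity (so that ∂p/∂N > 0, the excitatory case), is an illustrative assumption.

import numpy as np

# Upwind sketch of  dn/dt + dn/ds + p(s, bN) n = 0,  n(0,t) = N(t) = int p n ds.
S, M = 10.0, 1000                           # truncated age domain, grid points
ds = S / M
s = (np.arange(M) + 0.5) * ds
b = 1.0

def p(s, x):                                # refractory state + fast transition
    return 1.0 / (1.0 + np.exp(-20.0 * (s - (1.0 - 0.5 * np.tanh(x)))))

n = np.exp(-s)
n /= n.sum() * ds                           # initial age distribution, mass 1
dt, T, N = ds, 30.0, 0.0                    # dt = ds: exact transport
history = []
for _ in range(int(T / dt)):
    N = (p(s, b * N) * n).sum() * ds        # activity (lagged in p's argument)
    n = np.roll(n, 1)                       # the age s advances at unit speed
    n[0] = N                                # reset: fired neurons re-enter at s=0
    n *= np.exp(-dt * p(s, b * N))          # firing removes neurons at rate p
    history.append(N)

Depending on b, the recorded activity either settles to a constant (desynchronization) or keeps oscillating: the two regimes discussed on the next slides.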

Elapsed time structured model

With synaptic integration

\frac{\partial n(s,t)}{\partial t} + \frac{\partial n(s,t)}{\partial s} + p\big(s, X(t)\big)\, n(s,t) = 0,

N(t) := n(s = 0, t) = \int_0^{+\infty} p\big(s, X(t)\big)\, n(s,t)\,ds,

X(t) := b \int_0^t N(t - u)\,\omega(u)\,du.
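With respect to the previous sketch, the only change is that p is evaluated at the convolution X(t) instead of bN(t); a possible discrete version, with an illustrative exponential delay kernel ω:

import numpy as np

# Discrete synaptic integration: X(t) = b * int_0^t N(t-u) w(u) du.
dt, tau = 0.01, 0.2
u = np.arange(0.0, 2.0, dt)                 # truncated memory window
w = np.exp(-u / tau) / tau                  # delay kernel of integral ~ 1

def X_of_t(history, b):                     # history = [N(0), N(dt), ...]
    past = np.asarray(history[::-1])[: len(w)]   # N(t - u), u = 0, dt, ...
    return b * np.sum(past * w[: len(past)]) * dt

Inside the time loop above, one would then use p(s, X_of_t(history, b)) in place of p(s, b * N).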

Elapsed time structured model

\frac{\partial n(s,t)}{\partial t} + \frac{\partial n(s,t)}{\partial s} + p\big(s, bN(t)\big)\, n(s,t) = 0,

N(t) := n(s = 0, t) = \int_0^{+\infty} p\big(s, bN(t)\big)\, n(s,t)\,ds,

Properties

• n \ge 0, \quad \int_0^{\infty} n(s,t)\,ds = 1,

• N(t) \le 1, \quad n(s,t) \le 1,

• there is a unique solution.

Linear case For p \equiv p(s),

• n(s,t) \longrightarrow_{t \to \infty} P(s), the unique steady state.

Elapsed time structured model

The proof goes through Generalized Relative Entropy

\frac{d}{dt}\int_0^{\infty} \Phi(s)\, P(s)\, H\!\Big(\frac{n(s,t)}{P(s)}\Big)\,ds \le 0,

for H(\cdot) convex.

Elapsed time structured model

Properties

• For small or large connectivity (b small or large), desynchronization still holds:

n(s,t) \longrightarrow_{t \to \infty} P_b(s)

• There are several (explicit) periodic solutions,

• These are stable (observed numerically).

Elapsed time structured model

[Figures: numerical simulations of the elapsed time model]

Conclusion

THANKS TO MY COLLABORATORS

M. J. Cáceres (U. Granada), J. A. Carrillo (U. A. Barcelona)

(for I&F: J. Mathematical Neurosciences, 2011)

D. Smets (work in preparation)

K. Pakdaman, D. Salort (Inst. J. Monod, U. Paris Diderot)

(for elapsed time: Nonlinearity, 2010)

Conclusion

THANK YOU ALL