
Page 1:

TNI: Computational Neuroscience

Instructors: Peter Latham, Maneesh Sahani, Peter Dayan

TAs: Arthur Guez, [email protected]; Marius Pachitariu, [email protected]

Website: http://www.gatsby.ucl.ac.uk/~aguez/tn1/

Lectures: Tuesday/Friday, 11:00-1:00. Review: Tuesday, starting at 4:30.

Homework: assigned Friday, due Friday (1 week later). First homework: assigned Oct. 7, due Oct. 14.

Page 2:

What is computational neuroscience?

Our goal: figure out how the brain works.

Page 3:

[Figure: a 10-micron cube of neural tissue]

There are about 10 billion cubes of this size in your brain!

Page 4:

How do we go about making sense of this mess?

David Marr (1945-1980) proposed three levels of analysis:

1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it’s actually done by networks of neurons (implementational level)

Page 5:

Example #1: memory.

the problem: recall events, typically based on partial information.

Page 6:

Example #1: memory.

the problem: recall events, typically based on partial information. Associative or content-addressable memory.

an algorithm: dynamical systems with fixed points.

[Figure: fixed points in activity space, axes r1, r2, r3]

Page 7:

Example #1: memory.

the problem: recall events, typically based on partial information. Associative or content-addressable memory.

an algorithm: dynamical systems with fixed points.

neural implementation: Hopfield networks.

xi = sign(∑j Jij xj)
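A minimal sketch of this in Python: Hebbian weights J store a few random patterns, and iterating the update rule above recalls a stored pattern from a corrupted cue. Network size, pattern count, and corruption level are illustrative choices, not from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store P random binary patterns in N neurons with the Hebbian rule
# J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, zero diagonal.
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0)

# Recall: start from a corrupted cue and iterate x_i = sign(sum_j J_ij x_j).
x = patterns[0].copy()
x[:30] *= -1                      # corrupt 30% of the bits
for _ in range(20):
    x = np.sign(J @ x)
    x[x == 0] = 1                 # break ties
print("overlap with stored pattern:", (x @ patterns[0]) / N)  # ~1.0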

Page 8:

Example #2: vision.

the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.

Page 9:

Example #2: vision.

the problem (modern version): 2-D image on retina → recover the latent variables.

[Figure: a child's drawing; labeled latent variables: house, sun, tree, bad artist]

Page 10:

Example #2: vision.

the problem (modern version): 2-D image on retina → recover the latent variables.

[Figure: the same drawing; labeled latent variables: house, sun, tree, bad artist, cloud]

Page 11:

Example #2: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

an algorithm: graphical models.

[Figure: graphical model; latent variables x1, x2, x3 connected to a low-level representation r1, r2, r3, r4]

Page 12:

Example #2: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

an algorithm: graphical models.

[Figure: the same graphical model, with an arrow labeled “inference” from the low-level representation r1…r4 up to the latent variables x1…x3]
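As a toy illustration of that inference arrow (all numbers here are hypothetical, not from the course): one binary latent variable x generating four noisy low-level responses r, inverted with Bayes' rule:

```python
import numpy as np

# Generative model: binary latent x in {0, 1} with prior p(x=1) = 0.3;
# each low-level unit r_k is Gaussian with a mean that depends on x.
prior = np.array([0.7, 0.3])          # p(x=0), p(x=1)
means = np.array([0.0, 1.0])          # E[r_k | x]
sigma = 0.8

r = np.array([0.9, 1.2, 0.4, 1.1])    # observed low-level responses

# Inference: p(x | r) ∝ p(x) * prod_k N(r_k; means[x], sigma^2)
def loglik(x):
    return -0.5 * np.sum((r - means[x]) ** 2) / sigma**2

post = prior * np.exp([loglik(0), loglik(1)])
post /= post.sum()
print("p(latent present | r) =", post[1])
```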

Page 13:

Example #2: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

an algorithm: graphical models.

implementation in networks of neurons: no clue.

Page 14:

Comment #1:

the problem:
the algorithm:
neural implementation:

Page 15:

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

often ignored!!!

Page 16:

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

A common approach:

Experimental observation → model

Usually very underconstrained!!!!

Page 17:

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

Example i: CPGs (central pattern generators)

[Figure: two oscillating firing-rate traces]

Too easy!!!

Page 18:

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

Example ii: single cell modeling

C dV/dt = -gL(V – VL) – gK n⁴(V – VK) – …

dn/dt = …

lots and lots of parameters … which ones should you use?
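To make the modeling question concrete, here is a sketch of the simplest point-neuron model in this family, a leaky integrate-and-fire neuron; the full Hodgkin-Huxley model replaces the threshold-and-reset rule with gated conductances like the n⁴ potassium term above. All parameter values are illustrative:

```python
import numpy as np

# Leaky integrate-and-fire: C dV/dt = -g_L (V - V_L) + I,
# with a spike-and-reset rule standing in for the HH spike currents.
C, g_L, V_L = 1.0, 0.1, -65.0        # nF, uS, mV (illustrative)
V_th, V_reset = -50.0, -65.0         # mV
dt, T, I = 0.1, 200.0, 2.0           # ms, ms, nA

V, spikes = V_L, []
for step in range(int(T / dt)):
    V += dt / C * (-g_L * (V - V_L) + I)
    if V >= V_th:                    # threshold crossed: emit a spike
        spikes.append(step * dt)
        V = V_reset
print(f"{len(spikes)} spikes in {T} ms")
```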

Page 19:

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

Example iii: network modeling

lots and lots of parameters × thousands

Page 20:

Comment #2:

the problem: easier
the algorithm: harder
neural implementation: harder

You need to know a lot of math!!!!!

[Figures: the activity-space plot (axes r1, r2, r3) and the graphical model (x1, x2, x3 over r1, r2, r3, r4) from earlier slides]

Page 21:

Comment #3:

the problem: easier
the algorithm: harder
neural implementation: harder

This is a good goal, but it’s hard to do in practice.

Our actual bread and butter:
1. Explaining observations (mathematically)
2. Using sophisticated analysis to design simple experiments that test hypotheses.

Page 22:

Comment #3:

Two experiments:

- record, using loose patch, from a bunch of cells in culture
- block synaptic transmission
- record again

- found quantitative support for the balanced regime.

J. Neurophys., 83:808-827, 828-835, 2000

Page 23:

Comment #3:

Two experiments:

- perform whole-cell recordings in vivo
- stimulate cells with a current pulse every couple hundred ms
- build a current-triggered PSTH

- showed that the brain is intrinsically very noisy, and is likely to be using a rate code.

Nature, 466:123-127 (2010)
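For flavor, a sketch of the analysis step from this experiment (the current-triggered PSTH) on synthetic data; pulse timing, rates, and bin width are all made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic experiment: a current pulse every 200 ms; the cell fires at
# 5 Hz baseline, transiently elevated to 30 Hz for 20 ms after each pulse.
T, dt = 60.0, 0.001                       # seconds
t = np.arange(0, T, dt)
pulses = np.arange(0.1, T, 0.2)           # pulse onset times
rate = np.full_like(t, 5.0)
for p in pulses:
    rate[(t >= p) & (t < p + 0.02)] = 30.0
spikes = t[rng.random(t.size) < rate * dt]

# Pulse-triggered PSTH: histogram of spike times relative to each pulse.
window, nbins = (-0.05, 0.15), 40
rel = np.concatenate([spikes[(spikes > p + window[0]) &
                             (spikes < p + window[1])] - p for p in pulses])
counts, edges = np.histogram(rel, bins=nbins, range=window)
psth = counts / (len(pulses) * (edges[1] - edges[0]))   # spikes/s
print("peak rate in PSTH: %.1f Hz" % psth.max())
```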

Page 24:

Comment #4:

the problem: easier
the algorithm: harder
neural implementation: harder

some algorithms are easy to implement on a computer but hard in a brain, and vice-versa.

these are linked!!!

Page 25:

Comment #4:

hard for a brain, easy for a computer:

A⁻¹,  z = x + y,  ∫dx ...

easy for a brain, hard for a computer:

associative memory

Page 26:

Comment #4:

the problem: easier
the algorithm: harder
neural implementation: harder

some algorithms are easy to implement on a computer but hard in a brain, and vice-versa.

we should be looking for the vice-versa ones.

it can be hard to tell which is which.

these are linked!!!

Page 27:

Basic facts about the brain

Page 28:

Your brain

Page 29:

Your cortex unfolded

[Figure: the cortex unfolded, a 6-layered sheet ~30 cm across and ~0.5 cm thick: neocortex (cognition); beneath it, subcortical structures (emotions, reward, homeostasis, much much more)]

Page 30:

Your cortex unfolded

1 cubic millimeter, ~3×10⁻⁵ oz

Page 31:

1 mm³ of cortex:

50,000 neurons
10,000 connections/neuron (=> 500 million connections)
4 km of axons

Page 32:

1 mm³ of cortex:

50,000 neurons
10,000 connections/neuron (=> 500 million connections)
4 km of axons

1 mm² of a CPU:

1 million transistors
2 connections/transistor (=> 2 million connections)
0.002 km of wire

Page 33:

1 mm³ of cortex:

50,000 neurons
10,000 connections/neuron (=> 500 million connections)
4 km of axons

whole brain (2 kg):

10¹¹ neurons
10¹⁵ connections
8 million km of axons

1 mm² of a CPU:

1 million transistors
2 connections/transistor (=> 2 million connections)
0.002 km of wire

whole CPU:

10⁹ transistors
2×10⁹ connections
2 km of wire

Page 34:


Page 35:

[Figure: a neuron with dendrites (input), soma (spike generation), and axon (output); voltage trace on a 100 ms time axis, resting near -50 mV, spiking to +20 mV, spike width ~1 ms]

Page 36:
Page 37:

[Figure: two neurons connected by a synapse, with current flow between them]

Page 38:

[Figure: the same synapse diagram, with current flow]

Page 39:

[Figure: membrane voltage vs. time: resting near -50 mV, spiking to +20 mV, 100 ms of data]

Page 40:

neuron j → neuron i

neuron j emits a spike:

[Figure: voltage of neuron i vs. time t, showing an EPSP lasting ~10 ms]

Page 41:

neuron j → neuron i

neuron j emits a spike:

[Figure: voltage of neuron i vs. time t, showing an IPSP lasting ~10 ms]

Page 42:

neuron j → neuron i

neuron j emits a spike:

[Figure: voltage of neuron i vs. time t, showing an IPSP lasting ~10 ms]

amplitude = wij

Page 43:

neuron j → neuron i

neuron j emits a spike:

[Figure: voltage of neuron i vs. time t, showing an IPSP lasting ~10 ms]

amplitude = wij

changes with learning
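A sketch of this picture in code: the voltage of neuron i as a sum of PSP kernels triggered by the spikes of presynaptic neurons j, each scaled by its weight wij. The kernel shape and all numbers are illustrative:

```python
import numpy as np

dt = 0.1                                    # ms
t_kern = np.arange(0, 50, dt)
# PSP kernel: difference of exponentials, lasting ~10 ms, peak = 1
kernel = np.exp(-t_kern / 5.0) - np.exp(-t_kern / 1.0)
kernel /= kernel.max()

# three presynaptic neurons; wij > 0 gives an EPSP, wij < 0 an IPSP
w = np.array([0.5, -0.3, 0.2])              # mV; set (and changed) by learning
spike_times = [[5.0, 20.0], [12.0], [8.0, 30.0]]   # ms, one list per neuron j

V = np.zeros(int(100 / dt))                 # voltage of neuron i, 100 ms
for j, times in enumerate(spike_times):
    for s in times:
        i0 = int(s / dt)
        seg = min(kernel.size, V.size - i0)
        V[i0:i0 + seg] += w[j] * kernel[:seg]
print("peak depolarization: %.2f mV" % V.max())
```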

Page 44:

[Figure: the synapse diagram again, with the current flow labeled by the weight wij]

Page 45:

A bigger picture view of the brain

Page 46:

[Figure: information flow through the brain:
latent variables x → peripheral spikes r → sensory processing → r̂, a “direct” code for latent variables
→ cognition / memory / action selection →
r̂', a “direct” code for motor actions → motor processing → peripheral spikes r' → motor actions x']

Page 47:

Who is walking behind the picket fence?

Page 48:

[Figure: animation, continued over the next several slides, of a stick figure walking behind a picket fence, with the evoked spikes r]

Page 49:


Page 50:


Page 51:


Page 52:


Page 53:

[Figure: same animation; speech bubble: “you are the cutest stick figure ever!”]

Page 54:

[Figure: same animation; speech bubble: “you are the cutest stick figure ever!”]

Page 55:

[Figure: the brain information-flow diagram from Page 46]

Page 56:

[Figure: the brain information-flow diagram from Page 46]

Page 57:

In some sense, action selection is the most important problem:

if we don’t choose the right actions, we don’t reproduce, and all the neural coding and computation in the world isn’t going to help us.

Page 58:

Do I call him and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat Oreos?

Page 59:

Do I call her and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat Oreos?

Page 60:

[Figure: the brain information-flow diagram from Page 46]

Page 61:

Problems:
1. How does the brain extract latent variables?
2. How does it manipulate latent variables?
3. How does it learn to do both?

Ask at two levels:
1. What are the algorithms?
2. How are they implemented in neural hardware?

Page 62:

What do we know about the brain? (Highly biased.)

Page 63:

a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.

Connectivity. We know (more or less) which area is connected to which.

Page 64:

The van Essen diagram

Page 65:

a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.

Connectivity. We know (more or less) which area is connected to which.

Page 66:

[Figure: microscopic wiring diagram with weights wij]

a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.

Connectivity. We know (more or less) which area is connected to which. We don't know the wiring diagram at the microscopic level.

Page 67:

[Figure: microscopic wiring diagram with weights wij]

a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.

Connectivity. We know (more or less) which area is connected to which. We don't know the wiring diagram at the microscopic level. But we might in a few decades!

Page 68:

b. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley).

Dendrites. Lots of potential for incredibly complex processing.

My guess: all they do is make neurons bigger and reduce wiring length (see the work of Mitya Chklovskii).

Page 69:

[Figure: n neurons on the left, m neurons on the right, a distance L apart, every pair connected point-to-point]

total wire length without dendrites: ~nmL

Page 70:

[Figure: the same n and m neurons a distance L apart, now wired via axons and dendrites; total axon length = nL, total dendrite length = mL]

total wire length without dendrites: ~nmL

total wire length with dendrites: ~(n+m)L
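A quick numeric check of these two scalings, with made-up numbers:

```python
# With n = m = 1000 neurons and L = 1 mm:
n, m, L = 1000, 1000, 1.0          # mm
print("without dendrites: ~%.0e mm" % (n * m * L))    # ~1e6 mm = 1 km
print("with dendrites:    ~%.0e mm" % ((n + m) * L))  # ~2e3 mm = 2 m
```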

Page 71:

b. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley).

Dendrites. Lots of potential for incredibly complex processing.

My guess: all they do is make neurons bigger and reduce wiring length (see the work of Mitya Chklovskii).

How much I would bet that that’s true: 20 p.

Page 72:

c. The neural code.

My guess: once you get away from periphery, it’s mainly firing rate: an inhomogeneous Poisson process with a refractory period is a good model of spike trains.

How much I would bet: £100.

The role of correlations. Still unknown.

My guess: don’t have one.

Page 73:

d. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago:

van Vreeswijk and Sompolinsky (Science, 1996)
van Vreeswijk and Sompolinsky (Neural Comp., 1998)

We now understand very well randomly connected networks (harder than you might think), and (I believe) we are on the verge of:

i) understanding networks that have interesting computational properties.
ii) computing the correlational structure in those networks.

Page 74:

e. Learning. We know a lot of facts (LTP, LTD, STDP).

• it’s not clear which, if any, are relevant.
• the relationship between learning rules and computation is essentially unknown.

Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information. These are promising, but the link to the brain has not been fully established.
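For reference, a sketch of one of those experimental facts, the canonical pairwise STDP rule; the amplitudes and time constants are typical textbook values, not claims from the course:

```python
import numpy as np

# Pairwise STDP: pre-before-post potentiates, post-before-pre depresses.
A_plus, A_minus = 0.01, 0.012       # LTP/LTD amplitudes
tau_plus, tau_minus = 0.020, 0.020  # s

def dw(t_post, t_pre):
    """Weight change for one pre/post spike pair."""
    delta = t_post - t_pre
    if delta > 0:                   # pre before post -> LTP
        return A_plus * np.exp(-delta / tau_plus)
    else:                           # post before pre -> LTD
        return -A_minus * np.exp(delta / tau_minus)

print(dw(0.010, 0.0))   # pre 10 ms before post: potentiation
print(dw(0.0, 0.010))   # post 10 ms before pre: depression
```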

Page 75:


Page 76:

What is unsupervised learning?

Learning structure from data without any help from anybody.

Example: most visual scenes are very unlikely to occur.

1000 × 1000 pixels => a million-dimensional space.

space of possible pictures is much smaller, and forms a very complicated manifold:

[Figure: a small, convoluted manifold labeled “possible visual scenes” inside pixel space]

Page 77:

What is unsupervised learning?

Learning structure from data without any help from anybody.

Example: most visual scenes are very unlikely to occur.

1000 × 1000 pixels => a million-dimensional space.

space of possible pictures is much smaller, and forms a very complicated manifold:

[Figure: manifold of visual scenes]

Page 78:


Page 79:

What is unsupervised learning?

Learning from spikes:

[Figure: scatter of spike counts, neuron 1 vs. neuron 2]

Page 80:

What is unsupervised learning?

Learning from spikes:

[Figure: the same scatter, neuron 1 vs. neuron 2, with two clusters labeled “dog” and “cat”]

Page 81:

What is unsupervised learning?

Learning structure from data without any help from anybody.

Which is real and which is a painting?

Page 82:

A word about learning (remember these numbers!!!):

You have about 10¹⁵ synapses.

If it takes 1 bit of information to set a synapse, you need 10¹⁵ bits to set all of them.

30 years ≈ 10⁹ seconds.

To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second.

Learning in the brain is almost completely unsupervised!!!

stolen from Geoff Hinton
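The arithmetic, spelled out (numbers from the slide):

```python
synapses = 1e15          # synapses in the brain
bits_per_synapse = 1     # assume 1 bit to set each synapse
seconds = 1e9            # ~30 years
fraction = 0.1           # set 1/10 of the synapses
print(fraction * synapses * bits_per_synapse / seconds, "bits/second")  # 1e5
```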

Page 83:

f. Where we know algorithms, we know the neural implementation (sort of):

vestibular system, sound localization, echolocation, addition

This is not a coincidence!!!!

Remember David Marr:

1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it’s actually done by networks of neurons (implementational level)

Page 84:

What we know: my score (1-10).

a. Anatomy. 5
b. Single neurons. 6
c. The neural code. 6
d. Recurrent networks of spiking neurons. 3
e. Learning. 2

The hard problems:
1. How does the brain extract latent variables? 1.001
2. How does it manipulate latent variables? 1.002
3. How does it learn to do both? 1.001

Page 85:

Outline:

1. Basics: single neurons/axons/dendrites/synapses. Latham
2. Language of neurons: neural coding. Sahani
3. Learning at the network and behavioral level. Dayan
4. What we know about networks (very little). Latham

Page 86:

Outline for this part of the course (biophysics):

1. What makes a neuron spike.
2. How current propagates in dendrites.
3. How current propagates in axons.
4. How synapses work.
5. Lots and lots of math!!!