
GUM*02 tutorial session, UTSA, San Antonio, Texas

Large-scale realistic modeling of neuronal networks

Mike Vanier, Caltech

Structure of the talk:

- General network modeling issues
- Details of how networks are modeled in GENESIS

Part 1: General network modeling issues

Why model networks?

- goal: understand the brain, a network of networks
- networks implement computations (influence of NN theory)
- networks are where the action is!

Why avoid modeling networks?

- networks are too complex
  - dozens of cell types
  - complex connectivities, interactions
- we don't understand neurons yet
- not enough data
- want to graduate quickly

Roots of GENESIS

- GENESIS: GEneral NEural SImulation System
- network modeling was the original focus

and yet...

- most models are still either:
  - single-neuron models
  - very small networks
  - "abstract" network models
- maybe a 10:1 ratio or worse
- why is this?

Network modeling is hard!!!

- need accurate data on:
  - neuron models (ALL types)
  - connectivities
  - inputs
  - outputs
- simplifications needed
- scaling issues

More typical scenario

- data available for some neurons only (inhibitory neurons?)
- connectivities only vaguely known
- inputs vaguely known, if at all
- outputs vaguely known, if at all
- why bother?

Motivations

“Abandon all hope, ye who enter here.”

- more exploratory, less definitive
- refine the conceptual model of the system
- make implicit ideas about function explicit
- figure out what data to collect

The process

- collect all the data you can!!!
- build simplified neuron models
  - match to data
- build a model of the inputs
- build the network model
  - match to data
- graduate

Example: piriform cortex

- neuron types well established
- little physiology for most
- connection patterns known
- inputs partially known
- outputs mostly unknown

Neuron types

Simplification

Physiology: pyramidal neurons

[figure: "real" vs. "model" pyramidal neuron responses]

Physiology: inhibitory neurons

inputs

[figures: ISI distribution, spike rasters]

Connectivities 1

[figure: afferent connectivity]

Connectivities 2

now the “fun” begins...

- pick a network phenomenon to model
  - PC: response to strong and weak shocks
  - independent of the details of the olfactory bulb
  - relatively simple
- adjust parameters to tune the model
  - leave neuron parameters alone
  - adjust connectivities

results?

see my talk tomorrow

hint: I graduated

Part 2: Details of how networks are modeled in GENESIS

GENESIS basics

- modeler creates simulation objects (see the sketch below)
- objects send messages to each other
- messages contain data (field values)
- most messages are sent each time step, or once per fixed interval
  [spikes break this rule]
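
To make the object/message/clock picture concrete, here is a minimal sketch; the /demo paths and values are illustrative, not from the slides:

// A minimal sketch of objects, fields, messages, and clocks
// (paths and values are illustrative)
create neutral /demo                 // container object
create compartment /demo/comp        // simulation object with data fields
showfield /demo/comp Vm Em Rm Cm     // inspect field values
setclock 0 50e-6                     // clock 0: 50-microsecond time step
showmsg /demo/comp                   // list incoming and outgoing messages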

neurons

- compartmental models of neurons
- a neuron is composed of compartments
- compartments are isopotential
- channels connect to compartments (see the sketch below):
  - voltage-dependent
  - calcium-dependent
  - synaptic
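
As a sketch of the channel-to-compartment wiring (assuming the /neuron1/soma compartment created on the next slide, and a voltage-dependent Na channel whose gating parameters are omitted here):

// Illustrative only: attach a voltage-dependent channel to a compartment
create tabchannel /neuron1/soma/Na
// channel conductance and reversal potential drive the compartment:
addmsg /neuron1/soma/Na /neuron1/soma CHANNEL Gk Ek
// compartment voltage drives the channel gating:
addmsg /neuron1/soma /neuron1/soma/Na VOLTAGE Vm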

setting up the neuron

create neutral /neuron1
create compartment /neuron1/soma
setfield ^ \
    Em { Erest } \             // volts
    Rm { RM / area } \         // Ohms
    Cm { CM * area } \         // Farads
    Ra { RA * len / xarea }    // Ohms
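
The setfield above assumes that the passive parameters and geometry variables (Erest, RM, CM, RA, len, area, xarea) have already been defined. A minimal sketch of such definitions, with placeholder values that are not from the original slides:

// Placeholder definitions for the variables used above
float PI = 3.14159
float Erest = -0.070                  // resting potential (volts)
float RM = 1.0                        // specific membrane resistance (ohm m^2)
float CM = 0.01                       // specific membrane capacitance (F/m^2)
float RA = 1.0                        // specific axial resistance (ohm m)
float len = 100e-6                    // compartment length (m)
float dia = 20e-6                     // compartment diameter (m)
float area = {PI * dia * len}         // membrane surface area (m^2)
float xarea = {PI * dia * dia / 4}    // cross-sectional area (m^2)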

spikes in GENESIS

- a spikegen object monitors the Vm of a compartment
- when Vm crosses threshold, it sends a SPIKE message to its destination
- a synchan object receives the SPIKE message
  - stores the spike time in a buffer
  - generates an α-function conductance when the spike arrives

setting up the synchan

create synchan /neuron1/syn
setfield ^ \
    gmax 1.0e-9 \    // 1 nS
    Ek 0.0 \
    tau1 0.001 \     // rise time (sec)
    tau2 0.003       // fall time (sec)

// Connect soma to synchan:
addmsg /neuron1/soma /neuron1/syn VOLTAGE Vm
addmsg /neuron1/syn /neuron1/soma CHANNEL Gk Ek

setting up the spikegen

// Create and connect spike detector:
create spikegen /neuron1/spike
setfield ^ thresh -0.020 abs_refract 0.002
addmsg /neuron1/soma /neuron1/spike INPUT Vm

connecting two neurons

// Assume we have neuron2 set up like neuron1
addmsg /neuron1/spike /neuron2/syn SPIKE

// Set synaptic weight and delay:
setfield /neuron2/syn \
    synapse[0].weight 1.0 \
    synapse[0].delay 0.001    // 1 msec

// That's all there is to it!
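
To actually exercise this two-neuron circuit, one might drive neuron1 with random input spikes and then run the simulation; the randomspike rate, clock step, and run time below are illustrative, not from the slides:

// Drive neuron1 with random input and run (values illustrative)
create randomspike /input
setfield /input rate 100 abs_refract 0.005    // roughly 100 Hz input
addmsg /input /neuron1/syn SPIKE

setclock 0 50e-6    // 50-microsecond integration step
reset               // initialize all elements
step 0.5 -time      // run 0.5 seconds of simulated time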

building networks

- Why not just do this for all synapses?
- 100-1000 neurons, 10,000-100,000 synapses... gets pretty tedious
- faster way: large-scale connection commands
  - volumeconnect [planarconnect]
  - volumedelay [planardelay]
  - volumeweight [planarweight]

volumeconnect

volumeconnect source_elements destination_elements \
    -relative \
    -sourcemask {box, ellipsoid} x1 y1 z1 x2 y2 z2 \
    -sourcehole {box, ellipsoid} x1 y1 z1 x2 y2 z2 \
    -destmask {box, ellipsoid} x1 y1 z1 x2 y2 z2 \
    -desthole {box, ellipsoid} x1 y1 z1 x2 y2 z2 \
    -probability p
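
A concrete (hypothetical) call, assuming a population of cells at /net/cell[] whose coordinates have been set, with spikegens at .../soma/spike and synchans at .../dend/syn, and coordinates in meters:

// Connect each cell's spikegen to synchans of cells within a 500-micron
// box around it (relative coordinates), with 10% probability
volumeconnect /net/cell[]/soma/spike /net/cell[]/dend/syn \
    -relative \
    -sourcemask box -1e-3 -1e-3 -1e-3 1e-3 1e-3 1e-3 \
    -destmask box -0.5e-3 -0.5e-3 -0.5e-3 0.5e-3 0.5e-3 0.5e-3 \
    -probability 0.1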

volumedelay

volumedelay sourcepath [destination_path] \
    -fixed delay \
    -radial conduction_velocity \
    -add \
    -uniform scale \
    -gaussian stdev maxdev \
    -exponential mid max \
    -absoluterandom

volumeweight

volumeweight sourcepath [destination_path] \
    -fixed weight \
    -decay decay_rate max_weight min_weight \
    -uniform scale \
    -gaussian stdev maxdev \
    -exponential mid max \
    -absoluterandom
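
Following the hypothetical volumeconnect call above, delays and weights for those connections could then be set, for example:

// Delays proportional to distance (0.5 m/s conduction velocity, illustrative)
volumedelay /net/cell[]/soma/spike -radial 0.5
// All synaptic weights set to 1.0 (illustrative)
volumeweight /net/cell[]/soma/spike -fixed 1.0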

note on connection commands

- mainly useful for simple cases
- more realistic cases require more control
- the GENESIS script language makes it easy to write your own connection
  commands (see the sketch below)
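
As a sketch of what "writing your own connection command" can look like in the script language (the paths, the rand call, and the function itself are assumptions for illustration, not from the slides):

// Hypothetical: connect cells in /net/cell[] pairwise at random
function random_connect(n_cells, prob)
    int n_cells
    float prob
    int i, j
    for (i = 0; i < n_cells; i = i + 1)
        for (j = 0; j < n_cells; j = j + 1)
            // skip self-connections; connect others with probability prob
            if ((i != j) && ({rand 0 1} < prob))
                addmsg /net/cell[{i}]/soma/spike /net/cell[{j}]/dend/syn SPIKE
            end
        end
    end
end

random_connect 64 0.1    // e.g., 64 cells, 10% connection probability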

output

- Xodus graphical output
- dump neuron data to files
  - binary files readable by "xview" (see the sketch below)
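
A sketch of dumping a soma voltage trace to a file: asc_file writes plain text (disk_out writes the binary format that xview reads); the element name, filename, and output interval here are illustrative:

// Save /neuron1/soma Vm to a text file on a slower output clock
create asc_file /soma_out
setfield /soma_out filename "soma_Vm.txt" flush 1 leave_open 1
addmsg /neuron1/soma /soma_out SAVE Vm
setclock 1 1e-3          // 1-ms output sampling interval
useclock /soma_out 1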

conclusions

network modeling is:
- fun
- fascinating
- fundamental
- frustrating!

NOT for the easily discouraged!