Bioinspired Computing Lecture 5: Biological Neural Networks
Netta Cohen

Page 1

Bioinspired Computing, Lecture 5

Biological Neural Networks

Netta Cohen

Page 2

Last week:

We introduced swarm intelligence. We saw how many simple agents can follow simple rules that allow them to collectively perform more complex tasks.

Today...

Biological systems whose manifest function is information processing: computation, thought, memory, communication and control. We begin a dissection of a brain:

How different is a brain from an artificial computer?

How can we build and use artificial neural networks?

Page 3

Investigating the brain

Imagine landing on an abandoned alien planet and finding thousands of alien computers. You and your crew's mission is to find out how they work. What do you do?

Summon Scottie, your engineer, to disassemble the machines into component parts, test each part (electronically, optically, chemically…), decode the machine language, and study how components are connected.

Summon Data, your software wiz, to connect to the input & output ports of a machine, find a language to communicate with it & write computer programs to test the system's response by measuring its speed, efficiency & performance at different tasks.

[Diagram: the computer as a box with inputs, a program, and outputs; one component, labelled part #373a, with its own inputs and outputs.]

Page 4

The brain as a computer

Higher-level functions in animal behaviour

• Gathering data (sensation)
• Inferring useful structures in data (perception)
• Storing and recalling information (memory)
• Planning and guiding future actions (decision)
• Carrying out the decisions (behaviour)
• Learning the consequences of these actions

Hardware functions and architectures

• 10 billion neurons in the human cortex
• 10,000 synapses (connections) per neuron
• Machine language: 100 mV, 1-2 ms spikes (action potentials)
• Specialised regions & pathways (visual, auditory, language…)

Page 5

The brain versus the computer

The brain:
• Special task: the program is often hard-coded into the system.
• Hardware is not hard: plastic, rewiring.
• No clear hierarchy; bi-directional feedback up & down the system.
• Unreliable components; parallelism and redundancy appear to compensate.
• Output doesn't always match input: internal state is important.
• Development & evolutionary constraints are crucial.

The computer:
• Universal, general-purpose. Software is general and user-supplied.
• Hardware is hard: only upgraded in discrete units.
• Obvious hierarchy: each component has a specific function.
• Once burned in, circuits run without failure for extended lifetimes.
• Input-output relations are well-defined.
• Engineering design depends on the engineer. Function is not an issue.

Page 6

Neuroscience pre-history

• 200 AD: Greek physician Galen hypothesises that nerves carry signals back & forth between sensory organs & the brain.

• 17th century: Descartes suggests that nerve signals account for reflex movements.

• 19th century: Helmholtz discovers the electrical nature of these signals, as they travel down a nerve.

• 1838-9: Schleiden & Schwann systematically study plant & animal tissue. Schwann proposes the theory of the cell (the basic unit of life in all living things).

• Mid-1800s: anatomists map the structure of the brain.

but…

The microscopic composition of the brain remains elusive. A raging debate surrounds early neuroscience research, until...

Page 7

The neuron doctrine (Ramon y Cajal, 1899)

Today, neurons (or nerve cells) are regarded as the basic information processing units of the nervous system.

1) Neurons are cells: distinct entities (or agents).

2) Inputs & outputs are received at junctions called synapses.

3) Input & output ports are distinct. Signals are uni-directional, from input to output.

[Diagram: a neuron with its inputs and outputs.]

Page 8

The neuron as a transistor

• Both have well-defined inputs and outputs.
• Both are basic information processing units that comprise computational networks.

If transistors can perform logical operations, maybe neurons can too?

Neuronal function is typically modelled by a combination of
• a linear operation (a sum over inputs) and
• a nonlinear one (thresholding).

[Diagram: input → neuron → output.]

This simple representation relies on Cajal's concept of uni-directional signal flow from distinct inputs to a distinct output.
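The sum-then-threshold model just described can be sketched in code. This is an illustrative toy, not part of the lecture; the weights and threshold are arbitrary values chosen so the unit computes a logical AND:

```python
def neuron_output(inputs, weights, threshold):
    """Toy sum-and-threshold neuron.

    Linear step: sum over weighted inputs.
    Nonlinear step: threshold to an all-or-none (0/1) output.
    """
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With unit weights and threshold 2, the unit fires only when
# both binary inputs are active, i.e. it computes logical AND.
print(neuron_output([1, 1], [1, 1], 2))  # 1 (fires)
print(neuron_output([1, 0], [1, 1], 2))  # 0 (at rest)
```

Chains of such units can implement arbitrary logic, which is what motivates the transistor analogy.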

Page 9

Machine language

The basic "bit" of information in neurons is represented by spikes. The cell is said to be either at rest or active. A spike (action potential) is a strong, brief electrical pulse. Since these action potentials are mostly identical, we can safely refer to them as all-or-none signals.

Why spikes?

Why don't neurons use analog signals? One answer lies in the network architecture: signals cover long distances (both within the brain and throughout the body). Reliable transmission requires strong pulses.

Page 10

Computation of a pyramidal neuron

[Diagram: many inputs arrive at the dendrites; the soma integrates them; a single all-or-none output travels down the axon.]

Page 11

From transistors to networks

We can now summarise our working principles:

• The basic computational unit of the brain is the neuron.

• The machine language is binary: spikes.

• Communication between neurons is via synapses.

However, we have not yet asked how information is encoded in the brain, how it is processed in the brain, and whether what goes on in the brain is really 'computation'.

Page 12

Information codes

[Diagram: neural codes divide into temporal codes and rate codes; rate codes include population (distributed) codes, which can average out noise.]

Examples of both neural codes and distributed representations have been found in the brain. Examples in the visual system: colour representation, face recognition, orientation, motion detection, & more…

http://www.cs.stir.ac.uk/courses/31YF/Notes/Notes_NC.html

Page 13

Information content

Example: a spike train produced by a neuron over an interval of 100 ms is recorded. The neuron can produce a spike every 2 ms, so the interval contains 50 time bins.

Therefore, 51 different rates (individual code words, corresponding to 0 to 50 spikes) can be produced by this neuron.

In contrast, if the neuron were using temporal coding, up to 2^50 different words could be represented.

In this sense, temporal coding is much more powerful.
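These counts can be checked directly (a quick sketch, not part of the original slides): a rate code distinguishes only how many spikes occurred, while a temporal code distinguishes every possible on/off pattern across the 2 ms bins:

```python
interval_ms = 100  # recording window
bin_ms = 2         # minimum spacing between spikes
n_bins = interval_ms // bin_ms  # 50 time bins

# Rate code: only the total spike count matters -> 0..50 spikes.
rate_words = n_bins + 1

# Temporal code: each bin independently contains a spike or not.
temporal_words = 2 ** n_bins

print(rate_words)      # 51
print(temporal_words)  # 1125899906842624 (i.e. 2**50)
```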

Page 14

Circuitry depends on the neural code

Temporal codes rely on noise-free signal transmission. Thus, we would expect to find very few 'redundant' neurons with co-varying outputs in such a network. Accordingly, an optimal temporal coding circuit might tend to eliminate redundancy in the pattern of inputs to different neurons.

On the other hand, if neural information is carried by a noisy rate-based code, then noise can be averaged out over a population of neurons. Population coding schemes, in which many neurons represent the same information, would therefore be the norm in those networks.

Experiments on various brain systems find both coding schemes, and in some cases combinations of temporal and rate coding are found.
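The noise-averaging argument for population codes can be illustrated with a small simulation (an invented example; the rate, noise level and trial counts are arbitrary): averaging one noisy rate reading over a redundant population shrinks the readout error roughly as 1/sqrt(N):

```python
import random

random.seed(0)  # reproducible noise

def noisy_rate(true_rate, noise_sd):
    """One neuron's noisy estimate of a firing rate (rate code)."""
    return true_rate + random.gauss(0.0, noise_sd)

def population_estimate(true_rate, noise_sd, n_neurons):
    """Average the same rate across a population of noisy neurons."""
    readings = [noisy_rate(true_rate, noise_sd) for _ in range(n_neurons)]
    return sum(readings) / n_neurons

def mean_abs_error(n_neurons, true_rate=20.0, noise_sd=5.0, trials=2000):
    """Average readout error over many trials."""
    errs = [abs(population_estimate(true_rate, noise_sd, n_neurons) - true_rate)
            for _ in range(trials)]
    return sum(errs) / trials

# A population of 100 redundant neurons reads the rate out far more
# reliably than a single neuron.
print(mean_abs_error(1) > mean_abs_error(100))  # True
```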

Page 15

Neuronal computation

Having introduced neurons, neuronal circuits and even information codes with well-defined inputs and outputs, we still have not mentioned the term computation. Is neuronal computation anything like computer computation?

Example: a simple machine operating on the tape 101111:
If read 1, write 0, go right, repeat.
If read 0, write 1, HALT!
If read blank, write 1, HALT!

In a computer program, variables have initial states, there are possible transitions, and a program specifies the rules. The same is true for machine language. To obtain an answer at the end of a computation, the program must HALT.

Does the brain initialise variables? Does the brain ever halt?
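The three rules above define a complete little machine, so they can be run directly. This sketch (not from the slides) treats positions past the end of the tape as blank:

```python
def run_machine(tape):
    """Run the slide's three rules, starting at the left end of the tape.

    If read 1: write 0, go right, repeat.
    If read 0: write 1, HALT.
    If read blank: write 1, HALT.
    """
    tape = list(tape)
    pos = 0
    while True:
        symbol = tape[pos] if pos < len(tape) else " "  # blank past the end
        if symbol == "1":
            tape[pos] = "0"
            pos += 1                      # go right, repeat
        else:                             # read 0 or blank
            if pos < len(tape):
                tape[pos] = "1"
            else:
                tape.append("1")
            return "".join(tape)          # HALT, with the answer on the tape

print(run_machine("101111"))  # "011111": the machine halts with an answer
```

The point of the example survives in code form: initial state, transition rules, and a halting condition are exactly what the brain does not obviously have.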

Page 16

Association: an example of bio-computation

Question: How does the brain associate some memory with a given input?

Answer: The input causes the network to enter an initial state. The state of the neural network then evolves until it reaches some new stable state. The new state is associated with the input state.

One recasting of biological brain function in these computational terms was proposed by John Hopfield in the 1980s as a model for associative memory.

Page 17

Association (cont.)

Whatever initial condition is chosen, the system will follow a well-defined route through state space that is guaranteed to always reach some stable point (i.e., a pattern of activity).

Hopfield's ideas were strongly motivated by existing theories of self-organisation in neural networks. Today, Hopfield nets are a successful example of bio-inspired computing (but are no longer believed to model computation in the brain).

[Figure: trajectories in a schematic state space, flowing toward stable points.]
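A minimal Hopfield-style network makes this concrete. The sketch below is illustrative only (the pattern and sweep count are arbitrary choices): it stores one pattern of +1/-1 activities with Hebbian outer-product weights, then lets a corrupted input evolve until it settles into the stored stable state:

```python
def train(patterns):
    """Hebbian (outer-product) weights storing +/-1 patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=10):
    """Update units until the state stops changing, i.e. until the
    trajectory through state space reaches a stable point."""
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            h = sum(w[i][j] * state[j] for j in range(n))
            new = 1 if h >= 0 else -1
            if new != state[i]:
                state[i], changed = new, True
        if not changed:
            break  # stable state reached
    return state

stored = [1, 1, 1, 1, -1, -1, -1, -1]
w = train([stored])
noisy = [1, -1, 1, 1, -1, -1, -1, -1]  # one bit corrupted
print(recall(w, noisy) == stored)  # True: the net recalls the stored pattern
```

The corrupted input plays the role of the "given input"; the stable state it falls into is the associated memory.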

Page 18

Learning

No discussion of the brain, or of nervous systems more generally, is complete without mention of learning.

• What is learning?
• How does a neural network 'know' what computation to perform?
• How does it know when it gets an 'answer' right (or wrong)?
• What actually changes as a neural network undergoes 'learning'?

[Diagram: the brain receives sensory inputs and produces motor outputs, embedded in a body within an environment.]

Page 19

Learning (cont.)

Learning can take many forms:
• Supervised learning
• Reinforcement learning
• Association
• Conditioning
• Evolution

At the level of neural networks, the best understood forms of learning occur in the synapses, i.e., the strengthening and weakening of connections between neurons. The brain uses its own learning algorithms to define how connections should change in a network.

Page 20

Learning from experience

How do neural networks form in the brain? Once formed, what determines how the circuit might change?

In 1949, Donald Hebb, in his book "The Organization of Behavior", showed how basic psychological phenomena of attention, perception & memory might emerge in the brain.

Hebb regarded neural networks as a collection of cells that can collectively store memories. Our memories reflect our experience.

How does experience affect neurons and neural networks? How do neural networks learn?

Page 21

Synaptic Plasticity

Definition of learning: experience alters behaviour.

The basic experience for neurons is spikes. Spikes are transmitted between neurons through synapses.

Hebb suggested that connections in the brain change in response to experience.

[Diagram: the pre-synaptic cell spikes; after a short delay, the post-synaptic cell spikes.]

Hebbian learning: if the pre-synaptic cell causes the post-synaptic cell to fire a spike, then the connection between them will be enhanced. Eventually, this will lead to a path of 'least resistance' in the network.
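As a minimal illustration of this rule (the learning rate and spike trains are invented for the example), the synaptic weight can be written as a value that grows whenever pre- and post-synaptic spikes coincide:

```python
def hebbian_update(weight, pre_spikes, post_spikes, learning_rate=0.1):
    """Strengthen the synapse whenever pre- and post-synaptic spikes
    coincide (a discrete-time caricature of Hebb's rule)."""
    for pre, post in zip(pre_spikes, post_spikes):
        weight += learning_rate * pre * post  # grows only when both fire
    return weight

# Binary spike trains (1 = spike, 0 = rest) over ten time steps.
pre  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
post = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
w = hebbian_update(0.5, pre, post)
print(round(w, 6))  # 0.9: four coincident spikes, each adding 0.1
```

Repeated co-activation keeps strengthening the connection, carving out the 'path of least resistance' described above.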

Page 22

Today... From biology to information processing

Next time... Artificial neural networks (part 1)

We will focus on the simplest cartoon models of biological neural nets, building on lessons from today to design simple artificial neurons and networks that perform useful computational tasks.

At the turn of the 21st century, "how does it work?" remains an open question. But even the kernel of understanding and the simplified models we already have for various brain functions are priceless in providing useful intuition and powerful tools for bioinspired computation.