Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Page 1: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen

Bioinspired Computing
Lecture 14

Alternative Neural Networks

Netta Cohen

Page 2: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Last time: Attractor neural nets

• Biologically inspired associative memories
• A move away from bio-realistic models
• Unsupervised learning
• Working examples and applications
• Pros, cons & open questions

Today: Other neural nets

• SOM (competitive) nets
• Neuroscience applications
• GasNets
• Robotic control

Page 3: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Spatial Codes

Natural neural nets often code similar things close together. The auditory and visual cortex provide examples.

[Figure: maps across the neural material showing frequency sensitivity (low to high frequency) and orientation sensitivity (0° to 359°).]

Another example: touch receptors in the human body. "Almost every region of the body is represented by a corresponding region in both the primary motor cortex and the somatic sensory cortex" (Geschwind 1979:106). "The finger tips of humans have the highest density of receptors: about 2500 per square cm!" (Kandel and Jessell 1991:374). This representation is often dubbed the homunculus (the "little man" in the brain).

Picture from http://www.dubinweb.com/brain/3.html

Page 4: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Kohonen Nets

[Figure: a set of input nodes, fully connected to a lattice of neurons; the output pattern appears across the lattice.]

In a Kohonen net, a number of input neurons feed a single lattice of neurons. The output pattern is produced across the lattice surface.

Large volumes of data are compressed using spatial/topological relationships within the training set. Thus the lattice becomes an efficient distributed representation of the input.

Page 5: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Kohonen Nets (also known as self-organising maps, or SOMs)

Important features:

• Self-organisation of a distributed representation of inputs.
• This is a form of unsupervised learning.
• The underlying learning principle is competition among nodes, known as "winner takes all": only winners get to "learn", while losers decay. The competition is enforced by the network architecture: each node has a self-excitatory connection and inhibits all its neighbours (see the sketch below).
• Spatial patterns are formed by imposing the learning rule throughout the local neighbourhood of the winner.
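To make the competition concrete, here is a minimal sketch (not from the lecture) of winner-takes-all dynamics; the gain values alpha and beta and the clipping to [0, 1] are illustrative assumptions:

    import numpy as np

    def compete(inputs, alpha=1.2, beta=0.4, steps=50):
        # Each node excites itself (alpha) and inhibits every other node (beta).
        a = inputs.astype(float).copy()
        for _ in range(steps):
            total = a.sum()
            # Self-excitation minus inhibition from all other nodes,
            # clipped to keep activations in [0, 1].
            a = np.clip(alpha * a - beta * (total - a), 0.0, 1.0)
        return a

    print(compete(np.array([0.5, 0.8, 0.6])))  # -> [0. 1. 0.]: node 1 wins

Whichever node starts out most excited suppresses the rest and saturates, which is exactly the outcome the learning rule then exploits.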

Page 6: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Training Self-Organising Maps

A simple training algorithm might look like this:

1. Randomly initialise the network input weights.
2. Normalise all inputs so they are size-independent.
3. Define a local neighbourhood and a learning rate.
4. For each item in the training set:
   • Find the lattice node most excited by the input.
   • Alter the input weights for this node and those nearby so that they more closely resemble the input vector, i.e., at each node the input-weight update rule is Δw = r(x − w).
5. Reduce the learning rate and the neighbourhood size.
6. Go to step 2 (another pass through the training set).
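A minimal NumPy sketch of this algorithm is given below. It assumes the inputs have already been normalised (step 2); the grid size, the decay schedules and the Gaussian neighbourhood are illustrative choices rather than part of the lecture:

    import numpy as np

    def train_som(data, rows=10, cols=10, epochs=50, lr0=0.5, seed=0):
        # Step 1: randomly initialise the input weights of each lattice node.
        rng = np.random.default_rng(seed)
        weights = rng.random((rows, cols, data.shape[1]))
        # Lattice coordinates of every node, used to define neighbourhoods.
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                    indexing="ij"), axis=-1).astype(float)
        radius0 = max(rows, cols) / 2.0  # step 3: initial neighbourhood size
        for epoch in range(epochs):
            # Step 5: shrink the learning rate and neighbourhood every pass.
            frac = epoch / epochs
            lr = lr0 * (1.0 - frac)
            radius = max(radius0 * (1.0 - frac), 1.0)
            for x in data:  # step 4: one pass through the training set
                # Find the lattice node most excited by the input
                # (here, the node whose weight vector is closest to x).
                d = np.linalg.norm(weights - x, axis=-1)
                winner = np.array(np.unravel_index(d.argmin(), d.shape))
                # Pull the winner and its neighbours towards the input,
                # w <- w + r(x - w), weighted by a Gaussian neighbourhood.
                h = np.exp(-((grid - winner) ** 2).sum(-1) / (2 * radius ** 2))
                weights += lr * h[..., None] * (x - weights)
        return weights

Each pass shrinks both the learning rate and the neighbourhood radius, so the map first organises globally and then fine-tunes locally.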

Page 7: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Training Self-Organising Maps (cont)

Gradually the net self-organises into a map of the inputs, clustering the input data by recruiting areas of the net for related inputs or input features.

The size of the neighbourhood roughly corresponds to the resolution of the mapped features.

Page 8: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


How Does It Work?

Imagine a 2D training set with clusters of data points (say, a "blue" cluster and a "red" cluster at different positions along the horizontal and vertical axes).

The nodes in the lattice are initially randomly sensitive. Gradually, they will "migrate" towards the input data. Nodes that are neighbours in the lattice will tend to become sensitive to similar inputs. This makes for effective resource allocation: dense parts of the input space recruit more nodes than sparse areas, as the sketch below illustrates.
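A toy run of the train_som sketch above on such a data set (the cluster centres and sizes are made up for illustration):

    import numpy as np

    # Two Gaussian clusters standing in for the "blue" and "red" groups.
    rng = np.random.default_rng(1)
    blue = rng.normal(loc=[0.2, 0.8], scale=0.05, size=(200, 2))
    red = rng.normal(loc=[0.8, 0.2], scale=0.05, size=(200, 2))
    data = rng.permutation(np.vstack([blue, red]))

    w = train_som(data, rows=8, cols=8, epochs=30)
    # After training, most lattice nodes sit near one of the two clusters:
    # dense regions of the input space have recruited more nodes.
    print(np.round(w.reshape(-1, 2), 2))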

Another example: the travelling salesman problem.

Applet from http://www.patol.com/java/TSP/index.html

Page 9: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


How does the brain perform classification?

One area of the cortex (the inferior temporal cortex or IT) has been linked with two important functions:

• object recognition

• object classification

These tasks seem to be shape/colour specific but independent of object size, position, relative motion or speed, brightness or texture.

Indeed, category-specific impairments have been linked to IT injuries.

Page 10: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


How does the brain perform classification (cont)?

Questions: How do IT neurons encode objects/categories? For example:

• local versus distributed representations/coding
• temporal versus rate coding at the neuronal level

Can we recruit ANNs to answer such questions?

Can ANNs perform the same classification, given similar data?

Recently, Elizabeth Thomas and colleagues studied the activity of IT neurons recorded while monkeys classified images, and used a Kohonen net to analyse the data.

Page 11: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


The experiment

Monkeys were trained to distinguish pictures of trees in a training set from pictures of various other objects. The monkeys were considered trained when they reached a 95% success rate.

Trained monkeys were then shown new images of trees and other objects. As they classified the objects, the activity of IT neurons in their brains was recorded. In all, 226 neurons were recorded, on various occasions and over many different images.

The data collected was the mean firing rate of each neuron in response to each image. 25% of the neurons responded to only one category, but 75% were not category-specific. All neurons were image-specific.

Problem: not all neurons were recorded for all images, and no image was tested across all neurons. In fact, when a table of neuronal responses for each image was created, it was more than 80% empty.

E. Thomas et al, J. Cog. Neurosci. (2001)

Page 12: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Experimental Results

Question: Given the partial data, is there sufficient information to classify images as trees or non-trees?

Answer: A 2-node Kohonen net trained on the table of neuronal responses was able to classify new images with an 84% success rate.

E. Thomas et al, J. Cog. Neurosci. (2001)
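As an illustration of how classification can proceed despite the mostly empty table, here is a hypothetical sketch of a 2-prototype competitive net that measures distances only over observed entries; the function names and the masking scheme are assumptions for illustration, not the paper's method:

    import numpy as np

    def train_two_node(table, epochs=20, lr0=0.3, seed=0):
        # 'table' holds mean firing rates, one row per image, one column
        # per neuron; np.nan marks entries that were never recorded.
        rng = np.random.default_rng(seed)
        w = rng.random((2, table.shape[1]))  # one prototype per category
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)
            for x in table:
                seen = ~np.isnan(x)          # use only observed neurons
                if not seen.any():
                    continue
                d = ((w[:, seen] - x[seen]) ** 2).mean(axis=1)
                k = d.argmin()               # winning prototype
                w[k, seen] += lr * (x[seen] - w[k, seen])
        return w

    def classify(w, x):
        seen = ~np.isnan(x)
        return ((w[:, seen] - x[seen]) ** 2).mean(axis=1).argmin()

Because every comparison and update is restricted to the recorded neurons, missing entries simply carry no weight, which is one way such a net can tolerate a sparse table.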

Question: Are categories encoded by category-specific neurons?

Answer: The responses of category-specific neurons were deleted from the table. The success rate of the Kohonen net degraded, but only minimally, and a control set with random data deletions yielded similar results. Conclusion: category-specific neurons are not important for categorisation!

Page 13: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Experimental Results (cont.)

E. Thomas et al, J. Cog. Neurosci. (2001)

Conclusions: The IT employs a distributed representation to encode categories of different images. The redundancy in this encoding allows for graceful degradation so that even with 80% of data missing and many neurons deleted, sufficient information is present for classification purposes. The fact that only rate information was used suggests that temporal information is less important here.

Question: Which neurons are important, if any?

Answer: An examination of the weights that contribute most to the output of the Kohonen net revealed that a small subset of neurons (fewer than 50), which are not category-specific yet respond with different intensities to different categories, is crucial for correct classification.

Page 14: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Space in Neural Nets

Kohonen nets teach us an important lesson about the ability of neurons to encode information, not only in weights, but also in spatial organisation. What are the consequences for network dynamics? Can these principles be extended beyond simple centre-surround constraints of self-excitation and neighbour inhibition?

Once again, insight may be gained by returning to the biological domain and asking how space affects brain activity.

While always aware of the immense richness of neuronal behaviour, we have, until today, treated neurons as minimal processors communicating via well-defined circuits. What have we neglected? We have turned our networks into abstract computing tools, disconnected from the real world in which problems are defined. We have also robbed the networks of enormous freedom by restricting the encoding of information to a series of weights.

Page 15: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Neurotransmitters in the Brain

• Many neurotransmitters do not just excite or inhibit.
• Neurons release gases such as nitric oxide (NO).
• The behaviour of these diffusing gaseous modulators is very different from that of standard neurotransmitters…

Unlike standard neurotransmitters, which are unable to travel far from their point of origin, NO is a small gas molecule that is free to diffuse slowly away from its origin, unhindered by intervening cellular structures.

NO secreted by a neuron affects all neurons within range, regardless of circuitry. Such influences go beyond excitation or inhibition: NO has the potential to modulate many aspects of a neuron's behaviour.

Page 16: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


GasNets

Researchers at Sussex's Centre for Computational Neuroscience and Robotics have been developing an ANN architecture inspired by these findings, which they call GasNets.

Their model is a generalisation of dynamic recurrent neural nets. Neurons are organised on a 2D grid, with all-to-all synaptic connections. Active neurons can also secrete gas.

The concentration of gas at the location of a neuron modulates its sigmoid activity function, either increasing or decreasing the steepness of the curve (and the neuron's ability to secrete gas itself).

[Figure: sigmoid input-output curves of varying steepness.]

All GasNet figures courtesy of Phil Husbands, Mick O'Shea, Tom Smith, & Nick Jakobi
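A loose sketch of one update step in this spirit is shown below; it is not the Sussex model's published equations, and the linear fall-off of gas with distance and the gain coupling are illustrative assumptions:

    import numpy as np

    def gasnet_step(a, W, pos, emitters, gain0=1.0, k=2.0, radius=0.15):
        # Gas concentration at each neuron: every currently active emitter
        # contributes, with a linear fall-off over distance on the 2D plane.
        conc = np.zeros(len(a))
        for j in emitters:
            dist = np.linalg.norm(pos - pos[j], axis=1)
            conc += np.maximum(0.0, 1.0 - dist / radius) * a[j]
        # The local gas level rescales the slope of each node's sigmoid.
        gain = gain0 + k * conc
        return 1.0 / (1.0 + np.exp(-gain * (W @ a)))

The point is that the gas term reaches neurons by spatial proximity on the grid, independently of the synaptic wiring in W.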

Page 17: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


A Control Task…

A robot lives in a walled arena. Its task is to approach a white triangle painted on the wall and avoid a white rectangle, using only very crude visual input (typically a handful of pixels from a camera mounted on the robot).

Performing this shape discrimination under noisy lighting conditions is a non-trivial task, especially given the limited visual input available to the controller.

[Figure: robot schematic showing drive wheels, caster, motors, camera and bumpers.]

Page 18: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Non-Gaseous Solutions

The same researchers had previously evolved more standard dynamical neural nets to solve this task:

These controllers took ~6000 generations to discover.

What kind of GasNet controllers would evolve? Would they exhibit advantages over other kinds of ANN?

all figures courtesy of Sussex CCNR

Page 19: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


GasNet Controllers

Two kinds of successful GasNet controller were evolved, each taking ~1000 generations to discover. Here's one:

Page 20: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


The GasNet Solutions

Both GasNets perform robustly despite the noisy lighting conditions and "outrageously low bandwidth" vision.

The evolved visual morphology always played a crucial role. Active visual strategies solved the task, rather than central reasoning:

• Far (ballistic): contrast between two offset visual inputs is used to detect the triangle's edge.
• Near (closed-loop): a "scanning behaviour" continually modulates the approach.

Page 21: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Why Does Gas Make It Better?

While still an open question, several possibilities include:

• Gas diffuses widely, allowing large parts of the network to be inhibited or excited simultaneously.
• Gas concentration varies much more slowly than the flow of 'electrical' activation around the synaptic connections.
• There may be useful interactions between the slow gas dynamics and the fast activation dynamics.

A combination of these ideas may explain why solutions appear easier to build from GasNets than from non-gas dynamical ANNs. A toy illustration of the two-timescale idea follows below.
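A minimal toy sketch of the fast/slow interaction, with made-up dynamics and time constants chosen purely for illustration:

    import numpy as np

    def two_timescales(steps=1000, tau_fast=2.0, tau_slow=100.0):
        a, g = 0.0, 0.0      # fast activation, slow gas concentration
        trace = []
        for t in range(steps):
            drive = np.sin(t / 20.0)                          # toy external input
            a += (np.tanh((1.0 + g) * drive) - a) / tau_fast  # fast dynamics
            g += (a * a - g) / tau_slow                       # gas tracks activity slowly
            trace.append((a, g))
        return trace

The slow variable g acts as a drifting context that reshapes how the fast variable responds to the same input, which is the kind of interaction the bullet points above gesture at.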

These possibilities are currently being investigated by Chris Buckley in the Biosystems group of the School of Computing.

Page 22: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


From Biology to ANNs & Back

Neuroscience and studies of animal behaviour have led to new ideas for artificial learning, communication, cooperation & competition. Simplistic cartoon models of these mechanisms can lead to new paradigms and impressive technologies.

• Dynamic neural nets are helping us understand real-time adaptation and problem-solving under changing conditions.
• Hopfield nets offer new insight into mechanisms of association and the benefits of unsupervised learning.
• Thomas' work helps unravel coding structures in the cortex.
• Husbands et al.'s GasNets are helping neuroscientists understand the behaviour of NO and other local influences in real nervous systems, and are also being used for improved robot control.

Page 23: Bioinspired Computing Lecture 14 Alternative Neural Networks Netta Cohen


Next time…

• Guest lecture series on Genetic Evolution and Genetic Programming.

Reading

• Elizabeth Thomas et al. (2001) "Encoding of categories by noncategory-specific neurons in the inferior temporal cortex", J. Cog. Neurosci. 13: 190-200.
• Phil Husbands, Tom Smith, Nick Jakobi & Michael O'Shea (1998) "Better living through chemistry: Evolving GasNets for robot control", Connection Science, 10: 185-210.
• Ezequiel Di Paolo (2003) "Organismically-inspired robotics: Homeostatic adaptation and natural teleology beyond the closed sensorimotor loop", in: K. Murase & T. Asakura (eds) Dynamical Systems Approach to Embodiment and Sociality, Advanced Knowledge International, Adelaide, pp. 19-42.
• Ezequiel Di Paolo (2000) "Homeostatic adaptation to inversion of the visual field and other sensorimotor disruptions", SAB2000, MIT Press.