
Introduction to Neural Networks

John Paxton

Montana State University

Summer 2003

Textbook

Fundamentals of Neural Networks:

Architectures, Algorithms, and Applications

Laurene Fausett

Prentice-Hall

1994

Chapter 1: Introduction

• Why Neural Networks?

– Training techniques exist.

– High-speed digital computers are available.

– Specialized hardware can be built.

– They better capture the behavior of biological neural systems.

Who is interested?

• Electrical Engineers – signal processing, control theory

• Computer Engineers – robotics

• Computer Scientists – artificial intelligence, pattern recognition

• Mathematicians – modelling tool when explicit relationships are unknown

Characterizations

• Architecture – a pattern of connections between neurons

• Learning Algorithm – a method of determining the connection weights

• Activation Function

Problem Domains

• Storing and recalling patterns

• Classifying patterns

• Mapping inputs onto outputs

• Grouping similar patterns

• Finding solutions to constrained optimization problems

A Simple Neural Network

[Figure: inputs x1 and x2 connect to output unit y through weights w1 and w2]

y_in = x1 w1 + x2 w2

Activation is f(y_in)
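The weighted-sum-then-activation computation above can be sketched in Python; the weight values and the threshold of 1 below are illustrative choices, not from the slides:

```python
def neuron_output(x1, x2, w1, w2, f):
    """Compute f(y_in) for a two-input neuron, y_in = x1*w1 + x2*w2."""
    y_in = x1 * w1 + x2 * w2   # weighted sum of the inputs
    return f(y_in)

# Example activation: binary step with an assumed threshold of 1
step = lambda y: 1 if y >= 1 else 0

print(neuron_output(1, 1, 0.6, 0.6, step))  # 1, since 1.2 >= 1
print(neuron_output(1, 0, 0.6, 0.6, step))  # 0, since 0.6 < 1
```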

Biological Neuron

• Dendrites receive electrical signals affected by chemical process

• Soma fires at differing frequencies

[Figure: a biological neuron with dendrite, soma, and axon labeled]

Observations

• A neuron can receive many inputs

• Inputs may be modified by weights at the receiving dendrites

• A neuron sums its weighted inputs

• A neuron can transmit an output signal

• The output can go to many other neurons

Features

• Information processing is local

• Memory is distributed (short term = signals, long term = dendrite weights)

• The dendrite weights learn through experience

• The weights may be inhibitory or excitatory


• Neurons can generalize novel input stimuli

• Neurons are fault tolerant and can sustain damage

Applications

• Signal processing, e.g. suppress noise on a phone line.

• Control, e.g. backing up a truck with a trailer.

• Pattern recognition, e.g. recognizing handwritten characters or identifying sex from facial images.

• Diagnosis, e.g. arrhythmia classification or mapping symptoms to a medical case.


• Speech production, e.g. NETtalk (Sejnowski and Rosenberg, 1986).

• Speech recognition.

• Business, e.g. mortgage underwriting (Collins et al., 1988).

• Reinforcement learning, e.g. TD-Gammon.

Page 14: Introduction to Neural Networks

Single Layer Feedforward NN

[Figure: inputs x1 … xn fully connected to outputs y1 … ym; weight wij connects input xi to output yj]
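A forward pass through this single-layer net can be sketched as follows, with w[i][j] the weight from input xi to output yj; the particular input and weight values are illustrative:

```python
def forward(x, w):
    """y_in[j] = sum over i of x[i] * w[i][j], for each output unit j."""
    n, m = len(w), len(w[0])
    return [sum(x[i] * w[i][j] for i in range(n)) for j in range(m)]

x = [1.0, 0.5]        # n = 2 inputs
w = [[0.2, 0.4],      # weights from x1 to y1 and y2
     [0.6, 0.8]]      # weights from x2 to y1 and y2

print(forward(x, w))  # net inputs y_in for the m = 2 output units
```

An activation function would then be applied to each y_in to produce the outputs.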

Multilayer Neural Network

• More powerful

• Harder to train

[Figure: inputs x1 … xn feed hidden units z1 … zp, which feed outputs y1 … ym]

Setting the Weights

• Supervised

• Unsupervised

• Fixed weight nets

Activation Functions

• Identity f(x) = x

• Binary step f(x) = 1 if x >= θ, f(x) = 0 otherwise

• Binary sigmoid f(x) = 1 / (1 + e^-x)


• Bipolar sigmoid f(x) = -1 + 2 / (1 + e^-x)

• Hyperbolic tangent f(x) = (e^x – e^-x) / (e^x + e^-x)
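The activation functions above can be written directly in Python; the step threshold θ defaults to 0 here as an illustrative choice:

```python
import math

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0

def binary_sigmoid(x):   # range (0, 1)
    return 1 / (1 + math.exp(-x))

def bipolar_sigmoid(x):  # range (-1, 1)
    return -1 + 2 / (1 + math.exp(-x))

def hyperbolic_tangent(x):  # equivalent to math.tanh(x)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

print(binary_sigmoid(0))   # 0.5
print(bipolar_sigmoid(0))  # 0.0
```

Note that the bipolar sigmoid and hyperbolic tangent are closely related: both map onto (-1, 1) and are 0 at x = 0.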

History

• 1943 McCulloch-Pitts neurons

• 1949 Hebb’s law

• 1958 Perceptron (Rosenblatt)

• 1960 Adaline, better learning rule (Widrow, Hoff)

• 1969 Limitations (Minsky, Papert)

• 1972 Kohonen nets, associative memory


• 1977 Brain State in a Box (Anderson)

• 1982 Hopfield net, constraint satisfaction

• 1985 ART (Carpenter, Grossberg)

• 1986 Backpropagation (Rumelhart, Hinton, Williams)

• 1988 Neocognitron, character recognition (Fukushima)

McCulloch-Pitts Neuron

[Figure: inputs x1, x2, x3 connect to output unit y]

f(y_in) = 1 if y_in >= θ, 0 otherwise

Exercises

• 2 input AND

• 2 input OR

• 3 input OR

• 2 input XOR
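As a sketch of the first exercise, one possible McCulloch-Pitts solution for 2-input AND uses weights of 1 and threshold θ = 2, so the neuron fires only when both inputs are 1; the weight and threshold choices are ours, not the slides':

```python
def mp_neuron(inputs, weights, theta):
    """McCulloch-Pitts neuron: fire (1) iff the weighted sum reaches theta."""
    y_in = sum(x * w for x, w in zip(inputs, weights))
    return 1 if y_in >= theta else 0

# 2-input AND: weights (1, 1), theta = 2
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), (1, 1), theta=2))
# Fires only for input (1, 1), matching AND.
```

OR can be handled the same way with θ = 1. XOR, however, cannot be computed by any single neuron of this form, which foreshadows the 1969 Minsky and Papert limitations noted in the history slides.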