
Supervised Learning Networks


• Linear perceptron networks

• Multi-layer perceptrons

• Mixture of experts

• Decision-based neural networks

• Hierarchical neural networks

Hierarchical Neural Network Structures

One-Level: (a) multi-layer perceptrons.

Two-Level: (b) linear perceptron networks; (c) decision-based neural network; (d) mixture of experts network.

Three-Level: (e) experts-in-class network; (f) classes-in-expert network.

Hierarchical Structure of NN

• 1-level hierarchy: BP

• 2-level hierarchy: MOE, DBNN

• 3-level hierarchy: PDBNN

“Synergistic Modeling and Applications of Hierarchical Fuzzy Neural Networks”,

by S.Y. Kung, et al., Proceedings of the IEEE, Special Issue on Computational Intelligence, Sept. 1999

All Classes in One Net

multi-layer perceptron

Divide-and-conquer principle: divide the task into modules and then integrate the individual results into a collective decision.

Modular Structures (two-level)

Two typical modular networks:

(1) the mixture-of-experts (MOE) network, which utilizes expert-level modules, and

(2) the decision-based neural network (DBNN), which is based on class-level modules.

Each expert serves the function of

(1) extracting local features and

(2) making local recommendations.

The rules in the gating network decide how to combine the recommendations from several local experts, each with a corresponding degree of confidence.
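As an illustration only (not from the slides), here is a minimal Python/NumPy sketch of how a gating network might weight local expert recommendations into a collective decision; the experts and gating weights below are hypothetical placeholders:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                    # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_output(x, experts, gate_w):
    """Combine local expert recommendations with gating confidences."""
    confidence = softmax(gate_w @ x)               # one confidence per expert
    recommendations = np.array([f(x) for f in experts])
    return confidence @ recommendations            # collective decision

# Toy usage: two hypothetical linear experts on a 2-D input.
experts = [lambda x: x @ np.array([1.0, -1.0]),
           lambda x: x @ np.array([0.5, 0.5])]
gate_w = np.array([[2.0, 0.0],
                   [0.0, 2.0]])                    # hypothetical gating weights
print(moe_output(np.array([0.3, 0.7]), experts, gate_w))
```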

Expert-level (Rule-level) Modules:

mixture of experts network

Class-level modules are natural basic partitioning units, where each module specializes in distinguishing its own class from the others.

Class-level modules:

In contrast to expert-level partitioning, this one-class-one-network (OCON) structure facilitates a global (or mutual) supervised training scheme. In global inter-class supervised learning, any dispute over a pattern region by two or more competing classes may be effectively resolved by resorting to the teacher's guidance.

Decision-Based Neural Network

Depending on the order used, two kinds of hierarchical networks:

• one has an experts-in-class construct, and

• the other a classes-in-expert construct.

Three-level hierarchical structures:

Apply the divide-and-conquer principle twice:

one time on the expert-level and another on the class-level.

Classes-in-Expert Network

Experts-in-Class Network

Multilayer Back-Propagation Networks

A BP multi-layer perceptron (MLP) possesses adaptive learning abilities: it can estimate sampled functions, represent these samples, encode structural knowledge, and associate inputs with outputs.

Its main strength lies in its (sufficiently large number of) hidden units and, consequently, its large number of interconnections.

MLP networks can learn and generalize from training data; indeed, an MLP can approximate almost any function.
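For concreteness, a minimal Python/NumPy sketch of a one-hidden-layer MLP forward pass (the layer sizes and random parameters are hypothetical):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: hidden units provide the representational power."""
    h = np.tanh(W1 @ x + b1)    # hidden-layer activations
    return W2 @ h + b2          # linear output layer

# Hypothetical sizes: 2 inputs, 8 hidden units, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)
print(mlp_forward(np.array([0.5, -0.2]), W1, b1, W2, b2))
```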

BP Multi-Layer Perceptron (MLP)

A 3-Layer Network

Neuron Units: Activation Function

Linear Basis Function (LBF)

RBF NN is More Suitable for Probabilistic Pattern Classification

MLP: hyperplane. RBF: kernel function.

The probability density function (also called the conditional density function or likelihood) of the k-th class is defined as $p(x|C_k)$.

The centers and widths of the RBF Gaussian kernels are deterministic functions of the training data;

RBF BP Neural Network

• According to Bayes' theorem, the posterior probability is

$$P(C_k|x) = \frac{p(x|C_k)\,P(C_k)}{p(x)}$$

where $P(C_k)$ is the prior probability and $p(x) = \sum_{k'} p(x|C_{k'})\,P(C_{k'})$.

RBF Output as Probability Function

The class-conditional likelihood can be expanded in terms of $M$ shared basis functions:

$$p(x|C_k) = \sum_{j=1}^{M} p(x|j)\,P(j|C_k)$$

so that $p(x) = \sum_{k'} p(x|C_{k'})\,P(C_{k'}) = \sum_{j=1}^{M} p(x|j)\,P(j)$.

[Figure: RBF network whose hidden units model the component densities $p(x|1), p(x|2), \dots, p(x|M)$.]

Substituting the expansion into Bayes' theorem:

$$P(C_k|x) = \frac{\sum_{j=1}^{M} p(x|j)\,P(j|C_k)\,P(C_k)}{\sum_{j'=1}^{M} p(x|j')\,P(j')}
= \sum_{j=1}^{M} \frac{P(j|C_k)\,P(C_k)}{P(j)} \cdot \frac{p(x|j)\,P(j)}{\sum_{j'=1}^{M} p(x|j')\,P(j')}
= \sum_{j=1}^{M} w_{kj}\,\phi_j(x)$$

RBF output: $\phi_j(x) = P(j|x)$, the posterior probability of the j-th set of features in the input $x$.

Weight: $w_{kj} = P(C_k|j)$, the posterior probability of class membership given the presence of the j-th set of features.
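As a minimal numerical sketch of this interpretation (Python/NumPy; the Gaussian centers, common width, kernel priors, and weight matrix below are all hypothetical), the normalized kernel responses play the role of $\phi_j(x)$ and the weighted sum gives $P(C_k|x)$:

```python
import numpy as np

def rbf_posterior(x, centers, sigma, prior_j, W):
    """P(C_k|x) = sum_j w_kj * phi_j(x), where phi_j(x) = P(j|x)."""
    lik = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * sigma ** 2))  # p(x|j)
    phi = lik * prior_j
    phi /= phi.sum()        # phi_j(x) = p(x|j)P(j) / sum_j' p(x|j')P(j')
    return W @ phi          # row k of W holds w_kj = P(C_k|j)

centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # hypothetical kernels
sigma, prior_j = 0.5, np.ones(3) / 3
W = np.array([[0.9, 0.1, 0.5],      # P(C_1|j)
              [0.1, 0.9, 0.5]])     # P(C_2|j); each column sums to one
print(rbf_posterior(np.array([0.2, 0.1]), centers, sigma, prior_j, W))
```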


MLPs are highly non-linear in the parameter space, so gradient-descent learning can get trapped in local minima.

RBF networks avoid this problem by dividing the learning into two independent processes:

1. Use the K-means algorithm to find the centers $c_i$, and determine the weights $w$ using the least-squares method.

2. Fine-tune the RBF parameters by gradient descent.

Comparison of RBF and MLP

|                         | RBF networks               | MLP            |
|-------------------------|----------------------------|----------------|
| Learning speed          | Very fast                  | Very slow      |
| Convergence             | Almost guaranteed          | Not guaranteed |
| Response time           | Slow                       | Fast           |
| Memory requirement      | Very large                 | Small          |
| Hardware implementation | IBM ZISC036, Nestor Ni1000 | Intel 80170NX  |
| Generalization          | Usually better             | Usually poorer |

RBF learning process

[Figure: pipeline of the RBF learning process — K-means finds the centers $c_i$, the K-nearest-neighbor rule sets the widths $\sigma_i$, the basis functions form the design matrix $A$, and linear regression determines the weights $w$.]

RBF networks implement the function

$$s(x) = w_0 + \sum_{i=1}^{M} w_i\,\phi(\|x - c_i\|)$$

The weights $w_i$ and the centers $c_i$ can be determined separately, which gives a fast learning algorithm.

Basis function types include the Gaussian and the inverse quadratic:

$$\phi(r) = \exp\!\left( -\frac{r^2}{2\sigma^2} \right), \qquad \phi(r) = \frac{1}{\sigma^2 + r^2}$$

Finding the RBF Parameters

(1) Use the K-means algorithm to find the centers $c_i$.

[Figure: centers and widths found by K-means and K-NN.]

Then use the K-nearest-neighbor rule to find the function widths:

$$\sigma_i^2 = \frac{1}{K} \sum_{k=1}^{K} \|c_k - c_i\|^2$$

where $c_k$ is the k-th nearest neighbor of $c_i$.

The objective is to cover the training points so that a smooth fit of the training samples can be achieved
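A sketch of this two-step procedure (plain NumPy; the toy data, number of centers M, and neighbor count K are hypothetical):

```python
import numpy as np

def kmeans(X, M, iters=50, seed=0):
    """Plain Lloyd's algorithm: returns M centers c_i."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), size=M, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1).argmin(1)
        for i in range(M):
            if np.any(labels == i):
                c[i] = X[labels == i].mean(0)   # move center to cluster mean
    return c

def knn_widths(c, K=2):
    """sigma_i^2 = (1/K) sum_k ||c_k - c_i||^2 over the K nearest centers."""
    d2 = ((c[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    nearest = np.sort(d2, axis=1)[:, 1:K + 1]   # drop the zero self-distance
    return np.sqrt(nearest.mean(axis=1))        # widths sigma_i

X = np.random.default_rng(1).normal(size=(200, 2))   # toy training points
centers = kmeans(X, M=4)
print(knn_widths(centers, K=2))
```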

For Gaussian basis functions,

$$s(x_p) = w_0 + \sum_{i=1}^{M} w_i \exp\!\left( -\sum_{j=1}^{n} \frac{(x_{pj} - c_{ij})^2}{2\sigma_{ij}^2} \right)$$

Assuming the variances across all dimensions are equal,

$$s(x_p) = w_0 + \sum_{i=1}^{M} w_i \exp\!\left( -\frac{1}{2\sigma_i^2} \sum_{j=1}^{n} (x_{pj} - c_{ij})^2 \right)$$

To write this in matrix form, let

$$a_{pi} = \phi_i(\|x_p - c_i\|), \qquad a_{p0} = 1, \qquad s(x_p) = \sum_{i=0}^{M} w_i\,a_{pi}$$

Then

$$\begin{pmatrix} s(x_1) \\ s(x_2) \\ \vdots \\ s(x_N) \end{pmatrix} =
\begin{pmatrix}
1 & a_{11} & a_{12} & \cdots & a_{1M} \\
1 & a_{21} & a_{22} & \cdots & a_{2M} \\
\vdots & & & & \vdots \\
1 & a_{N1} & a_{N2} & \cdots & a_{NM}
\end{pmatrix}
\begin{pmatrix} w_0 \\ w_1 \\ \vdots \\ w_M \end{pmatrix}$$

that is, $s = Aw$.

Determining the weights w using the least-squares method

$$E = \sum_{p=1}^{N} \left( d_p - \sum_{j=0}^{M} w_j\,\phi_j(\|x_p - c_j\|) \right)^2$$

where $d_p$ is the desired output for pattern $p$. In matrix form,

$$E = (d - Aw)^T (d - Aw)$$

Setting $\frac{\partial E}{\partial w} = 0$ gives

$$w = (A^T A)^{-1} A^T d$$
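A sketch of this closed-form solution (Python/NumPy), assuming the equal-variance Gaussian basis above; the toy data and the choice of centers and widths are hypothetical:

```python
import numpy as np

def design_matrix(X, centers, widths):
    """A[p, i] = phi_i(||x_p - c_i||), plus a leading column of ones for w_0."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * widths ** 2))       # equal-variance Gaussian basis
    return np.hstack([np.ones((len(X), 1)), Phi])

def solve_weights(A, d):
    """w = (A^T A)^{-1} A^T d, computed via the pseudo-inverse for stability."""
    return np.linalg.pinv(A) @ d

# Toy usage with hypothetical centers and widths.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
d = np.sin(X[:, 0]) + X[:, 1]                   # desired outputs d_p
centers, widths = X[:5].copy(), np.full(5, 1.0)
w = solve_weights(design_matrix(X, centers, widths), d)
print(w)   # w_0 followed by one weight per basis function
```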

(2) RBF learning by gradient descent

Let

$$\phi_i(x_p) = \exp\!\left( -\frac{1}{2} \sum_{j=1}^{n} \frac{(x_{pj} - c_{ij})^2}{\sigma_{ij}^2} \right) \qquad \text{and} \qquad e(x_p) = d(x_p) - s(x_p),$$

$$E = \frac{1}{2} \sum_{p=1}^{N} e(x_p)^2 .$$

We have the gradients $\partial E/\partial w_i$, $\partial E/\partial \sigma_{ij}$, and $\partial E/\partial c_{ij}$. Applying gradient descent yields the following update equations:

$$w_i(t+1) = w_i(t) + \eta_w \sum_{p=1}^{N} e(x_p)\,\phi_i(x_p) \qquad \text{when } i = 1, 2, \dots, M$$

$$w_i(t+1) = w_i(t) + \eta_w \sum_{p=1}^{N} e(x_p) \qquad \text{when } i = 0$$

$$\sigma_{ij}(t+1) = \sigma_{ij}(t) + \eta_\sigma \sum_{p=1}^{N} e(x_p)\,w_i\,\phi_i(x_p)\,\frac{(x_{pj} - c_{ij}(t))^2}{\sigma_{ij}^3(t)}$$

$$c_{ij}(t+1) = c_{ij}(t) + \eta_c \sum_{p=1}^{N} e(x_p)\,w_i\,\phi_i(x_p)\,\frac{x_{pj} - c_{ij}(t)}{\sigma_{ij}^2(t)}$$
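These update equations translate almost line-for-line into code. The following sketch (Python/NumPy) performs one batch gradient step; the learning rates and toy data are hypothetical:

```python
import numpy as np

def gradient_step(X, d, w0, w, C, S, eta_w=0.01, eta_c=0.01, eta_s=0.01):
    """One batch update of w_i, c_ij and sigma_ij from the update equations."""
    z = (X[:, None, :] - C[None, :, :]) / S[None, :, :]   # (N, M, n)
    Phi = np.exp(-0.5 * (z ** 2).sum(-1))                 # phi_i(x_p), shape (N, M)
    e = d - (w0 + Phi @ w)                                # errors e(x_p)
    w0_new = w0 + eta_w * e.sum()                         # i = 0 case
    w_new = w + eta_w * (e @ Phi)                         # i = 1..M case
    common = e[:, None, None] * w[None, :, None] * Phi[:, :, None]
    C_new = C + eta_c * (common * (X[:, None, :] - C) / S ** 2).sum(0)
    S_new = S + eta_s * (common * (X[:, None, :] - C) ** 2 / S ** 3).sum(0)
    return w0_new, w_new, C_new, S_new

# Hypothetical toy problem: N = 50 patterns, M = 4 kernels, n = 2 inputs.
rng = np.random.default_rng(3)
X, d = rng.normal(size=(50, 2)), rng.normal(size=50)
w0, w = 0.0, np.zeros(4)
C, S = rng.normal(size=(4, 2)), np.ones((4, 2))
print(gradient_step(X, d, w0, w, C, S)[1])
```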

Elliptical Basis Function networks

$$\phi_j(x_p) = \exp\!\left\{ -\frac{1}{2} (x_p - \mu_j)^T \Sigma_j^{-1} (x_p - \mu_j) \right\}$$

where $\mu_j$ are the function centers and $\Sigma_j$ the covariance matrices. The network outputs are

$$y_k(x_p) = \sum_{j=0}^{J} w_{kj}\,\phi_j(x_p)$$

[Figure: EBF network with inputs $x_1, x_2, \dots, x_n$, basis units $\phi_1, \dots, \phi_M$, and outputs $y_1(x), \dots, y_K(x)$.]
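A minimal sketch of a single elliptical basis unit (Python/NumPy); the center and covariance values below are hypothetical:

```python
import numpy as np

def ebf(x, mu, Sigma):
    """phi_j(x) = exp(-0.5 * (x - mu)^T Sigma^{-1} (x - mu))."""
    diff = x - mu
    return np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))

mu = np.array([0.0, 0.0])               # hypothetical center
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])          # correlated dims -> elliptical contours
print(ebf(np.array([0.5, -0.5]), mu, Sigma))
```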

EBF vs. RBF networks

[Figure: RBFN with 4 centers (left) vs. EBFN with 4 centers (right).]

MATLAB Assignment #3: RBF BP network to separate two classes

[Figure: RBF BP with 4 hidden units vs. EBF BP with 4 hidden units; ratio = 2:1.]
