
Page 1

Umetne Nevronske Mreže (Artificial Neural Networks)
UNM (ANN)

• According to the learning strategy they are divided into: unsupervised (1) and supervised (2)

• (1) Kohonen neural networks
• (2) Neural networks with error back-propagation (error back propagation NN)
• (2..1) Counterpropagation neural networks (counterpropagation NN)
• (2) RBF neural networks (radial basis function)

Page 2

Beginnings

1943: McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity".

1949: Donald Hebb, "The Organization of Behavior", proposes the Hebb rule (strengthening of the connection between neighbouring excited neurons; this activity is the basis of learning and memory).

1962: Frank Rosenblatt, in the book "Principles of Neurodynamics", develops the first perceptron using the McCulloch-Pitts neuron and Hebb's rule.

1969: Minsky and Papert, in the book "Perceptrons", explain why perceptrons have theoretical limitations in solving nonlinear problems.

1982: John Hopfield introduces nonlinearity into the basic neuron function. This removes the shortcomings and opens a new period of research into artificial neural networks.

Page 3

XOR

Input x1   Input x2   Output
0          0          0
0          1          1
1          0          1
1          1          0
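The XOR truth table is the classic example of a problem that a single-layer perceptron cannot represent, because the two output classes are not linearly separable; one hidden layer is enough. A minimal Python sketch with hand-chosen (not learned) weights for threshold neurons illustrates this:

```python
def step(net):
    # Threshold (step) activation: 1 if the net input exceeds 0, else 0.
    return 1 if net > 0 else 0

def xor(x1, x2):
    h_or = step(1.0 * x1 + 1.0 * x2 - 0.5)        # hidden unit 1: OR of the inputs
    h_nand = step(-1.0 * x1 - 1.0 * x2 + 1.5)     # hidden unit 2: NAND of the inputs
    return step(1.0 * h_or + 1.0 * h_nand - 1.5)  # output unit: AND of the hidden units

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", xor(x1, x2))  # reproduces the XOR truth table above
```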

Page 4

Teuvo Kohonen (born 1934), Finnish academician; "Self-organizing maps".

From 1960 onward: T. Kohonen introduces new concepts into "neural computing": associative memory, optimal associative mapping, self-organizing feature maps (SOMs).

Kohonen, T. and Honkela, T. (2007). "Kohonen network", Scholarpedia.
http://www.scholarpedia.org/article/Kohonen_network

1986: Rumelhart and co-workers publish a new learning algorithm for neural networks with error back-propagation.

Page 5

Nature 323, 533 - 536 (09 October 1986); doi:10.1038/323533a0

Learning representations by back-propagating errors

DAVID E. RUMELHART*, GEOFFREY E. HINTON† & RONALD J. WILLIAMS*

*Institute for Cognitive Science, C-015, University of California, San Diego, La Jolla, California 92093, USA †Department of Computer Science, Carnegie-Mellon University, Pittsburgh, Philadelphia 15213, USA †To whom correspondence should be addressed.

We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].

References

1. Rosenblatt, F. Principles of Neurodynamics (Spartan, Washington, DC, 1961).

2. Minsky, M. L. & Papert, S. Perceptrons (MIT, Cambridge, 1969).

3. Le Cun, Y. Proc. Cognitiva 85, 599−604 (1985).

4. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations (eds Rumelhart, D. E. & McClelland, J. L.) 318−362 (MIT, Cambridge, 1986).

Page 6

Representation of neurons

[Figure: a biological neuron and the corresponding artificial neuron. The input signals X (x1, x2, ..., xj, ..., xm) arrive through the synapses, i.e. the weights wj; the neuron body sums them and the axon carries away the output signal y. The artificial neuron consists of the same elements: the weights wj, the neuron body and the output signal y.]

Page 7

"Error back-propagation" neural networks (perceptron)

Output signals of the neurons

[Figure: the output signal y = f(Net) of a neuron for three transfer functions, a), b) and c), each bounded between 0 and 1, with threshold θ and slope parameter α.]

Net_j = Σ_{i=1..k} w_ji · x_i

y = out^signal = 1 / (1 + e^(−α(Net − θ)))
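A minimal Python sketch of the output signal of a single neuron as defined above: the net input Net_j = Σ_i w_ji · x_i is passed through a sigmoid with slope α and threshold θ. The input and weight values are illustrative assumptions:

```python
import math

def neuron_output(x, w, alpha=1.0, theta=0.0):
    net = sum(wi * xi for wi, xi in zip(w, x))              # Net_j = sum_i w_ji * x_i
    return 1.0 / (1.0 + math.exp(-alpha * (net - theta)))   # sigmoid output between 0 and 1

print(neuron_output([0.5, 0.3, 0.9], [0.2, -0.4, 0.7]))     # illustrative values
```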

Page 8

Different forms of EBP neural networks

[Figure: three EBP architectures. (a) Two inputs x1, x2 connected directly to three output neurons y1, y2, y3 through the weights w11 ... w32 (6 weights). (b) The same network with an additional bias input and the bias weights w1b, w2b, w3b (9 weights). (c) A network with inputs x1, x2 and bias, a hidden layer with weights w^h_ji (6 weights) and an output layer with weights w^o_ji (9 weights) giving y1, y2, y3.]

Net_j = Σ_{i=1..k} w_ji · x_i
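A minimal Python sketch of a forward pass through the layered architecture (c) above: 2 inputs plus bias, a hidden layer with 6 weights (w^h) and an output layer with 9 weights (w^o). The random weight values are placeholders, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(2, 3))   # 2 hidden neurons x (2 inputs + bias) = 6 weights
W_output = rng.normal(size=(3, 3))   # 3 output neurons x (2 hidden + bias) = 9 weights

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def forward(x):
    x = np.append(x, 1.0)                                # add the bias input
    hidden = sigmoid(W_hidden @ x)                       # outputs of the hidden layer
    return sigmoid(W_output @ np.append(hidden, 1.0))    # y1, y2, y3

print(forward(np.array([0.2, 0.8])))
```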

Page 9

Weight correction in a neural network with error back-propagation

Data flow (top to bottom):
- passive input layer: distribution of the variables x_si (input layer: no corrections)
- 1st transformation: from x_si to out¹_sj
- 2nd transformation: from out¹_sj to out²_sk
- 3rd transformation: from out²_sk to y_sj

Flow of corrections (bottom to top):
- 1st: weight corrections in the output layer
- 2nd: weight corrections in the second hidden layer
- 3rd: weight corrections in the first hidden layer

[Figure: a network with inputs x_s1, x_s2 and bias, two hidden layers and outputs y_s1, y_s2, y_s3.]

Output-layer error:
δ_j^out = (t_j − y_j^out) · y_j^out · (1 − y_j^out)

Hidden-layer error:
δ_j^hidden = y_j^hidden · (1 − y_j^hidden) · Σ_{k=1..r} δ_k^out · w_kj^out

Weight change (learning rate η, momentum μ):
Δw_ji^(l) = η · δ_j^(l) · out_i^(l−1) + μ · Δw_ji^(l, previous)

out^signal = 1 / (1 + e^(−α(Net − θ)))      Net_j = Σ_{i=1..k} w_ji · x_i
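A minimal Python sketch of one back-propagation correction following the formulas above: the output-layer and hidden-layer errors δ and the weight change with learning rate η and momentum μ. The network size (2 inputs, 2 hidden, 1 output) and the parameter values are illustrative assumptions:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

rng = np.random.default_rng(1)
W_h = rng.normal(scale=0.5, size=(2, 3))   # hidden weights: 2 neurons x (2 inputs + bias)
W_o = rng.normal(scale=0.5, size=(1, 3))   # output weights: 1 neuron x (2 hidden + bias)
dW_h_prev = np.zeros_like(W_h)             # previous corrections, needed for the momentum term
dW_o_prev = np.zeros_like(W_o)
eta, mu = 0.5, 0.9                         # learning rate and momentum (assumed values)

def train_step(x, t):
    global W_h, W_o, dW_h_prev, dW_o_prev
    x_b = np.append(x, 1.0)                                 # input plus bias
    y_h = sigmoid(W_h @ x_b)                                # hidden-layer outputs
    y_h_b = np.append(y_h, 1.0)
    y_o = sigmoid(W_o @ y_h_b)                              # network output

    delta_o = (t - y_o) * y_o * (1.0 - y_o)                 # output-layer error delta
    delta_h = y_h * (1.0 - y_h) * (W_o[:, :2].T @ delta_o)  # hidden-layer error delta

    dW_o = eta * np.outer(delta_o, y_h_b) + mu * dW_o_prev  # weight corrections with momentum
    dW_h = eta * np.outer(delta_h, x_b) + mu * dW_h_prev
    W_o = W_o + dW_o
    W_h = W_h + dW_h
    dW_o_prev, dW_h_prev = dW_o, dW_h

train_step(np.array([0.0, 1.0]), np.array([1.0]))
```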

Page 10

[Figure: the three layers that enter the correction of a single weight. The i-th neuron in the (hidden−1) layer gives the output signal y_i^(hidden−1); the j-th neuron in the hidden layer, with weights w_ji^hidden, gives the signal y_j^hidden; the r output neurons, connected through the weights w_1j^out ... w_rj^out, give the signals y_1^out ... y_k^out ... y_r^out.]

δ_j^hidden = y_j^hidden · (1 − y_j^hidden) · Σ_{k=1..r} δ_k^out · w_kj^out

Δw_ji^(l) = η · δ_j^(l) · out_i^(l−1) + μ · Δw_ji^(l, previous)

In the calculation of the correction of each weight we therefore use data from three layers of neurons.

Page 11

The Kohonen neural network

[Figure: the Kohonen network and its levels of weights. The input object Xs with components x1 ... x8 enters the block of neurons; each component is compared with the corresponding level of weights (weight level no. 2 and weight level no. 6 are marked), and each small cell is an individual weight.]

Page 12

[Figure: a two-dimensional Kohonen map receiving an input object with components x1 ... x4 and giving the outputs y1, y2, y3, ..., yn. The numbers 0, 1, 2, 3 written in the cells mark the topological distance (neighbourhood ring) of each neuron from the central, excited neuron.]

Page 13

Hexagonal arrangement of neurons

[Figure: inputs x1 ... x4, outputs y1, y2, ..., yj, ..., yn; the numbers 0, 1, 2, 3 mark the neighbourhood rings around the excited neuron, and the labels a, b, c, d mark the edges of the map.]

Page 14

[Figure: three views of the same labelled Kohonen top-map: the map of the class labels A, B and C (some cells carry two labels, e.g. A,C or C,B, i.e. conflicts), the map of occupancy counts (how many objects excite each neuron), and the map of the identification numbers of the objects assigned to each neuron.]

Page 15

Top map with cells that carry the information about the quality of the paint (labels A, B and C).

Weight map of the variable x2, "pigment concentration":

.76 .85 .90 .93 .95 .97 .99
.63 .75 .84 .80 .85 .89 .95
.59 .66 .68 .65 .77 .75 .92
.47 .45 .52 .58 .60 .69 .85
.31 .33 .46 .42 .53 .61 .81
.16 .25 .30 .33 .46 .55 .75
.03 .09 .08 .22 .38 .50 .67

Input object Xs (a paint recipe) represented by three variables: solvent x1, pigment x2, binder x3.

Page 16

Counterpropagation neural network

[Figure: the input object Xs with components xs1 ... xs8 enters the input (Kohonen) layer and excites the neuron We; the targets ts1 ... ts4 enter the output or Grossberg layer of neurons.]

The position of the excited neuron We is projected into the output Grossberg layer; once the position of We has been mapped, the response Ts enters the output Grossberg layer.

Page 17

"Radial basis function" (RBF) neural networks

[Figure: an RBF network with an input layer (inputs x1, x2), 5 RBF nodes plus a bias, the weights w1 ... w5 and wbias, and the output y1.]
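A minimal Python sketch of an RBF network with the layout shown above: two inputs, five Gaussian RBF nodes plus a bias, and one output obtained as a weighted sum. The centres, width and weights are illustrative placeholders, not trained values:

```python
import numpy as np

centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
sigma = 0.5                                    # common width of the Gaussian RBF nodes
w = np.array([0.3, -0.2, 0.1, 0.4, 0.7])       # weights w1 ... w5 of the 5 RBF nodes
w_bias = 0.05                                  # bias weight

def rbf_output(x):
    phi = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2.0 * sigma ** 2))  # RBF node activations
    return float(phi @ w + w_bias)             # linear combination plus bias gives y1

print(rbf_output(np.array([0.4, 0.6])))
```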

Page 18

[Figure: RMSE as a function of the training epochs (in thousands, 1-15) for the training set (RMSE training) and the test set (RMSE test); the best model is the one with the best (lowest) RMSE for the test set.]
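A minimal Python sketch of how the "best model" on such a plot is selected: training continues, but the weights that are kept are those giving the lowest RMSE on the independent test set. The functions train_one_epoch and test_errors are hypothetical placeholders standing in for a real training loop:

```python
import copy
import math

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def select_best_model(model, train_one_epoch, test_errors, n_epochs=15000):
    best_rmse, best_model = float("inf"), None
    for epoch in range(n_epochs):
        train_one_epoch(model)                 # one pass over the training set
        current = rmse(test_errors(model))     # RMSE on the independent test set
        if current < best_rmse:                # remember the best model seen so far
            best_rmse, best_model = current, copy.deepcopy(model)
    return best_model, best_rmse
```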

Page 19

[Figure: a network in which the output equals the input (input layer, hidden layer, output layer (= input)); the 17 input variables, concentrations of SO2 [mg/m3], are used as the same 17 output variables (targets). The plotted quantity is the output value of hidden node 1.]

Page 20

KOHONEN AND COUNTERPROPAGATION NEURAL NETWORKS

Study literature

1. J. Zupan, J. Gasteiger, Neural Networks in Chemistry and Drug Design, Wiley-VCH, Weinheim, 1999.
2. J. Zupan, Računalniške metode za kemike, DZS, Ljubljana, 1992.
3. Teach/Me software, Hans Lohninger, Springer-Verlag, Berlin, 1999.
4. T. Kohonen, Self-Organizing Maps, Springer-Verlag, Berlin, 1995.
5. R. Hecht-Nielsen, "Counterpropagation networks", Applied Optics, 26, 4979-4984, 1987.
6. R. Hecht-Nielsen, "Applications of counterpropagation networks", Neural Networks, 1, 131-139, 1988.
7. R. Hecht-Nielsen, Neurocomputing, Addison-Wesley, Reading, MA, 1990.
8. J. Zupan, M. Novič, I. Ruisánchez, "Kohonen and counterpropagation artificial neural networks in analytical chemistry: tutorial", Chemometr. Intell. Lab. Syst., 38, 1-23, 1997.

Page 21

Contents:

- Basic structure of artificial neural networks
- Kohonen artificial neural networks (K-UNM)
- Counterpropagation artificial neural networks
- 2-D projection (mapping) with a Kohonen ANN
- Classification
- Modelling
- Examples of use:
  - mapping of IR spectra
  - models for predicting toxicity (log(1/LD50) for Tetrahymena pyriformis) of over 200 hydroxy-substituted aromatic compounds

Page 22

Artificial neural networks (ANN, UNM) as a black box for decision making

• input signals
• output signals
• learning strategies
  • unsupervised
  • supervised

Page 23

ANN as a black box for decision making

Pattern recognition, determination of properties, process control, ...

Input signal: X = (x1, x2, ..., xi, ..., xm)

Output signal: Y = (y1, y2, ..., yj, ..., yn)

[Figure: the input signal enters the ANN (Net_k), which produces the output signal.]

An ANN becomes useful only after it has been trained on our data.

Page 24

Unsupervised learning

The relationship between the object and the target is not defined in advance.

Object: X = (x1, x2, ..., xi, ..., xm)      Target: Y = (not known)

Page 25

Unsupervised learning

Example: the origin of Italian olive oils

Object: X = (x1, x2, ..., xi, ..., x8), 572 samples
Target: Y (not known)

x1 ... x8: the contents (%) of the fatty acids:
(1) palmitic, (2) palmitoleic, (3) stearic, (4) oleic, (5) linoleic, (6) arachidic (eicosanoic), (7) linolenic, (8) eicosenoic

Page 26

J. Zupan, M. Novic, X. Li, J. Gasteiger, Classification of Multicomponent Analytical Data of Olive Oils Using Different Neural Networks, Anal. Chim. Acta, 292, (1994), 219-234.

Class   Region             Number of samples
1       North Apulia       25
2       Calabria           56
3       South Apulia       206
4       Sicily             36
5       Inner Sardinia     65
6       Coastal Sardinia   33
7       East Liguria       50
8       West Liguria       50
9       Umbria             51
Total                      572

[Figure: the Kohonen top-map of the 572 olive oil samples, with each occupied neuron labelled by the class number (1-9) of the samples that excite it; samples from the same region form clusters on the map.]

Page 27

Supervised learning

The relationship between the object and the target is known in advance, which means that for any object we know exactly which target belongs to it, i.e. which target we want to reach.

Object: X = (x1, x2, ..., xi, ..., xm)      Target: Y = (y1, y2, ..., yj, ..., yn)

Page 28

Supervised learning

Example: classification of points according to the quadrants of the Cartesian coordinate system

Object: X = (x1, x2, ..., xi, ..., xm)      Target: Y = (y1, y2, ..., yj, ..., yn)

m = 2          n = 1    n = 4
-------------  -------  -------------
X = ( 1,  1)   Y = 1    Y = (1,0,0,0)
X = (-1,  1)   Y = 2    Y = (0,1,0,0)
X = (-1, -1)   Y = 3    Y = (0,0,1,0)
X = ( 1, -1)   Y = 4    Y = (0,0,0,1)
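A minimal Python sketch of how the training pairs in the table above are built, with the quadrant expressed either as a single target value (n = 1) or as a four-component target vector (n = 4):

```python
def quadrant(x1, x2):
    # Quadrant number 1-4 for a point lying strictly inside a quadrant.
    if x1 > 0 and x2 > 0: return 1
    if x1 < 0 and x2 > 0: return 2
    if x1 < 0 and x2 < 0: return 3
    return 4

def one_hot(q, n=4):
    # Target vector with a 1 at the position of the quadrant, 0 elsewhere.
    return tuple(1 if j == q - 1 else 0 for j in range(n))

for point in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
    q = quadrant(*point)
    print(point, "Y =", q, "Y =", one_hot(q))
```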

Page 29

Supervised learning

Example: X = (x1, x2),  Y = (y1, y2, y3, y4)

n = 1    n = 4
-------  -------------
Y = 1    Y = (1,0,0,0)
Y = 2    Y = (0,1,0,0)
Y = 3    Y = (0,0,1,0)
Y = 4    Y = (0,0,0,1)

[Figure: the (x1, x2) plane divided into the four quadrants, labelled 1, 2, 3 and 4 with the corresponding targets Y.]

Page 30

Basics of Kohonen artificial neural networks

- organisation of the neurons
  - 1-dimensional (in a row)
  - 2-dimensional (in a plane)
- analogy of the Kohonen ANN with the brain
- learning strategy: "the winner takes all"
- learning-rate parameters
  - size and extent of the weight corrections
  - shrinking of the neighbourhood
- recognition of the objects from the training set
- prediction for "unknown" objects
- the top layer ("top-layer", VP) with labels
  - empty spaces in the top layer
  - clusters in the top layer
  - conflicts in the top layer
- levels of weights

Page 31

A neuron is defined as a set of weights:

W_j ... the j-th neuron;  w_ji ... the i-th weight of the j-th neuron

W_j = [w_ji, i = 1 ... m]

[Figure: a single neuron drawn as a column of its weights w_j1, w_j2, w_j3, w_j4, w_j5, w_j6.]

Page 32

Organisation of the neurons

W_j, j = 1, ..., n       n: the number of neurons
w_ji, i = 1, ..., m      m: the dimension of the neurons

The dimension m of the neurons must be equal to the dimension of the representation of the objects (the number of independent variables)!

Page 33

1-dimensional organisation of the neurons (in a row)

[Figure: an object X = [x1, x2, x3] (m = 3) is presented to a row of n = 6 neurons W1, W2, ..., W6, each of which carries 3 weights.]

Page 34

2-dimensional organisation of the neurons (in a plane)

[Figure: an object X = [x1, x2, x3] is presented to a plane of neurons W_jx,jy with weights W_jx,jy,1, W_jx,jy,2, W_jx,jy,3, where jx = 1, ..., 6 and jy = 1, ..., 4.]

With nx = 6 and ny = 4, the number of neurons is n = nx · ny = 24 (6 x 4).

Page 35

Kohonen neural networks can be used for nonlinear projections of multi-dimensional objects into a two-dimensional space, a plane (mapping).

"homunculus"

In the brain, the areas of neurons that control the movement (the motor function) of the individual parts of the human body are organised next to one another, like on a virtual map of the human body. However, the size of an individual area does not correspond to the size of the organ whose movement it controls; the size of the area is proportional to the importance and complexity of its activities.

Page 36

"homunculus"

Page 37

Learning strategy: "the winner takes all"

Winner: W_win, the neuron with the minimum ε_j

x_i ... the components of the object vector X
w_ji ... the weights (components) of the j-th neuron W_j
ε_j ... the difference between the object and the j-th neuron

ε_j = Σ_{i=1..m} (x_i − w_ji)²
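A minimal Python sketch of the "winner takes all" selection: ε_j is computed for every neuron and the neuron with the smallest ε_j becomes the winner. The weight values are illustrative placeholders:

```python
import numpy as np

def find_winner(x, W):
    # W has shape (n_neurons, m); returns the index of the most similar neuron.
    eps = np.sum((W - x) ** 2, axis=1)   # eps_j = sum_i (x_i - w_ji)^2 for every neuron j
    return int(np.argmin(eps))           # the winner has the minimum eps_j

W = np.random.default_rng(2).random((6, 3))   # 6 neurons with m = 3 weights each
print(find_winner(np.array([0.2, 0.5, 0.9]), W))
```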

Page 38

Analogy between the biological and the simulated neuron: a signal excites a single neuron in a whole bundle of real, biological neurons.

In the computer simulation the excited (winning) neuron W_win is determined by the coordinates of its position in the ANN:

W_win = W_jx,jy

[Figure: example of the winning neuron at the position jx = 5, jy = 2 in a 2-D network.]

Page 39

CORRECTION: the "winning" neuron + the neighbouring neurons

Learning-rate parameter: η(t, r(t))
t ... learning time
r(t) ... neighbourhood (neighbouring neurons)

w_ji^new = w_ji^old + η(t, r) · (x_si − w_ji^old),   i = 1, ..., m;  j = 1, ..., n;  s = 1, ..., p
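A minimal Python sketch of the correction step above, applied to the winning neuron and its neighbours. The schedule for η(t, r), a linear decay with the training time and with the topological distance from the winner, is an assumed choice, not one prescribed by the slides:

```python
import numpy as np

def eta(t, r, t_max, r_max, eta_start=0.5, eta_end=0.01):
    a = eta_start + (eta_end - eta_start) * t / t_max   # decreases with the training time t
    return a * max(0.0, 1.0 - r / (r_max + 1.0))        # decreases with the distance r from the winner

def kohonen_update(W, positions, x, winner, t, t_max, r_max):
    # W: (n, m) weights; positions: (n, 2) integer map coordinates of the neurons.
    dist = np.abs(positions - positions[winner]).max(axis=1)   # neighbourhood ring of each neuron
    for j, r in enumerate(dist):
        if r <= r_max:
            W[j] += eta(t, float(r), t_max, r_max) * (x - W[j])   # w_new = w_old + eta * (x - w_old)

# Example: a 3 x 3 map with m = 2 weights per neuron (illustrative values).
positions = np.array([[ix, iy] for ix in range(3) for iy in range(3)])
W = np.random.default_rng(3).random((9, 2))
kohonen_update(W, positions, x=np.array([0.2, 0.8]), winner=4, t=0, t_max=100, r_max=1)
```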

Page 40

The process in which we correct the winning neuron and the appropriate neighbouring neurons with respect to a single (the s-th) object is called one iteration of the learning process.

Once all p objects have been sent through the network one after another and the corresponding weight corrections have been made, we say that one learning epoch is completed.

p iterations = 1 epoch

Page 41

One epoch of training a Kohonen network

[Figure: flow chart of one epoch. Starting with s = 1, the object Xs enters the NET; the winner and its neighbours are determined and their weights are corrected, w_ji^new = w_ji^old + η · (x_si − w_ji^old); then s = s + 1, and the loop is repeated until s = p.]

p ... the number of objects in the training set

Page 42

Size and extent of the corrections: shrinking of the neighbourhood

Flat function
Triangular function
Mexican hat function
Chef hat function

[Figure: the four neighbourhood functions plotted against the topological distance r from the central neuron c; the correction is largest (max) at the winner and decreases towards the edge of the neighbourhood.]
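A minimal Python sketch of one of the listed shapes, the triangular neighbourhood function: the correction factor is 1 at the winning neuron, falls off linearly with the topological distance r, and is 0 beyond the current neighbourhood radius r_max (which shrinks during training):

```python
def triangular(r, r_max):
    # r: topological distance from the winning neuron; r_max: current neighbourhood radius.
    if r > r_max:
        return 0.0
    return 1.0 - r / (r_max + 1.0)

print([round(triangular(r, 3), 2) for r in range(6)])   # [1.0, 0.75, 0.5, 0.25, 0.0, 0.0]
```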

Page 43

44

Razpoznavanje objektov iz učnega niza

Ko je učenje končano, morajo biti vsi objekti iz

učnega niza razpoznavni

Zgornji pogoj je praviloma dosežen, ko je vsota

napak v eni epohi εepoh pod določeno mejo.

( )

e n

j ji si

m

i

p

s epoh w x

1 1 1

2

Page 45

The top layer (VP) with labels:
- empty neurons in the top layer
- clusters in the top layer
- conflicts in the top layer

[Figure: an 11 x 11 network (n = 121 neurons) with a labelled top layer; A = Acid, E = Ester, K = Ketone. Clusters of equally labelled neurons, empty neurons and the winning neuron W_win = W_jx,jy (here jx = 5, jy = 2) are marked.]

Page 46

Modification of the Kohonen ANN

An additional layer of output neurons turns the Kohonen network into a "counterpropagation" artificial neural network.

An object consists of the descriptor vector X (the independent variables) and the property Y (the vector of dependent variables, or the target).

• Input neurons W (the input or Kohonen layer)
• Output neurons O (the output or Grossberg layer)
• The winning neuron is determined only in the input layer, according to the similarity between X and W
• The weights are corrected in both layers in the same way, according to the differences X-W and Y-O (see the sketch below)
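A minimal Python sketch of one counterpropagation training step as described above: the winner is found in the Kohonen layer from X only, and then both layers are corrected in the same way, W towards X and O towards Y. Neighbourhood handling is omitted for brevity (only the winner is corrected here), and all sizes and values are illustrative:

```python
import numpy as np

def cp_train_step(W, O, x, y, eta=0.1):
    # W: (n, m) Kohonen weights; O: (n, t) Grossberg weights.
    winner = int(np.argmin(np.sum((W - x) ** 2, axis=1)))  # similarity between X and W only
    W[winner] += eta * (x - W[winner])                      # correction in the Kohonen layer
    O[winner] += eta * (y - O[winner])                      # same correction rule in the Grossberg layer
    return winner

def cp_predict(W, O, x):
    # Prediction: read the output weights of the winning neuron.
    winner = int(np.argmin(np.sum((W - x) ** 2, axis=1)))
    return O[winner]

rng = np.random.default_rng(4)
W = rng.random((25, 8))    # 5 x 5 Kohonen layer, m = 8 input variables
O = rng.random((25, 4))    # Grossberg layer, 4 target components
cp_train_step(W, O, x=rng.random(8), y=rng.random(4))
print(cp_predict(W, O, rng.random(8)))
```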

Page 47

[Figure: a counterpropagation network: the Kohonen (input) layer W with the winning neuron W_jx,jy, and below it the output layer O.]

Learning with X, Kohonen (input) layer:
w_ji^new = w_ji^old + η · (x_si − w_ji^old),   i = 1, ..., m;  j = 1, ..., n;  s = 1, ..., p

Learning with Y, output layer:
o_ji^new = o_ji^old + η · (y_si − o_ji^old),   i = 1, ..., t;  j = 1, ..., n;  s = 1, ..., p

Page 48

Counterpropagation network (CP-ANN)

[Figure: a CP ANN with jx x jy neurons (5 x 5); X is the representation of the molecular structure (X_i, i = 1 ... 644), Y is the target property and P is the predicted property.]

Page 49

Counterpropagation network (CP-ANN)

Weights: W_jx,jy,i

[Figure: the input X (representation of the molecular structure, X_i, i = 1 ... 644) is mapped in the Kohonen layer, where the winning neuron is selected; the prediction of the property is read from the output layer, and Y is the target property.]

The winning neuron is the one with the smallest distance to the input:

D_jx,jy = Σ_i (x_i − w_jx,jy,i)²

Page 50

Counterpropagation network (CP-ANN) for the prediction of the binding affinity for ERα

[Figure: HOMO-LUMO energy; input Xj; output layer: the plane of responses Y.]