
[IEEE Electronics, Robotics and Automotive Mechanics Conference (CERMA'06) - Cuernavaca, Morelos (2006.9.26-2006.9.26)] Electronics, Robotics and Automotive Mechanics Conference (CERMA'06)


NON ITERATIVE HOPFIELD MODEL

Catalán Salgado Edgar A. and Cornelio Yáñez Márquez
Center for Computing Research, National Polytechnic Institute

Mexico, DF
[email protected] [email protected]

Abstract

The Hopfield model has problems in the recall phase; one of them is the convergence time, or the absence of convergence in certain cases. We propose a model that eliminates iteration in the Hopfield model. This modification of the recall phase removes the iterations and consequently takes fewer steps; with it, the recall of the learned patterns is the same as, or slightly better than, the Hopfield model's. Finally, because iteration is eliminated, the recall time is reduced as well.

Keywords
Hopfield, non-iterative, associative memory, auto-associative memory

1. Introduction
The fundamental purpose of an associative memory is to correctly recover complete patterns from input patterns (the fundamental set), which may be altered with additive, subtractive, or mixed noise; this is the most attractive characteristic of associative memories [1].

The learning and storage capacity, the efficiency in the recovery of patterns, the speed, and the noise immunity are the main qualities of associative memories [1].

The Hopfield model, developed by John J. Hopfield in 1982 [2], marks a milestone because it revived interest in associative memories and neural networks, whose investigation had been practically halted during the previous 10 years due to an article against them written by Minsky, M. & Papert, S. (1988) [3].

However, the Hopfield memory model has several problems. First, it is estimated that it can only recover approximately 0.15n patterns, where n is the dimension of the patterns [2]. Another problem is the convergence time: as the number of learned patterns and the dimension of the patterns increase, the convergence time increases too; moreover, there exist certain cases which never converge, as we show in this paper.

This paper begins by showing the operation of the Hopfield model, with a worked example and examples of patterns for which the model does not converge: one for a member of the fundamental set and another for a pattern that is not. Later on we present our proposal, with a worked example. Finally we show the results of some experiments that compare the behavior of the two models on the same problems, and we close with conclusions.

2. Hopfield model
This section is strongly based on [1], as well as on the original papers written by Hopfield [2] [4].

In the legendary 1982 article by Hopfield [2], a physical system is considered, described by a state vector x whose coordinates are (x1, x2, ..., xn); it is also considered that the system has locally stable limit points xa, xb, .... Then, if the system is started sufficiently near any locally stable limit point, say at x = xa + Δ, as time passes the state of the system will change until x ≈ xa.

The strength of the connection from neuron xi to neuron xj is represented by the value mij, and it is considered that there is symmetry, that is to say, mij = mji. If xi is not connected with xj, then mij = 0; in particular, there are no recurrent connections from a neuron to itself, which means that mii = 0, ∀i. The instantaneous state of the system is totally specified by an n-dimensional column vector whose coordinates are the values of the n neurons.

The Hopfield memory is auto-associative and symmetrical, with zeros in its main diagonal.

Because the memory is auto-associative, the fundamental set for the Hopfield memory is {(Xμ, Xμ) | μ = 1, 2, 3, ..., p}, where p is the number of patterns and each Xμ is a bipolar vector with components in {−1, 1}.

Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA'06)0-7695-2569-5/06 $20.00 © 2006


2.1 Learning phase and recall phase
The learning rule to obtain the ij-th component of the memory is as follows:

mij = Σμ xμi xμj for i ≠ j, and mii = 0, where the sum runs over μ = 1, ..., p.
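As an illustration, the learning rule above can be sketched in pure Python (the function and variable names are ours, not from the paper; patterns are bipolar lists):

```python
def hopfield_learn(patterns):
    """Build the Hopfield weight matrix from bipolar patterns.

    M[i][j] = sum over patterns of x_i * x_j for i != j, and M[i][i] = 0,
    following the learning rule above.
    """
    n = len(patterns[0])
    M = [[0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    M[i][j] += x[i] * x[j]
    return M

# The fundamental set used in the worked example of Section 2.2:
X1 = [1, -1, -1, 1]
X2 = [-1, 1, -1, 1]
X3 = [-1, -1, 1, -1]
M = hopfield_learn([X1, X2, X3])
# M[2][3] == -3, matching the -3 entry of the memory shown in Section 2.2
```

Note that the resulting matrix is symmetric with a zero diagonal, as required by the model.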

The recall phase is as follows. When an input pattern X' is presented to the Hopfield memory, the memory changes its state with time, so each neuron xi adjusts its value according to the result of comparing the quantity

Σj mij xj(t), with j = 1, ..., n,

with a threshold value, usually zero.

Let us represent the state of the Hopfield memory at time t by x(t); then xi(t) represents the value of neuron xi at time t, and xi(t + 1) is the value of xi at the following time (t + 1).

Given an input column vector X', the recall phase consists of three steps:
1. For t = 0, x(t) is set to X'; that is to say, xi(0) = X'i, ∀i ∈ {1, 2, 3, ..., n}.
2. ∀i ∈ {1, 2, 3, ..., n}, xi(t + 1) is calculated with the following condition: xi(t + 1) = 1 if Σj mij xj(t) > 0; xi(t + 1) = xi(t) if Σj mij xj(t) = 0; and xi(t + 1) = −1 if Σj mij xj(t) < 0.
3. xi(t + 1) is compared with xi(t), ∀i ∈ {1, 2, 3, ..., n}. If x(t + 1) = x(t), the process finishes and the recovered vector is X' itself. Otherwise, steps 2 and 3 are iterated as many times as necessary until reaching a time t for which xi(t + 1) = xi(t) ∀i ∈ {1, 2, 3, ..., n}; then the process finishes and the recovered pattern is x(t).
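The recall steps above can be sketched as follows (a minimal pure-Python version; the max_iter cap is our addition, matching the 200-iteration limit used later in the experiments):

```python
def hopfield_recall(M, x_input, max_iter=200):
    """Iterative Hopfield recall: repeat the thresholded update until the
    state stops changing, or give up after max_iter synchronous updates."""
    n = len(x_input)
    x = list(x_input)
    for _ in range(max_iter):
        s = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
        # threshold: +1 if positive, -1 if negative, keep previous value on zero
        new = [1 if s[i] > 0 else (-1 if s[i] < 0 else x[i]) for i in range(n)]
        if new == x:
            return x, True       # converged: x is the recovered pattern
        x = new
    return x, False              # no convergence within the iteration cap

# Memory from the worked example in Section 2.2:
M = [[0, -1, -1, 1], [-1, 0, -1, 1], [-1, -1, 0, -3], [1, 1, -3, 0]]
print(hopfield_recall(M, [-1, -1, 1, -1]))   # X3 is recovered unchanged
```

The returned flag distinguishes convergence from hitting the iteration cap, which the original formulation leaves open-ended.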

The convergence process described in step 3 of the recall phase indicates that the system arrives at a locally stable limit point at time t.

The existence of t is guaranteed by Hopfield's demonstration that locally stable limit points exist in his associative memory model; for this purpose, he defines the energy E in the following way, taking into account the condition that mii = 0 ∀i:

E = −(1/2) Σi Σj mij xi xj

After that, he shows that E is a monotonically decreasing function.

Besides the demonstration suggested by Hopfield, McEliece et al. found another way to demonstrate this result [5].
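As a small numeric sanity check of the energy argument (a sketch: the quadratic form below is the usual Hopfield energy reconstructed above, and the helper names are ours), the state that the example memory of Section 2.2 converges to from X1 has lower energy than X1 itself:

```python
def energy(M, x):
    """E(x) = -1/2 * sum_i sum_j M[i][j] * x_i * x_j, with M[i][i] = 0."""
    n = len(x)
    return -0.5 * sum(M[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Memory of the worked example in Section 2.2:
M = [[0, -1, -1, 1], [-1, 0, -1, 1], [-1, -1, 0, -3], [1, 1, -3, 0]]
e_input = energy(M, [1, -1, -1, 1])    # X1
e_fixed = energy(M, [1, 1, -1, 1])     # the state recall converges to from X1
print(e_input, e_fixed)
```

The fixed point reached by the recall procedure sits at a lower energy, which is what the monotone-decrease argument predicts.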

2.2 Example of the Hopfield model
Let the next patterns be the fundamental set:
X1 = [1, -1, -1, 1]
X2 = [-1, 1, -1, 1]
X3 = [-1, -1, 1, -1]
After learning, they yield a Hopfield memory M as follows:

 0 -1 -1  1
-1  0 -1  1
-1 -1  0 -3
 1  1 -3  0

The recall phase yields the next results:

X1 = [1, -1, -1, 1]
Iteration 1: S = [3, 1, -3, 3], pattern [1, 1, -1, 1]
Iteration 2: S = [1, 1, -5, 5], pattern [1, 1, -1, 1]
Convergence in iteration 2; recovered pattern [1, 1, -1, 1]

X2 = [-1, 1, -1, 1]
Iteration 1: S = [1, 3, -3, 3], pattern [1, 1, -1, 1]
Iteration 2: S = [1, 1, -5, 5], pattern [1, 1, -1, 1]
Convergence in iteration 2; recovered pattern [1, 1, -1, 1]

X3 = [-1, -1, 1, -1]
Iteration 1: S = [-1, -1, 5, -5], pattern [-1, -1, 1, -1]
Convergence in iteration 1; recovered pattern [-1, -1, 1, -1]

As we can see, the only pattern recovered correctly is X3.

2.3 Non-convergence patterns
Although convergence demonstrations exist, some pattern combinations make a Hopfield associative memory fail to converge in the presence of certain patterns, as shown in the next examples.

A) Pattern not a member of the fundamental set.
Let the next patterns be the fundamental set:

X1 = [1, -1, -1, 1]
X2 = [-1, 1, -1, 1]

After learning, they yield a Hopfield memory M as follows:

 0 -2  0  0
-2  0  0  0
 0  0  0 -2
 0  0 -2  0

In the recall phase we try X3 = [1, 1, -1, 1]:
Iteration 1: S = [-2, -2, -2, 2], pattern [-1, -1, -1, 1]
Iteration 2: S = [2, 2, -2, 2], pattern [1, 1, -1, 1]
Iteration 3: S = [-2, -2, -2, 2], pattern [-1, -1, -1, 1]
Iteration 4: S = [2, 2, -2, 2], pattern [1, 1, -1, 1]

That yields an infinite cycle that alternates its output values and never converges.
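The cycle can be checked mechanically (a sketch using one synchronous update step as defined in Section 2.1; names are ours):

```python
def sync_update(M, x):
    """One synchronous Hopfield update; a zero sum keeps the previous value."""
    n = len(x)
    s = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
    return [1 if s[i] > 0 else (-1 if s[i] < 0 else x[i]) for i in range(n)]

# Memory learned from X1 = [1, -1, -1, 1] and X2 = [-1, 1, -1, 1]:
M = [[0, -2, 0, 0], [-2, 0, 0, 0], [0, 0, 0, -2], [0, 0, -2, 0]]
x = [1, 1, -1, 1]        # the non-member pattern tried above
a = sync_update(M, x)    # first update
b = sync_update(M, a)    # second update returns to the starting state: a 2-cycle
print(a, b == x)
```

Since the state returns to its starting value after two updates, the stopping condition x(t + 1) = x(t) is never satisfied.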

B) Pattern member of the fundamental set.
Let the next patterns be the fundamental set:

X1 = [1, 1, 1, 1]
X2 = [1, 1, -1, -1]
X3 = [-1, 1, 1, -1]
X4 = [-1, -1, 1, 1]
X5 = [1, -1, -1, 1]

After learning, they yield a Hopfield memory M as follows:

 0  1 -3  1
 1  0  1 -3
-3  1  0  1
 1 -3  1  0

In the recall phase we try X1 = [1, 1, 1, 1]:
Iteration 1: S = [-1, -1, -1, -1], pattern [-1, -1, -1, -1]
Iteration 2: S = [1, 1, 1, 1], pattern [1, 1, 1, 1]
Iteration 3: S = [-1, -1, -1, -1], pattern [-1, -1, -1, -1]
Iteration 4: S = [1, 1, 1, 1], pattern [1, 1, 1, 1]

That yields an infinite cycle that alternates its output values and never converges.

3. The proposed model
Our proposal consists of an easy adjustment to the output of the first iteration of the original Hopfield model. After this adjustment, we are able to say whether a pattern belongs to the fundamental set. The adjustment makes the memory recover the same patterns, and sometimes a few more, but the great difference is that this model does not iterate, so the recall time is much lower. Another advantage is that it avoids the problems generated by non-convergence patterns.

3.1 Learning
The learning phase remains unchanged.

3.2 Recall
1. ∀i ∈ {1, 2, 3, ..., n}, calculate Si as follows:

Si = Σj mij x'j, with j = 1, ..., n.

2. ∀i ∈ {1, 2, 3, ..., n}, threshold with the next condition:

Yi = 1 if Si > 0; Yi = Xi if Si = 0; Yi = −1 if Si < 0.

3. Match Yi with Xi, ∀i ∈ {1, 2, 3, ..., n}; if they are equal, the process ends; if not, it continues with the next step.

4. Find the adjust value; this is derived from the component of S = {S1, S2, ..., Sn} nearest to 0, adding one unit:

AdjValue = (min(|S|) − 1)·(−1) if Si < 0
AdjValue = (min(|S|) + 1)·(−1) if Si > 0

where Si here denotes the component of S nearest to zero.

5. ∀i ∈ {1, 2, 3, ..., n}, calculate S'i = Si + AdjValue.

6. Apply the threshold described in step 2; we will call Y the resulting vector.

7. If Yi = Xi, ∀i ∈ {1, 2, 3, ..., n}, then the pattern belongs to the fundamental set; otherwise it does not.
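The seven steps above can be sketched as follows (pure Python; the helper names are ours, and the sign handling for a negative nearest-to-zero component is our reading of the AdjValue rule, which only the positive case of the worked examples confirms):

```python
def non_iterative_recall(M, x_input):
    """One-pass recall for the proposed model.

    Threshold M * x once; if the result differs from the input, shift every
    S_i by an adjust value derived from the component of S nearest to zero
    and threshold again. Returns (pattern, belongs_to_fundamental_set).
    """
    n = len(x_input)
    x = list(x_input)
    s = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

    def threshold(values):
        return [1 if v > 0 else (x[i] if v == 0 else -1)
                for i, v in enumerate(values)]

    y = threshold(s)
    if y == x:
        return y, True                           # steps 1-3: already a fixed point
    k = min(range(n), key=lambda i: abs(s[i]))   # component of S nearest zero
    # step 4: one unit past the nearest-to-zero component, opposite sign
    adj = -(abs(s[k]) + 1) if s[k] > 0 else abs(s[k]) + 1
    y2 = threshold([v + adj for v in s])         # steps 5-6
    return y2, y2 == x                           # step 7

# Memory from the fundamental set {X1, X2, X3} of Section 2.2:
M = [[0, -1, -1, 1], [-1, 0, -1, 1], [-1, -1, 0, -3], [1, 1, -3, 0]]
print(non_iterative_recall(M, [1, -1, -1, 1]))   # X1 now recovered correctly
```

Unlike the original recall, this procedure performs at most two threshold passes, so its running time is fixed regardless of the input pattern.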

3.3 Example
Let the fundamental set be the same as in the example of the Hopfield model; as a consequence, the Hopfield memory M is also the same.

We try to recover the patterns of the fundamental set.

X1 = [1, -1, -1, 1]
1. S = [3, 1, -3, 3]
2. After thresholding: [1, 1, -1, 1]
3. [1, 1, -1, 1] ≠ X1
4. AdjValue = -2
5. S' = [1, -1, -5, 1]
6. Recovered pattern [1, -1, -1, 1] = X1

X2 = [-1, 1, -1, 1]
1. S = [1, 3, -3, 3]
2. After thresholding: [1, 1, -1, 1]
3. [1, 1, -1, 1] ≠ X2
4. AdjValue = -2
5. S' = [-1, 1, -5, 1]
6. Recovered pattern [-1, 1, -1, 1] = X2

X3 = [-1, -1, 1, -1]
1. S = [-1, -1, 5, -5]
2. After thresholding: [-1, -1, 1, -1]
3. Recovered pattern [-1, -1, 1, -1] = X3

For this case, our model recovers all the patterns correctly; a broader comparison between the original Hopfield model and ours is shown in the next section.

4. Experiments

In the next experiments we show the results of using the original Hopfield model, marked by a down triangle; our modified model, marked by an up triangle; and the number of patterns that the Hopfield model could not recover after 200 iterations (some of them are non-convergence patterns), marked by a diamond. All the patterns were randomly generated.

Experiment 1: the pattern dimension is 30, and the size of the fundamental set increases.

After 20 learned patterns, the recall rate is generally 0 for both models.

Experiment 2: the pattern dimension is 20, and the size of the fundamental set increases.

After 20 learned patterns, the number of recovered patterns is between 0 and 3.

In the next experiments the dimension of the patterns changes, while the number of patterns in the fundamental set remains constant.

Experiment 3: the number of patterns in the fundamental set is 5, and the dimension changes.

Experiment 4: the number of patterns in the fundamental set is 15, and the dimension changes.
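A minimal harness in the spirit of these experiments (a sketch, not the authors' code: the helper names, the random seed, and the dimensions are illustrative choices of ours):

```python
import random

def learn(patterns):
    """Hopfield learning rule: correlation matrix with zero diagonal."""
    n = len(patterns[0])
    M = [[0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    M[i][j] += x[i] * x[j]
    return M

def dot(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def thresh(s, prev):
    """+1 / -1 threshold; a zero sum keeps the previous component."""
    return [1 if v > 0 else (-1 if v < 0 else p) for v, p in zip(s, prev)]

def hopfield(M, x, max_iter=200):
    """Iterative recall; None if no convergence within max_iter updates."""
    for _ in range(max_iter):
        new = thresh(dot(M, x), x)
        if new == x:
            return x
        x = new
    return None

def non_iterative(M, x):
    """Proposed one-pass recall with the AdjValue shift."""
    s = dot(M, x)
    y = thresh(s, x)
    if y == x:
        return y
    k = min(range(len(s)), key=lambda i: abs(s[i]))
    adj = -(abs(s[k]) + 1) if s[k] > 0 else abs(s[k]) + 1  # s[k] == 0 shifts up
    return thresh([v + adj for v in s], x)

random.seed(0)
n, p = 20, 10          # pattern dimension and fundamental-set size (illustrative)
patterns = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(p)]
M = learn(patterns)
ok_hop = sum(hopfield(M, x) == x for x in patterns)
ok_non = sum(non_iterative(M, x) == x for x in patterns)
print(ok_hop, ok_non)
```

Varying n and p in this harness reproduces the kind of sweeps described in Experiments 1-4.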

5. Conclusions and Future Research
We showed a non-iterative Hopfield model, which eliminates the iterations, saving computation time and avoiding the non-convergence pattern problems.

From the experiment graphs we make the next observations:

1. As the number of patterns in the fundamental set increases, the number of recovered patterns decreases, but our model always recovers the same number of patterns as the original Hopfield model, or a few more.

2. As the number of patterns in the fundamental set increases, the number of non-convergent patterns increases; that means that the original Hopfield model takes more time to give a result, while our model takes much less time.

3. In the graphics where the dimension changes, the original Hopfield model has problems recognizing a repeatedly learned pattern.

As future work we will seek a way to merge this achievement with another that increases the number of recovered patterns, or to develop it further, so as to obtain a better memory.

6. Acknowledgements
The authors would like to thank the Instituto Politécnico Nacional (Secretaría Académica, COFAA, SIP, and CIC), the CONACyT, and the SNI for their economic support for this work.

7. References
[1] Yáñez Márquez, C. & Díaz de León Santiago, J. L. (2001). Memoria Asociativa Hopfield, IT No. 52, Serie: VERDE, June 2001.

[2] Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences, 79, 2554-2558.

[3] Minsky, M. & Papert, S. (1988). Perceptrons, Cambridge: MIT Press.

[4] Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons, Proceedings of the National Academy of Sciences, 81, 3088-3092.

[5] McEliece, R., Posner, E., Rodemich, E. & Venkatesh, S. (1987). The capacity of the Hopfield associative memory, IEEE Transactions on Information Theory, IT-33, 4, 461-482.

[6] Díaz de León Santiago, J. L. & Yáñez Márquez, C. (2001). Pruebas de Convergencia de la Memoria Hopfield, IT No. 53, Serie: VERDE, June 2001.
