A Weightless Neural Node based on a Probabilistic Quantum Memory

Adenilton Silva†, Wilson de Oliveira‡ and Teresa Ludermir†
†Universidade Federal de Pernambuco

Centro de Informática

Recife, PE - Brazil

Email: {ajs3, tbl}@cin.ufpe.br

‡Universidade Federal Rural de Pernambuco

Departamento de Estatística e Informática

Recife, PE - Brazil

Email: [email protected]

Abstract—The success of quantum computation is most commonly associated with the speed-up of classical algorithms, such as Shor's factoring algorithm and Grover's search algorithm. But it should also be associated with exponential storage capacity, as in superdense coding. In this work we use a probabilistic quantum memory proposed by Trugenberger, in which one can store 2^n patterns with only n quantum bits (qubits). We define a new model of a quantum weightless neural node with this memory, in a similar fashion to how the classical Random Access Memory (RAM) node is used in classical weightless neural networks. Some advantages of the proposed model are that the memory of the node does not grow exponentially with the number of inputs and that the node can generalise.

Keywords-Neural Computing; Quantum Computing; RAM Based Neural Networks; Weightless Neural Networks; Probabilistic Quantum Memories

I. INTRODUCTION

The main aim of this work is to propose a novel quantum weightless neural node and to investigate its properties, individually and collectively. This new model can be seen as a quantisation of the classical RAM node. As such, it will be shown that it overcomes some of the limitations of its classical sibling. The study of quantum weightless neural networks was started by Oliveira et al. in [1], where it is shown that, in contrast to the MCP node [2], the probabilistic logic node PLN [3] and the multi-valued probabilistic logic node MPLN [3] can be easily quantised and implemented on a quantum computer. The structure of the node used in [1], [4] differs substantially from the one used here. In op. cit. the nodes are special kinds of matrices, whereas here we use a state or quantum register. Their matrices grow exponentially with the input size, while our registers grow linearly.

The first model of (classical) weightless neuron, denominated the RAM node, was proposed by Igor Aleksander [5] in 1966. Several variations of this model were afterwards proposed, such as the PLN, the MPLN and the goal seeking neuron GSN [6]. A serious drawback of RAM-based nodes is that their memory size grows exponentially with the number of inputs. In this paper we propose a new model of quantum weightless neural networks based on a probabilistic quantum memory (PQM) [7], in the same way that the RAM node is based on RAM memory.

In contrast to the other classical and quantum RAM nodes, the memory of the PQM node does not grow exponentially with the number of inputs. The number of inputs in RAM-based nodes must be limited because of this exponential growth. This fact is reflected in the kinds of architecture used in networks of RAM-based nodes: in several works the networks are only partially connected. With the PQM node any kind of architecture can now be used.

Another point in favor of the PQM node is that a single PQM node is able to generalise, while a single RAM node cannot: only a combination of RAM nodes in a network generalises. One PQM node can generalise in a probabilistic manner.

The PQM node also has advantages over the PQM memory itself. In [8] Trugenberger uses the PQM memory as an associative memory, but the memory needs to be cloned. It is known, however, that a quantum memory can be probabilistically cloned by a general unitary-reduction operation if and only if the stored patterns are linearly independent [9]. This is a strong limitation for the use of the PQM memory in machine learning. We shall see that, with a small modification in the original algorithm employed in [8], the PQM node does not suffer from this limitation.

The remainder of this paper is divided into seven sections. Sections 2, 3 and 4 present the basic definitions of weightless neural networks, quantum computation and quantum neural networks. In Section 5 we describe the probabilistic quantum memory, in Section 6 the quantum weightless neural node is proposed, and Section 7 is the conclusion.

II. WEIGHTLESS NEURAL NETWORK

In contrast to biologically motivated nodes, RAM nodes were designed as an engineering tool to solve pattern recognition problems efficiently.

An n-input RAM node has 2^n memory locations, addressed by the n-bit string a = (a_1 a_2 ... a_n), a_i ∈ {0, 1}. A binary signal x = (x_1 x_2 ... x_n), x_i ∈ {0, 1}, on the input lines will access only one of these locations, resulting in y = C[x] [10]. In Figure 1 below, s and d are respectively the learning strategy and the desired output to be learned.

Learning in a RAM node takes place simply by writing into the corresponding look-up table entries. This learning


process is much simpler than the adjustment of weights. The RAM node, as defined above, can compute all binary functions of its inputs, while weighted nodes can only compute linearly separable functions.

Figure 1. RAM node: the input lines x_1, ..., x_n address one of the 2^n memory locations C[0], ..., C[2^n − 1], whose content is output as y; s and d denote the learning strategy and the desired output.
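
For illustration, a minimal sketch of the classical RAM node just described, in Python (the class name RAMNode and its methods are our own illustrative choices, not taken from the paper): the node stores one bit per address, answers y = C[x], and learns by writing the desired output d into the addressed location.

```python
# Minimal sketch of a classical RAM node (illustrative, not code from the paper).
class RAMNode:
    def __init__(self, n):
        self.n = n
        self.C = [0] * (2 ** n)            # 2^n one-bit memory locations

    def _addr(self, x):
        # x is a tuple of n bits, e.g. (1, 0, 1)
        return int("".join(str(b) for b in x), 2)

    def output(self, x):
        return self.C[self._addr(x)]       # y = C[x]

    def train(self, x, d):
        self.C[self._addr(x)] = d          # learning = writing into the look-up table

node = RAMNode(3)
node.train((1, 0, 1), 1)
print(node.output((1, 0, 1)))              # 1
print(node.output((0, 0, 0)))              # 0: a single RAM node does not generalise
```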

A Probabilistic Logic Node (PLN) differs from a RAM node in that a 2-bit number (rather than a single bit) is now stored at the addressed memory location. The content of this location is turned into the probability of firing (i.e., generating 1) at the overall output of the node. In other words, a PLN consists of a RAM node where a 2-bit number (0, 1 or u, represented as, say, 00, 11 and 01 or 10) is stored at the addressed memory location, together with a probabilistic output generator. The output of the PLN node is given by [10]:

$$ y = \begin{cases} 0, & \text{if } C[x] = 0 \\ 1, & \text{if } C[x] = 1 \\ \text{random}(0,1), & \text{if } C[x] = u \end{cases} $$

The Multi-Valued Probabilistic Logic Node (MPLN) differs from the PLN by allowing a wider, but still discrete, range of probabilities to be stored at each memory location.
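
The PLN output rule above can be sketched in the same illustrative style (the function name pln_output is ours): the stored content is 0, 1 or 'u', and the undefined value fires 0 or 1 with equal probability.

```python
import random

# Sketch of the PLN output rule: the addressed content is 0, 1 or 'u' (undefined).
def pln_output(content):
    if content == 0:
        return 0
    if content == 1:
        return 1
    return random.randint(0, 1)    # content == 'u': output 0 or 1 with probability 1/2
```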

III. QUANTUM COMPUTATION

Quantum computing [11] was originally proposed by Richard Feynman [12] in the 1980s and had its formalisation with David Deutsch, who proposed the quantum Turing machine [13]. Quantum computing has been popularised through the quantum circuit model [14], which is a mathematical quantisation [15] of the classical boolean circuit model of computation. Quantum computers, if ever built, can also be seen as parallel devices for potentially improving the computational efficiency of neural networks [16].

The quantum information unit is the quantum bit or "qubit". A very intuitive view of the quantisation procedure used almost everywhere is put forward by Nik Weaver in the preface of his book Mathematical Quantization [15], which says, in a nutshell: "The fundamental idea of mathematical quantisation is sets are replaced with Hilbert spaces". In order to properly deal with operations we would add: functions (morphisms) are replaced with linear operators in general, and in some cases with unitary operators in particular.

The quantisation of boolean circuit logic starts by simply embedding the classical bits {0, 1} in a convenient Hilbert space. The natural way of doing this is to represent them as an (orthonormal) basis of a complex Hilbert space. In this context these basis elements are called the computational-basis states [17]. Linear combinations (from Linear Algebra [18]) of the basis span the whole space, whose elements, called states, are said to be in superposition. The general state of a qubit is a superposition (linear combination) of the two computational-basis states: |ψ⟩ = α|0⟩ + β|1⟩, where α, β are complex coefficients (called probability amplitudes) constrained by the normalisation condition |α|² + |β|² = 1, and |0⟩, |1⟩ are a pair of orthonormal basis vectors representing each classical bit, or "cbit", as a column vector. For example, the cbits can be represented as

$$ |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} $$

It should be clear that any pair of orthonormal basis vectors would work, but those above are the canonical ones employed in the majority of works. Thus qubits live in the two-dimensional complex Hilbert space C². Tensor products are used in order to represent more than one qubit, as follows: |i⟩ ⊗ |j⟩ = |i⟩|j⟩ = |ij⟩, where i, j ∈ {0, 1}. The n-qubit states live in the space C^(2^n). Recall that the tensor product of two bi-dimensional vectors

$$ |\psi\rangle = \begin{pmatrix} \psi_0 \\ \psi_1 \end{pmatrix}, \qquad |\phi\rangle = \begin{pmatrix} \phi_0 \\ \phi_1 \end{pmatrix} $$

is the 4-dimensional vector:

$$ |\psi\rangle \otimes |\phi\rangle = \begin{pmatrix} \psi_0 \begin{pmatrix} \phi_0 \\ \phi_1 \end{pmatrix} \\ \psi_1 \begin{pmatrix} \phi_0 \\ \phi_1 \end{pmatrix} \end{pmatrix} = \begin{pmatrix} \psi_0\phi_0 \\ \psi_0\phi_1 \\ \psi_1\phi_0 \\ \psi_1\phi_1 \end{pmatrix} $$

This obviously generalises to any pair of n- and m-dimensional vectors, producing an nm-dimensional vector, or, more generally, to an (n_0, m_0)-dimensional matrix and an (n_1, m_1)-dimensional one, producing a third (n_0 n_1, m_0 m_1)-dimensional matrix, and so on. Operations on qubits are carried out by unitary operators. So quantum algorithms on n qubits are represented by unitary operators U over the 2^n-dimensional complex Hilbert space: |ψ⟩ → U|ψ⟩.

In quantum computation all operations on qubits are reversible, except for the process called measurement, which loses the information about the superposition of states. Measuring a general state |ψ⟩ collapses (projects) it into either the |0⟩ state or the |1⟩ state, with probabilities |α|² and |β|², respectively.
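
These definitions can be checked numerically; the sketch below (plain NumPy, our own illustration, with variable names such as ket0 and psi chosen for clarity) builds the computational basis, a superposition, a two-qubit tensor product, and samples a measurement outcome.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)            # |0>
ket1 = np.array([0, 1], dtype=complex)            # |1>

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)     # |alpha|^2 + |beta|^2 = 1
psi = alpha * ket0 + beta * ket1                  # general single-qubit state

# Tensor product of two qubits: a 4-dimensional vector in C^(2^2).
phi = np.kron(psi, ket1)                          # |psi> (x) |1>

# Measurement in the computational basis: 0 with prob |alpha|^2, 1 with prob |beta|^2.
probs = np.abs(psi) ** 2
outcome = np.random.choice([0, 1], p=probs)
print(probs, outcome)
```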


Other widely used operators and their corresponding matrices are [11]:

• I, identity operator: particularly useful when combined with others, in the case one wants to manipulate a particular qubit in a register while leaving the others intact.

$$ I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad I|0\rangle = |0\rangle, \quad I|1\rangle = |1\rangle $$

• X, flip operator: behaves as the classical NOT on the computational basis.

$$ X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad X|0\rangle = |1\rangle, \quad X|1\rangle = |0\rangle $$

• H, Hadamard transformation: generates superposition of states.

$$ H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad H|0\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle), \quad H|1\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle) $$

• W, diagonal phase operator used in the PQM node, where n is the number of qubits.

$$ W = \begin{pmatrix} e^{i\pi/2n} & 0 \\ 0 & 1 \end{pmatrix}, \qquad W|0\rangle = e^{i\pi/2n}|0\rangle, \quad W|1\rangle = |1\rangle $$

Quantum operators may also be represented in quantum circuits by corresponding quantum gates. Figure 2 shows an n-qubit controlled gate U whose action on the target qubit (bottommost) is activated or not by the n − 1 (topmost) control qubits [11]. The output is checked by measurement gates.

Figure 2. A quantum circuit: an (n − 1)-controlled U gate acting on a target qubit, followed by a measurement gate; an open (negated) control on a qubit is equivalent to X • X, i.e., a control that is active when that qubit is |0⟩.
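
The single-qubit operators listed above are 2×2 matrices and can be written directly; the sketch below is our illustration (the parameter n of W is the number of qubits of the memory, as used later in Section V).

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def W(n):
    # Diagonal phase operator used in the PQM retrieval circuit.
    return np.array([[np.exp(1j * np.pi / (2 * n)), 0], [0, 1]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
print(H @ ket0)        # (|0> + |1>)/sqrt(2)
print(W(8) @ ket0)     # e^{i*pi/16} |0> for an 8-qubit memory
```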

IV. QUANTUM NEURAL NETWORKS

The concept of quantum neural computation was first introduced by Kak in 1995, creating a new paradigm combining neural networks and quantum computation which opens several new directions in neural network research [19]. It is expected that quantum neural networks are more efficient than classical neural networks, in parallel to what is expected from quantum computation in relation to classical computation.

In that same year, Menneer and Narayanan proposed a quantum-inspired neural network that applied the multiple universes view from quantum theory to one-layer artificial neural networks [20]. In 1998 Ventura and Martinez introduced a quantum associative memory with a capacity exponential in the number of neurons [21].

The non-linear activation functions used in models of neural networks delayed further development of their quantum analogues. But in 2001 Altaisky proposed a quantum system in which rules of perceptron learning were formulated [22].

Several quantum weighted neural network models have been proposed, but there remain the challenges of direct implementation in quantum circuits, natural adaptation of the learning algorithms, and quantum learning algorithms respecting the postulates of quantum mechanics; these characteristics are not altogether found in any of the proposed quantum weighted neural network models. The first truly quantum neural network is the weightless neural network based on quantum RAM in [1].

V. PROBABILISTIC QUANTUM MEMORIES

The probabilistic quantum memory (PQM) was proposed by Trugenberger in 2001 [7]. The advantage of the PQM is its exponential storage capacity. In [8] Trugenberger uses this memory to define a quantum associative memory, taking advantage of this storage capacity.

In the PQM the information retrieval step seems to contradict the general idea of a memory [23], since after the retrieval step the memory needs to be prepared again. Trugenberger argues that his model is efficient if the number of inputs is polynomial [24]. The trouble with the PQM is the need to measure the memory register in order to recover the data stored in the memory.

The PQM is a state which represents a superposition of the p patterns p^j to be stored, as shown in equation (1). The algorithm to create this state is described in detail in [7]; with this algorithm up to 2^n patterns of n qubits can be stored in n qubits.

$$ |m\rangle = \frac{1}{\sqrt{p}} \sum_{j=1}^{p} |p^j\rangle \qquad (1) $$
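
In a classical simulation, the state |m⟩ of equation (1) amounts to placing amplitude 1/√p on each stored pattern of an otherwise zero 2^n-dimensional vector. The sketch below is our illustration of that state vector (the function name memory_state is ours; this is not the storage algorithm of [7]).

```python
import numpy as np

def memory_state(patterns, n):
    """Amplitude vector of |m> = (1/sqrt(p)) * sum_j |p^j> for n-qubit patterns."""
    m = np.zeros(2 ** n, dtype=complex)
    for pat in patterns:                      # pat is a bit string such as "00000001"
        m[int(pat, 2)] = 1.0
    return m / np.sqrt(len(patterns))

m = memory_state(["00000000", "00000001"], n=8)   # the state of equation (5)
```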

The retrieval algorithm of the PQM requires three registers: the first, i, receives the input pattern; the second, m, contains the memory |m⟩; and the third, c, is an auxiliary quantum bit |c⟩ initialized as H|0⟩.

$$ |\psi_0\rangle = \frac{1}{\sqrt{2p}} \sum_{k=1}^{p} |i_1, \cdots, i_n; p_1^k, \cdots, p_n^k; 0\rangle + \frac{1}{\sqrt{2p}} \sum_{k=1}^{p} |i_1, \cdots, i_n; p_1^k, \cdots, p_n^k; 1\rangle \qquad (2) $$

After the deterministic part of the retrieval algorithm, the quantum state will be as indicated in equation (3), where d_H(i, p^k) denotes the Hamming distance between i and p^k. A measurement of register |c⟩ says whether the pattern is


recognized (|c⟩ = |0⟩) or not recognized (|c⟩ = |1⟩). This process can be repeated T times. If one obtains the value |c⟩ = |0⟩, the input i is classified as recognized and the memory register is measured to identify it [7].

$$ |\psi\rangle = \frac{1}{\sqrt{p}} \sum_{k=1}^{p} \cos\!\left(\frac{\pi}{2n} d_H(i, p^k)\right) |i_1, \cdots, i_n; p_1^k, \cdots, p_n^k; 0\rangle + \frac{1}{\sqrt{p}} \sum_{k=1}^{p} \sin\!\left(\frac{\pi}{2n} d_H(i, p^k)\right) |i_1, \cdots, i_n; p_1^k, \cdots, p_n^k; 1\rangle \qquad (3) $$

VI. PQM NODE

The measurement of the memory register in a PQM is a negative characteristic, since the collapse of the memory makes it lose its usefulness for future use and one then needs to prepare its state again. In this section we propose a weightless neural node based on the PQM and without the need to measure the memory register.

In the RAM node the decision about the number of inputs n_1 has a direct influence on the generalisation ability of the network; however, one of the main disadvantages of the RAM node is that its memory grows exponentially with n_1. In this section we define a quantum node similar to the RAM node, but with an architecture based on the PQM instead of the RAM memory, in such a way that the size of the PQM node's memory equals its number of inputs.

A PQM node is implemented as a PQM memory. As in the retrieval algorithm of the PQM, a PQM node with n inputs has three registers: the first is the register i, with n qubits, in which we feed the input of the node; the second is the memory register m, with n qubits, which holds the memory of the node |m⟩; and the third is the register c, with a single qubit, whose measurement determines the output of the node in a probabilistic way. A PQM node with n inputs is therefore represented by a quantum state with three registers containing 2n + 1 qubits.

As with the RAM node, the PQM node is used to recognize only one class: a new input pattern makes the node answer positively if the pattern belongs to the class the node was trained for, and negatively, rejecting the pattern, otherwise.

The learning algorithm of a PQM node creates a superposition |m⟩ of all the patterns in the training set in the memory register m of the node. This process can be realized using the storage algorithm of the PQM [7]. The cost of training is linear in the number n_2 of patterns in the training set, i.e., the computational cost is Θ(n_2).

After feeding a new pattern into the input register of the node, the deterministic part of the retrieval algorithm of the PQM is applied to the state (Fig. 3); at this moment equation (3) describes the state of the node. A measurement of the register c determines the output of the node in the probabilistic way described by equation (4), where d_H(i, p^k) is the Hamming distance between the input pattern and the kth pattern stored in the memory.

$$ P(|c\rangle = |0\rangle) = \sum_{k=1}^{p} \frac{1}{p} \cos^2\!\left(\frac{\pi}{2n} d_H(i, p^k)\right), \qquad P(|c\rangle = |1\rangle) = \sum_{k=1}^{p} \frac{1}{p} \sin^2\!\left(\frac{\pi}{2n} d_H(i, p^k)\right) \qquad (4) $$
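
Equation (4) can be evaluated directly from the Hamming distances between the input and the stored patterns. The sketch below is our illustration of the formula, not a circuit simulation; the names hamming and pqm_output_probs are ours.

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between two bit strings of equal length.
    return sum(x != y for x, y in zip(a, b))

def pqm_output_probs(i, patterns):
    """P(|c> = |0>) and P(|c> = |1>) of equation (4) for input i and the stored patterns."""
    n, p = len(i), len(patterns)
    angles = [np.pi * hamming(i, pk) / (2 * n) for pk in patterns]
    p0 = sum(np.cos(a) ** 2 for a in angles) / p
    p1 = sum(np.sin(a) ** 2 for a in angles) / p
    return p0, p1
```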

For instance, if one class is represented in the training set by {(00000000), (00000001)}, then the algorithm proposed in [7] is used to create the state described in equation (5).

$$ |\psi\rangle = \frac{1}{\sqrt{2}}\left(|00000000\rangle + |00000001\rangle\right) \qquad (5) $$

Feeding the pattern (11111111) into the input of the node and initializing the register c with the state |c⟩ = (1/√2)(|0⟩ + |1⟩), the state of the node at this moment is described in equation (6).

$$ |\psi\rangle = \frac{1}{2}\left[\,|11111111\rangle\left(|00000000\rangle + |00000001\rangle\right)|0\rangle + |11111111\rangle\left(|00000000\rangle + |00000001\rangle\right)|1\rangle\,\right] \qquad (6) $$

After the action of the node, described in Fig. 3, the state |ψ⟩ becomes the state described by equation (7):

$$ |\psi\rangle = \frac{1}{\sqrt{2}}\left[\cos\!\left(\frac{8\pi}{16}\right)|11111111\rangle|00000000\rangle|0\rangle + \cos\!\left(\frac{7\pi}{16}\right)|11111111\rangle|00000001\rangle|0\rangle + \sin\!\left(\frac{8\pi}{16}\right)|11111111\rangle|00000000\rangle|1\rangle + \sin\!\left(\frac{7\pi}{16}\right)|11111111\rangle|00000001\rangle|1\rangle\right] \qquad (7) $$

A measurement of the register c of the node will result in |0⟩, i.e., the node recognizes the pattern, with probability

$$ \left(\frac{1}{\sqrt{2}}\cos\frac{8\pi}{16}\right)^{2} + \left(\frac{1}{\sqrt{2}}\cos\frac{7\pi}{16}\right)^{2} = 1.9 \cdot 10^{-2} \qquad (8) $$

and will result in |1⟩ with probability

$$ \left(\frac{1}{\sqrt{2}}\sin\frac{8\pi}{16}\right)^{2} + \left(\frac{1}{\sqrt{2}}\sin\frac{7\pi}{16}\right)^{2} = 98.1 \cdot 10^{-2} \qquad (9) $$

so the pattern (11111111) will not be recognized. The probabilities in equations (8) and (9) can be estimated by repeating the execution of the node T times or by using a register c with more qubits [8].
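
Using the pqm_output_probs function sketched after equation (4), the probabilities of this example can be reproduced numerically:

```python
# Reproduces equations (8) and (9): the input 11111111 is far from both stored patterns.
p0, p1 = pqm_output_probs("11111111", ["00000000", "00000001"])
print(round(p0, 3), round(p1, 3))    # 0.019 and 0.981
```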

Another disadvantage of the RAM node is its inability to generalise; generalisation is achieved only with RAM networks. As can be seen in equation (4), the output of a PQM node takes the Hamming distance into account: the output |0⟩ means that the pattern was recognized, and the output |1⟩ that it was not.


Figure 3. PQM node: the quantum circuit of the deterministic part of the retrieval algorithm, acting on the input qubits |i_1⟩, ..., |i_n⟩ (used as controls), the memory qubits |m_1⟩, ..., |m_n⟩ (through X, W and W^{-2} gates, the latter controlled by |c⟩), and the control qubit |c⟩, which receives a final Hadamard gate.

The output |0⟩ will occur with high probability if the input is near the values stored in |m⟩, and the output |1⟩ will occur if the input is distant from the values in |m⟩. A new pattern |i'⟩, not presented to the node in the learning phase, can be recognized by the node if the values of d_H(i', p^k) are near zero. On the other hand, a stored pattern |p'⟩ that is distant from the other data stored in the memory register will have a low probability of being recognised.

One of the limitations of the PQM node is its probabilistic output: for a given input i, different outputs can be generated by the same PQM node. This probabilistic behavior is inherited from the probabilistic quantum memory. To mitigate this limitation, one can allow the node to produce b temporary outputs and use the most frequent one as the output of the node [25].
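
This repetition scheme can be sketched as a simple majority vote over b probabilistic outputs, reusing the pqm_output_probs function from the earlier sketch (the function name pqm_node_output and the default value of b are our own illustrative choices).

```python
import numpy as np

def pqm_node_output(i, patterns, b=11):
    """Majority vote over b probabilistic outputs of the node (0 = recognized, 1 = rejected)."""
    p0, _ = pqm_output_probs(i, patterns)
    outputs = np.random.rand(b) >= p0      # each run outputs 1 with probability 1 - p0
    return int(outputs.sum() > b / 2)      # most frequent output among the b runs
```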

In [8], Trugenberger uses the probabilistic quantum memory as an associative memory; however, in his work, after the measurement of the output register c, if the pattern was recognized the memory register m is measured, so the memory register collapses and loses its utility. The memory then needs to be recreated or probabilistically cloned.

Trugenberger does not define a neural structure; he directly uses the PQM as an associative memory. The PQM node defined above can be used in neural networks with exactly the same kinds of architectures as networks of RAM nodes, for example the discriminator architecture [10], but with more flexibility in choosing the number of inputs of each node.

Experiments with the PQM node require the simulation of a PQM. However, the simulation of quantum systems on current classical computers has exponential cost in the number of qubits used, which restricts the kind and size of experiments until a quantum computer is built.

VII. CONCLUSION

The results obtained in this paper show that quantum computation can be used to overcome problems in classical models of neural networks; the same conclusion was reached in other works, such as [26], [27]. In particular, in this article we defined the PQM node, a quantum weightless neural node similar to the RAM node, in which the memory does not have exponential cost in relation to the number of inputs of the node.

Another point to be considered, applicable to any model of quantum neural network, is the creation of a quantum state |m⟩ in which all the patterns of the training set are in superposition. This allows the design of learning algorithms for neural networks where all patterns are presented at the same time, implying a drastic reduction of the execution time of the learning algorithms. This reduction makes it possible to extend the use of neural networks to problems where the cost of classical learning is prohibitive.

In particular, a learning algorithm for the q-PLN [1] using a single execution of the network could be created with the PQM. It suffices to superpose the patterns of the training set in a single input |m⟩ to be presented to the network and to use the retrieval algorithm, or Grover's algorithm, to obtain the desired values of the network parameters.

ACKNOWLEDGMENT

This work is supported by research grants from MCT/CNPq (Edital Universal) and PRONEX/FACEPE.


REFERENCES

[1] W. Oliveira, A. J. Silva, T. B. Ludermir, A. Leonel, W. Galindo, and J. Pereira, "Quantum logical neural networks," in Neural Networks, 2008. SBRN '08. 10th Brazilian Symposium on, pp. 147–152, Oct. 2008.
[2] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.
[3] I. Aleksander, M. de Gregorio, F. França, P. Lima, and H. Morton, "A brief introduction to weightless neural systems," in European Symposium on Artificial Neural Networks, 2009. Proceedings., April 2009, pp. 299–305.
[4] W. Oliveira, "Quantum RAM based neural networks," in European Symposium on Artificial Neural Networks, 2009. Proceedings., April 2009, pp. 331–336.
[5] I. Aleksander, "Self-adaptive universal logic circuits," Electronics Letters, vol. 2, no. 8, pp. 321–322, 1966.
[6] W. Martins and C. G. Pinheiro, "On-line expansion of goal seeking neuron networks," in Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, vol. 4, pp. 523–526, 2000.
[7] C. Trugenberger, "Probabilistic quantum memories," Phys. Rev. Lett., vol. 87, no. 6, p. 067901, Jul. 2001.
[8] C. A. Trugenberger, "Quantum pattern recognition," Quantum Information Processing, vol. 1, pp. 471–493, 2002.
[9] L.-M. Duan and G.-C. Guo, "Probabilistic cloning and identification of linearly independent quantum states," Phys. Rev. Lett., vol. 80, no. 22, pp. 4999–5002, Jun. 1998.
[10] T. B. Ludermir, A. Carvalho, A. P. Braga, and M. C. P. Souto, "Weightless neural models: A review of current and past works," Neural Computing Surveys, vol. 2, pp. 41–61, 1999.
[11] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[12] R. Feynman, "Simulating physics with computers," International Journal of Theoretical Physics, vol. 21, pp. 467–488, 1982.
[13] D. Deutsch, "Quantum theory, the Church-Turing principle and the universal quantum computer," Proceedings of the Royal Society of London Ser. A, vol. 400, no. 1818, pp. 97–117, 1985.
[14] D. Deutsch, "Quantum computational networks," Proceedings of the Royal Society of London, vol. A 425, pp. 73–90, 1989.
[15] N. Weaver, Mathematical Quantization, ser. Studies in Advanced Mathematics. Boca Raton, Florida: Chapman & Hall/CRC, 2001.
[16] J. Faber and G. Giraldi, "Quantum models of artificial neural networks," in I Meeting of Quantum Information, Belo Horizonte, 2002, electronically available: http://arquivosweb.lncc.br/pdfs/QNN-Review.pdf.
[17] N. D. Mermin, "From cbits to qbits: Teaching computer scientists quantum mechanics," American Journal of Physics, vol. 71, p. 23, 2003. [Online]. Available: http://www.citebase.org/abstract?id=oai:arXiv.org:quant-ph/0207118
[18] K. Hoffman and R. Kunze, Linear Algebra. Prentice-Hall, 1971.
[19] S. C. Kak, "On quantum neural computing," Information Sciences, vol. 83, no. 3, pp. 143–160, 1995.
[20] T. Menneer and A. Narayanan, "Quantum-inspired neural networks," Department of Computer Science, University of Exeter, Exeter, United Kingdom, Technical Report R329, 1995. [Online]. Available: citeseer.ist.psu.edu/menneer95quantuminspired.html
[21] D. Ventura and T. Martinez, "Quantum associative memory," Information Sciences, vol. 124, no. 1-4, pp. 273–296, 2000.
[22] M. V. Altaisky, "Quantum neural network," Joint Institute for Nuclear Research, Russia, Technical Report, 2001.
[23] T. Brun, H. Klauck, A. Nayak, M. Rötteler, and C. Zalka, "Comment on 'Probabilistic quantum memories'," Phys. Rev. Lett., vol. 91, no. 20, p. 209801, Nov. 2003.
[24] C. A. Trugenberger, "Trugenberger replies," Phys. Rev. Lett., vol. 91, no. 20, p. 209802, Nov. 2003.
[25] C. Trugenberger, "Phase transitions in quantum pattern recognition," Phys. Rev. Lett., vol. 89, no. 27, p. 277903, Dec. 2002.
[26] D. Ventura and T. Martinez, "An artificial neuron with quantum mechanical properties," in Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms, 1997, pp. 482–485.
[27] R. Zhou and Q. Ding, "Quantum M-P neural network," Int. J. Theor. Phys., vol. 46, pp. 3209–3215, 2007.
