9
Two-layer competitive based Hopfield neural network for medical image edge detection Chuan-Yu Chang* Pau-Choo Chung* National Cheng Kung University Department of Electrical Engineering Tainan, Taiwan E-mail: [email protected] Abstract. In medical applications, the detection and outlining of bound- aries of organs and tumors in computed tomography (CT) and magnetic resonance imaging (MRI) images are prerequisite. A two-layer Hopfield neural network called the competitive Hopfield edge-finding neural net- work (CHEFNN) is presented for finding the edges of CT and MRI im- ages. Different from conventional 2-D Hopfield neural networks, the CHEFNN extends the one-layer 2-D Hopfield network at the original im- age plane a two-layer 3-D Hopfield network with edge detection to be implemented on its third dimension. With the extended 3-D architecture, the network is capable of incorporating a pixel’s contextual information into a pixel-labeling procedure. As a result, the effect of tiny details or noises will be effectively removed by the CHEFNN and the drawback of disconnected fractions can be overcome. Furthermore, by making use of the competitive learning rule to update the neuron states, the problem of satisfying strong constraints can be alleviated and results in a fast con- vergence. Our experimental results show that the CHEFNN can obtain more appropriate, more continued edge points than the Laplacian- based, Marr-Hildreth, Canny, and wavelet-based methods. © 2000 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(00)01903-6] Subject terms: contextual information; neural networks; edge detection; medical imaging. Paper 990168 received Apr. 19, 1999; revised manuscript received Aug. 20, 1999; accepted for publication Aug. 30, 1999. 1 Introduction Computed tomography ~CT! and magnetic resonance imag- ing ~MRI! are nonintrusive techniques that are rapidly gain- ing popularity as diagnostic tools. 
In applying CT and MRI as diagnosis assistance, detection and outlining of bound- aries of organs and tumors are prerequisite, and this is one of the most important steps in computer-aided surgery. The goal of edge detection is to obtain a complete and mean- ingful description from an image by characterizing inten- sity changes. Edge points can be defined as pixels at which an abrupt discontinuity in gray level, color, or texture ex- ists. Different approaches have been used to solve edge detection problems based on zero-crossing detection. How- ever, most of these methods require a predetermined threshold to determine whether or not a zero-crossing point is an edge point. The threshold value is usually obtained through trial and error, which causes poor efficiency. On the other hand, Marr and Hildreth also proposed to obtain edge maps of different scales and augured that different scales of edges will provide important information. They suggested that the original image be bandlimited at several different cutoff frequencies and that an edge detection al- gorithm be applied to each of the bandlimited images. 1 This kind of multiresolution edge detection method has a trade- off between localization and edge details. A fine resolution gives too much redundant detail, whereas a coarse resolu- tion lacks accuracy of edge detection. In addition, due to the medical image acquisition properties, noise or artifacts arising in the course of image acquisition generally increase difficulty in edge detection. Thus, the first step of tradi- tional edge detection algorithms is to employ a noise- suppressed process on the original image ~e.g., a low-pass filter!. This noise-suppressed process usually causes the loss of sharpness in the edges of objects. Therefore, cur- rently, detection and outlining of boundaries of organs and tumors are usually performed manually, a task that is both costly and tedious. 
On the other hand, neural networks have features of fault tolerance and potential for parallel implementation and have been widely applied to edge detection in recent years. Zhu and Yan 2 proposed a modified Hopfield network based on an active contour model to detect the brain tumor boundaries in medical images. Lu and Shen 3 used a back- propagation network to extract boundaries, followed by boundary enhancement using a modified Hopfield neural network. However, these 2-D Hopfield neural networks perform edge detection on the basis of binary segmented images but not the original images. As a result, the quality of edge detection heavily depends on the presegmented re- sults. In addition, conventional 2-D Hopfield neural net- works lack the ability to take the pixel’s contextual infor- mation into its evolution consideration that results in fragmentation and disconnected points. Thus, despite the fact that a tremendous amount of research has been done on *The authors are currently with the National Cheng Kung University, Medical Images Processing and Neural Networks Laboratory, Depart- ment of Electrical Engineering, Tainan 70101, Taiwan. This work was supported by the National Science Council, Taiwan, under grant NSC 88-2213-E-006-056. 695 Opt. Eng. 39(3) 695703 (March 2000) 0091-3286/2000/$15.00 © 2000 Society of Photo-Optical Instrumentation Engineers

Two-layer competitive based Hopfield neural network for

  • Upload
    others

  • View
    2

  • Download
    0

Embed Size (px)

Citation preview

Two-layer competitive based Hopfield neuralnetwork for medical image edge detection

Chuan-Yu Chang *Pau-Choo Chung *National Cheng Kung UniversityDepartment of Electrical EngineeringTainan, TaiwanE-mail: [email protected]

Abstract. In medical applications, the detection and outlining of bound-aries of organs and tumors in computed tomography (CT) and magneticresonance imaging (MRI) images are prerequisite. A two-layer Hopfieldneural network called the competitive Hopfield edge-finding neural net-work (CHEFNN) is presented for finding the edges of CT and MRI im-ages. Different from conventional 2-D Hopfield neural networks, theCHEFNN extends the one-layer 2-D Hopfield network at the original im-age plane a two-layer 3-D Hopfield network with edge detection to beimplemented on its third dimension. With the extended 3-D architecture,the network is capable of incorporating a pixel’s contextual informationinto a pixel-labeling procedure. As a result, the effect of tiny details ornoises will be effectively removed by the CHEFNN and the drawback ofdisconnected fractions can be overcome. Furthermore, by making use ofthe competitive learning rule to update the neuron states, the problem ofsatisfying strong constraints can be alleviated and results in a fast con-vergence. Our experimental results show that the CHEFNN can obtainmore appropriate, more continued edge points than the Laplacian-based, Marr-Hildreth, Canny, and wavelet-based methods. © 2000 Societyof Photo-Optical Instrumentation Engineers. [S0091-3286(00)01903-6]

Subject terms: contextual information; neural networks; edge detection; medicalimaging.

Paper 990168 received Apr. 19, 1999; revised manuscript received Aug. 20,1999; accepted for publication Aug. 30, 1999.

g-in-RIndon

Theann-hicx-dgownedoined

Onainreneyeraal-

de-

ionolu-to

ctsasei-e-

thecur-ndoth

ofonentkmor

byral

kstedlityd re-t-

r-inthee on

ity,art-asSC

1 Introduction

Computed tomography~CT! and magnetic resonance imaing ~MRI! are nonintrusive techniques that are rapidly gaing popularity as diagnostic tools. In applying CT and Mas diagnosis assistance, detection and outlining of bouaries of organs and tumors are prerequisite, and this isof the most important steps in computer-aided surgery.goal of edge detection is to obtain a complete and meingful description from an image by characterizing intesity changes. Edge points can be defined as pixels at wan abrupt discontinuity in gray level, color, or texture eists. Different approaches have been used to solve edetection problems based on zero-crossing detection. Hever, most of these methods require a predetermithreshold to determine whether or not a zero-crossing pis an edge point. The threshold value is usually obtainthrough trial and error, which causes poor efficiency.the other hand, Marr and Hildreth also proposed to obtedge maps of different scales and augured that diffescales of edges will provide important information. Thsuggested that the original image be bandlimited at sevdifferent cutoff frequencies and that an edge detectiongorithm be applied to each of the bandlimited images.1 Thiskind of multiresolution edge detection method has a tra

*The authors are currently with the National Cheng Kung UniversMedical Images Processing and Neural Networks Laboratory, Depment of Electrical Engineering, Tainan 70101, Taiwan. This work wsupported by the National Science Council, Taiwan, under grant N88-2213-E-006-056.

Opt. Eng. 39(3) 695–703 (March 2000) 0091-3286/2000/$15.00

-e

-

h

e-

t

t

l

off between localization and edge details. A fine resolutgives too much redundant detail, whereas a coarse restion lacks accuracy of edge detection. In addition, duethe medical image acquisition properties, noise or artifaarising in the course of image acquisition generally incredifficulty in edge detection. Thus, the first step of tradtional edge detection algorithms is to employ a noissuppressed process on the original image~e.g., a low-passfilter!. This noise-suppressed process usually causesloss of sharpness in the edges of objects. Therefore,rently, detection and outlining of boundaries of organs atumors are usually performed manually, a task that is bcostly and tedious.

On the other hand, neural networks have featuresfault tolerance and potential for parallel implementatiand have been widely applied to edge detection in recyears. Zhu and Yan2 proposed a modified Hopfield networbased on an active contour model to detect the brain tuboundaries in medical images. Lu and Shen3 used a back-propagation network to extract boundaries, followedboundary enhancement using a modified Hopfield neunetwork. However, these 2-D Hopfield neural networperform edge detection on the basis of binary segmenimages but not the original images. As a result, the quaof edge detection heavily depends on the presegmentesults. In addition, conventional 2-D Hopfield neural neworks lack the ability to take the pixel’s contextual infomation into its evolution consideration that resultsfragmentation and disconnected points. Thus, despitefact that a tremendous amount of research has been don

695© 2000 Society of Photo-Optical Instrumentation Engineers

in a

ed

acon-ed

conisonllyfo

lem

h-sede-

de-ld

-Dedre,n-e-ly

n bn-lelem.

if-tan

ointstainsofnedardchsatd ta

dif-on-Im-ns.

ti-eldingtedo-heichhavreathemainge oa-es.

ainthe

eec-

y.s.terth-them-ist-lu-

ixel-cor-u-ot

k.in-Dcala-

ex-

ualn

. Inhenedher,pos-

ap.

edretion.

e-

Chang and Chung: Two-layer competitive based Hopfield neural network . . .

edge detection, finding true physical boundary edgesmedical image remains a challenging problem.

A recent work4 proposed the contextual-constraint-basHopfield neural cube~CCBHNC!, a human-vision-likehigh-level image segmentation technique that takes intocount each pixel’s feature and the pixel’s surrounding ctextual information for image segmentation. The proposapproach was demonstrated to be able to obtain moretinued and smoother segmentation results in comparwith other methods. However, this network was basicadesigned for the purpose of segmentation rather thanedge detection. Furthermore, it also inherited the probthat the number of classes must be predetermined.

In this paper, inspired by the human-vision-like higlevel vision concept, we present a two-layer Hopfield-baneural network called the competitive Hopfield edgfinding neural network~CHEFNN! by including a pixel’ssurrounding contextual information into the image edgetection. The CHEFNN extends the one-layer 2-D Hopfienetwork at the original image plane to a two-layer 3Hopfield network with edge detection to be implementon its third dimension. With the extended 3-D architectuthe network is capable of incorporating each pixel’s cotextual information into a pixel-labeling procedure. Consquently, the effect of tiny details or noise can be effectiveremoved and the drawback of disconnected fractions cafurther overcome. In addition, each pixel in this humavision-like high-level vision model has only two possiblabelings, edge point or nonedge point. Thus, the probassociated with the decision of class number is avoided

All the Hopfield-based optimization methods5 require anenergy function with certain constraints determined by dferent applications. These constraints play a very imporrole in the solution of optimized problems. There are twtypes of constraints: soft constraints and hard constraSoft constraints are used to enable the network to obmore desirable results. It is unnecessary to satisfy allconstraints so long as a proportional balance is retaiamong them in the entire operation. On the contrary, hconstraints are implemented so that the network can reafeasible resolution. Therefore, they must be completelyisfied. In the past, some hard constraints had to be addethe energy function for the Hopfield network to reachreasonable solution. However, it has proved to be veryficult to determine the weighting factors between hard cstraints and the problem-dependent energy function.proper parameters would lead to unfeasible solutioRecently, Chung et al.6 proposed the concept of competive learning to exclude the hard constraints in the Hopfinetwork and eliminate the issue of determining weightfactors. This proposed competitive learning rule is adopin CHEFNN. Moreover, two soft constraints are also intrduced in CHEFNN in the course of edge detection. Tfirst soft constraint is the homogeneous constraint, whassumes that the pixels belonging to the nonedge classthe minimum Euclidean distance measure within an asurrounding the pixels. The second constraint issmoothness constraint, which uses the contextual infortion to obtain completely connected edge points. Usthese two soft constraints, CHEFNN can take advantagboth the local gray-level variance and contextual informtion of pixels to detect desirable edges from noisy imag

696 Optical Engineering, Vol. 39 No. 3, March 2000

-

-

r

e

t

.

t

a-o

e

-

f

Experimental results show that the CHEFNN can obtmore precise and continued edge points thanMarr-Hildreth-1 and Laplacian-based,7 Canny,8 andwavelet-based9 methods. In addition, the adoption of thcompetitive learning rule in CHEFNN relieves us from thburden of determining proper values for the weighting fators and further enables the network to converge rapidl

The remainder of this paper is organized as followSection 2 describes the CHEFNN architecture. Compusimulations of the CHEFNN are presented in Sec. 3. Maematical derivations to show the convergence ofCHEFNN are given in Sec. 4. An experiment-based coparative study among the proposed method and four exing methods is conducted in Sec. 5. Finally, some concsions are drawn in Sec. 6.

2 Two-Layer Competitive Hopfield NeuralNetwork

In general, edge detection can be considered as a plabeling process that assigns pixels to edge points in acdance with their spatial contextual information. Unfortnately, the conventional 2-D Hopfield architecture canninclude the pixel’s contextual information into the networThis results in fragmentation and disconnected pointsedge detection. In this paper, we propose CHEFNN, a 3neural network architecture that considers both the logray-level variance and the neighbor-contextual informtion to avoid fractions and disconnected points in edgetraction.

To enable the network to consider the pixel’s contextinformation and identify whether or not each pixel is aedge point directly from anN3N image, the designedCHEFNN is made up ofN3N32 neurons, which can beconceived of as a two-layer Hopfield neural architecturethe CHEFNN, the input is the original 2-D image and toutput is an edge map. Each pixel of the image is assigby two neurons arranged in a two layer, one atop anotas shown in Fig. 1, where each neuron represents onesible label~edge point or not!. Therefore, the output of theneurons with the same layer is an edge-based feature mThe architecture of the CHEFNN is shown in Fig. 1.

The CHEFNN is a two-layer neural network, extendfrom the one-layer 2-D Hopfield neural networks, wheeach neuron does not have self-feedback interconnecLet Vx,i ,k denote the binary state of the (x,i )’th neuron inlayer k (Vx,i ,k51 for excitation andVx,i ,k50 for inhibi-tion! andWx,i ,k;y, j ,z denotes the interconnection weight b

Fig. 1 CHEFNN architecture.

d

qs.of

f thee if

at

ithnc-

ixels

as

ot

elthe

elthenc-el-the

hei-

bel

c-

are

Chang and Chung: Two-layer competitive based Hopfield neural network . . .

tween the neuron (x,i ) in layer k and the neuron (y, j ) inlayerz. A neuron (x,i ,k) in this network receives weighteinputsWx,i ,k;y, j ,zVy, j ,z from each neuron (y, j ,z) and a biasinput I x,i ,k from outside. The total input to neuron (x,i ,k) iscomputed as

Netx,i ,k5(z51

2

(y51

N

(j 51

N

Wx,i ,k;y, j ,zVy, j ,z1I x,i ,k , ~1!

and the activation function in the network is defined by

Vx,i ,kn115H 1 ~Netx,i ,k2u!.0

Vx,i ,kn ~Netx,i ,k2u!50

21 (Netx,i ,k2u),0

, ~2!

whereu is a threshold value. According to the update E~1! and~2!, we can define the Lyapunov energy functionthe two-layer Hopfield neural network as

E521

2 (k51

2

(z51

2

(x51

N

(y51

N

(i 51

N

(j 51

N

Vx,i ,kWx,i ,k;y, j ,zVy, j ,z

2 (k51

2

(x51

N

(i 51

N

I x,i ,kVx,i ,k . ~3!

The network achieves a stable state when the energy oLyapunov function is minimized. The layers of thCHEFNN represent the state of each pixel which indicatthe pixel is an edge point. A neuronVx,i ,1 in layer 1 in afiring state indicates that the pixel locate at (x,i ) in theimage is identified as an edge point, and a neuronVy, j ,2 inlayer 2 in a firing state indicates that the pixel located(y, j ) in the image is identified as a nonedge point.

To ensure that the CHEFNN has the ability to deal wcontextual information in edge detection, the energy fution of CHEFNN must satisfy the following conditions.

First, in the nonedge layer, assume that a nonedge p(x,i ) with Vx,i ,251, and its surrounding nonedge pixe(y, j ) within the neighborhood of (x,i )8 with Vy, j ,251have the minimum Euclidean distance measure. Letgx,i

andgy, j represent the gray levels of pixels (x,i ) and (y, j ),respectively. Then this condition can be characterizedfollows:

(x51

N

(y51

(y, j )Þ(x,i )

N

(i 51

N

(j 51

N

dx,i ;y, jFx,ip,q~y, j !Vx,i ,2Vy, j ,2 , ~4!

wheredx,i ;y, j is the normalized difference betweengx,i andgy, j , defined by

dx,i ;y, j5Fgx,i2gy, j

max~G! G2

, ~5!

andFx,ip,q(y, j ) is a function used to specify whether or n

pixel (y, j ) is located within ap3q window area centeredat pixel (x,i ). The functionFx,i

p,q(y, j ) is defined as

e

l

Fx,ip,q~y, j !5 (

l 52q

q

d j ,i 1 l (m52p

p

dy,x1m , ~6!

whered i , j is the Kronecker delta function given by

d i , j5H 1 i 5 j

0 iÞ j. ~7!

With this definitionFx,ip,q(y, j ) will give a value 1 if (y, j ) is

located inside the window area, and 0 otherwise. In Eq.~4!Vx,i ,2 andVy, j ,2 are used to restrict that the local gray-levdifferences are computed only for the pixels labeled bynonedge layer.

Second, in the edge layer, if the labeling result of pix(x,i ) is the same as that of its neighboring pixels, thenenergy function is decreased. Otherwise, the energy fution is increased. The similarity between each pixel’s labing result and its neighboring pixels is computed asfollowing energy term:

(x51

N

(y51

(y, j )Þ(x,i )

N

(i 51

N

(j 51

N

Vx,i ,1Vy, j ,2Fx,ip,q~y, j !, ~8!

In addition to the constraints mentioned above, tCHEFNN needs to satisfy the following two hard condtions to obtain a correct edge detection results:

1. Each pixel can be assigned by one and only one la~edge or not!:

(z51

2

Vx,i,z51. ~9!

2. The sum of all classified pixels must be

(z51

2

(x51

N

(i51

N

Vx,i,z5N2. ~10!

From the preceding four constraints@Eqs.~4!, ~8!, ~9!, and~10!#, the objective function of the network for edge detetion is obtained as

E5A

2 (x51

N

(y51

(y, j )Þ(x,i )

N

(i 51

N

(j 51

N

dx,i ;y, jFx,ip,q~y, j !Vx,i ,2Vy, j ,2

1B

2 (x51

N

(y51

(y, j )Þ(x,i )

N

(i 51

N

(j 51

N

Vx,i ,1Vy, j ,2Fx,ip,q~y, j !

1C

2 (x51

N

(i 51

N

(z51

2

(k51

2

Vx,i ,kVx,i ,z

1D

2 S (z51

2

(x51

N

(i 51

N

Vx,i ,z2N2D 2

. ~11!

Note that the first two terms are soft constraints, whichused to improve the edge detection results~for example, toobtain a more complete and more connected edges!. The

697Optical Engineering, Vol. 39 No. 3, March 2000

onarddg

netts.

orule

-al

ast to

for

onsll

to

b-

q.

ths

m--eivas-Theonly

ig-us,

l-

tinginal

2

-

thetheth-

the

eorkaN:

tsthehe.the

Chang and Chung: Two-layer competitive based Hopfield neural network . . .

network should find a compromise between these soft cstraints. On the other hand, the last two terms are hconstraints. They are the basic requirements of the edetection problem and cannot be violated. Thus, thework must completely satisfy these two hard constrainOtherwise, the obtained results would not be accurate.

To avoid the difficulty of searching for proper values fthe hard constraints, the competitive winner-take-all rproposed by Chung et al.6 is imposed in the CHEFNN forthe updating of the neurons. Based on the winner-takerule for each pixel, one and only one of the neuronsVx,i ,z ,which receives the maximum input, would be regardedthe winner neuron and therefore its output would be se1. The other neuronsVx,i ,z , for zÞk associated with thesame pixel are set to zero. Thus, the output functionVx,i ,k is given as

Vx,i ,k5H 1 if Vx,i ,k5max$Vx,i ,1 ,Vx,i ,2%

0 otherwise. ~12!

The winner-take-all rule guarantees that no two neurVx,i ,1 and Vx,i ,2 fire simultaneously. The winner-take-arule also ensures that all the pixels are classified. Duethese two properties, the last two terms~hard constraints! inEq. ~11! can be completely removed. As a result, the ojection function of the CHEFNN may be modified as

E5 (k51

2

(z51

2

(x51

N

(y51

(y, j )Þ(x,i )

N

(i 51

N

(j 51

N S A

2dx,i ;y, jdz,2dk,2

1B

2dz,2dk,1DFx,i

p,q~y, j !Vx,i ,kVy, j ,z . ~13!

Comparing the objection function of the CHEFNN in E~13! and the Lyapunov function Eq.~3! of the two-layersHopfield network, the synaptic interconnection strengand the bias input of the network are obtained as

Wx,i ,k;y, j ,z52S A

2dx,i ;y, jdz,2dk,21

B

2dz,2dk,1DFx,i

p,q~y, j !

~14!

and

I x,i ,z50, ~15!

respectively. Applying Eqs.~14! and ~15! to Eq. ~1!, thetotal input to neuron (x,i ,k) is

Netx,i ,k521

2 (z51

2

(y51

(y, j )Þ(x,i )

N

(j 51

N

~Adx,i ;y, jdz,2dk,2

1Bdz,2dk,1!Fx,ip,q~y, j !Vy, j ,z . ~16!

From Eq.~16!, we can see that due to the use of the copetitive winner-take-all rule, the CHEFNN is not fully interconnected. The neurons located at the edge layer recinputs from all the neurons in the edge layer and theirsociated neighboring neurons in the nonedge layer.neurons located at the nonedge layer receive inputs

698 Optical Engineering, Vol. 39 No. 3, March 2000

-

e-

l

e

from the neurons at the nonedge layer. This property snificantly reduces the complexity of the network, and thincreases the network evolution speed.

3 Contextual-Constraint-Based Neural CubeAlgorithm

The algorithm of the 3-D CHEFNN is summarized as folows:

3.1 Input

The original image X, the neighborhood parametersp andq and the factorsA andB.

3.2 Output

The stabilized neuron states of different layers representhe classified edge and nonedge feature map of the origimages.

3.3 Algorithm

1. Arbitrarily assigns the initial neuron states toclasses.

2. Use Eq.~16! to calculate the total input of each neuron (x,i ,k).

3. Apply the winner-take-all rule given in Eq.~12! toobtain the new output states for each neuron.

4. Repeat steps 2 and 3 for all classes and countnumber of neurons whose state is changed duringupdating. If there is a change, then go to step 2; oerwise, go to step 5.

5. Output the final states of neurons that indicateedge detection results.

4 Convergence of the CHEFNN

In what follows, we prove that the energy function of thproposed CHEFNN is always decreased during netwevolution. This implies that the network will converge tostable state. Consider the energy function of the CHEFN

E5 (k51

2

(z51

2

(x51

N

(y51

(y, j )Þ(x,i )

N

(i 51

N

(j 51

N S A

2dx,i ;y, jdz,2dk,2

1B

2dz,2dk,1DFx,i

p,q~y, j !Vx,i ,kVy, j ,z . ~17!

According to the architecture of CHEFNN, only the outpuof the neurons with the same layer and the outputs ofneighboring neurons with different layers may affect tclassification result of pixel (m,n). Thus, the energy of Eq~17! can be separated into two terms, one related tostate of the neuron (m,n), E(m,n) , and the other is irrel-evant to the state of the neuron (m,n), Eothers. Thus, theenergy function of Eq.~17! can be rewritten as follows:

teto

atatn

iatef

and

le,

ore,

ilitydif-r-

theg

he

Chang and Chung: Two-layer competitive based Hopfield neural network . . .

E5E(m,n)1Eothers

51

2 (k51

2

(z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2dk,2

1Bdz,2dk,1!Fm,np,q Vm,n,kVy, j ,z

11

2 (k51

2

(z51

2

(x51xÞm

N

(y51

(y, j )Þ(m,n)

N

(i 51iÞn

N

(j 51

N

~Adx,i ;y, j

3dz,2dk,21Bdz,2dk,1!Fx,ip,qVx,i ,kVy, j ,z . ~18!

In Eq. ~18!, only the first term will be affected by the staof the neuron (m,n). Assume that the current iteration isupdate the state of neuron (m,n). According to the winner-take-all learning rule, one and only one neuron is firingposition (x,i ). Without loss of generality, it is assumed ththe neuron (m,n,b) is the only active neuron at positio(m,n) before updating, i.e.,Vm,n,b

old 51 and Vm,n, jold 50 ; i

Þb. After updating, the neuron (m,n,a) is selected to bethe winning node, i.e.,Vm,n,a

new 51 andVm,n, jnew 50 ; iÞa.

According to Eq.~16! and the winner-take-all rule, weobtain

21

2 (z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2da,2

1Bdz,2da,1!Fm,np,q ~y, j !Vy, j ,z

5maxk51,2F21

2 (z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2dk,2

1Bdz,2dk,1!Fm,np,q ~y, j !Vy, j ,zG , ~19!

which implies that

(z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2da,2

1Bdz,2da,1!Fm,np,q ~y, j !Vy, j ,z

,(z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2db,2

1Bdz,2db,1!Fm,np,q ~y, j !Vy, j ,z . ~20!

Since the current updating of neuron states are assocwith pixel (m,n), this updating will not change the value oEothers. Thus the change of the energy values beforeafter network updating could be computed as

d

DE5Enew2Eold5E(m,n)new 2E(m,n)

old

51

2 (k51

2

(z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2dk,2

1Bdz,2dk,1!Fm,np,q Vm,n,k

new Vy, j ,z

21

2 (k51

2

(z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

@Adm,n;y, jdz,2dk,2

1Bdz,2dk,1#Fm,np,q Vm,n,k

old Vy, j ,z . ~21!

According to the mentioned winner-take-all learning ruwe can see thatVm,n,a

new 51, Vm,n,inew 50 ; iÞa, and Vm,n,b

old

51, Vm,n,iold 50 ; iÞb. Thus, Eq.~21! may be simplified as

follows:

DE51

2 F (z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2da,2

1Bdz,2da,1!Fm,np,q ~y, j !Vm,n,a

new Vy, j ,z

2(z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2db,2

1Bdz,2db,1!Fm,np,q ~y, j !Vm,n,b

old Vy, j ,zG . ~22!

By replacingVm,n,anew 51 andVm,n,b

old 51 in Eq.~22!, Eq. ~22!can be further simplified as follows:

DE51

2 F (z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2da,2

1Bdz,2da,1!Fm,np,q ~y, j !Vy, j ,z

2(z51

2

(y51

(y, j )Þ(m,n)

N

(j 51

N

~Adm,n;y, jdz,2db,2

1Bdz,2db,1!Fm,np,q ~y, j !Vy, j ,zG . ~23!

The condition of Eq.~20! yields DE,0. This implies thatthe energy change in the updating is negative. Therefthe convergence of the CHEFNN is guaranteed.

5 Experimental Results

To show that the proposed CHEFNN has a good capabof edge detection and noise immunity, three cases offerent modality medical images, including a computegenerated phantom image@Fig. 2~a!#, a skull-based CT im-age@Fig. 9~a! later in this section# and a knee-joint-basedmagnetic resonance~MR! image @Fig. 10~a! later in thissection# were tested. All the cases used to evaluateCCBHNC were collected from the National Cheng KunUniversity Hospital. The MR images were taken from t

699Optical Engineering, Vol. 39 No. 3, March 2000

se-00are

e,achth

o ainedlsnois

e

p-

ec-im-wiloisth

d,ctlythe

ofges

ete

alsultsults-

N

butand,ise inso,NNultsan

y

per-y,

le 1see

Chang and Chung: Two-layer competitive based Hopfield neural network . . .

Siemen’s Magnetom 63SPA, T2-weighted spin-echoquences, while the CT image is acquired from a GE 98CT scanner. The image sizes of CT and MR images2563256 pixels, with each pixel of 256 gray levels.

Figure 2~a! is a computer-generated phantom imagwhich was made up of seven overlapping ellipses. Eellipse represents one structural area of tissue. Fromperiphery to the center, they were background~BKG, graylevel530), skin or fat~S/F, gray level5120), gray matter~GM, gray level5165), white matter~WM, gray level575), and cerebrospinal fluid~CSF, gray level5210), re-spectively. The gray levels for each tissue were set tconstant value, thus, the edge points can be easily obtaThe noise of the uniform distribution with the gray leveranging from2K to K was also added to this simulatiophantom to generate several noisy test images. The nranges were set to be 18, 20, 23, 25, and 30.

The proposed CHEFNN is compared with thLaplacian-based,7 Marr-Hildreth,1 Canny,8 andwavelet-based9 methods. In the evaluations, the most apropriate parameters~e.g., mask sizeN, local varianced,and threshold! for each method to gain the best edge dettion results in the original computer-generated phantomage are obtained by trial and error. These parametersalso be used in the methods in the subsequent test of nphantom images. The results using these methods fornoiseless image are shown in Figs. 2~b! to 2~f!. From Figs.2~b!, 2~e!, and 2~f!, we can see that the Laplacian-baseCanny’s, and CHEFNN methods extract the edges correfor the noiseless image. On the other hand, withwavelet-based method, although it has the capabilityedge detection, the result end up with disconnected edas shown in Fig. 2~d!. Meanwhile, Fig. 2~c! shows thatalthough the Marr-Hildreth method can extract compledges it also results in redundant edges.

To evaluate noise robustness, these methods aretested on noisy images of different noise levels. The resare shown in Figs. 3–7. Figures 3 and 4 are the reswhen the noise levels are small, withK518 and 20, respectively. From Figs. 3 and 4 we can see that the CHEFN

Fig. 2 (a) Original Phantom image, (b) result by the Laplacian-based method, (c) result by the Marr-Hildreth method, (d) result bythe wavelet-based method, (e) result by the Canny method, and (f)result by the proposed CHEFNN.

700 Optical Engineering, Vol. 39 No. 3, March 2000

e

.

e

lye

,

o

can obtain complete edges under a small level of noise,other methods result in redundant edges. On the other has the noise level increases, some redundant edges arthe CHEFNN results, as illustrated in Figs. 5–7. Eventhe incurred amount of redundant edges with the CHEFis much less than with the other methods. These resimply that the CHEFNN has better noise immunity thother methods.

For quantitative evaluation, the detection error given b9

is used as the measurement:

Pe5ne

n0, ~24!

wheren0 is the number of actual edge points, andne is thenumber of erroneous edge points. The edge detectionformance for the Laplacian-based, Marr-Hildreth, Cannwavelet-based, and CHEFNN methods are listed in Taband are shown Fig. 8. From these results, we can easily

Fig. 3 (a) Phantom image with added noise (K = 18), (b) result by the Laplacian-based method, (c) result by the Marr-Hildreth method, (d) result by the wavelet-based method, (e) result by the Canny method, and (f) result by the proposed CHEFNN.

Fig. 4 (a) Phantom image with added noise (K = 20), (b) result by the Laplacian-based method, (c) result by the Marr-Hildreth method, (d) result by the wavelet-based method, (e) result by the Canny method, and (f) result by the proposed CHEFNN.

Chang and Chung: Two-layer competitive based Hopfield neural network . . .

that although the wavelet-based methods lack accuracy in edge detection, they are relatively robust to noise. This can be easily observed in the flat curve in Fig. 8, showing the increase of the detection error rate from 28.1 to 65.2% as the noise level K increases from 0 to 30. On the other hand, the Laplacian-based and Canny methods perform well when the noise level K is equal to 0, with detection error rates both equal to 1.3%. However, these two methods are highly noise sensitive: as the noise level increases, the detection error rate increases dramatically to 328.1 and 123.4% for the Laplacian-based and Canny methods, respectively. The Marr-Hildreth method has a relatively low detection error rate of 4.1% for the noiseless image with K = 0 and a relatively high detection error rate of 91.2% when K = 30. In contrast to these methods, the proposed contextual-based CHEFNN obtains more correct edge results for both noiseless and noisy images. The average detection error rate of the CHEFNN is 1.3% for K = 0, 1.5% for

Fig. 5 (a) Phantom image with added noise (K = 23), (b) result by the Laplacian-based method, (c) result by the Marr-Hildreth method, (d) result by the wavelet-based method, (e) result by the Canny method, and (f) result by the proposed CHEFNN.

Fig. 6 (a) Phantom image with added noise (K = 25), (b) result by the Laplacian-based method, (c) result by the Marr-Hildreth method, (d) result by the wavelet-based method, (e) result by the Canny method, and (f) result by the proposed CHEFNN.

Fig. 7 (a) Phantom image with added noise (K = 30), (b) result by the Laplacian-based method, (c) result by the Marr-Hildreth method, (d) result by the wavelet-based method, (e) result by the Canny method, and (f) result by the proposed CHEFNN.

Fig. 8 Detection error rates versus different noise levels.

Table 1 The detection error rates (%) for the Laplacian-based, Marr-Hildreth, Canny, wavelet-based, and proposed CHEFNN methods using the simulated phantom image with K = 0 to 30.

Method          K=0    K=18   K=20   K=23   K=25   K=30
Laplacian       1.3    51.9   70.6   89.2   131.4  328.1
Marr-Hildreth   4.1    43     44     45.9   53     91.2
Canny           1.3    23.9   39.2   46.1   101.5  123.4
Wavelet         28.1   41     41.3   42     47.6   65.2
CHEFNN          1.3    1.5    3.1    4.9    8.7    16.1



K = 18, 3.1% for K = 20, 4.9% for K = 23, 8.7% for K = 25, and 16.1% for the large noise level of K = 30.

Figure 9(a) is a CT head image in which a number of tiny tissues exist. Figures 9(b) and 9(e) show the edge detection images using the Laplacian-based and wavelet-based methods, respectively. Obviously, the Laplacian-based and wavelet-based methods cannot effectively outline the skull in the image; thus, their edge detection results are poor. Figure 9(c) is the edge detection result of the Marr-Hildreth method. As we can see, there are double edges, many fragments, and little holes in the image. The result using Canny's edge detector is illustrated in Fig. 9(d), which shows many unwanted details. Figure 9(f) is the edge detection result using the CHEFNN. It clearly shows that more continuous edges were found when contextual information was used in the edge detection process. Thus, once again the proposed CHEFNN obtained clearer and more accurate edges in the image.

Figure 10(a) is an MR knee-joint-based transverse image. The edges of Fig. 10(a) obtained using the Laplacian-based method with threshold = 5, N = 7, and d = 1 and the wavelet-based method are illustrated in Figs. 10(b) and 10(e), respectively. As we can see, fragments and redundant edges exist in the Laplacian-based and wavelet-based results. Figure 10(c) illustrates the result of the Marr-Hildreth method, from which we can see that the edges extracted by the Marr-Hildreth method are considerably continuous; however, the method also resulted in double edges. The result using the Canny edge detector is shown in Fig. 10(d); it is obvious that many unwanted details are also falsely detected by the Canny method as edges. The result obtained by the CHEFNN is shown in Fig. 10(f), from which we can see that the boundaries of the knee joint, articular, and patella were completely and precisely detected.

6 Conclusion

In this paper, a modified Hopfield neural network architecture, the CHEFNN, is presented for edge detection of medical images. The CHEFNN extends the one-layer 2-D Hopfield network at the original image plane into a two-layer 3-D Hopfield network with the third dimension used for edge detection. With the extended 3-D architecture, the network is capable of including each pixel's contextual information into a pixel-labeling procedure. As a result, the effect of tiny details or noises can be effectively removed, and the drawback of disconnected fractions can also be overcome. The experimental results show that the CHEFNN produces more appropriate, continued, and intact edges in comparison with the Laplacian-based, Marr-Hildreth, Canny, and wavelet-based methods. In addition, using the competitive learning rule to update the neuron states alleviates the problem of satisfying strong constraints, which enables the network to converge rapidly. Moreover, the CHEFNN is a self-organized structure that is highly interconnected and can be implemented in a parallel manner. It can also be easily designed for hardware devices to achieve high-speed implementation.

Fig. 9 (a) Original CT image; (b) result by the Laplacian-based method with threshold = 5, N = 7, and d = 1; (c) result by the Marr-Hildreth method (N = 9, d = 1); (d) result by the Canny method; (e) result by the wavelet-based method; and (f) result of the CHEFNN (p = q = 1, A = 0.01, B = 0.032).
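The competitive update rule referred to above can be illustrated generically. The sketch below shows only the winner-take-all principle, not the CHEFNN's exact update equations: among the neurons competing for one pixel (the label states on the network's third dimension), only the neuron with the largest net input is set to 1, so the one-label-per-pixel constraint is satisfied by construction rather than penalized in the energy function.

```python
import numpy as np

def competitive_update(net_inputs):
    """Winner-take-all update: net_inputs has shape (..., n_labels);
    for each pixel, the label neuron with the largest net input wins
    (state 1) and all competing neurons are reset to 0."""
    states = np.zeros_like(net_inputs)
    winners = np.argmax(net_inputs, axis=-1)             # index of winning label
    np.put_along_axis(states, winners[..., None], 1, axis=-1)
    return states

# Two pixels, two competing labels ("non-edge", "edge"):
inputs = np.array([[0.2, 0.9],   # pixel 0: "edge" neuron wins
                   [0.7, 0.1]])  # pixel 1: "non-edge" neuron wins
print(competitive_update(inputs))
```

Because exactly one neuron per pixel is active after every update, no penalty term is needed to enforce the labeling constraint, which is the source of the fast convergence claimed above.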

References

1. D. Marr and E. Hildreth, "Theory of edge detection," Proc. R. Soc. London B207, 187–217 (1980).

2. Y. Zhu and H. Yan, "Computerized tumor boundary detection using a Hopfield neural network," IEEE Trans. Med. Imaging 16(1), 55–67 (1997).

3. S. W. Lu and J. Shen, "Artificial neural networks for boundary extraction," in Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, pp. 2270–2275 (1996).

4. C. Y. Chang and P. C. Chung, "Using a three-dimensional Hopfield neural network for image segmentation," in Proc. 1998 Conf. on Computer Vision, Graphics, and Image Processing, pp. 266–273, Taipei, Taiwan (1998).

5. J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern. 52, 141–152 (1985).

6. P. C. Chung, C. T. Tsai, E. L. Chen, and Y. N. Sun, "Polygonal approximation using a competitive Hopfield neural network," Pattern Recogn. 27(11), 1505–1512 (1994).

7. J. S. Lim, Two-Dimensional Signal and Image Processing, Prentice Hall, Englewood Cliffs, NJ (1989).

8. J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8(6), 679–697 (1986).

9. T. Aydin, Y. Yemez, E. Anarim, and B. Sankur, "Multidirectional and multiscale edge detection via M-band wavelet transform," IEEE Trans. Image Process. 5(9), 1370–1377 (1996).

Fig. 10 (a) Original MR knee-joint-based transverse image; (b) result by the Laplacian-based method with threshold = 5, N = 7, and d = 1; (c) result by the Marr-Hildreth method (N = 9, d = 1); (d) result by the Canny method; (e) result by the wavelet-based method; and (f) result by the CHEFNN (p = q = 1, A = 0.01, B = 0.03).


Chuan-Yu Chang received the BS degree in nautical technology from the National Taiwan Ocean University, Taiwan, in 1993, and the MS degree from the Department of Electrical Engineering, National Taiwan Ocean University, Taiwan, in 1995. Currently, he is a PhD student in the Department of Electrical Engineering, National Cheng Kung University. His current research interests are neural networks, medical image processing, and pattern recognition.

Pau-Choo Chung received the BS and MS degrees in electrical engineering from National Cheng Kung University, Tainan, Taiwan, in 1981 and 1983, respectively, and the PhD degree in electrical engineering from Texas Tech University, Lubbock, in 1991. From 1983 to 1986, she was with the Chung Shan Institute of Science and Technology, Taiwan. Since 1991, she has been with the Department of Electrical Engineering, National Cheng Kung University, where she is currently a full professor. Her current research includes neural networks and their applications to medical image processing, medical image analysis, telemedicine, and video image analysis.
