
Computers in Biology and Medicine 41 (2011) 726–735


Classification of benign and malignant masses based on Zernike moments

Amir Tahmasbi, Fatemeh Saki, Shahriar B. Shokouhi

Department of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran, Iran

Article info

Article history:

Received 31 August 2010

Accepted 14 June 2011

Keywords:

Computer aided diagnosis

Mammography

Neural network

Opposition-based learning

Zernike moments


Abstract

In mammography diagnosis systems, a high False Negative Rate (FNR) has always been a significant problem, since a false negative answer may lead to a patient's death. This paper is directed towards the development of a novel Computer-aided Diagnosis (CADx) system for the diagnosis of breast masses. It aims at improving the performance of CADx algorithms as well as reducing the FNR by utilizing Zernike moments as descriptors of shape and margin characteristics. The input Regions of Interest (ROIs) are segmented manually and further subjected to a number of preprocessing stages. The outcome of the preprocessing stage is two processed images containing co-scaled, translated masses; one of these images represents the shape characteristics of the mass, while the other describes the margin characteristics. Two groups of Zernike moments have been extracted from the preprocessed images and applied to the feature selection stage. Each group includes 32 moments with different orders and repetitions. Considering the performance of the overall CADx system, the most effective moments have been chosen and applied to a Multi-layer Perceptron (MLP) classifier, employing both generic Back Propagation (BP) and Opposition-based Learning (OBL) algorithms. The Receiver Operating Characteristic (ROC) curve and the performance of the resulting CADx systems are analyzed for each group of features. The designed systems yield Az = 0.976, representing fair sensitivity, and Az = 0.975, demonstrating fair specificity. The best achieved FNR and FPR are 0.0% and 5.5%, respectively.

1. Introduction

1.1. Background

The American Cancer Society estimated 192,370 new cases of invasive breast cancer among women and 62,280 additional cases of in-situ breast cancer in the year 2009. Furthermore, approximately 40,170 women are expected not to survive breast cancer [1]. In the United States, only lung cancer accounts for more cancer deaths in women [1,2]. Moreover, women have about a 1 in 8 lifetime risk of developing invasive breast cancer [2]. It can be inferred that, despite advances of technology in the fields of mammography [2], thermography [3], optical tomography [4] and other anticancer methodologies in the last 20 years, breast cancer is still a prominent problem.

Early detection of breast cancer increases the survival rate as well as the treatment options [2,5]. Mammography has been one of the most reliable methods for early detection and diagnosis of breast cancer [6–8]; it has reduced breast cancer mortality rates by 30–70% [5]. However, mammography is not perfect; mammograms are difficult to interpret [3]. The sensitivity of screening mammography is affected by the image quality and the radiologist's level of expertise [2]. Fortunately, CADx technology can improve the performance of radiologists [5].

Radiologists visually search mammograms for specific abnormalities. A number of important signs of breast cancer, which radiologists look for, are clusters of microcalcifications, masses, and architectural distortions [2]. A mass is defined as a space-occupying lesion seen in at least two different projections [9]. Although a vast number of algorithms, including segmentation, feature extraction and classification approaches, have been developed in order to diagnose the masses in mammography images, more research is still needed in this area. The current research is directed towards the development of a CADx system for the diagnosis of breast masses in mammography images, focusing on the feature extraction stage. Our proposed features, Zernike moments, have been utilized for extracting the shape and margin properties of masses.

Zernike moments are the mapping of an image onto a set of complex Zernike polynomials [10]. Since Zernike polynomials are orthogonal to each other, Zernike moments can represent the properties of an image with no redundancy or overlap of information between the moments [10]. Because of these important characteristics, Zernike moments have been used widely in different types of applications [11–15]. For instance, they have been utilized in shape-based image retrieval [12], in edge detection [13] and as a feature set in pattern recognition [14]. However, they are not perfect; the most important drawback of Zernike moments is their high computational complexity, which makes them unsuitable for real-time applications. Fortunately, many researchers have been developing approaches for the fast computation of Zernike moments [10,16,17].

In this paper, a novel CADx system has been developed for mass diagnosis in mammography images. The aim is to increase the diagnosis performance as well as to reduce the FNR by taking the most effective moments into account and discarding low-performance features.

1.2. Motivation

According to BI-RADS, expert radiologists focus on three distinct types of features, namely shape, margin, and density, while visually searching the mammograms [2,5]. The mass shape can be round, oval, lobulated, or irregular [18]. Irregular shapes have a higher likelihood of malignancy [18,19]. The mass margin can be circumscribed, microlobulated, indistinct, or spiculated [18]. Spiculated and indistinct margins have a higher likelihood of malignancy [18,19]. Density features represent mass density relative to normal fibroglandular breast tissue [19]. In addition, BI-RADS categorizes the breast density into four groups: high density, isodense, low density, and radiolucent [19]. Masses with higher density usually have a higher likelihood of malignancy. Fig. 1 illustrates various types of mass shapes, margins and densities, as well as their likelihood of malignancy.

These features are exactly the benchmarks that an expert radiologist pays attention to in the diagnosis process. Hence, if one extracts them by an appropriate method, the resulting CADx system may achieve better performance than traditional approaches.

As a matter of fact, Zernike moments impart two prominent characteristics:

(1) Although they are extremely dependent on the scaling and translation of the object, their magnitude is independent of its rotation angle [11]. Hence, one can utilize them to describe the shape characteristics of breast masses and be sure that the rotation of the mass will not affect the diagnosis process.

(2) The Zernike polynomials are orthogonal [10]. Thus, one can extract the Zernike moments from an ROI regardless of the shape of the mass. The extracted moments can describe the mass margin characteristics comprehensively.

Fig. 1. Different types of (a) mass shapes, (b) mass margins, (c) mass densities, and their likelihood of malignancy according to BI-RADS.

In order to extract shape characteristics, the input ROI has been subjected to several preprocessing stages, such as segmentation, co-scaling and translation, and the mass shape has been obtained in a binary format. Then, a suitable number of Zernike moments were extracted, which can completely describe the mass shape. Additionally, in order to extract margin characteristics, the input ROI has been subjected to other preprocessing stages, such as histogram equalization, translation and co-scaling, and the mass margin has been obtained in a gray-scale format. Then, the Zernike moments were extracted as the descriptors of the mass margin.

According to the literature, most breast masses are either high-density or isodense [20]. Therefore, the density feature can be neglected, considering the low likelihood of the presence of low-density and radiolucent masses. In fact, it does not provide enough useful information for the diagnosis process.

Finally, either each group of these features or a combination of them has been applied to an MLP classifier, which is trained with both BP and OBL learning rules [21]. The results and performances have been analyzed. Moreover, in order to analyze the effect of the orders of Zernike moments on the performance of the overall system, a group of high-order as well as a group of low-order Zernike moments have been separately extracted from the ROIs. Then, the ROC curve and the performance of each group have been analyzed.

1.3. Database

In this paper, the Mammographic Image Analysis Society (MIAS) database has been utilized to provide the mammography images [22]. It contains 322 digital mammograms belonging to the right and left breasts of 161 different women. Each mammogram has been digitized with a resolution of 200 μm per pixel edge, and then stored as a digital image with the size of 1024 × 1024 pixels. In fact, the digital mammograms are gray-scale images with an 8-bit pixel depth.

These images have been categorized into three different classes from the viewpoint of breast tissue, namely fatty, fatty-glandular, and dense-glandular. Moreover, from the viewpoint of lesion type, they have been categorized into normal, mass-containing, microcalcification clusters, asymmetry and architectural distortions. The MIAS database includes 209 normal breasts, 67 ROIs with benign lesions and 54 ROIs with malignant lesions [22]. Note that some breasts may have more than one lesion.

In the next section, the proposed approach is discussed in detail. In Section 3 the experiments and results are reported, and finally, in the last section a conclusion is drawn.

2. Methodology

Fig. 2 represents the flowchart of the proposed approach. The input data is an ROI, which is a suspected part of a mammogram and contains only one mass. In this research, each ROI is a square-shaped region with the same number of rows and columns (N × N). However, different ROIs might have different diameters depending on the size of the masses.

The ROIs have been segmented in the left-hand path, as illustrated in Fig. 2. Then, the Normalized Radial Length (NRL) vector and its average value, which is indeed the average radius of the mass, are calculated [6]. By filling the mass, there will finally be a binary image that represents the mass shape. In the right-hand path, the input ROI is subjected to histogram equalization. The outcome is a gray-scale image which contains mass margin information.

The same preprocessing stages are applied on both the right-hand and left-hand paths hereafter.


Fig. 2. The flowchart of the proposed approach including segmentation, preprocessing, feature extraction, feature selection and classification stages.

Fig. 3. Segmentation of masses based on manual segmentation and the radial averaging method.


First, both images are subjected to a translation stage, which translates the centroid of the mass into the center of the ROI; v is the proposed translation vector. This stage resolves the problem of the dependency of Zernike moments on translation. Furthermore, using the NRL average value (k), the masses should be co-scaled. In fact, the ROIs should be scaled in such a manner that the radii of all masses become equal. Note that after the translation and co-scaling procedures, a number of rows and columns might become unusable; indeed, the new values of those rows and columns might carry no useful information. Hence, in the last step of the preprocessing stage, each ROI has been cropped in order to discard these rows and columns.

Now the input ROIs are preprocessed and ready to be applied to the feature extraction stage. Fig. 2 illustrates the feature extraction blocks. The Zernike moments have been extracted from both the binary and gray-scale images, which represent shape and margin characteristics, respectively. Moreover, in order to analyze the effect of the orders of Zernike moments on the performance of the overall system, a group of high-order Zernike moments and a group of low-order Zernike moments have been separately extracted from the ROIs. In the feature selection stage, either the descriptors of margin, the descriptors of shape, a symmetric combination of them, or an asymmetric combination of them may be chosen. Finally, the selected features are applied to an MLP classifier, which is trained and tested with different groups of input patterns.

In the rest of the paper, each stage is discussed in detail.

2.1. Segmentation

In this paper, manual segmentation has been utilized. In fact, each ROI has been segmented by two different expert radiologists and the final boundary of the mass has been calculated using radial averaging. Fig. 3 depicts the basic idea of radial averaging, and its equation is given as follows:

$$r(\theta) = \frac{PO(\theta) + QO(\theta)}{2}, \qquad 0 < \theta < 2\pi \tag{1}$$

where r(θ) is a point on the final mass boundary for a particular angle θ. Moreover, O denotes the mass centroid; PO(θ) and QO(θ) are the radial lengths obtained for that θ by the first and second radiologists, respectively.
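As an illustration, a minimal sketch of this radial-averaging step is given below; the sampling of M angles, the example boundaries and the function name are our own assumptions, not part of the original method:

```python
import numpy as np

def average_boundaries(r_first: np.ndarray, r_second: np.ndarray) -> np.ndarray:
    """Point-wise mean of two radial-length profiles sampled at the same angles,
    as in Eq. (1)."""
    assert r_first.shape == r_second.shape
    return 0.5 * (r_first + r_second)

# Hypothetical example with M = 360 angular samples (one per degree).
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r1 = 40 + 3 * np.sin(3 * theta)       # boundary traced by the first radiologist
r2 = 41 + 2 * np.sin(3 * theta)       # boundary traced by the second radiologist
r_final = average_boundaries(r1, r2)  # r(theta) of Eq. (1)
```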

2.2. Preprocessing

The operations in the preprocessing stage are divided into two branches. The first branch includes those functions which are applied to the gray-scale image to extract the Zernike moments as the descriptors of mass margins. The second branch includes those functions that are applied to the binary image to extract the Zernike moments as the descriptors of mass shapes. The first and second branches are depicted in the right-hand and the left-hand paths of Fig. 2, respectively. Fig. 4 illustrates two breast masses at different steps of the preprocessing stage.

The input ROI is subjected to histogram equalization first. This increases the contrast of the ROI, so the mass margins become more visible. The outcome of this stage is illustrated in Fig. 4c.

Zernike moments are dependent on the translation and scaling of the masses in the ROIs. In other words, the Zernike moments of two similar objects that are not equally scaled and translated are different. Thus, this dependency had better be compensated for in the preprocessing stage. Two steps have been employed to resolve the dependency problems. Firstly, the centroid of each mass is translated into the center of the corresponding ROI. This step removes the dependency of Zernike moments on object translation. The centroid of the masses has been calculated by the following equations [23]:

$$c_0 = \frac{\sum_{c=0}^{N-1}\sum_{r=0}^{N-1} c\, f(c,r)}{\sum_{c=0}^{N-1}\sum_{r=0}^{N-1} f(c,r)} \tag{2}$$

$$r_0 = \frac{\sum_{c=0}^{N-1}\sum_{r=0}^{N-1} r\, f(c,r)}{\sum_{c=0}^{N-1}\sum_{r=0}^{N-1} f(c,r)} \tag{3}$$

where c_0 and r_0 denote the column number and row number of the centroid, respectively. The pair (c, r), which is equivalent to (x, y), denotes the coordinates of the image, while f(c, r) is the image function. The size of the image is N × N.

Then, the translation vector is computed as follows:

$$\vec{v}_j = \mathrm{round}\!\left(\frac{N}{2}-c_0\right)\hat{c} + \mathrm{round}\!\left(\frac{N}{2}-r_0\right)\hat{r} \tag{4}$$


Fig. 4. (a) Input ROI that contains only one mass, (b) manual segmentation of the mass by an expert radiologist, (c) input ROI after histogram equalization, (d) mass boundary after segmentation in binary format and (e) mass shape in binary format.

Fig. 5. (a) Mass boundary in a binary image and (b) the NRL vector of the mass and its mean value.


where v_j is the translation vector. In addition, considering that the size of the ROI is N × N, N/2 denotes its center. Besides, j is the index of the input ROI. Note that after the translation, the centroid of the mass coincides with the center of the ROI.
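A minimal sketch of the centroid and translation steps of Eqs. (2)–(4) is given below. The use of a wrap-around shift (np.roll) is our own simplification and assumes the mass lies well inside the ROI, so the wrapped border pixels carry no mass information (they are discarded by the later crop anyway):

```python
import numpy as np

def centroid(f: np.ndarray) -> tuple[float, float]:
    """Intensity-weighted centroid (c0, r0) of an N x N image f(c, r), Eqs. (2)-(3)."""
    r_idx, c_idx = np.indices(f.shape)   # row and column index grids
    total = f.sum()
    return (c_idx * f).sum() / total, (r_idx * f).sum() / total

def translate_to_center(f: np.ndarray) -> np.ndarray:
    """Shift the mass so that its centroid coincides with the ROI center, Eq. (4)."""
    n = f.shape[0]
    c0, r0 = centroid(f)
    dc = int(round(n / 2 - c0))          # column component of the translation vector v
    dr = int(round(n / 2 - r0))          # row component of the translation vector v
    return np.roll(f, shift=(dr, dc), axis=(0, 1))
```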

In the second step, the masses should be co-scaled. In fact, the ROIs should be scaled in such a manner that the radii of all masses become equal. Initially, the average radius of each mass is found using the NRL vector, which has been employed as a descriptor of the mass margin in many applications [24–26]. Fig. 5 illustrates how to extract this vector. Each member of d is the Euclidean distance of the mass margin from its centroid for angle θ. Vector d can be calculated using the following relation [24]:

$$d_\theta^{\,j} = \sqrt{\left(c_\theta^{\,j}-c_0^{\,j}\right)^2 + \left(r_\theta^{\,j}-r_0^{\,j}\right)^2}, \qquad 0 \le \theta < 2\pi \tag{5}$$

where (c_0, r_0) denotes the coordinates of the centroid of the mass, which already coincides with the center of the ROI. Besides, (c_θ, r_θ) denotes the coordinates of the mass margin for angle θ. Note that θ does not change continuously and has M discrete values. Furthermore, j is the index of the input ROI. Vector d is depicted in Fig. 5b. Eventually, the following equation is used to calculate the scaling coefficient:

$$k_j = \frac{R}{\frac{1}{M}\sum_{i=1}^{M} d_i^{\,j}} \tag{6}$$

where k_j denotes the suitable scaling coefficient for the jth ROI and M is the length of the NRL vector. Moreover, R is the desired final radius of the mass, which is set to 50 pixels in this research. Now, each mass can be scaled properly using the corresponding k_j.
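The NRL vector and the scaling coefficient of Eqs. (5) and (6) can be sketched as follows; the boundary is assumed to be available as M sampled points, and the function name is hypothetical:

```python
import numpy as np

def nrl_and_scale(boundary_cols: np.ndarray, boundary_rows: np.ndarray,
                  c0: float, r0: float, R: float = 50.0):
    """NRL vector d (Eq. 5) and scaling coefficient k (Eq. 6) for one mass.

    boundary_cols / boundary_rows hold the M sampled boundary points
    (c_theta, r_theta); (c0, r0) is the centroid, already at the ROI center.
    """
    d = np.sqrt((boundary_cols - c0) ** 2 + (boundary_rows - r0) ** 2)  # Eq. (5)
    k = R / d.mean()                                                    # Eq. (6)
    return d, k
```

Each ROI would then be resized by the factor k (with any interpolating resize routine) so that every mass ends up with an average radius of R = 50 pixels.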

After the translation and co-scaling procedures, some rows and columns might become unusable. In fact, the new values of those rows and columns might carry no useful information. Therefore, in the last step of the preprocessing stage, each ROI has been cropped to discard these rows and columns. In the crop procedure, the same number of rows is discarded from the top and the bottom of the ROI; likewise, the same number of columns is discarded from the left side and the right side of the ROI. This keeps the centroid of the mass in the center of the ROI after the crop procedure. The numbers of discarded rows and columns are selected in such a manner that the essential information of the ROI is preserved and the size of the final image (N) remains an even number.
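A minimal sketch of this symmetric crop, under the assumption that both the original and the target sizes are even so that the margins removed from opposite sides are equal:

```python
import numpy as np

def symmetric_crop(f: np.ndarray, n_out: int) -> np.ndarray:
    """Discard the same number of rows from top/bottom and columns from both
    sides, keeping the mass centroid at the ROI center; n_out must be even."""
    n_in = f.shape[0]
    assert n_out % 2 == 0 and n_in % 2 == 0 and n_out <= n_in
    margin = (n_in - n_out) // 2
    return f[margin:margin + n_out, margin:margin + n_out]
```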

The outputs of the preprocessing stage are two images: one of them is a binary image, which contains the shape characteristics, and the other is a gray-scale image, which contains the margin characteristics. In the rest of the paper, we call the former the Mass Shape Image (MSI) and the latter the Mass Margin Image (MMI).

2.3. Feature extraction and selection

The computation of Zernike moments from an input image includes three steps: computation of radial polynomials, computation of Zernike basis functions and computation of Zernike moments by projecting the image onto the Zernike basis functions [10–13].

The procedure for obtaining Zernike moments from an input image starts with the computation of the Zernike radial polynomials [11]. The real-valued 1-D radial polynomial R_{n,m} is defined as

$$R_{n,m}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s\,(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\;\rho^{\,n-2s} \tag{7}$$


where n is a non-negative integer representing the order of the radial polynomial, and m is a positive or negative integer satisfying the constraints n − |m| = even and |m| ≤ n, representing the repetition of the azimuthal angle. ρ is the length of the vector from the origin to (x, y) [10,27].
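For illustration, a direct implementation of the radial polynomial of Eq. (7) might look like the following sketch (the function name is ours):

```python
from math import factorial

def radial_polynomial(n: int, m: int, rho: float) -> float:
    """Zernike radial polynomial R_{n,m}(rho) of Eq. (7); requires n - |m| even."""
    m = abs(m)
    assert m <= n and (n - m) % 2 == 0
    value = 0.0
    for s in range((n - m) // 2 + 1):
        num = (-1) ** s * factorial(n - s)
        den = factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)
        value += num / den * rho ** (n - 2 * s)
    return value
```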

Using (7), complex-valued 2-D Zernike basis functions, which are defined within a unit circle, are formed by

$$V_{n,m}(\rho,\theta) = R_{n,m}(\rho)\, e^{jm\theta}, \qquad |\rho| \le 1 \tag{8}$$

The complex Zernike polynomials satisfy the orthogonality condition [10–13]:

$$\int_0^{2\pi}\!\!\int_0^{1} V^{*}_{n,m}(\rho,\theta)\, V_{p,q}(\rho,\theta)\, \rho\, d\rho\, d\theta =
\begin{cases} \dfrac{\pi}{n+1}, & n=p,\ m=q \\ 0, & \text{otherwise} \end{cases} \tag{9}$$

where * denotes the complex conjugate. As mentioned before, the orthogonality implies no redundancy or overlap of information between the moments with different orders and repetitions. This property enables the contribution of each moment to be unique and independent from the information in an image [10]. In fact, they can be fair features.

Complex Zernike moments of order n with repetition m are defined as [12]

$$Z_{n,m} = \frac{n+1}{\pi}\int_0^{2\pi}\!\!\int_0^{1} f(\rho,\theta)\, V^{*}_{n,m}(\rho,\theta)\, \rho\, d\rho\, d\theta \tag{10}$$

where f(ρ, θ) is the image function. For digital images, the integrals in (10) can be replaced by summations. Moreover, the coordinates of the image must be normalized into [0, 1] by a mapping transform. Fig. 6 depicts a general case of the mapping transform. Note that in this case, the pixels located outside the circle are not involved in the computation of the Zernike moments. Eventually, the discrete form of the Zernike moments for an image of size N × N is expressed as follows [27]:

$$Z_{n,m} = \frac{n+1}{\lambda_N}\sum_{c=0}^{N-1}\sum_{r=0}^{N-1} f(c,r)\, V^{*}_{n,m}(c,r)
          = \frac{n+1}{\lambda_N}\sum_{c=0}^{N-1}\sum_{r=0}^{N-1} f(c,r)\, R_{n,m}(\rho_{cr})\, e^{-jm\theta_{cr}} \tag{11}$$

where 0 ≤ ρ_cr ≤ 1, and λ_N is a normalization factor. In the discrete implementation of Zernike moments, the normalization factor must be the number of pixels located in the unit circle by the mapping transform, which corresponds to the area of the unit circle, π, in the continuous domain [10]. The transformed distance, ρ_cr, and the phase, θ_cr, at the pixel (c, r) are given by the following equations, where c and r denote the column and row number of the image, respectively:

Fig. 6. (a) N × N digital image with function f(c, r) and (b) the image mapped onto the unit circle.

$$\rho_{cr} = \frac{\sqrt{(2c-N+1)^2 + (2r-N+1)^2}}{N}, \qquad \theta_{cr} = \tan^{-1}\!\left(\frac{N-1-2r}{2c-N+1}\right) \tag{12}$$

Using (11) and (12), a final expression can be obtained which is only a function of c, r, m and n. Using this expression, we can simply calculate the Zernike moments for every image without having difficulty with the mapping functions.
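As an illustration only, a compact NumPy sketch of the discrete computation of Eqs. (11) and (12) might look as follows; the function names are ours and none of the fast-computation schemes [10,16,17] is used:

```python
import numpy as np
from math import factorial

def radial_polynomial(n, m, rho):
    """Radial polynomial R_{n,m}(rho) of Eq. (7), evaluated element-wise."""
    m = abs(m)
    return sum((-1) ** s * factorial(n - s)
               / (factorial(s) * factorial((n + m) // 2 - s)
                  * factorial((n - m) // 2 - s)) * rho ** (n - 2 * s)
               for s in range((n - m) // 2 + 1))

def zernike_moment(f: np.ndarray, n: int, m: int) -> complex:
    """Discrete Zernike moment Z_{n,m} of an N x N image, Eqs. (11)-(12)."""
    N = f.shape[0]
    r_idx, c_idx = np.indices(f.shape)
    rho = np.sqrt((2 * c_idx - N + 1) ** 2 + (2 * r_idx - N + 1) ** 2) / N
    theta = np.arctan2(N - 1 - 2 * r_idx, 2 * c_idx - N + 1)
    inside = rho <= 1.0                      # pixels mapped into the unit circle
    lam = inside.sum()                       # normalization factor lambda_N
    basis = radial_polynomial(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / lam * np.sum(f * basis * inside)
```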

As a matter of fact, if the size of the ROI is odd, it will have a center point with the following coordinates:

$$(c, r) = \left(\frac{N-1}{2},\ \frac{N-1}{2}\right) \tag{13}$$

Combining (12) and (13) yields the following result:

$$\rho_{cr} = 0, \qquad \theta_{cr} = \tan^{-1}\!\left(\frac{0}{0}\right) = \mathrm{NaN} \tag{14}$$

In fact, the phase value will be undefined. The best way to resolve this problem is to select an even image size. In that case, the image will not have a center pixel and the problem is resolved without any redundancy.

It can be shown that rotating the object around the z-axis does not influence the magnitude response of the Zernike moments [11,28]. Indeed, the rotation only influences the phase response of the Zernike moments. In other words, it can be stated that

$$Z^{r}_{n,m} = Z_{n,m}\, e^{-jm\alpha}, \qquad \left|Z^{r}_{n,m}\right| = \left|Z_{n,m}\right| \tag{15}$$

where Z^r_{n,m} and Z_{n,m} are the moments extracted from the rotated object and the original object, respectively, and α is the rotation angle. The magnitude of the Zernike moment of the object and that of the rotated object are equal. Thus, our proposed features are the magnitudes of the Zernike moments, which are proper descriptors of shape characteristics. Fig. 7 illustrates the magnitude plots of some low-order Zernike moments in the unit disk.

Note that high-order Zernike moments not only have a high computational complexity [10,16,17], but also exhibit a high sensitivity to noise [29]. In fact, they may diminish the performance of the system if they are not selected precisely. However, they might be better descriptors of shape and margin characteristics than low-order Zernike moments. Therefore, in order to analyze the effect of the orders of Zernike moments on the performance of the overall system, a group of high-order Zernike moments and a group of low-order Zernike moments have been extracted from the MSI and MMI images, separately. These two groups of Zernike moments are tabulated in Table 1. The first group includes 32 low-order moments which satisfy the following conditions [27]:

$$\text{Group 1} = \{Z_{n,m}\}\ \ \forall\ \begin{cases} 3 \le n \le 10 \\ |m| \le n \\ n-|m| = 2k,\ k \in \mathbb{N} \end{cases} \tag{16}$$

Moreover, the second group includes 32 high-order moments which satisfy the following conditions:

$$\text{Group 2} = \{Z_{n,m}\}\ \ \forall\ \begin{cases} 10 \le n \le 17 \\ |m| \le n \\ n-|m| = 4k,\ k \in \mathbb{N} \end{cases} \tag{17}$$
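For illustration, the two index sets of Eqs. (16) and (17) can be enumerated as follows (only non-negative m is listed, since the magnitudes are used and |Z_{n,-m}| = |Z_{n,m}|; the helper name is ours); both sets contain 32 moments, matching Table 1:

```python
def moment_group(n_values, step):
    """(n, m) pairs with 0 <= m <= n and n - m a multiple of `step`."""
    return [(n, m) for n in n_values for m in range(n % step, n + 1, step)]

group1 = moment_group(range(3, 11), step=2)    # low-order set of Eq. (16)
group2 = moment_group(range(10, 18), step=4)   # high-order set of Eq. (17)
print(len(group1), len(group2))                # prints: 32 32
```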

Hence, in the feature selection stage four different groups of features can be selected, which are:

(1) The magnitude of the low-order Zernike moments, which are extracted from MSI pictures (LMSI).

(2) The magnitude of the low-order Zernike moments, which are extracted from MMI pictures (LMMI).

(3) The magnitude of the high-order Zernike moments, which are extracted from MSI pictures (HMSI).

(4) The magnitude of the high-order Zernike moments, which are extracted from MMI pictures (HMMI).

Table 1. The proposed Zernike moments.

Group | n  | m                 | Number of moments
1     | 3  | 1, 3              | 32
      | 4  | 0, 2, 4           |
      | 5  | 1, 3, 5           |
      | 6  | 0, 2, 4, 6        |
      | 7  | 1, 3, 5, 7        |
      | 8  | 0, 2, 4, 6, 8     |
      | 9  | 1, 3, 5, 7, 9     |
      | 10 | 0, 2, 4, 6, 8, 10 |
2     | 10 | 2, 6, 10          | 32
      | 11 | 3, 7, 11          |
      | 12 | 0, 4, 8, 12       |
      | 13 | 1, 5, 9, 13       |
      | 14 | 2, 6, 10, 14      |
      | 15 | 3, 7, 11, 15      |
      | 16 | 0, 4, 8, 12, 16   |
      | 17 | 1, 5, 9, 13, 17   |

Fig. 7. Plots of the magnitude of low-order Zernike basis functions in the unit disk.

Either one of the introduced feature groups, a symmetric combination of them, or an asymmetric combination of them can be chosen and applied to the classification stage. Considering the performance of the overall system, the best group of features will be found. We are interested in knowing how the orders and repetitions of Zernike moments influence the overall CADx system's performance. Thus, the total number of moments applied to the classifier has always been kept constant, namely 32 in this research, while their orders and repetitions have been changed; this reveals the behavior of the system as a function of the orders and repetitions of the moments.

2.4. Classification

Artificial Neural Networks (ANNs) have been used widely as good classifiers in the medical diagnosis of mammography images. For instance, they have been utilized in the diagnosis of microcalcifications [30], detection of microcalcification clusters [31,32], diagnosis of masses [33], and detection of suspicious masses [34]. According to the literature, the MLP has always been a good classifier in mammography image processing applications [30,31]. Therefore, in this research, an MLP classifier has been employed.

OBL can improve the convergence rate of machine intelligence algorithms [21,35]. It has been used in some evolutionary algorithms and in Reinforcement Learning (RL), yielding interesting results [36]. Moreover, it has been utilized in neural network learning rules such as resilient back propagation with opposite weights [21] and back propagation with opposite activation functions [37]. In this research, the opposition-based idea is utilized in order to improve the convergence rate of the generic back propagation learning rule as well as to speed up the training [35].
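The core opposition-based idea, as described by Tizhoosh [21], is that for a candidate value w in an interval [a, b] its opposite a + b − w is also evaluated, and the better of the two is kept. A minimal, hypothetical sketch of opposition-based weight initialization (an illustration of the concept, not the exact learning rule of [21,35]) is:

```python
import numpy as np

def opposite(w: np.ndarray, low: float = -1.0, high: float = 1.0) -> np.ndarray:
    """Opposite point of w within [low, high]: the basic OBL transformation."""
    return low + high - w

def obl_init(rng: np.random.Generator, shape, loss_fn, low=-1.0, high=1.0):
    """Draw random weights, compute their opposites, and keep whichever set
    gives the lower loss; `loss_fn` is a hypothetical scoring callback."""
    w = rng.uniform(low, high, size=shape)
    w_opp = opposite(w, low, high)
    return w if loss_fn(w) <= loss_fn(w_opp) else w_opp
```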

Generally speaking, when a dataset with a large number of instances is available, the input patterns are usually partitioned into three equal-sized groups, called training, validation and testing. In fact, each group gets approximately 33% of the total input patterns [33]. However, in this research, due to the limited number of instances available in the MIAS database, 40% of the input patterns are dedicated to the training set, 30% of them are dedicated to the validation set, and the remaining 30% are dedicated to the testing set.

The proposed MLP classifier has been trained using the training set so that suitable weights are found. Moreover, the number of hidden layers and their nodes are changed until the best network topology, which yields the best accuracy as well as a small FNR, is found. In order to find the best topology and avoid over-training, the accuracy on the validation set has been evaluated every 1000 training epochs.

After finding the best topology and the appropriate number of training epochs with the aid of the validation set, the testing patterns, which are unseen by the trained classifier, are applied to the classifier and the performance is evaluated. Note that the Accuracy measure reported in the experiments and results section represents the ratio of correctly classified patterns to the total number of patterns in the testing set (i.e., unseen data).

The input layer has 32 nodes, which equals the number of utilized features. The sigmoid function has been utilized as the activation function of all internal nodes. Moreover, the output node uses a linear activation function. This increases the dynamic range of the output node, so the ROC curve can be computed more precisely [38].
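A minimal sketch of the forward pass of such a network, with sigmoid hidden layers and a single linear output node, is shown below; the layer widths in the example are illustrative assumptions, not the topologies reported in Section 3:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, hidden_weights, hidden_biases, w_out, b_out):
    """MLP with sigmoid hidden layers and one linear output node: the
    unbounded output score is what the ROC analysis thresholds later."""
    a = x
    for W, b in zip(hidden_weights, hidden_biases):
        a = sigmoid(a @ W + b)
    return a @ w_out + b_out

# Illustrative topology: 32 inputs, two hidden layers of 10 nodes, one output.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(32, 10)), rng.normal(size=(10, 10))]
bs = [np.zeros(10), np.zeros(10)]
score = mlp_forward(rng.normal(size=32), Ws, bs, rng.normal(size=10), 0.0)
```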

3. Experiments and results

The following terms are benchmark functions of a CADx system:

$$\mathrm{Accuracy} = \frac{TP+TN}{TN+FN+TP+FP} \tag{18}$$

$$\mathrm{FNR} = \frac{FN}{FN+TP} \tag{19}$$

$$\mathrm{Sensitivity} = \mathrm{TPR} = \frac{TP}{TP+FN} \tag{20}$$

$$\mathrm{Specificity} = 1-\mathrm{FPR} = \frac{TN}{TN+FP} \tag{21}$$

where FP, FN, TP and TN denote false-positive, false-negative, true-positive and true-negative answers, respectively. Moreover, FPR and TPR denote the false-positive rate and the true-positive rate, respectively.
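For reference, Eqs. (18)–(21) amount to the following straightforward computation from confusion-matrix counts (the function name is ours):

```python
def cadx_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Benchmark measures of Eqs. (18)-(21) from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),  # Eq. (18)
        "fnr":         fn / (fn + tp),                   # Eq. (19)
        "sensitivity": tp / (tp + fn),                   # Eq. (20), TPR
        "specificity": tn / (tn + fp),                   # Eq. (21), 1 - FPR
    }
```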


The most common way of reporting the accuracy of a binary prediction is using the true (or false) positives and true (or false) negatives separately [38]. This recognizes that a false-negative prediction may have different consequences from a false-positive one. In this research, a false-positive answer is less important than a false-negative answer, due to the fact that a false-negative answer may lead to the death of a patient. On the other hand, a false-positive answer leads to an unnecessary biopsy, which brings discomfort for women. A plot of Sensitivity (TPR) vs. 1-Specificity (FPR) obtained by changing the decision threshold is called the Receiver Operating Characteristic (ROC) [39]. The area under the ROC curve, Az, shows how successful the classification is [39].
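A minimal sketch of how such an ROC curve and its area can be obtained by sweeping the decision threshold over the classifier's output scores, with a simple trapezoidal estimate of Az (the function name is ours):

```python
import numpy as np

def roc_and_az(scores: np.ndarray, labels: np.ndarray):
    """ROC points (FPR, TPR) from a threshold sweep over the classifier output,
    plus a trapezoidal estimate of Az; labels are 1 (malignant) or 0 (benign)."""
    pos, neg = labels == 1, labels == 0
    thresholds = np.concatenate(([np.inf], np.sort(np.unique(scores))[::-1]))
    tpr = np.array([np.mean(scores[pos] >= t) for t in thresholds])
    fpr = np.array([np.mean(scores[neg] >= t) for t in thresholds])
    az = np.trapz(tpr, fpr)   # area under the ROC curve
    return fpr, tpr, az
```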

In this research, each of the HMMI, HMSI, LMMI, and LMSI features, or a combination of them, has been applied to MLP classifiers with different structures and different learning rules, and the results have been reported. Eventually, those features and classifiers which yield the best performance and the smallest FNR have been chosen as the final system.

Table 2. Diagnosis performance for HMSI features.

Name | No. of hidden layers | FPR (%) | FNR (%) | Accuracy (%) | Az
N1   | 1                    | 11.11   | 40      | 78.5         | 0.755
N2   | 2                    | 16.66   | 10      | 85.7         | 0.94
N3   | 3                    | 11.13   | 0       | 92.8         | 0.975

Fig. 8. ROC plot for N1, N2, and N3. The dashed line illustrates the ROC plot of a random decision-making system.

Table 3. Final results.

Name | Selected features    | Feature group | Learning rule | No. of training epochs | Order of moments (n) | No. of hidden layers | FPR (%) | FNR (%) | Accuracy (%) | Az
S1   | 32 shape             | HMSI          | BP            | 10,379                 | 10–17                | 3                    | 11.13   | 0.0     | 92.8         | 0.975
M1   | 32 margin            | HMMI          | BP            | 12,534                 | 10–17                | 3                    | 22.2    | 51.1    | 67.8         | 0.547
SMH1 | 16 shape + 16 margin | Complex       | BP            | 11,687                 | 10–17                | 1                    | 27.7    | 39.8    | 67.85        | 0.595
SMH2 | 20 shape + 12 margin | Complex       | BP            | 9851                   | 10–17                | 2                    | 22.2    | 20.2    | 78.57        | 0.816
S2   | 32 shape             | LMSI          | BP            | 9173                   | 3–10                 | 2                    | 5.5     | 0.0     | 96.43        | 0.976
M2   | 32 margin            | LMMI          | BP            | 13,647                 | 3–10                 | 2                    | 27.78   | 39.8    | 67.86        | 0.531
SML1 | 16 shape + 16 margin | Complex       | BP            | 12,118                 | 3–10                 | 1                    | 16.67   | 50.3    | 71.43        | 0.642
SML2 | 20 shape + 12 margin | Complex       | BP            | 10,226                 | 3–10                 | 1                    | 27.78   | 29.9    | 71.43        | 0.795
SO1  | 32 shape             | HMSI          | OBL           | 4945                   | 10–17                | 2                    | 16.66   | 9.9     | 85.71        | 0.872
MO1  | 32 margin            | HMMI          | OBL           | 6127                   | 10–17                | 2                    | 16.66   | 49.9    | 71.42        | 0.692
SO2  | 32 shape             | LMSI          | OBL           | 5326                   | 3–10                 | 2                    | 11.11   | 9.8     | 89.28        | 0.88
MO2  | 32 margin            | LMMI          | OBL           | 5792                   | 3–10                 | 2                    | 38.8    | 20.1    | 67.86        | 0.588

The HMSI features are the high-order Zernike moments which are extracted from the MSI images and contain shape characteristics. The utilized MLP has 32 input nodes and one output node. The number of hidden layers and their nodes are changed so that the best structure is found. The linear function is used as the activation function of the output node, which increases the dynamic range of that node. In order to evaluate the performance of the HMSI features, we have employed MLPs with 1, 2, and 3 hidden layers. Table 2 shows the structures which yield the best diagnosis performances. Although MLPs with 4 hidden layers need more training time, they do not deliver the desired performance, so their results are omitted for brevity. In order to find the FPR, FNR and accuracy in the table, the decision threshold is selected in such a manner as to yield the smallest number of wrong answers. However, the FNR, FPR and accuracy parameters alone are not sufficient measures for a CADx system. Therefore, the ROC plots of the systems tabulated in Table 2 are illustrated in Fig. 8.

The process explained in the previous paragraph has also been applied to the other groups of features, namely HMMI, LMSI, LMMI, and their combinations. The results are reported in Table 3. The first column is the name of each system, while the second column describes the features utilized in each of them. The third column shows the name of the feature group employed in each system. The 4th and 5th columns report the utilized learning rule and the number of training epochs of each system. Column 6 shows the orders employed in each system. Column 7 represents the number of hidden layers. Eventually, the four last columns of the table report the performances of the developed systems.

The first four rows of the table correspond to those systems which utilize the high-order Zernike moments and the BP learning rule. It is shown that S1, which uses the HMSI features, achieves an acceptable accuracy, a fair Az and an FNR which is almost equal to zero.

On the other hand, M1, which uses the HMMI features, has not yielded a fair accuracy and its FNR is significantly higher than that of S1. However, a symmetric combination of these two groups of features leads to the SMH1 system, which represents a smaller FNR than M1 while its FPR is almost unchanged. With an increasing number of HMSI features in the complex system, the FNR decreases while the FPR stays almost constant. It can be inferred that the Zernike moments which are extracted from the margin information are not suitable descriptors of the malignancy of masses. Indeed, the best performance belongs to the system that utilizes the shape descriptors. The ROC plots of the explained systems are illustrated in Fig. 9. It is obvious that S1 has the best Az.

The middle four rows in Table 3 belong to those systems which use low-order Zernike moments and the BP learning rule. These systems show the same behavior as those described in the previous paragraph.



The ROC plots of these systems are illustrated in Fig. 10. It is obvious that S2, which utilizes the shape descriptors, has the best ROC curve.

Note that those systems which use low-order Zernike moments need a smaller number of hidden layers. In other words, they have lower computational complexity in the feature extraction and classification stages.

The last four rows in Table 3 belong to those systems which use the OBL learning rule in their classification stage. Although the FNR, FPR, Accuracy and Az of the systems which utilize the OBL algorithm are somewhat poorer than those of the systems employing the BP learning rule, the number of training epochs for the OBL algorithm is significantly smaller. Therefore, utilizing the OBL learning rule in the classification stage increases the convergence rate and decreases the training time. The ROC plots of these systems are illustrated in Fig. 11.

It can be inferred that S1 and S2 represent the best performances among the systems employing high-order and low-order Zernike moments, respectively. Fig. 12 compares the ROC curves of these two systems. It can be seen that they have almost the same Az.

Fig. 9. ROC plot for S1, M1, SMH1, and SMH2. All of them utilize high-order Zernike moments.

Fig. 10. ROC plot for S2, M2, SML1, and SML2. All of them utilize low-order Zernike moments.

Moreover, S1 represents a higher specificity, while S2 represents a higher sensitivity. Thus, depending on the intended clinical application, either of them can be used. However, S2 not only has lower computational complexity than S1 but also represents somewhat better performance.

In addition, a statistical analysis has been performed to evaluate the level of uncertainty of the proposed system. In fact, the experiments have been conducted using 10-fold cross-validation [40]. To perform the test, the data is first partitioned into 10 equally sized segments, or folds.

Fig. 11. ROC plot for SO1, SO2, MO1, and MO2. All of them employ the OBL learning rule.

Fig. 12. ROC plot of S1 vs. S2; S2 yields higher sensitivity than S1, while S1 yields higher specificity.

Table 4. Performance measures using 10-fold cross-validation.

System                     | FPR (%) | FNR (%) | Accuracy (%)
Average on 10 folds for S1 | 10.55   | 5       | 91.4
Average on 10 folds for S2 | 8.2     | 3.2     | 93.6


Table 5. The proposed CADx system in comparison with other CADx systems.

Reference | Year | Feature extraction technique | Database | FPR (%) | FNR (%) | Accuracy (%) | Az
This research (S2) | 2011 | Zernike moments as shape descriptors | MIAS | 5.5 | 0.0 | 96.43 | 0.976
This research (S2 using 10-fold cross-validation) | 2011 | Zernike moments as shape descriptors | MIAS | 8.2 | 3.2 | 93.6 | –
Tahmasbi et al. [25] | 2010 | FTRD features | MIAS | 5.56 | 9.9 | 92.8 | 0.98
Boujelben et al. [41] | 2009 | Boundary vector of three derives of the radial distance measure, convexity and index angle | DDSM | – | – | 95.98 | –
Rojas et al. [42] | 2009 | Spiculation measure based on relative gradient orientation, spiculation measure based on signature information, and two measures of the fuzziness of mass margins | DDSM, MIAS | – | – | 81.0 | –
Mu et al. [43] | 2008 | Fourier factor, spiculation index, fractal dimension, compactness, and fractal dimension | MIAS | – | – | – | 0.92
Rangayyan et al. [44] | 2007 | Fractal dimension, fractional concavity | MIAS | – | – | – | 0.82
Rangayyan et al. [45] | 2006 | Index of convexity from the turning angle function | MIAS | – | – | – | 0.93
Rangayyan et al. [46] | 2000 | Spiculation index, compactness and fractional concavity | MIAS | – | – | 81.5 | 0.79


Then, 10 iterations of training and validation are performed; in each iteration a different fold of the data is held out for validation while the remaining 9 folds are used for training [40]. Table 4 presents the average values of the performance measures over the 10 folds.
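For illustration, the fold assignment used in such a k-fold protocol can be sketched as follows (the function name and the random shuffling are our own assumptions):

```python
import numpy as np

def k_fold_indices(n_samples: int, k: int = 10, seed: int = 0):
    """Yield (train_idx, heldout_idx) pairs for k-fold cross-validation:
    one fold is held out per iteration, the remaining k - 1 folds train the model."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        heldout_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, heldout_idx
```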

Finally, a brief comparison between our proposed approach and other recently developed CADx algorithms is presented in Table 5. We have tried to evaluate those systems that utilize geometric features as descriptors of mass shape and margin. Note that CADx systems that employ only such features are noticeably rare, as geometric features are usually used alongside other features such as density and texture. Moreover, a precise comparison of the systems is a difficult task, since different researchers have employed different mammography databases. Besides, most performance benchmarks are not reported.

According to Table 5, S2, which employs 32 low-order Zernike moments as shape descriptors, represents a fair Az which is comparable with other reported systems. In addition, its accuracy is noticeably higher than that of the other systems that utilized the MIAS database. As mentioned before, one of our objectives was reducing the FNR. Unfortunately, most researchers have not reported the FNR of their CADx systems. However, our achieved FNR is almost equal to zero and is better than that of [25].

The second row of the table shows the results of S2 based on the 10-fold cross-validation approach. Although the performance measures are slightly lower than those in the first row of the table, they are statistically validated and are more certain. Fortunately, these results are also comparable with the results of the other reported systems.

4. Conclusion

In this paper, a novel CADx system has been introduced for the diagnosis of breast masses. The Zernike moments are utilized as the descriptors of the shape and margin characteristics of the masses. The input ROIs are segmented and preprocessed; finally, two images which contain the shape and margin characteristics of the masses are applied to the feature extraction stage. Two groups of features, each containing 32 Zernike moments with different orders, are extracted from the images. Considering the performance of the overall system, the best 32 moments are selected and applied to a Multi-Layer Perceptron classifier, which is trained with both BP and OBL learning rules. The two final systems, which employ the shape descriptors, yield Az values equal to 0.975 and 0.976 and have good specificity and sensitivity, respectively. The FNR is almost equal to zero in both of these systems. Besides, the best achieved FPR is 5.5%. Although the FNR, FPR, Accuracy and Az of the systems which utilize the OBL algorithm are somewhat poorer than those of the systems employing the BP learning rule, the number of training epochs for the OBL algorithm is significantly smaller. Therefore, utilizing the OBL learning rule in the classification stage increases the convergence rate and decreases the training time.

It is worth mentioning that the feature selection stage utilized in this research is manual. Researchers are advised to develop this stage into an autonomous algorithm which takes the FNR and FPR parameters and then finds the best combination of features automatically.

Conflict of interest statement

None declared.

References

[1] American Cancer Society, Breast Cancer Facts & Figures 2009–2010, Atlanta, 2009.
[2] A.C. Bovik, Handbook of Image and Video Processing, 2nd ed., Elsevier Academic Press, 2005, pp. 1195–1217.
[3] T.Z. Tan, C. Quek, G.S. Ng, E.Y.K. Ng, A novel cognitive interpretation of breast cancer thermography with complementary learning fuzzy neural memory structure, Expert Systems with Applications 33 (2007) 652–666.
[4] Q. Fang, S.A. Carp, J. Selb, G. Boverman, Q. Zhang, D.B. Kopans, R.H. Moore, E.L. Miller, D.H. Brooks, D.A. Boas, Combined optical imaging and mammography of the healthy breast: optical contrast derived from breast structure and compression, IEEE Transactions on Medical Imaging 1 (28) (2009) 30–42.
[5] R.M. Rangayyan, F.J. Ayres, J.E.L. Desautels, A review of computer aided diagnosis of breast cancer: toward the detection of subtle signs, Journal of the Franklin Institute 344 (2007) 312–348.
[6] H.D. Cheng, X.J. Shi, R. Min, L.M. Hu, X.P. Cai, H.N. Du, Approaches for automated detection and classification of masses in mammograms, Pattern Recognition 39 (2006) 646–668.
[7] N. Szekely, N. Toth, B. Pataki, A hybrid system for detecting masses in mammographic images, IEEE Transactions on Instrumentation and Measurement 3 (55) (2006) 944–952.
[8] X. Zhang, X. Gao, Y. Wang, MCs detection with combined image features and twin support vector machines, Journal of Computers 3 (4) (2009) 215–221.
[9] American College of Radiology, ACR BI-RADS—Mammography, Ultrasound & Magnetic Resonance Imaging, 4th ed., American College of Radiology, Reston, VA, 2003.
[10] S.K. Hwang, W.Y. Kim, A novel approach to the fast computation of Zernike moments, Pattern Recognition 39 (2006) 2065–2076.
[11] W. Wang, J.E. Mottershead, C. Mares, Mode-shape recognition and finite element model updating using the Zernike moment descriptor, Mechanical Systems and Signal Processing 23 (2009) 2088–2112.
[12] Sh. Li, M.Ch. Lee, Ch.M. Pun, Complex Zernike moments features for shape-based image retrieval, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 1 (39) (2009) 227–237.
[13] X. Li, A. Song, A new edge detection method using Gaussian–Zernike moment operator, in: Proceedings of the IEEE 2nd International Asia Conference on Informatics in Control, Automation and Robotics, 2010, pp. 276–279.


[14] J. Haddadnia, M. Ahmadi, K. Faez, An efficient feature extraction method with pseudo-Zernike moment in RBF neural network-based human face recognition system, Journal of Applied Signal Processing 9 (2003) 890–901.
[15] Z. Chen, Sh.K. Sun, A Zernike moment phase-based descriptor for local image representation and matching, IEEE Transactions on Image Processing 1 (19) (2010) 205–219.
[16] Ch.Y. Wee, R. Paramesran, F. Takeda, Fast computation of Zernike moments for rice sorting system, in: Proceedings of the IEEE International Conference on Image Processing (ICIP), 2007, pp. VI-165–VI-168.
[17] B. Fu, J. Liu, X. Fan, Y. Quan, A hybrid algorithm of fast and accurate computing Zernike moments, in: Proceedings of the IEEE International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), 2007, pp. 268–272.
[18] T. Rui-ying, G. Wen, M. Li-hua, L. Ben-yao, X. Guang-wei, BI-RADS categorization and positive predictive value of mammographic features, Chinese Journal of Cancer Research 3 (13) (2001) 202–205.
[19] C. Balleyguier, et al., BIRADS classification in mammography, European Journal of Radiology 61 (2007) 192–194.
[20] N.F. Boyd, et al., Heritability of mammographic density, a risk factor for breast cancer, New England Journal of Medicine 12 (347) (2002) 886–894.
[21] H.R. Tizhoosh, Opposition-based learning: a new scheme for machine intelligence, in: Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA'05), Vienna, Austria, 2005.
[22] J. Suckling, et al., The Mammographic Image Analysis Society digital mammogram database, Excerpta Medica, International Congress Series, 1069, 1994, pp. 375–378.
[23] W.K. Pratt, Digital Image Processing: PIKS Inside, 3rd ed., John Wiley & Sons, 2001 (Chapter 18).
[24] L.M. Bruce, R.R. Adhami, Classifying mammographic mass shapes using the wavelet transform modulus-maxima method, IEEE Transactions on Medical Imaging 12 (18) (1999) 1170–1177.
[25] A. Tahmasbi, F. Saki, S.B. Shokouhi, Mass diagnosis in mammography images using novel FTRD features, in: Proceedings of the IEEE 17th Iranian Conference on Biomedical Engineering (ICBME'2010), Isfahan, Iran, 2010, pp. 1–5.
[26] A. Tahmasbi, F. Saki, S.M. Seyedzadeh, S.B. Shokouhi, Classification of breast masses based on cognitive resonance, in: Proceedings of the IEEE 3rd International Conference on Signal Acquisition and Processing (ICSAP'2011), Singapore, 2011, pp. V1-97–V1-101.
[27] A. Tahmasbi, F. Saki, S.B. Shokouhi, An effective breast mass diagnosis system using Zernike moments, in: Proceedings of the IEEE 17th Iranian Conference on Biomedical Engineering (ICBME'2010), Isfahan, Iran, 2010, pp. 1–4.
[28] A. Khotanzad, Y.H. Hong, Invariant image recognition by Zernike moments, IEEE Transactions on Pattern Analysis and Machine Intelligence 5 (12) (1990) 489–497.
[29] C.H. Teh, R.T. Chin, On image analysis by the methods of moments, IEEE Transactions on Pattern Analysis and Machine Intelligence 4 (10) (1988) 496–513.
[30] L. Wei, Y. Yang, R.M. Nishikawa, Y. Jiang, A study on several machine-learning methods for classification of malignant and benign clustered microcalcifications, IEEE Transactions on Medical Imaging 3 (24) (2005) 371–380.
[31] S. Halkiotis, T. Botsis, M. Rangoussi, Automatic detection of clustered microcalcifications in digital mammograms using mathematical morphology and neural networks, Signal Processing 87 (2007) 1559–1568.
[32] F. Dehghan, H. Abrishami-Moghaddam, Comparison of SVM and neural network classifiers in automatic detection of clustered microcalcifications in digitized mammograms, in: Proceedings of the IEEE 7th International Conference on Machine Learning and Cybernetics, Kunming, China, 2008, pp. 756–761.
[33] B. Verma, P. McLeod, A. Klevansky, Classification of benign and malignant patterns in digital mammograms for the diagnosis of breast cancer, Expert Systems with Applications 37 (2010) 3344–3351.
[34] K. Bovis, S. Singh, J. Fieldsend, C. Pinder, Identification of masses in digital mammograms with MLP and RBF nets, in: Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 1, 2000, pp. 342–347.
[35] F. Saki, A. Tahmasbi, S.B. Shokouhi, A novel opposition-based classifier for mass diagnosis in mammography images, in: Proceedings of the IEEE 17th Iranian Conference on Biomedical Engineering (ICBME'2010), Isfahan, Iran, 2010, pp. 1–4.
[36] H.R. Tizhoosh, M. Ventresca, Oppositional Concepts in Computational Intelligence, Springer, 2008.
[37] M. Ventresca, H.R. Tizhoosh, Improving the convergence of back propagation by opposite transfer functions, in: Proceedings of the International Joint Conference on Neural Networks, 2006, pp. 4777–4784.
[38] M. Gonen, et al., Receiver operating characteristic (ROC) curves, paper 210, in: Proceedings of the SUGI, vol. 31, 2003.
[39] J. Bozek, M. Mustra, K. Delac, M. Grgic, A survey of image processing algorithms in digital mammography, Recent Advances in Multimedia Signal Processing and Communications 231 (2009) 631–657.
[40] P. Refaeilzadeh, L. Tang, H. Liu, Cross-validation, Encyclopedia of Database Systems, 2009, pp. 532–538.
[41] A. Boujelben, A.Ch. Chaabani, H. Tmar, M. Abid, Feature extraction from contours shape for tumor analyzing in mammographic images, in: Proceedings of the IEEE International Conference on Digital Image Computing: Techniques and Applications, 2009, pp. 395–399.
[42] A. Rojas-Dominguez, A.K. Nandi, Development of tolerant features for characterization of masses in mammograms, Computers in Biology and Medicine 39 (2009) 678–688.
[43] T. Mu, A.K. Nandi, R.M. Rangayyan, Classification of breast masses using selected shape, edge-sharpness, and texture features with linear and kernel-based classifiers, Digital Imaging 2 (21) (2008) 153–169.
[44] R.M. Rangayyan, T.M. Nguyen, Fractal analysis of contours of breast masses in mammograms, Digital Imaging 3 (20) (2007) 223–237.
[45] R.M. Rangayyan, D. Guliato, J. Cavalho, S. Santiago, Feature extraction from the turning angle function for the classification of contours of breast tumors, Digital Imaging (2007).
[46] R.M. Rangayyan, N.R. Mudigonda, J.E.L. Desautels, Boundary modeling and shape analysis methods for classification of mammographic masses, Medical and Biological Engineering and Computing 38 (2000) 487–496.

Amir Tahmasbi received his B.Sc. degree in Electronics Engineering from the Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran, in 2008. Currently, he is pursuing his M.Sc. degree in Electronics Engineering at the Department of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran, Iran. His research interests lie in the fields of image processing, biomedical image analysis, real-time digital signal processing and pattern recognition. Moreover, he is interested in implementing digital filters as well as image processing algorithms on TMS320C6xxx platforms.

Fatemeh Saki received her B.Sc. degree in Electronics Engineering from the Department of Electrical Engineering, Chamran University, Ahwaz, Iran, in 2008. She is currently studying for the M.Sc. degree in Electronics Engineering at the Department of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran, Iran. Her research interests include CADx system design for the diagnosis of breast cancer and neural networks.

Shahriar B. Shokouhi received his B.Sc. and M.Sc. degrees in Electronics in 1986 and 1989, both from the Department of Electrical Engineering, Iran University of Science & Technology (IUST). He received his Ph.D. in Electrical Engineering in 1999 from the School of Electrical Engineering, University of Bath, England. Since 2000, he has been an assistant professor in the Department of Electrical Engineering, IUST. His research interests include signal and image processing, machine vision, pattern recognition and intelligent systems design.