
Digital Signal Processing 19 (2009) 668–677


Interpretation of MR images using self-organizing maps and knowledge-based expert systems

Inan Güler ∗, Ayse Demirhan, Rukiye Karakıs

Department of Electronics and Computer Technology, Gazi University, Faculty of Technology, 06500 Teknikokullar, Ankara, Turkey

Article history: Available online 22 August 2008

Keywords: MR images; Image segmentation; Self-organizing maps; Knowledge-based expert systems

Abstract

A new image segmentation system is presented to automatically segment and label brain magnetic resonance (MR) images to show normal and abnormal brain tissues using self-organizing maps (SOM) and knowledge-based expert systems. The elements of a feature vector are formed by image intensities, first-order features, texture features extracted from the gray-level co-occurrence matrix, and multiscale features. This feature vector is used as the input to the SOM. The SOM is used to oversegment the images, and a knowledge-based expert system is used to join and label the segments. The spatial distributions of the segments extracted from the SOM are considered as well as their gray-level properties. Segments are labeled as background, skull, white matter, gray matter, cerebrospinal fluid (CSF) and suspicious regions.

© 2008 Elsevier Inc. All rights reserved.

1. Introduction

MR imaging is a popular, painless and noninvasive brain imaging technique. It is used for the diagnosis of many diseases and gives high-quality, informative images of the inner structure of the brain; it shows the anatomy of the brain with good spatial and contrast resolution. MR imaging uses information from water molecules, which form 75% of the body. It uses radio frequency excitation: the transformation and processing of the energy of the hydrogen atoms excited by the radio frequency constructs the image, and different slices of the brain are imaged by the movement of the machine. The water contents of healthy and diseased tissues are different; diseased or injured tissue contains more water than healthy tissue, and the power of the signal depends on the water content of the tissue.

Image segmentation is the separation of an image into its constituent parts and objects. Its purpose is to separate the desired elements, grouped by shared properties, from the rest of the image. In medical image segmentation, different image components are used for the analysis of different structures and tissues, the spatial distribution of functional activity, and pathologic regions. Segmentation of brain MR images consists of stripping the brain from the skull and classifying pixels as brain/not-brain, or of segmenting tissue parts as white matter, gray matter, cerebrospinal fluid or suspicious region [1,2].

Medical image segmentation is usually performed manually: areas of interest are drawn by expert radiologists and doctors. This approach has four disadvantages. (1) It is a very time-consuming and tiring process; segmentation of a series of 1500–2000 transverse images of size 512 × 512 takes approximately 2 to 4 h [3]. (2) Manual segmentation is not objective; segmentations produced by different experts can differ considerably. For example, brain tumor segmentations performed by experts differ by approximately 14–22% [4]. (3) Segmentations done by the same expert on different occasions can also differ. (4) The brightness and contrast of the display screen can affect the segmentation accuracy and the subsequent analyses [5]. Using computers for segmentation can overcome these problems.

* Corresponding author. Fax: +90 312 2120059. E-mail address: [email protected] (I. Güler).

1051-2004/$ – see front matter © 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.dsp.2008.08.002

Fig. 1. The flowchart of the proposed system.

The most frequently used techniques for medical image segmentation can be listed as thresholding, region-based segmentation, edge-based segmentation and classification-based segmentation [1–20]. Region-based segmentation algorithms include divide-and-combine [13], region growing [4,15], and watershed algorithms [16–18]. Edge-based algorithms work with edge detectors; for this purpose, the traditional Sobel and Laplacian detectors can be used [19]. The Sobel operator acts as a first derivative and the Laplacian as a second derivative. Since the Laplacian is sensitive to noise, it is usually used together with a Gaussian filter. Classification-based algorithms can be constructed according to criteria such as brightness similarity, contour energy and curvilinear continuity. Classification-based segmentation needs training, and its performance depends strictly on the inputs of the classifier and the training parameters [20].

Knowledge-based expert systems are also used for the segmentation of images. Li et al. [21] used unsupervised c-means clustering and expert systems together to segment brain images according to knowledge bases. Loncaric et al. labeled computerized tomography images using a rule base and developed an expert system to segment brain edema and lesions [12,22,23].

Artificial neural networks are among the effective methods used for classification-based segmentation. They are especially convenient and widely used for pattern recognition tasks such as classification, clustering and feature extraction [2,6–8,10,11,13,20].

In this study, a technique for brain MR image segmentation is presented that shows different normal and abnormal brain tissues, based on the unsupervised clustering capability of the SOM and a knowledge-based expert system. Image intensities, first-order features, texture features extracted from the gray-level co-occurrence matrix, and multiscale features are extracted from the images. Principal component analysis (PCA) is used to select the most discriminant features from the feature vector. The SOM is trained with the selected features, and the image is oversegmented into regions using the SOM. These regions are then evaluated by a knowledge-based expert system using their gray-level properties and spatial distributions, in order to join neighboring regions that have the same properties. The proposed method has been validated on T2 weighted healthy images, T2 weighted images with a Glioma TlTc-SPECT brain tumor, and a brain model. MR images obtained from the publicly available The Whole Brain Atlas project [24] are used in this study. A brain model proposed by M.N. Ahmed [25] is used to test the performance of the system.

2. Method and model

The purpose of image segmentation is to group all pixels in the image into sets that have the same properties, for use in subsequent processes [6]. Image segmentation is the first and most difficult stage of low-level image analysis; later high-level processes such as pattern recognition, labeling and analysis depend highly on the accuracy of the segmentation.

In this study, decision making was performed in four stages: feature extraction, feature selection, segmentation using the SOM, and labeling using a knowledge-based expert system (Fig. 1).

2.1. Feature extraction

The main purpose of feature extraction is to represent the original data set by features suitable for distinguishing one input pattern from another. The selected features must represent the characteristics of the input pattern.

The features that will be used as input to the SOM are extracted from the image in four groups: first-order features, texture features, multiscale features and moment features.

First-order features are gray-level distributions of one pixel or of the pixels in a two-dimensional window W of size w × w. These features do not take the spatial arrangement of the pixels into consideration. In this category, a 5 × 5 window is shifted over the image, and the mean and variance of this window are calculated, as in the sketch below.
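As an illustration, a minimal sketch of these two features in Python is given below; the use of scipy.ndimage and the function name are assumptions, since the paper does not describe an implementation:

```python
# A minimal sketch of the first-order features: per-pixel mean and variance
# over a sliding 5x5 window. Window size matches the text; the library choice
# and names are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def first_order_features(image: np.ndarray, w: int = 5):
    """Return per-pixel mean and variance over a w x w window."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=w)           # E[x] over the window
    mean_sq = uniform_filter(img * img, size=w)  # E[x^2] over the window
    var = mean_sq - mean * mean                  # Var[x] = E[x^2] - (E[x])^2
    return mean, var
```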

Fig. 2. Mapping of the feature vector X to the output layer. The feature vector is connected to the outputs by the weight vector w.

Textural features are second-order statistical features: the gray levels of two pixels at two different positions are compared. Textural features can be computed by different methods, such as the co-occurrence matrix or the first-order gradient distribution. In this study, textural features are extracted using the gray-level co-occurrence matrix proposed by Haralick et al. [26]. The co-occurrence matrix is an estimate of C(i, j | d, θ), the probability of going from gray level i to gray level j, given the distance d between two pixels along direction θ. The matrix is square, and its dimension is the number of discrete gray levels in the image. Four directions are usually considered, θ = 0°, 45°, 90° and 135°, to guarantee that the texture features are invariant under rotation; Haralick et al. [26] suggested using the average over the 4 directions. A distance of d = 1 or 2 is typically used. In this study, a 5 × 5 window is shifted along the image, gray-level co-occurrence matrices are calculated, and the contrast, homogeneity, energy and entropy features that represent the image's textural characteristics are extracted (a code sketch follows).
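A hedged sketch of these texture features, using recent scikit-image's GLCM utilities (the library choice is an assumption; the paper does not name one). Contrast, homogeneity and energy come from graycoprops; entropy is computed directly from the normalized matrix, averaged over the four directions as suggested above:

```python
# Texture features of a small window from its gray-level co-occurrence matrix.
# The window must be an unsigned-integer image with values < levels.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window: np.ndarray, levels: int = 256, d: int = 1):
    """Contrast, homogeneity, energy and entropy, averaged over 4 directions."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
    P = graycomatrix(window, distances=[d], angles=angles,
                     levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(P, "contrast").mean()
    homogeneity = graycoprops(P, "homogeneity").mean()
    energy = graycoprops(P, "energy").mean()
    p = P[:, :, 0, :]  # (levels, levels, 4): one normalized matrix per angle
    # log2(p + (p == 0)) maps zero entries to log2(1) = 0, avoiding log(0).
    entropy = -(p * np.log2(p + (p == 0))).sum(axis=(0, 1)).mean()
    return contrast, homogeneity, energy, entropy
```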

Scale space consists of blurred copies of the input image and is a powerful tool for extracting features from images. A Gaussian filter (Eq. (1)) is used to obtain the blurred versions of the image:

G_\sigma(x, y) = \frac{1}{(\sigma \sqrt{2\pi})^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}},   (1)

where \sigma is the standard deviation and determines the width of the Gaussian kernel at every (x, y) coordinate.

In this study, four levels of blurring are calculated using Gaussian filters with four different values of \sigma. Because the Gaussian partial derivatives alone are not invariant under a particular coordinate transformation, it is necessary to construct particular combinations of derivatives as invariant features. Sobel and Laplacian filters are applied to the blurred images to obtain the first and second derivatives; hence 8 different features are extracted (a sketch is given below).
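A minimal sketch of the multiscale features, assuming scipy.ndimage and four illustrative σ values (the paper does not list its σ values). Each blur level contributes a Sobel (first-derivative) and a Laplacian (second-derivative) response, giving 4 × 2 = 8 feature images:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, sobel

def multiscale_features(image: np.ndarray, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Sobel gradient magnitude and Laplacian at four Gaussian blur levels."""
    img = image.astype(np.float64)
    features = []
    for s in sigmas:
        blurred = gaussian_filter(img, sigma=s)
        # Gradient magnitude from the two Sobel directional derivatives.
        grad = np.hypot(sobel(blurred, axis=0), sobel(blurred, axis=1))
        features.append(grad)
        features.append(laplace(blurred))  # second derivative
    return np.stack(features)              # shape: (8, H, W)
```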

Moments are one of the traditional methods for feature extraction from images; the use of moments for image analysis was proposed by Hu [27]. In this paper, seven moments are used to extract features from the images (a sketch follows).
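A hedged sketch using scikit-image's Hu-moment routines; the library choice is an assumption, since the paper only cites Hu's seven invariant moments without naming an implementation:

```python
import numpy as np
from skimage.measure import moments_central, moments_hu, moments_normalized

def hu_moment_features(window: np.ndarray) -> np.ndarray:
    """Seven Hu invariant moments of an image window."""
    mu = moments_central(window.astype(np.float64))  # central moments
    nu = moments_normalized(mu)                      # scale-normalized moments
    return moments_hu(nu)                            # the 7 invariant moments
```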

2.2. Feature selection

In image segmentation there can be hundreds of features that characterize the data set. Most of these features are useless for classification: they decrease the performance of the classifier and increase the processing time. Nevertheless, for high classification performance there must be enough features. Designing an effective and reliable image-understanding system that uses the minimum sufficient number of features is therefore important, and many algorithms have been developed to find the optimum subset of the candidate features.

In this study, PCA is used for dimension reduction of the feature vector. PCA is a way of identifying patterns in data and expressing the data so as to highlight their similarities and differences. Since finding patterns in high-dimensional data can be hard, PCA is a powerful tool for analyzing data. PCA is a multivariate statistical method that explains the variance–covariance structure of a set of variables through their linear combinations, providing dimension reduction and interpretation. In this method, p variables are transformed into k linear, orthogonal and independent variables (k < p). PCA finds the eigenvalues of the covariance matrix and maximizes the variance of the data set (a sketch follows).
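As a worked illustration of this description, a minimal PCA via eigendecomposition of the covariance matrix might look as follows; variable names are illustrative:

```python
# Project an n x p feature matrix onto its k principal components by
# eigendecomposing the p x p covariance matrix, as described above.
import numpy as np

def pca_reduce(X: np.ndarray, k: int) -> np.ndarray:
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # p x p covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # k largest components
    return Xc @ top                         # n x k reduced feature matrix
```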

2.3. Segmentation using SOM

Image segmentation can be viewed as the clustering of feature vectors, where each pixel is associated with a feature vector built from its textural features. An input image of size N_x × N_y is mapped to M regions, R = {R_i : 1 ≤ i ≤ M}. The SOM is used to learn or set this mapping.

The SOM algorithm maps a set of input vectors to output units according to their characteristic features. The SOM reduces the dimension of the data to a map, helps in understanding high-dimensional data, and groups similar data together. The SOM clusters the data by having several units compete for the current object: the data are presented to the system and the neurons are trained with information about the inputs. The unit whose weight vector is closest to the current object becomes the winning unit. A simple SOM consists of two layers: the first layer contains the input nodes and the second contains the output nodes, arranged in a two-dimensional grid (Fig. 2). Every input is connected to every output with adjustable weights [28].

The SOM classifier determines the input type based on an m-dimensional feature vector. In this study, the gray-level image I is transformed into a feature vector X = [x_0, x_1, ..., x_{m−1}], where every x_i is a different feature and m is the dimension of the feature vector.

Initially, the values of the weight vector can be random or linear; the weights are adjusted as the network learns. Around the processing element that gives the best answer to the input, a two-dimensional neighborhood set N_c(t) is constructed. It can have a rectangular or hexagonal structure, and the neighborhood of each output neuron can be calculated for a given radius [6].

The SOM algorithm can be described as follows (a code sketch of this training loop is given after the steps):

Step 1. The weights are set to small random values and the initial neighborhood radius is determined.

Step 2. A new input is presented.

Step 3. The distance of the input to every output node is calculated:

d_j = \sum_{i=1}^{M} \bigl(x_i(t) - w_{ij}(t)\bigr)^2,   (2)

where d_j is the distance between the input and output node j, x_i(t) is the value of the ith element of the input vector at time t, and w_{ij}(t) is the value of the weight between the ith input and the jth output node at time t.

Step 4. The output neuron with the minimum distance d_j is selected as the winner.

Step 5. The weights of the winning neuron j and its neighbors are recalculated as

w_{ij}(t+1) = w_{ij}(t) + \alpha(t) \bigl(x_i(t) - w_{ij}(t)\bigr),   \forall j \in N_c(t),   (3)

and for the neurons that lose the competition as

w_{ij}(t+1) = w_{ij}(t),   \forall j \notin N_c(t),   (4)

where \alpha(t) is the gain factor that decreases in time (0 < \alpha(t) < 1) and N_c(t) is the neighborhood set that narrows in time.

Step 6. If N_c(t) \neq 0, go back to Step 2.
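Purely as an illustration of Steps 2–5, a minimal sequential update in Python might look as follows; the linear decay schedules and the Gaussian neighborhood are assumptions, since the paper does not state them:

```python
# One sequential SOM training step (Eqs. (2)-(4)), for a flat list of map
# units with 2-D grid coordinates. Names and schedules are illustrative.
import numpy as np

def som_step(x, weights, coords, t, n_steps, alpha0=0.5, radius0=3.0):
    """x: (m,) input; weights: (n_units, m); coords: (n_units, 2) grid positions."""
    frac = t / n_steps
    alpha = alpha0 * (1.0 - frac)               # gain factor decreasing in time
    radius = max(radius0 * (1.0 - frac), 0.5)   # neighborhood narrowing in time
    d = ((x - weights) ** 2).sum(axis=1)        # Eq. (2): distance to every unit
    bmu = int(np.argmin(d))                     # winner (best matching unit)
    grid_d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-grid_d2 / (2.0 * radius ** 2))  # Gaussian neighborhood weights
    # Eqs. (3)/(4): units far outside the radius are effectively unchanged.
    weights += alpha * h[:, None] * (x - weights)
    return weights
```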

The SOM network is developed using the SOM Toolbox for MATLAB [29]. The feature vector described above is used as the input to the SOM. The network is initialized linearly and the batch training algorithm is used. Batch training is faster than sequential training: it is iterative like sequential training, but instead of presenting one data vector to the network at a time for weight adjustment, all data vectors are presented to the map at once. In every training step the data set is partitioned so that every data vector belongs to the Voronoi set of its closest map unit. The sum of the data vectors in the Voronoi set V_i of unit i is

S_i(t) = \sum_{j=1}^{n_{V_i}} x_j,   (5)

where n_{V_i} is the number of samples in the Voronoi set of unit i. The new values of the weight vectors are then calculated as

w_i(t+1) = \frac{\sum_{j=1}^{m} h_{ij}(t) \, s_j(t)}{\sum_{j=1}^{m} n_{V_j} h_{ij}(t)},   (6)

where m is the number of map units, s_j(t) is the sum in Eq. (5) for unit j, h_{ij}(t) is the neighborhood function value between units i and j, and n_{V_j} is the number of samples in the Voronoi set V_j [28].

Two evaluation criteria, resolution and preservation of topology, are used to measure the quality of the SOM [29]. One method to evaluate the resolution of the SOM is the mean quantization error, Q_err: the mean distance between every data vector and its best matching unit (BMU) [30]. Preservation of topology can be measured by the topographic error, T_err: the ratio of data vectors for which the first and second BMUs are not neighbors of each other [31]. In this study both Q_err and T_err are used to evaluate the performance of the SOM (a sketch of both indices follows).
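The SOM Toolbox computes these indices itself; purely as an illustration, a minimal NumPy version of both measures might look as follows (the grid adjacency test assumes a rectangular grid with unit spacing):

```python
# Mean quantization error Q_err and topographic error T_err for a trained map.
import numpy as np

def som_quality(X, weights, coords):
    """X: (n, m) data; weights: (n_units, m); coords: (n_units, 2) grid positions."""
    # Distances from every data vector to every map unit.
    d = np.linalg.norm(X[:, None, :] - weights[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    bmu1, bmu2 = order[:, 0], order[:, 1]       # first and second BMUs
    q_err = d[np.arange(len(X)), bmu1].mean()   # mean distance to the BMU
    # T_err: fraction of samples whose two BMUs are not grid neighbors.
    grid_dist = np.abs(coords[bmu1] - coords[bmu2]).max(axis=1)
    t_err = float((grid_dist > 1).mean())
    return q_err, t_err
```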

Using a SOM with a [7 7] map size, the T2 weighted MR images are segmented into 49 regions.

2.4. Labeling of segments using knowledge-based expert system

The T2 weighted healthy images, the T2 weighted images with a Glioma TlTc-SPECT brain tumor, and the brain model that are oversegmented by the SOM are joined and labeled by a knowledge-based expert system.

Expert systems are used in situations where an expert is needed, such as analysis, classification and diagnosis. Knowledge is obtained by working with experts while the system is being constructed, and is transferred to a computer. The knowledge base must be extensible. The best way to represent the obtained knowledge is to construct a rule base. A rule base consists of if–then rules; every rule has a condition part and a result part. Rules can be expressed as "if A and B then X" or "if A or C then X."

Rules for MR image segmentation are constructed using the region characteristics of known MR image properties and their relationships. The rules are fixed and independent of the image; the features extracted from the image, however, depend on the image, and the segments found by the SOM and used in the knowledge-based expert system depend on the feature vector. Rules for brain structures such as white matter, gray matter, CSF, suspicious region, skull and background are constructed using region properties and neighborhood. The region properties used in this study are the segment values found by the SOM, the region ID, the region's starting and ending X and Y coordinates, the gray-level value of the original MR image, and the number of pixels in the region.


Fig. 3. Brain model image.

Table 1
Brain model description

Abbreviation  Structure               Gray level value
SK            Skin                    40
WM            White matter            250
GM            Gray matter             80
CC            Corpus callosum         100
LV            Left ventricle          30
RV            Right ventricle         30
LCN           Left caudate nucleus    130
RCN           Right caudate nucleus   130
LTN           Left thalamus nucleus   140
RTN           Right thalamus nucleus  140
FO            Fornix                  200
BG            Background              0

Neighboring regions that have the same properties are joined together. These regions are then labeled using a knowledge base constructed from expert knowledge. The most important properties in the rules concern a region's intensity and its neighborhood. Because different tissues have different intensity values, the intensity values between 0 and 1 are divided into five categories: "very bright," "bright," "medium," "dark" and "very dark." Labeling is done in the following stages: first the background, then the skull and the brain are labeled; after that, white matter, gray matter, CSF and suspicious regions (if they exist) are labeled. The most difficult stage is labeling the suspicious regions, since they can be anywhere in the brain and every tumor has different characteristics.

The rules used to label the background are (a sketch of these rules as code is given below): (1) If the region's intensity value is very dark and its number of pixels is greater than 30, then the region is background. (2) If the region is next to a background area and its intensity value is very dark, then the region is background. (3) If the region is next to a background area and to an unlabeled region, and the unlabeled region is bright, then the region is background and the unlabeled neighboring region is background.
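Purely as an illustration, the three rules might be written as code roughly as follows; the Region structure and all names are assumptions, since the paper describes its rule base only in prose:

```python
# An illustrative sketch of the three background rules as if-then code.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Region:
    intensity: str                 # "very dark", "dark", ..., "very bright"
    n_pixels: int
    label: Optional[str] = None
    neighbors: List["Region"] = field(default_factory=list)

def apply_background_rules(region: Region) -> None:
    touches_bg = any(n.label == "background" for n in region.neighbors)
    # Rule 1: very dark and more than 30 pixels -> background.
    if region.intensity == "very dark" and region.n_pixels > 30:
        region.label = "background"
    # Rule 2: very dark and next to a background area -> background.
    elif touches_bg and region.intensity == "very dark":
        region.label = "background"
    # Rule 3: next to a background area and to a bright unlabeled region ->
    # this region and that unlabeled neighbor are both background.
    elif touches_bg:
        for n in region.neighbors:
            if n.label is None and n.intensity == "bright":
                region.label = "background"
                n.label = "background"
```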

Because there is skin/fatty tissue between the background and the skull, regions that are next to the background and to a bright area are labeled as skin/fatty tissue.

After labeling the background and the skin/fatty tissue, similar rules for the skull, gray matter, white matter and suspicious regions are executed.

2.5. Brain model

A brain model presented by M.N. Ahmed is used to test the system [25]. The model is intended to evaluate the robustness of the system described above. The brain model is a 3D model of the brain of size 256 × 256 × 256, consisting of brain tissues with known sizes and properties. The gray level of every tissue is given by one constant value; the gray levels associated with the brain tissues in the model are given in Table 1. Because we worked with 2D images, a 256 × 256 slice of the model is used (Fig. 3).

3. Results and discussion

In this study, the brain model is used to test the system. T2 weighted healthy images and T2 weighted images with a Glioma TlTc-SPECT brain tumor are segmented using the proposed system. Features are extracted from these images and used as input to the SOM. The results obtained from the SOM are evaluated and labeled by an expert system using a knowledge base. The applied method is observed to be successful in distinguishing the different tissues of the brain.

It is observed that using a 5 × 5 window for feature extraction preserves the details in the image without increasing the feature extraction time excessively. Reducing the feature vector by selecting, with PCA, the features most important for the segmentation from the 21 extracted features decreased the processing time. Considering that using too many features as input to the SOM decreases the performance, it is clear that feature selection increases the performance of the classifier; however, there must be enough features to represent the different region properties. Normalization of the feature vector directly affects the classification performance. Table 2 shows the SOM's performance with different normalization algorithms and different numbers of principal components selected by PCA.

Table 2
Performance of the SOM (accuracy, %) with different normalization algorithms and different numbers of principal components

                               Principal component number
Image type     Normalization   4      5      6      7      8      9
T2 healthy     Unnormalized    92.86  91.99  91.63  90.85  90.89  90.59
               histD           95.34  94.44  92.00  62.63  91.20  89.88
               var             88.90  87.12  54.94  80.49  79.43  41.58
               log             95.83  95.37  95.32  94.67  94.58  94.15
               Range           96.78  96.29  96.05  95.25  95.23  65.39
               Logistic        95.41  94.81  93.61  92.91  92.87  61.16
               histC           95.77  65.18  94.01  92.58  61.67  60.83
T2 with tumor  Unnormalized    62.49  91.01  91.09  90.15  90.09  89.76
               histD           66.10  65.70  63.97  92.03  90.83  60.74
               var             89.41  80.70  82.5   75.57  45.56  69.68
               log             65.02  95.17  94.57  66.14  93.91  95.02
               Range           65.78  96.18  95.97  65.17  65.79  94.59
               Logistic        96.80  94.90  94.41  63.34  91.28  90.02
               histC           67.32  65.05  63.65  62.11  62.70  62.36

Table 3
Performance of the SOM with different initialization and training algorithms

Image type     Initialization  Training    Time (s)  [Q_err T_err]  Accuracy (%)
T2 healthy     Linear          Batch       38.79     [0.020 0.044]  96.78
               Linear          Sequential  58.48     [0.01 0.62]    68.32
               Random          Batch       17.99     [0.033 0.067]  94.99
               Random          Sequential  57.20     [0.01 0.012]   98.69
T2 with tumor  Linear          Batch       39.87     [0.027 0.050]  96.18
               Linear          Sequential  119.91    [0.016 0.024]  97.99
               Random          Batch       18.56     [0.053 0.075]  93.61
               Random          Sequential  120.78    [0.021 0.017]  98.12

The best performance is obtained with 4 principal components for healthy T2 images and 5 principal components for T2 images with tumor, using range normalization, which maps the input to the [0 1] interval linearly.

Two algorithms, namely linear and random, are used to initialize the weight vector of the SOM, and there are two training algorithms, sequential and batch, for training the network. Different combinations of these initialization and training algorithms are tried to find the best results for the segmentation of the T2 weighted MR images. Table 3 shows the training time, the quantization and topographic errors, and the accuracy of the network for the different initialization and training algorithms.

Random initialization and sequential training gave the best results for both healthy T2 images and T2 images with tumor. However, sequential training takes approximately 3.5 times longer for healthy images and 7 times longer for images with tumor than batch training.

Because there is no standard way to determine the network size, different map sizes should be tried to find the best one for the data being processed. The accuracy of different map sizes is shown in Fig. 4; as seen there, a [7 7] map gives the best performance.

Other factors that affect the performance of the SOM are the lattice and the shape of the network. Rectangular or hexagonal can be chosen as the lattice, and sheet, cyclic or toroid as the shape. Fig. 5 shows the accuracy of the different lattice and shape choices.

While the SOM is trained, the adjustment of the weight vectors is controlled by the chosen neighborhood function. Gaussian and bubble are the most frequently used neighborhood functions. Table 4 shows the results for these functions; as the table shows, the Gaussian neighborhood function gives better results.

The training epoch size must be chosen correctly for convergence to the desired results: big enough to converge, yet small enough not to increase the working time excessively. Table 5 gives the results for different numbers of epochs for rough and fine-tuning training.

After these stages, the best parameters of the SOM for segmenting the T2 weighted healthy images and the T2 weighted images with tumor are found to be: [7 7] map size, sheet map shape, hexagonal lattice, Gaussian neighborhood function, and [25 25] epoch sizes for rough and fine-tuning training. The brain model gives 97.46% accuracy with these parameters, with a working time of 11.36 s. Accuracies with the chosen parameters are given in Table 6 for different slices of the brain.


Fig. 4. Segmentation accuracy of different map sizes.

Fig. 5. Segmentation accuracies for (a) different lattice types and (b) different shapes.

Table 4
Results of different neighborhood functions (accuracy, %)

Image type     Gaussian  Bubble
T2 healthy     98.30     97.92
T2 with tumor  98.03     97.11

Table 5
Training epoch sizes and the performance of the SOM

Image type     Epoch size [rough finetuning]  Accuracy (%)  Time (s)
T2 healthy     [100 100]                      98.21         34.68
               [50 50]                        98.21         17.55
               [25 25]                        98.21         9.09
               [10 10]                        98.20         3.88
T2 with tumor  [100 100]                      97.24         36.68
               [50 50]                        97.24         18.57
               [25 25]                        97.23         9.64
               [10 10]                        96.30         4.16

Table 6
Performance of the SOM for different brain slices (accuracy, %)

Brain slice No  T2 healthy  T2 with tumor
32              96.77       98.11
33              97.21       96.91
34              98.21       97.23
35              97.03       96.91
36              97.09       98.17
42              97.38       98.00


Fig. 6. (a) Brain model, (b) SOM segmented brain model, (c) labeled brain model, (d) SOM segmented brain model in color, (e) labeled brain model in color.

Table 7
Brain model labeling analysis values

Brain structure         Pixels in the brain model  Pixels found by the system  Accuracy (%)
Background              27,603                     26,259                      95.13
Skin                    3340                       2820                        84.43
White matter            21,356                     18,611                      87.14
Gray matter             4074                       3522                        86.45
CSF                     238                        198                         83.19
Ventricular             2456                       2410                        98.12
Left and right nucleus  4510                       4180                        92.89
Corpus callosum         1240                       813                         65.56

The 49 segments obtained using the SOM with the accuracies in Table 6 are then joined together and labeled by the knowledge-based expert system as described above. It is observed that the results of labeling the T2 weighted images with tumor were not very clear. The reason is the type of tumor present in the T2 weighted images used: the tumor region is very widespread in the image and contains edema regions inside, as seen in Fig. 8, and it spans from the CSF to the gray matter. This prevented the density analysis of the images from being adequate; the "very bright," "bright," "medium," "dark" and "very dark" expressions used in the rules could not assign tissue types and regions as expected for this reason. The system can be expected to be more successful for tumor types that are not spread across the MR images. Labeling of the healthy MR images and of the brain model was more successful, and labeling of the available structures is accomplished. In Fig. 6, the brain model segmented by the SOM and labeled by the knowledge-based expert system is shown; colors represent different tissue types.

In Fig. 7, slice 34 of a T2 weighted MR image of a healthy person is shown, segmented by the SOM and labeled by the knowledge-based expert system; colors represent different tissue types.

In Fig. 8, slice 34 of a T2 weighted MR image with a tumor is shown, segmented by the SOM and labeled by the knowledge-based expert system; colors represent different tissue types.

The segmentation and labeling of the brain model are analyzed and the results are given in Table 7. The analysis is done by first counting the pixels of every tissue type, each of which has a known gray-level value in the original brain model, and then counting, for every tissue type, the pixels that the system labeled as that type. The resulting images show that the spatial placement of the tissues is correct. The percentage of labeled pixels relative to the original pixel count is then calculated for every tissue type; in this way the ratio of correctly labeled tissue is found. Table 7 lists the brain structure names, the number of pixels of each tissue type in the brain model, the number of pixels labeled as that tissue type, and the ratio of correct labeling (a small sketch of this computation is given below). The accuracy rates show that the system is successful for MR image segmentation.
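As an illustration of this per-tissue accuracy computation, a minimal sketch, assuming integer label images with matching label codes, might be:

```python
# Count-based labeling accuracy for one tissue type, as described above:
# pixels the system assigned a label, relative to pixels of that tissue in
# the ground-truth model, expressed as a percentage.
import numpy as np

def labeling_accuracy(model: np.ndarray, labeled: np.ndarray, label: int):
    n_model = int((model == label).sum())    # pixels of this tissue in the model
    n_found = int((labeled == label).sum())  # pixels the system gave this label
    return n_model, n_found, 100.0 * n_found / n_model
```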

As can be seen in Table 7, the system has high segmentation and labeling accuracies for most of the brain tissues. This study is a continuation of the authors' previous studies on improving the interpretation of MR images [32,33].


Fig. 7. (a) T2 weighted healthy brain slice 34, (b) SOM segmented T2 weighted healthy brain slice 34, (c) labeled T2 weighted healthy brain slice 34, (d) SOM segmented T2 weighted healthy brain slice 34 in color, (e) labeled T2 weighted healthy brain slice 34 in color.

Fig. 8. (a) T2 weighted brain slice 34 with tumor, (b) SOM segmented T2 weighted brain slice 34 with tumor, (c) labeled T2 weighted brain slice 34 with tumor, (d) SOM segmented T2 weighted brain slice 34 with tumor in color, (e) labeled T2 weighted brain slice 34 with tumor in color.

Acknowledgment

This study has been supported by Gazi University Scientific and Research Project Fund (Project No. 07/2007-06).

References

[1] J. Rogowska, Overview and fundamentals of medical image segmentation, in: I. Bankman (Ed.), Handbook of Medical Imaging: Processing and Analysis, Academic Press, Orlando, 2000, pp. 69–85.
[2] M. Egmont-Petersen, D. Ridder, R. Handels, Image processing with neural networks—A review, Pattern Recogn. 35 (2002) 2279–2301.
[3] M. Straka, A.L. Cruz, A. Köchl, M. Sramek, M.E. Gröller, D. Fleischmann, 3D watershed transform combined with a probabilistic atlas for medical image segmentation, J. Med. Inform. Technol. 6 (2003) 69–78.
[4] M. Kaus, S.K. Warfield, F.A. Jolesz, R. Kikinis, Adaptive template moderated brain tumor segmentation in MRI, in: Workshop für Bildverarbeitung in der Medizin, 1999, pp. 102–105.
[5] M.S. Brown, M.F. Mcnitt-Gray, J.G.G. Nicholas, J. Mankovich, J. Hiller, L.S. Wilson, D.R. Aberle, Method for segmenting chest CT image data using an anatomical model: Preliminary results, IEEE Trans. Med. Imaging 16 (6) (1997) 828–839.
[6] J. Koh, M. Suk, S.M. Bhandarkar, A multi-layer Kohonen's self-organizing feature map for range image segmentation, in: IEEE Int. Conf. Neural Networks, San Francisco, vol. 3, 1993, pp. 1270–1275.
[7] V. Murino, G. Vernazza, Artificial neural networks for image analysis and computer vision, Image Vision Comput. 19 (9) (2001) 583–584.
[8] W. Reddick, J.O. Glass, E.N. Cook, T.D. Eklin, R.J. Deaton, Automated segmentation and classification of multispectral magnetic resonance images of brain using artificial neural networks, IEEE Trans. Med. Imaging 16 (6) (1997) 911–918.
[9] N. Allahverdi, Uzman Sistemler, Atlas Press, Istanbul, 2002.
[10] J. Alirezaie, M.E. Jernigan, C. Nahmias, Automatic segmentation of cerebral MR images using artificial neural networks, IEEE Trans. Nucl. Sci. 45 (4) (1998) 1777–1781.
[11] N.M. Ahmed, A.A. Farag, Two-stage neural network for volume segmentation of medical images, Pattern Recogn. Lett. 18 (11–13) (1997) 1143–1151.
[12] M. Matesin, S. Loncaric, D. Petravic, A rule-base approach to stroke lesion analysis from CT brain images, in: Image and Signal Processing and Analysis, Proceedings of the 2nd International Symposium, ISPA, 2001, pp. 219–223.
[13] P. Chen, T. Pavlidis, Image segmentation as an estimation problem, Comput. Graph. Image Process. 12 (1980) 153–172.
[14] R. Adams, L. Bischof, Seeded region growing, IEEE Trans. Pattern Anal. Mach. Intell. 16 (6) (1994) 641–647.
[15] X. Yu, J. Yla-Jaaski, O. Huttunen, T. Vehkomaki, O. Sipila, T. Katila, Image segmentation combining region growing and edge detection, Int. Conf. Pattern Recogn. 3 (1992) 481–484.
[16] J. Roerdink, A. Meijster, The watershed transform: Definitions, algorithms and parallelization strategies, Fund. Inform. 41 (2001) 187–228.
[17] L. Najman, M. Schmitt, Geodesic saliency of watershed contours and hierarchical segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 18 (12) (1996) 1163–1173.
[18] V. Grau, A.U.J. Mewes, M. Alcaniz, R. Kikinis, S.K. Warfield, Improved watershed transform for medical image segmentation using prior information, IEEE Trans. Med. Imaging 23 (4) (2004) 447–458.
[19] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Prentice Hall, New Jersey, 2001.
[20] D.L. Toulson, J.F. Boyce, Segmentation of MR image using neural nets, Image Vision Comput. 10 (1992) 324–328.
[21] C. Li, D.B. Goldgof, L.O. Hall, Knowledge-based classification and tissue labeling of MR images of human brain, IEEE Trans. Med. Imaging 12 (4) (1993) 740–751.
[22] S. Loncaric, D. Kovacevic, D. Cosic, Fuzzy expert system for edema segmentation, in: MELECON 98, 9th Mediterranean Electrotechnical Conference, vol. 2, 1998, pp. 1476–1479.
[23] M. Matesin, S. Loncaric, D. Petravic, Image and signal processing and analysis, in: Proceedings of the 2nd International Symposium on ISPA, 2001, pp. 219–223.
[24] K.A. Johnson, J.A. Becker, The Whole Brain Atlas, 1999, available at: http://www.med.harvard.edu/AANLIB/home.html.
[25] M.N. Ahmed, Novel image segmentation and registration algorithms for the study of brain structure and function, Ph.D. thesis, University of Louisville, Louisville, KY, 1997.
[26] R.M. Haralick, K. Shanmugam, I. Dinstein, Textural features for image classification, IEEE Trans. Syst. Man Cybernet. SMC-3 (6) (1973) 610–621.
[27] A.D. Kulkarni, Artificial Neural Networks for Image Understanding, John Wiley & Sons, Inc., New York, 1994.
[28] T. Kohonen, S. Kaski, K. Lagus, J. Salojarvi, J. Honkela, V. Paatero, A. Saarela, Self organization of a massive document collection, IEEE Trans. Neural Netw. 11 (3) (2000) 574–585.
[29] E. Alhoniemi, J. Himberg, J. Parhankangas, J. Vesanta, SOM Toolbox for MATLAB, 1997, http://www.cis.hut.fi/projects/somtoolbox/.
[30] S. Kaski, K. Lagus, Comparing self-organizing maps, in: Proceedings of the 1996 International Conference on Artificial Neural Networks, Lect. Notes Comput. Sci., vol. 1112, 1997, pp. 809–814.
[31] H.U. Bauer, K. Pawelzik, Quantifying the neighborhood preservation of self-organizing feature maps, IEEE Trans. Neural Netw. 3 (4) (1992) 570–579.
[32] I. Güler, A. Toprak, A. Demirhan, R. Karakıs, MR images restoration with the use of fuzzy filter having adaptive membership parameters, J. Med. Syst. 32 (3) (2008) 229–234.
[33] A. Toprak, I. Güler, Impulse noise reduction in medical images with the use of switch mode fuzzy adaptive median filter, Digital Signal Process. 17 (2007) 711–723.

Inan Güler was born in Düzce, Turkey in 1956. He graduated from Erciyes University in 1981. He received his M.S. degree from Middle East Technical University in 1985 and his Ph.D. degree from Istanbul Technical University in 1990, all in electronic engineering. He is a professor at Gazi University, where he is Head of Department. His interest areas include biomedical instrumentation, biomedical signal processing, electronic instrumentation, neural networks, and artificial intelligence. He has written more than 150 articles related to his interest areas.

Ayse Demirhan graduated from Gazi University in 2002. She received her M.S. degree in Electronics and Computer Technology in 2005. She is a research assistant in the Department of Electronics and Computer Technology at Gazi University. Her interest areas are biomedical signal and image processing and artificial intelligence.

Rukiye Karakıs graduated from Ondokuz Mayıs University in 2002. She is working on her M.S. degree in Electronics and Computer Technology at Gazi University. She is a research assistant in the Department of Electronics and Computer Technology at Gazi University. Her interest areas include expert systems.