
Research Article
Objectness Supervised Merging Algorithm for Color Image Segmentation

Haifeng Sima,1 Aizhong Mi,1 Zhiheng Wang,1 and Youfeng Zou2

1The School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, China
2Henan Polytechnic University, Jiaozuo, China

Correspondence should be addressed to Aizhong Mi; miaizhong@hpu.edu.cn

Received 17 June 2016; Accepted 14 September 2016

Academic Editor: Wonjun Kim

Copyright © 2016 Haifeng Sima et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Ideal color image segmentation needs both low-level cues and high-level semantic features. This paper proposes a two-hierarchy segmentation model based on merging homogeneous superpixels. First, a region growing strategy is designed to produce homogeneous and compact superpixels in different partitions. Total variation smoothing features are adopted in the growing procedure to locate real boundaries. Before merging, we define a combined color-texture histogram feature to describe superpixels and, meanwhile, propose a novel objectness feature to supervise the region merging procedure for reliable segmentation. Both color-texture histograms and objectness are computed to measure regional similarities between region pairs, and the mixed standard deviation of the union features is exploited as the stopping criterion for the merging process. Experimental results on the popular benchmark dataset demonstrate the better segmentation performance of the proposed model compared with other well-known segmentation algorithms.

1. Introduction

Color image segmentation is an important task in image analysis and understanding. It is widely used in many image applications as a critical step. The primary purpose of segmentation is to divide an image into meaningful and spatially connected regions (such as object and background) on the basis of diverse properties. Existing segmentation approaches can predominantly be divided into the following four categories: region-based methods, feature-based methods, boundary-based methods, and model-based methods. Most existing image segmentation methods consider only low-level features [1-4], and it is difficult to obtain ideal segmentation results without high-level visual features. Therefore, many researchers aim to build a bridge between low-level features and the visual cognitive behaviors of humans. Most segmentation methods that integrate high-level knowledge learn cognitive information from the underlying features and obtain the corresponding labels of image elements and textons. Researchers aim to find high-level summary tags or labels that represent all the underlying features. With these elements and textons, segmentation can employ affinity computing to decide the pixel labels of semantic objects and regions. This procedure employs a knowledge-assisted multimedia analysis model to bridge the gap between semantics and low-level visual features [5, 6].

Many segmentation methods based on object detection have shown their capability by utilizing low-level visual features [7, 8]. They integrate basic visual features to detect homogeneous regions for image segmentation. Even though these low-level features have shown favorable results, they are unsuitable for many complex scenes, such as those with texture, luminance variation, and overlapping objects. To deal with this problem, some researchers have attempted to combine different features together [9]. However, most of them integrate these features in a simple way, and few studies have evaluated the significance of object features for the overall segmentation. Besides, since most previous segmentation methods lack high-level knowledge about objects, it is difficult for them to discriminate semantic regions in cluttered images.

Objectness is a pixel-level characteristic obtained by training on underlying features, so we combine it with color and texture to achieve better segmentation in this paper. Objectness was proposed by Alexe et al. [10]



as a novel high-level property estimated from multiple low-level features, and it is widely used for saliency computing, object detection, cosegmentation, object tracking, and so forth. It was originally designed to measure the likelihood that a window contains a complete object. This measurement employs four cues in the calculation of objectness: the multiscale saliency map, color contrast, edge density, and superpixels straddling. The four cues have been proved to be useful for detecting whole objects against the background. Jiang et al. [11] proposed a strategy for computing regional objectness by averaging objectness values over salient regions in sliding windows. Through the above analysis, we find that objectness is an effective feature for describing the integrity and semantic regions of objects in an image.

In order to tackle those issues remaining in existing segmentation methods, we develop a novel color image segmentation method based on merging superpixels supervised by regional objectness and underlying characteristics. Firstly, we adopt a seed growing strategy for superpixel computation. The seed pixels are selected from the blurred image produced by total variation (TV) diffusion. Next, we introduce the growing method to expand these seed pixels into superpixels according to the similarity of color and texture features in local areas. In the merging procedure, we define a combined feature consisting of color, texture, and objectness to measure the similarity between adjacent superpixels and implement the merging procedure to form the segmentation results.

The main contributions of this paper are summarized as follows: (1) we develop an efficacious superpixel extraction method to detect homogeneous regions with hybrid features; (2) we propose a combined histogram descriptor of color and texture features for identifying semantic regions; and (3) we design a fast objectness computing model with less sampling and integrate it with low-level features for reasonable merging.

The rest of the paper is organized as follows: our segmentation method is presented in Section 2, including superpixels computing, objectness computing, the combined feature definition of superpixels, and the merging rules. The experiments on the Berkeley Segmentation Datasets are reported in Section 3, and the conclusion is drawn in Section 4. The segmentation process and intermediate results are shown in Figure 1.

2. Objectness Supervised Superpixels Merging Segmentation

In our method, the segmentation process consists of the following two stages. First, the input image is segmented into hundreds of uniform homogeneous superpixels using hybrid features, including the RGB channels and the diffusion image. Second, we construct a descriptor for superpixels with invariant color-texture histograms as low-level features and define the objectness of all pixels based on sparse sampling of objectness windows as high-level features. Then, we develop the merging rules combining the two levels of features for reasonable segmentation. The detailed procedure of our method is as follows.

2.1. Compact Superpixels Computing. In color image segmentation, obtaining homogeneous superpixels is an ideal preprocessing step to simplify the segmentation task. A superpixel is defined as a connected homogeneous image region, often called an oversegment, computed by certain strategies. An image is represented by hundreds of superpixels instead of many redundant pixels, which simplifies image representation and segmentation. There are many segmentation strategies for superpixels computing, such as SLIC [12], N-cut [13], watershed [14], and Mean-Shift [15], and different strategies result in superpixels of different properties and qualities. Each method has its own advantages; however, all these methods do not perform well when encountering texture areas.

For the segmentation task, keeping real boundaries and extracting homogeneous superpixels are critical for an ideal final segmentation. A texture feature consists of a group of pixels, so enough pixels are required to represent the texture patterns within superpixels. How to retain superpixel boundaries that surround complete textures is a difficult problem for superpixel computing algorithms. On the other hand, smaller-scale superpixels are conducive to preserving real and accurate boundaries. An important branch of superpixel extraction methods predefines the number of segments before segmentation, which makes it easy to control the running time and segmentation quality by adjusting the segment number; it is also convenient to integrate multiple features in these methods. In contrast, the other data-driven superpixel extraction methods behave more randomly. We therefore develop our method based on the above analysis: we select seeds in the smooth areas of uniform grids of the image and grow them into superpixels with similarity computing. Implementation details are as follows.

2.1.1. Texture Smoothing. In order to enhance the descriptive ability of local regions, we detect texture information by integrating the diffusion feature and the original channels into the region growing procedure. Perona and Malik [16] proposed an anisotropic diffusion filtering method for multiscale smoothing and edge detection; this diffusion process is a powerful image processing method which encourages intraregion smoothing in preference to smoothing across boundaries. This filtering method tries to estimate homogeneous regions of an image from surface degradation and the edge strengths of local structures. To compute texture features, we adopt nonlinear diffusion to extract texture areas and enhance the edges simultaneously. It is easier to smooth regions and detect real boundaries of an image with the TV diffusion model in texture areas [17]. Therefore, the TV flow diffusion method is used for blurring local areas of the color image before extracting nonoverlapping superpixels. The implicit equation of the nonlinear TV diffusion technique is computed by

$$\frac{\partial I_{c}(x, y, t)}{\partial t} = \operatorname{div}\!\left(g\!\left(\sum_{k=1}^{3}\left|\nabla I_{k}\right|\right)\nabla I_{c}\right) \tag{1}$$

The initial value of I_c is the single channel c of the original image in RGB space, and t is the iteration time; g is the diffusion function defined in [17].


[Figure 1: The segmentation process and intermediate results of our method. Pipeline: input original image -> total variation features computing -> seed pixels location -> superpixels extraction -> color-texture histograms and region objectness -> merging with color-texture and objectness -> output segmentation.]

$$g\left(\left|\nabla I_{k}\right|\right) = e^{-\left|\nabla I_{k}\right|^{2}/\lambda^{2}} \tag{2}$$

The number of diffusion iterations depends on the superpixel scale, so that the local area is properly smoothed; a reasonable iteration count is proportional to the radius of the superpixels (≈13 in this paper).
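As a concrete illustration of this smoothing step, the following Python sketch applies an explicit nonlinear diffusion of the form of (1)-(2) to an RGB image. It is a minimal approximation under our assumptions (explicit Euler updates, an illustrative λ and time step, 13 iterations as suggested above), not the exact TV flow implementation of [17].

```python
import numpy as np

def nonlinear_diffusion(img, iterations=13, lam=10.0, dt=0.1):
    """Jointly smooth the RGB channels with an edge-stopping diffusivity.

    img: float array of shape (H, W, 3). The diffusivity g is computed from
    the summed gradient magnitude of all three channels, following (1)-(2).
    lam and dt are illustrative values, not taken from the paper.
    """
    u = img.astype(np.float64).copy()
    for _ in range(iterations):
        # Forward differences (Neumann boundary via edge padding).
        pad = np.pad(u, ((0, 1), (0, 1), (0, 0)), mode="edge")
        gx = pad[:-1, 1:, :] - u          # horizontal gradient per channel
        gy = pad[1:, :-1, :] - u          # vertical gradient per channel

        # Coupled edge indicator: sum of channel gradient magnitudes, as in (1).
        mag = np.sqrt(gx**2 + gy**2).sum(axis=2, keepdims=True)
        g = np.exp(-(mag**2) / (lam**2))  # diffusivity, as in (2)

        # Divergence of g * grad(u), backward differences.
        fx, fy = g * gx, g * gy
        div = (fx - np.pad(fx, ((0, 0), (1, 0), (0, 0)))[:, :-1, :]
               + fy - np.pad(fy, ((1, 0), (0, 0), (0, 0)))[:-1, :, :])
        u += dt * div
    return u
```

Because the diffusivity g is driven by the summed gradient magnitude of all three channels, an edge present in any channel inhibits smoothing in all of them, which is what preserves the real boundaries used later for seed growing.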

2.1.2. Superpixels Growing Scheme. In this paper, we adopt a seed growing method to obtain compact superpixels of completely homogeneous pixels. The number of superpixels is preset, as in SLIC [12] and TurboPixels [18]. A seed pixel selection

method has been developed by choosing the minimum gradient in divided uniform grids [12], which means locating the seed pixels at the smoothest areas of the grids. It is an effective method for finding optimal seed pixels for superpixels, so we employ this seed pixel extraction strategy and improve it by choosing the seeds in the TV diffusion image as the foundation for growing superpixels.

[Figure 2: Some superpixel segmentation results of our method.]

The smoothed image is divided into a number of blocks, and the gradient minimum point of each block is located as a seed pixel. The filtered color image provides a robust feature for computing homogeneous superpixels. We first locate the initial seeds at the local minima of the gradient map of the smoothed image within average grids; the grid setting method follows [12]. In the growing process, both the original image and the filtered image are integrated into a combined feature F = [R, G, B, R_F, G_F, B_F], and the similarity Sim(X, Y) between pixels X and Y is calculated by the Euclidean distance. The growing procedure is limited to local regions (1.2r) for uniform superpixels, where r is the average radius of the grids. Let s_1, s_2, ..., s_i denote the initial seeds of the superpixels and A_i denote the growing area corresponding to s_i. The growing algorithm is outlined as follows.

(1) Initialize the selected seed pixels as areas A_i and assign a label p_i to each area A_i.

(2) Compute the average feature F_N of the neighboring region (1.2r) and the feature F_i of A_i; check the neighbor pixels of A_i, and if Sim(N[i], F_i) < Sim(N[i], F_N), merge the neighbor pixel N[i] into A_i; repeat until no pixels can be labeled.

(3) Connect all scattered regions (<20 pixels) to adjacent similar regions.
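A compact sketch of this growing scheme, under our reading of steps (1)-(3), is given below. The frontier-based scan order, the 3 × 3 neighborhood used as a stand-in for the neighboring-region average F_N, and the 1.2r growth limit are illustrative assumptions; the final cleanup of scattered regions (<20 pixels) is omitted.

```python
import numpy as np

def grow_superpixels(feat, seeds, radius):
    """Grow labeled areas A_i from seed pixels on a combined feature map.

    feat:   (H, W, C) combined feature [R, G, B, R_F, G_F, B_F].
    seeds:  list of (row, col) seed coordinates, one per superpixel.
    radius: average grid radius r; growth is restricted to 1.2 * r per seed.
    """
    H, W, _ = feat.shape
    labels = -np.ones((H, W), dtype=int)
    means, frontiers = [], []
    for i, (r0, c0) in enumerate(seeds):
        labels[r0, c0] = i
        means.append(feat[r0, c0].astype(float))   # F_i, kept fixed in this sketch
        frontiers.append([(r0, c0)])

    limit = 1.2 * radius
    changed = True
    while changed:
        changed = False
        for i, (r0, c0) in enumerate(seeds):
            new_frontier = []
            for (r, c) in frontiers[i]:
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < H and 0 <= cc < W) or labels[rr, cc] != -1:
                        continue
                    if abs(rr - r0) > limit or abs(cc - c0) > limit:
                        continue
                    # Step (2): assign the pixel to A_i if it is closer to the
                    # region feature F_i than to a local neighborhood mean F_N.
                    f_n = feat[max(0, rr - 1):rr + 2,
                               max(0, cc - 1):cc + 2].mean(axis=(0, 1))
                    if (np.linalg.norm(feat[rr, cc] - means[i])
                            < np.linalg.norm(feat[rr, cc] - f_n)):
                        labels[rr, cc] = i
                        new_frontier.append((rr, cc))
                        changed = True
            frontiers[i] = new_frontier
    return labels
```

In practice the combined feature `feat` stacks the original RGB channels with the TV-filtered channels defined in Section 2.1.1.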

We show some superpixels of test images in Figure 2. Since precise boundaries are necessary for the subsequent work, we adopt a standard measure of boundary recall. The average boundary precision with regard to the number of superpixels on BSD500 is shown in Figure 3; the smallest allowed distance between ground truth and superpixel boundaries is 2 pixels in the experiment. We can see that more superpixels are conducive to more accurate boundaries and that 900 superpixels are enough for the merging segmentation in the next stage.

2.2. Combined Features and Superpixels Merging. A reliable region merging technique is critical for hierarchical segmentation. In this section, the similarity measurement of regions and the corresponding stopping criteria are proposed based on the color-texture features and the objectness of the image. The merging process starts from the primitive superpixels of the image and runs until the termination criterion is reached, at which point the segmentation is finished.

To improve the segmentation accuracy and dependability, we define a combined feature CF(His, Obj) to describe the oversegmented regions. This feature integrates both color-texture information and objectness values. His is a histogram containing the color information and the local color distribution features of pixels. As high-level visual information, Obj is the objectness value, the probability of a region belonging to an identifiable object.

2.2.1. Color-Texture Histograms. The number of colors that can be discriminated by the human eye is limited, to no more than 60 kinds.

[Figure 3: Boundary precision with regard to superpixel density (x-axis: number of segments, 0-1000; y-axis: boundary recall, 0-1).]

Therefore, a quantized image is introduced to compute the color distribution of superpixels for segmentation; an image is quantized to 64 colors by the minimum variance method, which leads to the least distortion and also reduces the computing time [19]. After quantization, the connection property of same-colored pixels within a superpixel is computed as the texture feature of the superpixel. We define Con as the connection relation between a pixel C_p and its neighbor pixel Neigh(j); the eight neighbor pixels Neigh(j = 0-7) are coded by (2^(0-7) * Con_j) as the local color pattern (LCP). If the color difference C_d between Neigh(j) and C_p is greater than C_dth (the standard deviation of all pixels in the quantized image), then Con_j = 0; otherwise Con_j = 1. After coding all pixels, the LCP values of pixels with the same color in a superpixel are summed up as the color distribution weights of the corresponding color bins. Thus, the composite color-texture histogram feature (CF) is computed by multiplying the probability of the quantized colors and the pixel distribution in the superpixel. The bins of the color-texture histograms are defined as follows:

$$\text{Bin}(i) = C_{\text{Bin}} \times C_{\text{LCP}}, \quad i = 1, 2, 3, \ldots, 64 \tag{3}$$
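A sketch of the histogram construction in (3) is shown below. It assumes the image has already been quantized to 64 color indices, uses differences of quantized indices as a stand-in for the color difference C_d, and the names (lcp, Cd_th) are ours.

```python
import numpy as np

def color_texture_histogram(quant, region_mask, n_colors=64):
    """Build the 64-bin color-texture histogram of (3) for one superpixel.

    quant:       (H, W) image quantized to n_colors color indices.
    region_mask: (H, W) boolean mask of the superpixel.
    """
    q = quant.astype(np.int32)
    Cd_th = q.std()                      # global threshold C_dth
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]

    # Local color pattern: bit j is set when the difference to neighbor j is small.
    lcp = np.zeros_like(q)
    for j, (dr, dc) in enumerate(offsets):
        shifted = np.roll(np.roll(q, dr, axis=0), dc, axis=1)   # wraps at borders
        con = (np.abs(q - shifted) <= Cd_th).astype(np.int32)   # Con_j
        lcp += (2 ** j) * con

    hist = np.zeros(n_colors)
    colors = q[region_mask]
    codes = lcp[region_mask]
    for c in range(n_colors):
        sel = colors == c
        if sel.any():
            # Bin(i) = C_Bin * C_LCP: color probability times summed LCP weight.
            hist[c] = sel.mean() * codes[sel].sum()
    return hist / (hist.sum() + 1e-12)
```

Two superpixels with the same dominant colors but different local arrangements then receive different histograms, which is the texture information the merging stage relies on.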

2.2.2. Objectness Based on Sparse Sampling of Regions. As mentioned in [10], objectness is an attribute of pixels computed with multiple cues over a set of sampled sliding windows, and the objectness map is biased towards rectangular intensity responses. Objectness alone seems unable to discriminate regions, but it provides a potential and useful value for all pixels in a rectangle. Jiang et al. [11] proposed a novel method to assign an objectness value to every pixel based on the definition of objectness.

Inspired by the objectness computing method proposed by Jiang et al. [11], we propose an improved objectness computing method based on sparse sampling of sliding windows for all pixels. We also employ four cues to estimate the objectness characteristics of all pixels in an image: the multiscale saliency, color contrast, edge density, and superpixels straddling. The multiscale saliency (MS) cue in the original paper [10] is calculated based on the residual spectrum of

the image. This method highlights only a few regions of an image, so it is not suitable for segmentation. To expand the global significance of objectness, we use the histogram-based contrast (HC) [20] saliency map as the MS cue. The HC method defines the saliency values of image pixels using the color statistics of the input image: in the HC model, the saliency of a pixel is defined by its color contrast to all other pixels in the image, and the computation is sped up by sparse histograms. The saliency [20] value of pixel I_(x,y) in image I is defined as

$$S\left(I_{(x,y)}\right) = \sum_{j=1}^{n} f_{j}\, D\!\left(c_{(x,y)}, c_{j}\right) \tag{4}$$

where c_(x,y) is the color value of pixel I_(x,y), n is the number of pixel colors after histogram quantization, and f_j is the frequency of pixel color c_j in image I. In order to reduce the noise caused by the randomness of histogram quantization, the saliency results are smoothed by the average saliency of the nearest n/4 neighbor colors.
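The HC saliency of (4) can be sketched as follows on a quantized image; the Euclidean distance between quantized RGB colors plays the role of D, and the n/4 smoothing of [20] is shown in a simplified, unweighted form.

```python
import numpy as np

def hc_saliency(quant_labels, palette):
    """Histogram-based contrast saliency, as in (4).

    quant_labels: (H, W) indices into palette after color quantization.
    palette:      (n, 3) quantized RGB colors.
    Returns an (H, W) saliency map: each pixel's saliency is the frequency-
    weighted color distance of its quantized color to all other colors.
    """
    n = len(palette)
    freq = np.bincount(quant_labels.ravel(), minlength=n) / quant_labels.size   # f_j
    dist = np.linalg.norm(palette[:, None, :] - palette[None, :, :], axis=2)    # D(c_i, c_j)

    color_sal = (dist * freq[None, :]).sum(axis=1)    # S(c_i) = sum_j f_j * D(c_i, c_j)

    # Smooth each color's saliency with its n/4 nearest colors to reduce
    # quantization noise (an unweighted average is used in this sketch).
    m = max(1, n // 4)
    nearest = np.argsort(dist, axis=1)[:, :m]
    color_sal = color_sal[nearest].mean(axis=1)

    return color_sal[quant_labels]
```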

The color contrast (CC) measures the dissimilarity of a window to its immediate surrounding area, the edge density (ED) measures the density of edges near the window borders, and superpixels straddling (SS) estimates whether a window covers an object. These three cues are described in detail in [10]. With the above-mentioned four cues, the objectness score of a test window w is defined by the Naive Bayes posterior probability in [10]:

$$P_{w}(\text{obj} \mid \mathcal{C}) = \frac{p(\mathcal{C} \mid \text{obj})\, p(\text{obj})}{p(\mathcal{C})} \tag{5}$$

where C is the set of combined cues and p(obj) is the prior estimate of objectness. The specific definitions of p(C | obj), p(obj), and p(C) are given in [10].

In the objectness computing methods [10, 11], the sampling windows are selected randomly over scales for the calculation of objectness, and 100,000 windows are sampled for training. The other cues in C are then computed on these windows.

In this paper, we try to sample fewer windows, namely, those most likely to contain objects. Presegmentation results can provide regions for locating object windows with higher probability. We calculate the circumscribed rectangular windows of all the presegmentation regions produced by Mean-Shift (hs = 10, hr = 10, and minarea = 100) and take the vertices of these circumscribed rectangles as the candidate vertices of the sampling windows. Next, we select all the upper left vertices and all the lower right vertices to compose two sets, left-up (LU) and right-down (RD), respectively.

In order to find the sampling windows most likely to include real objects or homogeneous semantic regions, we choose an arbitrary vertex V_LU from LU and combine it with all vertices in RD whose coordinate values are less than the coordinates of V_LU. These coupled vertices provide more reasonable coordinates of sampling windows that contain objects. All the sample windows can be seen in Figure 4.

[Figure 4: Objectness computing process (panels: original image, Mean-Shift + sample windows, objectness of windows, objectness).]

We calculate the objectness values of all sampling windows by (5) and sum them at each pixel location as the pixel-wise objectness:

$$O_{p}(x, y) = \sum_{w \in W,\; p(x,y) \in w} P\left(w(x, y)\right) \tag{6}$$

where W is the set of sample windows, w is a single window in W, and P is the score of the sample window computed by (5). The pixel-wise objectness cannot represent the similarity of regions, so we compute the average objectness value of each Mean-Shift segment as the new objectness value of all its pixels, and the objectness of a superpixel is the average value of all pixels in it:

$$\text{Obj}_{p}(x, y) = \underset{p \in R_{i}}{\operatorname{AVG}}\; O_{p}(x, y) \tag{7}$$

2.2.3. Merging Rules. The region merging method and the stopping criteria are crucial for the segmentation results. We establish an adjacency table and a similarity matrix for all superpixels to carry out the merging procedure, and the similarity metric between two regions is calculated by the following formula:

$$\operatorname{Dis}(X, Y) = \tau \cdot \chi\left(\operatorname{His}(X), \operatorname{His}(Y)\right) + (1 - \tau) \cdot \left|\operatorname{Obj}(X) - \operatorname{Obj}(Y)\right| \tag{8}$$

A fixed weight assignment is not well adapted to obtaining the real object regions through the merging process; moreover, when the feature change is greater, the weight should be set smaller in the similarity computation, otherwise overmerging occurs. Therefore, we define an adaptive balance parameter τ based on the two types of features to regulate the similarity computation:

$$\tau = \frac{\Delta(\text{Obj})}{R(\text{Obj})} \times \frac{\Delta(\text{His})}{2\mu + \sigma\left(\text{His}(\text{Superpixels})\right)} \tag{9}$$

where Δ is the difference of the given features between the two regions, R is the range of Obj, μ is the mean value of the His features of the superpixels, and σ is the standard deviation of the His and Obj features of the currently merged superpixels. We set the threshold V to control the merging process: if the distance between two adjacent regions A and B is smaller than V, the two regions are merged, the adjacency relations and labels are updated, and the next regions are checked

until no pair of regions satisfies the merging criterion. V is the combined standard deviation of all combined features (His, Obj) of the currently merged superpixels, and it is defined as

$$V = \tau \cdot \sigma\left(\text{His}(\text{Superpixels})\right) + (1 - \tau) \cdot \sigma\left(\text{Obj}(\text{Superpixels})\right) \tag{10}$$

The merging strategy is summarized in Algorithm 1. Each time after a merge, the combined feature CF(His, Obj) of the new region is updated as follows. First, the connection relations of all pixels in the new region are kept unchanged; the statistics for each bin of quantized colors are then recomputed, and the histogram feature of the new region is updated using formula (3). Second, the Obj feature of the new region is obtained by computing the average objectness value of all the pixels in the new region.
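The quantities in (8)-(10) can be evaluated for a pair of adjacent regions as in the sketch below; the chi-square histogram distance and the grouping of terms in (9) follow our reading of the formulas, and the variable names are ours.

```python
import numpy as np

def chi2(h1, h2, eps=1e-12):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def region_distance(hist_x, hist_y, obj_x, obj_y, tau):
    """Dis(X, Y) of (8): weighted histogram and objectness difference."""
    return tau * chi2(hist_x, hist_y) + (1.0 - tau) * abs(obj_x - obj_y)

def balance_and_threshold(hists, objs, hist_x, hist_y, obj_x, obj_y):
    """Adaptive weight tau (9) and stopping threshold V (10).

    hists: (N, 64) histograms of the current superpixels.
    objs:  (N,) objectness values of the current superpixels.
    """
    d_obj = abs(obj_x - obj_y)                        # Delta(Obj)
    r_obj = objs.max() - objs.min() + 1e-12           # R(Obj)
    d_his = chi2(hist_x, hist_y)                      # Delta(His)
    mu = hists.mean()                                 # mean of His features
    sigma_his = hists.std()                           # sigma(His(Superpixels))

    tau = (d_obj / r_obj) * (d_his / (2.0 * mu + sigma_his + 1e-12))
    tau = float(np.clip(tau, 0.0, 1.0))

    V = tau * sigma_his + (1.0 - tau) * objs.std()    # eq. (10)
    return tau, V
```

Two adjacent superpixels X and Y are then merged whenever region_distance(...) < V, exactly as tested in line (2) of Algorithm 1.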

3. Experimental Results

3.1. Dataset. The proposed segmentation model has been tested on the Berkeley Segmentation Datasets [21, 22] and compared with human ground truth segmentation results. Specifically, the BSD dataset images were selected from the Corel Image Database and are of size 481 × 321. The Berkeley Dataset and Benchmark is widely used by the computer vision community for two reasons: (1) it provides a large number of color images with complex scenes, and (2) for each image there are multiple ground truth segmentations, which are provided for the numerical evaluation of the tested segmentation algorithms.

3.2. Visual Analysis of Segmentation Results. For a subjective visual comparison, we selected the segmentation results of 5 test images and compared them with three classic algorithms (JSEG [23], CTM [24], and SAS [25]). The superpixel number of our method is set to K = 900 for this comparison; the JSEG [23] algorithm needs three predefined parameters, quantized colors = 10, number of scales = 5, and merging parameter = 0.78; the CTM [24] result uses λ = 0.1; the SAS [25] method is implemented with default parameters. From the results shown in Figure 5, it is clear that the proposed method performs better in detecting complete objects from the images (such as the flowers, zebra, cheetah, person, and house in the test images). For all five test images, our method shows clear and complete contours of the primary targets.


Input: superpixels G(L); adjacency table; similarity matrix Sim(G) of G(L)
Output: the merged result

(1) For each node N in G, let N(i) be a neighbor region of N
(2)   if Dis(N, N(i)) < V then
(3)     merge N(i) into N, L = L - 1
(4)     change the region label of N(i) to N
(5)     modify the histogram features of the two merged regions
(6)     update G(L), Sim(G), and V
(7)   end if
(8) If no pair of regions satisfies Dis(N, N(i)) < V, end the merging process; otherwise return to (1)
(9) Merge meaningless and small areas into adjacent regions
(10) Return the relabeled image

Algorithm 1: The region merging procedure.

The SAS method is more accurate in homogeneous region detection than the other methods and produces less oversegmentation, except in complex areas. CTM performs better in texture regions but loses the completeness of objects. JSEG is also a texture-oriented algorithm, which has the same weakness as CTM. Thus, the three other methods bring more oversegments and meaningless regions in the test results in Figure 5. The region number versus the iteration count in the merging process of some test images is shown in Figure 6, and the region number versus the parameter V in the merging process of some test images is shown in Figure 7.

In Figure 8, we display fourteen segmentation results for test images. These results include different types of images, such as animals, buildings, persons, vehicles, and planes, with various texture features. The segmentation results show that the proposed growing-merging framework can segment the images into compact regions and restrain over- or undersegmentation.

3.3. Quantitative Evaluation on BSD300. The comparison is based on four quantitative performance measures: the probabilistic Rand index (PRI) [26], variation of information (VoI) [27], global consistency error (GCE) [21], and Boundary Displacement Error (BDE) [28]. The PRI compares the segmented result against a set of ground truths by evaluating the relationships between pairs of pixels as a function of the variability in the ground truth set. VoI defines the distance between two segmentations as the average conditional entropy of one segmentation given the other, and GCE is a region-based evaluation method that computes the consistency of segmented regions. BDE measures the average displacement error of boundary pixels between two boundary images. Many researchers have used these indexes to compare the performance of various image segmentation methods. The test values of five example images for the four algorithms are shown in Table 1, and the average performance on BSD300 [21] is shown in Table 2. With this comparison strategy, our algorithm provides boundaries closer to the manual expert segmentation results. The number of bins of the combined histograms is a critical parameter for segmentation, so we ran some tests varying the bin number (Figure 9) to verify that 64 bins is more proper for accurate segmentation.
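For reference, a minimal sketch of the PRI computation against a set of ground truth label maps is given below; it uses the standard contingency-table form of the Rand index averaged over the ground truth set and is not the benchmark code used to produce Table 1.

```python
import numpy as np

def probabilistic_rand_index(seg, ground_truths):
    """PRI of a segmentation against several ground truth label maps.

    seg:           (H, W) nonnegative integer label map from the algorithm.
    ground_truths: list of (H, W) nonnegative integer human label maps.
    """
    def comb2(x):
        return x * (x - 1) / 2.0

    s = seg.ravel()
    total_pairs = comb2(float(s.size))
    scores = []
    for gt in ground_truths:
        g = gt.ravel()
        # Contingency counts between the two labelings (unique label pairs).
        _, counts = np.unique(np.stack([s, g]), axis=1, return_counts=True)
        same_both = comb2(counts.astype(float)).sum()
        same_seg = comb2(np.bincount(s).astype(float)).sum()
        same_gt = comb2(np.bincount(g).astype(float)).sum()
        # Pairs that agree: same in both, plus different in both.
        agreements = total_pairs + 2 * same_both - same_seg - same_gt
        scores.append(agreements / total_pairs)
    return float(np.mean(scores))
```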

Table 1: The PRI, VoI, GCE, and BDE values of the four algorithms on five test images.

Image    Index  Proposed   SAS        CTM        JSEG
12084    PRI    0.7848     0.76155    0.7051     0.7026
         VoI    2.4184     2.1261     3.6425     4.3114
         GCE    0.2256     0.1753     0.1658     0.1209
         BDE    7.7028     6.8194     7.4629     9.2061
130066   PRI    0.8209     0.7286     0.7710     0.7457
         VoI    1.4075     2.2986     2.6194     3.6813
         GCE    0.1416     0.3001     0.1117     0.1226
         BDE    9.4618     18.6371    10.2823    13.3104
134052   PRI    0.7630     0.6454     0.6303     0.5833
         VoI    1.2913     2.0665     2.6110     3.9948
         GCE    0.1020     0.2263     0.0833     0.1302
         BDE    6.1061     11.2896    9.4271     15.1976
15062    PRI    0.8882     0.8973     0.8673     0.8705
         VoI    1.5428     1.4820     2.1843     2.6231
         GCE    0.2303     0.1959     0.1386     0.1075
         BDE    8.7102     21.2930    12.2526    13.9926
385028   PRI    0.9152     0.91108    0.9031     0.9036
         VoI    1.5868     3.6813     1.8808     2.2501
         GCE    0.1436     0.2236     0.0888     0.1072
         BDE    9.1108     9.1016     8.6668     8.6987

Table 2: The average values of PRI, VoI, GCE, and BDE on BSD300.

       SAS        JSEG       CTM        Proposed
PRI    0.8246     0.7754     0.7263     0.8377
VoI    1.7724     2.3217     2.1010     1.773
GCE    0.1985     0.2017     0.2033     0.1964
BDE    12.6351    14.4072    11.9245    10.8311

3.4. Boundary Precision Comparison on BSD500. Estrada and Jepson [29] proposed a thorough quantitative matching strategy to evaluate the segmentation accuracy of boundary extraction. The algorithms are evaluated using an efficient algorithm for computing precision and recall with regard to the human ground truth boundaries.


[Figure 5: Comparison of four segmentation methods (rows: original images, proposed, SAS, CTM, and JSEG).]

[Figure 6: Merging curves of iteration number versus region number for test images 12084, 130066, 134052, 15062, and 385028 (x-axis: iteration number, 0-10; y-axis: region number, log scale 1-1000).]

The evaluation strategy of precision and recall is reasonable and equitable as a measure of segmentation quality because

it is not biased in favor of over- or undersegmentation. An excellent segmentation algorithm should yield low segmentation error and high boundary recall. Precision and recall are defined to be proportional to the number of matched pixels between two segmentations, S_source and S_target, where S_source denotes the boundaries extracted from the segmentation results of the algorithms and S_target denotes the boundaries of the human segmentations provided by BSD500 [22]. A boundary pixel is identified as a true positive when the smallest distance between the extracted boundary and the ground truth is less than a threshold (ε = 5 pixels in this paper). The precision and recall measures are defined as follows:

$$\text{Precision}\left(S_{\text{source}}, S_{\text{target}}\right) = \frac{\text{Matched}\left(S_{\text{source}}, S_{\text{target}}\right)}{\left|S_{\text{source}}\right|},\qquad
\text{Recall}\left(S_{\text{source}}, S_{\text{target}}\right) = \frac{\text{Matched}\left(S_{\text{target}}, S_{\text{source}}\right)}{\left|S_{\text{target}}\right|} \tag{11}$$

The precision and recall measures provide a more reliable evaluation strategy for two reasons: (1) the boundaries are refined and duplicate comparisons of identical boundary pixels are avoided, and (2) the dynamic displacement parameter allows a comprehensive and flexible evaluation of segmentation performance.
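The matching of (11) can be sketched as follows for boolean boundary maps; the tolerance is handled with a distance transform rather than the optimal one-to-one matching of the full benchmark [29], so duplicate matches are not removed.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_precision_recall(source, target, eps=5.0):
    """Boundary precision and recall in the spirit of (11), tolerance eps.

    source: (H, W) boolean boundary map extracted from the segmentation.
    target: (H, W) boolean ground truth boundary map.
    """
    # Distance from every pixel to the nearest boundary pixel of each map.
    dist_to_target = distance_transform_edt(~target)
    dist_to_source = distance_transform_edt(~source)

    matched_source = np.logical_and(source, dist_to_target <= eps).sum()
    matched_target = np.logical_and(target, dist_to_source <= eps).sum()

    precision = matched_source / max(source.sum(), 1)
    recall = matched_target / max(target.sum(), 1)
    return precision, recall
```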

[Figure 7: Merging curves of region number versus the threshold V for the test images of Figure 6 (x-axis: V, 0-0.16; y-axis: region number, 0-700).]

[Figure 8: Some segmentation results and objectness maps (rows: original images, objectness maps, segmentation results).]

In this section, we display the boundary precision and recall curves of five segmentation algorithms (MS [15], FH [2], JSEG [23], CTM [24], and SAS [25]) together with the proposed method in Figure 10. These curves fully characterize the performance in a direct comparison of the segmentation quality provided by the different algorithms. Figure 10 shows that our method provides segmentation boundaries that are closer to the ground truth.

4. Conclusion

In this paper, we propose a novel hierarchical color image segmentation model based on region growing and merging. First, a proper growing procedure is designed to extract compact and complete superpixels from the TV diffusion filtered image.

[Figure 9: Some segmentation results with varying numbers of bins (original image, 8 bins, 16 bins, 32 bins).]

[Figure 10: Boundary precision-recall curves on BSD500 when ε = 5 (proposed, SAS, CTM, JSEG, FH, and MS; x-axis: recall, 0-1; y-axis: precision, 0-1).]

Second, we integrate the color, texture, and objectness information of superpixels into the merging procedure and develop effective merging rules for ideal segmentation. In addition, using the histogram-based color-texture distance metric, the merging strategy can obtain complete objects with smooth boundaries and locate the boundaries accurately. The subjective and objective experimental results on natural color images show good segmentation performance and demonstrate the better discriminative ability of the proposed method with the objectness feature on complex images. In the future, we will focus on extending objectness to different kinds of segmentation strategies.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projects of National Natural Foundation China (U1261206), National Natural Science Foundation of China (61572173 and 61602157), Science and Technology Planning Project of Henan Province, China (162102210062), the Key Scientific Research Funds of Henan Provincial Education Department for Higher School (15A520072), and the Doctoral Foundation of Henan Polytechnic University (B2016-37).

References

[1] J. Fan, D. K. Y. Yau, A. K. Elmagarmid, and W. G. Aref, "Automatic image segmentation by integrating color-edge extraction and seeded region growing," IEEE Transactions on Image Processing, vol. 10, no. 10, pp. 1454-1466, 2001.
[2] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167-181, 2004.
[3] M. Mignotte, "Segmentation by fusion of histogram-based K-means clusters in different colour spaces," IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 780-787, 2008.
[4] C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243-3254, 2010.
[5] S. Gould and X. He, "Scene understanding by labeling pixels," Communications of the ACM, vol. 57, no. 11, pp. 68-77, 2014.
[6] T. Athanasiadis, P. Mylonas, Y. Avrithis, and S. Kollias, "Semantic image segmentation and object labeling," IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 3, pp. 298-311, 2007.
[7] S. Vicente, C. Rother, and V. Kolmogorov, "Object cosegmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 2217-2224, IEEE, Providence, RI, USA, June 2011.
[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 580-587, Columbus, Ohio, USA, June 2014.
[9] W. Tao, Y. Zhou, L. Liu, K. Li, K. Sun, and Z. Zhang, "Spatial adjacent bag of features with multiple superpixels for object segmentation and classification," Information Sciences, vol. 281, pp. 373-385, 2014.
[10] B. Alexe, T. Deselaers, and V. Ferrari, "Measuring the objectness of image windows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2189-2202, 2012.
[11] P. Jiang, H. Ling, J. Yu, and J. Peng, "Salient region detection by UFO: uniqueness, focusness and objectness," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1976-1983, Sydney, Australia, December 2013.
[12] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274-2282, 2012.
[13] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.
[14] L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583-598, 1991.
[15] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.
[16] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990.
[17] M. Rousson, T. Brox, and R. Deriche, "Active unsupervised texture segmentation on a diffusion based feature space," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 699-704, Madison, Wis, USA, June 2003.
[18] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi, "TurboPixels: fast superpixels using geometric flows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2290-2297, 2009.
[19] J.-P. Braquelaire and L. Brun, "Comparison and optimization of methods of color image quantization," IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 1048-1052, 1997.
[20] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, "Global contrast based salient region detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 409-416, June 2011.
[21] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proceedings of the 8th IEEE International Conference on Computer Vision, pp. 416-423, Vancouver, Canada, July 2001.
[22] http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html
[23] Y. Deng and B. S. Manjunath, "Unsupervised segmentation of color-texture regions in images and video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800-810, 2001.
[24] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, "Unsupervised segmentation of natural images via lossy data compression," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212-225, 2008.
[25] Z. Li, X.-M. Wu, and S.-F. Chang, "Segmentation using superpixels: a bipartite graph partitioning approach," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 789-796, Providence, RI, USA, June 2012.
[26] C. Pantofaru and M. Hebert, "A comparison of image segmentation algorithms," Tech. Rep. CMU-RI-TR-05-40, Carnegie Mellon University, 2005.
[27] M. Meila, "Comparing clusterings: an axiomatic view," in Proceedings of the 22nd ACM International Conference on Machine Learning (ICML '05), pp. 577-584, New York, NY, USA, 2005.
[28] J. Freixenet, X. Munoz, D. Raba, J. Marti, and X. Cuff, "Yet another survey on image segmentation," in Computer Vision - ECCV 2002: 7th European Conference on Computer Vision, Copenhagen, Denmark, May 28-31, 2002, Proceedings, Part III, vol. 2352 of Lecture Notes in Computer Science, pp. 408-422, Springer, Berlin, Germany, 2002.
[29] F. J. Estrada and A. D. Jepson, "Benchmarking image segmentation algorithms," International Journal of Computer Vision, vol. 85, no. 2, pp. 167-181, 2009.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 2: Research Article Objectness Supervised Merging Algorithm for …downloads.hindawi.com/journals/mpe/2016/3180357.pdf · 2019-07-30 · Research Article Objectness Supervised Merging

2 Mathematical Problems in Engineering

as a novel high-level property estimated from multiple low-level features and it is widely used for salience computingobject detection cosegmentation object tracking and soforth It is designed for measuring the likelihood of a windowcontaining a complete object in it at first This measurementemploys four cues in the calculation of objectness multiscalesaliency map color contrast edges density and superpixelsstraddling The four cues have been proved to be useful fordetecting whole object from background Jiang et al [11]proposed a strategy for computing regional objectness bycomputing average objectness values from salient regions inslide windows Through the above analysis we find that theobjectness is an effective feature to describe the integrity andsemantic regions of objects in the image

In order to tackle those issues remaining in existingsegmentation methods we developed a novel color imagesegmentation method based on merging superpixels super-vised by regional objectness and underlying characteristicsin this paper Firstly we adopt seeds growing strategy forsuperpixels computing The seed pixels are selected from theblurring image from total variation (TV) diffusion imageNext we introduce the growing method to expand theseseed pixels to superpixels according to similarity of colorand texture features at local areas In the merging procedurewe define a combined feature consisting of color textureand objectness to measure the similarity between adjacentsuperpixels and implement merging procedure to form thesegmentation results

The main contributions of this paper are summarisedas follows (1) we develop efficacious superpixels extractionmethod to detect homogeneous regions with hybrid features(2) we propose a combined histogram descriptor of colorand texture feature for identifying semantic regions and (3)we design a fast and less sampling objectness computingmodel and integrate it with low-level features for reasonablemerging

The rest of the paper is organized as follows our seg-mentation method is shown in Section 2 including super-pixels computing objectness computing combined featuredefinition of superpixels andmerging rulesThe experimentson the Berkeley Segmentation Datasets are performed inSection 3 and the conclusion is made in Section 4 Thesegmentation process and intermediate results are shown inFigure 1

2 Objectness Supervised SuperpixelsMerging Segmentation

In our method the segmentation process consists of thefollowing two stages First the input image is segmented intohundreds of uniform homogeneous superpixels with hybridfeatures including RGB channels and diffusion image Sec-ond we construct a descriptor for superpixels with invariantcolor-texture histograms as low-level features and define theobjectness of all pixels based on sparse sampling of objectnesswindows as high-level featuresThen we develop themergingrules combined with two levels of features for reasonablesegmentation The detailed procedure of our method is asfollows

21 Compact Superpixels Computing In the color imagesegmentation to obtain homogeneous superpixels is an idealpreprocessing technique to simplify the segmentation taskA superpixel is defined as a connected homogeneous imageregion often called oversegmentation computed by certainstrategies An image is represented by hundreds of super-pixels instead of many redundant pixels which can simplifythe image representation and segmentation There are manysegmentation strategies for superpixels computing such asSLIC [12] N-cut [13] watershed [14] and Mean-Shift [15]and various strategies result in different property and qualityof superpixels Eachmethod has its own advantages howeverall these methods did not perform well when encounteringtexture areas

For segmentation task the real boundary keeping andhomogeneous superpixels extraction are critical for idealfinal segmentation Texture feature is consisted of grouppixels that requires enough pixels to represent the texturepatterns in superpixels How to retain the superpixels bound-aries surrounded complete texture is a difficult problemfor superpixels computing algorithms On the other handsmaller scale superpixel is conducive to saving the real andaccuracy boundaries An important branch of superpixelextraction methods is to predefine segments number beforesegmentation and it is easy to control the running time andsegmentation quality by adjusting segments number in thesemethods It is also convenient to integrate multifeatures inthese methods The other data driven superpixels extractionmethods are randomness in contrast So we devolve ourmethod based on the above analysis We select seeds in thesmooth area in the uniform grids of images and grow themto be superpixels with similarity computing Implementationdetails are as follows

211 Texture Smoothing In order to enhance the descriptiveability of local regions we detect texture information byintegrating the diffusion feature and original channels intothe region growing procedure Perona and Malik proposed[16] an anisotropic diffusion filtering method for multiscalesmoothing and edge detection this diffusion process isa powerful image processing method which encouragesintraregion smoothing in preference to smoothing across theboundaries This filtering method tries to estimate homo-geneous region of image with surface degradation and theedge strengths of local structure To compute texture featureswe adopt nonlinear diffusion to extract texture areas andenhance the edges simultaneously It is easier to smoothregions and detect real boundary of image by TV diffusionmodel in texture area [17] Therefore the TV flow diffusionmethod is used for blurring local area of color image forextracting nonoverlapped superpixels The implicit equationof nonlinear diffusion technique of TV is computed by

120597119868119894119888 (119909 119910 119905)120597119905 = div(119892( 3sum119896=1

1003816100381610038161003816nabla1198681198961003816100381610038161003816) 10038161003816100381610038161003816nabla11986811989410038161003816100381610038161003816) (1)

The initial value of 119868119888 is single channel 119888 of the originalimage in RGB space and 119905 is the iteration times 119892 is thediffusion function defined in [17]

Mathematical Problems in Engineering 3

Segmentation

Region objectness

Color-texture histograms

Input original image

Seed pixels location

Total variations features computing

Superpixels extraction

Merging with color-texture andobjectness

Output segmentation

Figure 1 The segmentation process and intermediate results of our method

119892 (1003816100381610038161003816nabla1198681198961003816100381610038161003816) = 119890minus|nabla119868119896|21205822 (2)

The iteration times rely on the superpixels scale for propersmoothing of local area The reasonable iteration times ofdiffusion are in proportion to the radius of superpixels asymp13in this paper

212 Superpixels Growing Scheme In this paper we adopta seed growing method to obtain compact superpixels withcomplete homogeneous pixels The number of superpixelsis preset in SLIC [12] and Turbo [18] Seed pixels selection

method has been developed by choosingminimum gradientsin divided uniform grids [12] which means to locate seedpixels at the smoothest areas of the grids It is an effectivemethod to find the optimal seed pixels for superpixels so weemploy the seed pixel extraction strategy and improve it bychoosing seeds in the TV diffusion image as the foundationof growing superpixels

The smoothed image is divided into a number of blocksand located gradient minimum point as the seed pixelsThe color image after filtering provides a robust feature

4 Mathematical Problems in Engineering

Figure 2 Some superpixels segmentation results of our method

for computing homogeneous superpixels We locate theinitial seeds at the local minimum in gradient map ofsmoothed image in average grids at first The grids settingmethod is in [12] In the growing process both originalimage and filtered image are integrated into a combinedfeature F[RGBRFGFBF] and the similarity measure-ment between pixels 119883 and 119884 Sim(119883 119884) is calculated byEuclidean distanceThe growing procedure is limited to localregions (12119903) for uniform superpixels and 119903 is the averageradius of grids Let 1199041 1199042 119904119894 denote the initial seeds of thesuperpixels and119860 119894 denote the growing area corresponding to119904119894 The growing algorithm is thus outlined as follows

(1) Initialize the selected seed pixels as areas 119860 119894 andassign a label 119901119894 for each area 119860 119894

(2) Compute average 119865119873 of neighbor regions (12119903) 119865119873and 119865119894 of 119860 119894 check the neighbor pixels of 119860 119894 and ifSim(119873[119894] 119865119894) lt Sim(119873[119894] 119865119873) merge neighbor pixels119873[119894] to 119860 119894 until no pixels can be labeled

(3) Connect all scatter regions lt20 pixels to adjacent andsimilar regions

We have shown some superpixels of test images inFigure 2 Since precise boundary is necessary for successfulwork we adopt a standard measure of boundary recall Theaverage performance of boundaries precision with regard

to superpixels numbers on BSD500 is shown in Figure 3the smallest distance between ground truth and superpixelsboundaries is 2 pixels in the experimentWe can see thatmoresuperpixels are conducive to more accurate boundaries and900 superpixels are big enough for merging segmentation inthe next stage

22 Combined Features and Superpixels Merging Reliableregion merging technique is critical for hierarchical seg-mentation In this section the similarity measurement ofregions and corresponding stopping criteria are proposedbased on the color-texture features and objectness of imageThe merging process begins from the primitive superpixelsof image until termination criterion is achieved and thesegmentation is finished

To improve the segmentation accuracy and dependabilitywe define combined feature CF(HisObj) to describe theoversegmentation regionsThis feature integrates both color-texture information and objectness values His is a kindof histograms containing color information and local colordistribution feature of pixels As high-level visual informa-tion Obj is the objectness value or probability of a regionbelonging to an identifiable object

221 Color-Texture Histograms The number of colors thatcan be discriminated by eyes is limited no more than 60

Mathematical Problems in Engineering 5

200 400 600 800 10000Number of segments

0

01

02

03

04

05

06

07

08

09

1

Boun

dary

reca

ll

Figure 3 Boundary precision with regard to superpixel density

kindsTherefore a quantized image is introduced to computethe color distribution of superpixels for segmentation and animage is quantized to 64 colors by the minimum variancemethod that leads to least distortion as well as reducesthe computing time [19] After quantization the connectionproperty of pixels in superpixels of same colors is computedas the texture features of superpixels We assume Con asthe connection relation between pixel 119862119901 and its neighborpixel Neigh(119895) and eight-neighbor pixels Neigh(119895 = 0ndash7) arecoded by (20minus7 lowast Con119895) as local color pattern (LCP) If colordifference 119862119889 between Neigh(119895) and 119862119901 is greater than 119862119889th(standard deviation of all pixels in quantized image) thenCon119895 = 0 otherwise Con119895 = 1 After coding all pixels sumup all LCP values of pixels with same color in the superpixelas color distribution weights for corresponding color binsThus the composite color-texture histograms feature (CF)is computed by multiplying the probability of quantitativecolors and pixel distribution in the superpixels The bins ofcolor-texture histograms are defined as follows

\[ \mathrm{Bin}(i) = C_{\mathrm{Bin}} \times C_{\mathrm{LCP}}, \qquad i = 1, 2, \ldots, 64. \tag{3} \]
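As an illustration, the sketch below codes the eight-neighbor LCP of each pixel on the 64-color quantized image and accumulates it into per-color bins weighted by the color probability, in the spirit of (3). It is a minimal sketch, not the authors' code: the threshold is taken as the standard deviation of the quantized index image, and the final normalization is our addition.

```python
# Sketch of the color-texture histogram of Section 2.2.1 (64-color quantized
# image, 8-neighbor LCP coding, per-superpixel bins). Array conventions and
# the function name are assumptions.
import numpy as np

def lcp_histogram(quant, labels, superpixel_id, n_colors=64):
    """quant:  (H, W) int array of quantized color indices (0..n_colors-1)
       labels: (H, W) int array of superpixel labels"""
    H, W = quant.shape
    cd_th = quant.std()                       # threshold C_dth from the text
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    hist = np.zeros(n_colors)
    ys, xs = np.nonzero(labels == superpixel_id)
    for y, x in zip(ys, xs):
        lcp = 0
        for j, (dy, dx) in enumerate(offsets):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                # Con_j = 1 when the color difference stays below the threshold
                con_j = 1 if abs(int(quant[ny, nx]) - int(quant[y, x])) <= cd_th else 0
                lcp += (2 ** j) * con_j       # local color pattern code
        hist[quant[y, x]] += lcp              # sum LCP per color bin (C_LCP)
    # Bin(i) = C_Bin * C_LCP: weight by the color probability within the superpixel
    color_prob = np.bincount(quant[ys, xs], minlength=n_colors) / len(ys)
    hist = color_prob * hist
    return hist / (hist.sum() + 1e-12)        # normalized color-texture histogram
```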

2.2.2. Objectness Based on Sparse Sampling of Regions. As mentioned in [10], objectness is an attribute of pixels computed from multiple cues over a set of sampled sliding windows, and the objectness map is biased towards rectangular intensity responses. Objectness by itself cannot discriminate regions, but it provides a potential and useful value for all pixels inside the rectangle. Jiang et al. [11] proposed a method to assign an objectness value to every pixel based on this definition of objectness.

Inspired by the objectness computing method of Jiang et al. [11], we propose an improved objectness computing method based on sparse sampling of sliding windows for all pixels. We also employ four cues to estimate the objectness of all pixels in an image: multiscale saliency, color contrast, edge density, and superpixels straddling. The multiscale saliency (MS) cue in the original paper [10] is calculated from the residual spectrum of the image.

This method highlights only a few regions of an image and is therefore not well suited to segmentation. To extend the global significance of objectness, we use the histogram-based contrast (HC) saliency map [20] as the MS cue. The HC method defines the saliency of image pixels using the color statistics of the input image: a pixel's saliency is defined by its color contrast to all other pixels in the image, and the computation is sped up by sparse histograms. The saliency value [20] of pixel I_(x,y) in image I is defined as

\[ S\big(I_{(x,y)}\big) = \sum_{j=1}^{n} f_j\, D\big(c_{(x,y)}, c_j\big), \tag{4} \]

where c_(x,y) is the color value of pixel I_(x,y), n is the number of pixel colors after histogram quantization, f_j is the frequency of pixel color c_j in image I, and D is the color distance. To reduce the noise caused by the randomness of histogram quantization, the saliency values are smoothed by the average saliency of the n/4 nearest neighboring colors.
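A simplified sketch of the HC saliency used as the MS cue is given below, assuming the 64-color quantized image and its palette are available; the color distance is plain Euclidean and the smoothing over the n/4 nearest colors is an unweighted mean, both simplifications relative to [20].

```python
# Simplified histogram-based contrast (HC) saliency in the spirit of [20],
# used here as the MS cue; the distance metric and smoothing are assumptions.
import numpy as np

def hc_saliency(quant, palette):
    """quant:   (H, W) int array of quantized color indices
       palette: (n, 3) float array of the quantized colors"""
    n = len(palette)
    freq = np.bincount(quant.ravel(), minlength=n) / quant.size     # f_j
    dist = np.linalg.norm(palette[:, None, :] - palette[None, :, :], axis=2)
    color_sal = dist @ freq                      # S(c) = sum_j f_j * D(c, c_j)
    # smooth each color's saliency with its n/4 nearest colors (by color distance)
    k = max(1, n // 4)
    smoothed = np.empty(n)
    for i in range(n):
        nearest = np.argsort(dist[i])[:k]
        smoothed[i] = color_sal[nearest].mean()
    sal = smoothed[quant]                        # map color saliency back to pixels
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```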

The color contrast (CC) cue measures the dissimilarity of a window to its immediate surrounding area; the edge density (ED) cue measures the density of edges near the window borders; and the superpixels straddling (SS) cue estimates whether a window covers an object as a whole. These three cues are described in detail in [10]. With the above four cues, the objectness score of a test window w is defined by the Naive Bayes posterior probability of [10]:

\[ P_w(\mathrm{obj} \mid C) = \frac{p(C \mid \mathrm{obj})\, p(\mathrm{obj})}{p(C)}, \tag{5} \]

where C is the set of combined cues and p(obj) is the prior estimate of objectness. The specific definitions of p(C | obj), p(obj), and p(C) are given in [10].
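The combination step of (5) can be sketched as follows; the per-cue likelihoods are assumed to be given (in [10] they are learned from training windows), so only the Naive Bayes combination itself is shown.

```python
# Sketch of the Naive Bayes cue combination of (5). The likelihood functions
# p(cue | obj) and p(cue | bg) are assumed inputs, not reproduced here.
def window_objectness(cues, lik_obj, lik_bg, prior_obj=0.5):
    """cues:    dict cue_name -> value measured on the window (MS, CC, ED, SS)
       lik_obj: dict cue_name -> callable returning p(value | obj)
       lik_bg:  dict cue_name -> callable returning p(value | background)"""
    p_obj, p_bg = prior_obj, 1.0 - prior_obj
    for name, value in cues.items():           # naive independence over the cues
        p_obj *= lik_obj[name](value)
        p_bg *= lik_bg[name](value)
    return p_obj / (p_obj + p_bg + 1e-12)      # posterior P_w(obj | C)
```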

In the objectness computing methods of [10, 11], the sampling windows are selected randomly over scales, and 100,000 windows are sampled for training; the remaining cues in C are then computed on these windows.

In this paper, we instead sample fewer windows that are most likely to contain objects. Presegmentation results can provide regions for locating object windows with higher probability. We compute the circumscribed rectangular windows of the presegmentation regions produced by Mean-Shift (hs = 10, hr = 10, and minarea = 100) and take the vertices of these rectangles as candidate vertices of the sliding windows. Next, we collect all upper-left vertices and all lower-right vertices into two sets, left-up (LU) and right-down (RD), respectively.

To find the sampling windows most likely to contain real objects or homogeneous semantic regions, we pair each vertex V_LU from LU with every vertex in RD that can form a valid rectangle with it (that is, one lying below and to the right of V_LU). These coupled vertices provide more reasonable coordinates of sampling windows that contain objects. The resulting sample windows are shown in Figure 4.
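A sketch of this sparse window generation, assuming the Mean-Shift label map is available, is given below; the function and variable names are ours.

```python
# Sketch of the sparse window sampling of Section 2.2.2: bounding boxes of the
# Mean-Shift presegments give the LU/RD vertex sets, and LU-RD pairs that form
# a valid rectangle become candidate windows.
import numpy as np

def sample_windows(region_labels):
    """region_labels: (H, W) int array of Mean-Shift region labels."""
    lu, rd = [], []
    for lab in np.unique(region_labels):
        ys, xs = np.nonzero(region_labels == lab)
        lu.append((ys.min(), xs.min()))          # upper-left of the bounding box
        rd.append((ys.max(), xs.max()))          # lower-right of the bounding box
    windows = []
    for (y0, x0) in lu:
        for (y1, x1) in rd:
            if y1 > y0 and x1 > x0:              # pair only vertices that
                windows.append((y0, x0, y1, x1)) # enclose a valid rectangle
    return windows
```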


Figure 4: Objectness computing process (original image; Mean-Shift + sample windows; objectness of windows; objectness).

We calculate the objectness value of every sampling window by (5) and sum these scores over pixel locations to obtain the pixel-wise objectness:

\[ O_p(x, y) = \sum_{w \in W,\; p_{(x,y)} \in w} P(w), \tag{6} \]

where W is the set of sample windows, w is a single window in W, and P(w) is the score of the sample window computed by (5). The pixel-wise objectness cannot by itself represent the similarity of regions, so we take the average objectness over each Mean-Shift segment as the new objectness value of all its pixels, and the objectness of a superpixel is the average value over all pixels in it:

\[ \mathrm{Obj}_p(x, y) = \operatorname{AVG}_{p \in R_i} O_p(x, y), \tag{7} \]

where R_i denotes the Mean-Shift region containing pixel p.
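Equations (6) and (7) can be sketched as follows, assuming the window scores of (5) are provided by a separate function; the helper names are ours.

```python
# Sketch of (6) and (7): accumulate window scores into a pixel-wise objectness
# map, then average it over Mean-Shift regions (and, analogously, superpixels).
# `score_window` stands for the posterior of (5) and is assumed to be given.
import numpy as np

def pixel_objectness(shape, windows, score_window):
    H, W = shape
    O = np.zeros((H, W))
    for (y0, x0, y1, x1) in windows:
        O[y0:y1 + 1, x0:x1 + 1] += score_window((y0, x0, y1, x1))   # eq. (6)
    return O

def region_objectness(O, region_labels):
    Obj = np.zeros_like(O)
    for lab in np.unique(region_labels):
        mask = region_labels == lab
        Obj[mask] = O[mask].mean()                                  # eq. (7)
    return Obj
```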

2.2.3. Merging Rules. The region merging method and its stopping criteria are crucial for the segmentation results. We build an adjacency table and a similarity matrix over all superpixels to carry out the merging procedure, and the similarity between two regions X and Y is calculated by the following formula:

\[ \mathrm{Dis}(X, Y) = \tau \cdot \chi\big(\mathrm{His}(X), \mathrm{His}(Y)\big) + (1 - \tau)\cdot\big|\mathrm{Obj}(X) - \mathrm{Obj}(Y)\big|. \tag{8} \]

A fixed weight assignment is not well adapted to recovering real object regions through the merging process; in particular, when one feature varies more strongly between two regions, its weight in the similarity computation should be smaller, since otherwise the process tends to overmerge. Therefore, we define an adaptive balance parameter from the two types of features to regulate the similarity computation:

\[ \tau = \frac{\Delta(\mathrm{Obj})}{R(\mathrm{Obj})} \times \frac{\Delta(\mathrm{His})}{2\mu + \sigma\big(\mathrm{His}(\mathrm{Superpixels})\big)}, \tag{9} \]

where Δ is the difference of the given feature between the two regions, R is the range of Obj, μ is the mean of the His features of the superpixels, and σ is the standard deviation of His/Obj over the currently merged superpixels. We then set a threshold V to control the merging process: if the distance Dis between two adjacent regions A and B is smaller than V, the two regions are merged, the adjacency relations and labels are updated, and the next region pair is checked,

until no pair of regions satisfies the merging criterion. V is the combined standard deviation of all combined features (His, Obj) of the currently merged superpixels, and it is defined as

\[ V = \tau \cdot \sigma\big(\mathrm{His}(\mathrm{Superpixels})\big) + (1 - \tau)\cdot \sigma\big(\mathrm{Obj}(\mathrm{Superpixels})\big). \tag{10} \]

The merging strategy is summarized in Algorithm 1 (a code sketch of the loop is given below). After each merge, the combined feature CF(His, Obj) of the new region is updated as follows. First, since the connection relations of all pixels are unchanged in the new region, the statistics of each quantized color bin are recomputed and the histogram feature of the new region is updated using formula (3). Second, the Obj feature of the new region is obtained as the average objectness value of all pixels in the new region.
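The sketch below illustrates the control flow of the merging loop. The χ histogram distance of (8) is implemented as the chi-square distance, the adaptive weight of (9) and the threshold of (10) are passed in as helpers, and the feature update after a merge is a simple average rather than the pixel-level recomputation described above; it is an illustration under these assumptions, not the authors' implementation.

```python
# Sketch of the merging loop of Section 2.2.3 / Algorithm 1.
import numpy as np

def chi2(h1, h2):
    # chi-square distance between two histograms (the chi measure of (8))
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))

def merge_regions(his, obj, adjacency, tau_fn, V_fn):
    """his: dict region id -> histogram, obj: dict region id -> objectness,
       adjacency: dict region id -> set of neighbor ids,
       tau_fn(his, obj, a, b): adaptive weight of (9),
       V_fn(his, obj, tau): stopping threshold of (10)."""
    merged = True
    while merged:
        merged = False
        for a in list(adjacency):
            for b in list(adjacency.get(a, ())):
                tau = tau_fn(his, obj, a, b)
                dis = tau * chi2(his[a], his[b]) + (1 - tau) * abs(obj[a] - obj[b])
                if dis < V_fn(his, obj, tau):                    # merge criterion
                    # merge b into a (placeholder feature update; the paper
                    # recomputes CF(His, Obj) from the pixels of the new region)
                    his[a] = (his[a] + his[b]) / 2
                    obj[a] = (obj[a] + obj[b]) / 2
                    neighbors_b = adjacency.pop(b)
                    adjacency[a] = (adjacency[a] | neighbors_b) - {a, b}
                    for n in adjacency[a]:
                        adjacency[n].discard(b)
                        adjacency[n].add(a)
                    del his[b], obj[b]
                    merged = True
                    break
            if merged:
                break
    return his, obj, adjacency
```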

3. Experimental Results

3.1. Dataset. The proposed segmentation model has been tested on the Berkeley Segmentation Datasets [21, 22] and compared with human ground truth segmentations. The BSD images were selected from the Corel Image Database and have a size of 481 × 321. The Berkeley Dataset and Benchmark is widely used by the computer vision community for two reasons: (1) it provides a large number of color images with complex scenes, and (2) for each image, multiple ground truth segmentations are provided for the numerical evaluation of segmentation algorithms.

3.2. Visual Analysis of Segmentation Results. For a subjective visual comparison, we selected 5 segmentation results of test images and compared them with three classic algorithms (JSEG [23], CTM [24], and SAS [25]). The superpixel number of our method is set to K = 900; the JSEG [23] algorithm needs three predefined parameters (quantized colors = 10, number of scales = 5, and merging parameter = 0.78); CTM [24] is run with λ = 0.1; the SAS [25] method is implemented with default parameters. From the results shown in Figure 5, it is clear that the proposed method performs better in detecting complete objects (such as the flowers, zebra, cheetah, person, and house in the test images). For all five test images, our method shows clear and complete contours of the primary targets. The SAS method is more accurate in homogeneous region detection than the other methods,


Input: superpixels G(L), adjacency table, similarity matrix Sim(G) of G(L)
Output: the merged result

(1) For each node N in G, let N(i) be a neighbor region of N
(2)   if Dis(N, N(i)) < V then
(3)     merge N(i) into N; L = L − 1
(4)     change the region label of N(i) to N
(5)     modify the histogram features of the two merged regions
(6)     update G(L), Sim(G), and V
(7)   end if
(8) If no regions satisfy Dis(N, N(i)) < V, end the merging process; otherwise return to (1)
(9) Merge meaningless and small areas into adjacent regions
(10) Return the relabeled image

Algorithm 1

but it still produces oversegmentation in complex areas. CTM performs better in texture regions but loses the completeness of objects; JSEG is also a texture-oriented algorithm and shares the same weakness as CTM. Thus, as the test results in Figure 5 show, the three other methods produce more over-segments and meaningless regions. The region number versus the number of merging iterations for some test images is shown in Figure 6, and the region number versus the parameter V is shown in Figure 7.

In Figure 8 we display fourteen segmentation results on test images. These include different types of images, such as animals, buildings, persons, vehicles, and planes, with various texture features. The results show that the proposed growing-merging framework can segment the images into compact regions while restraining both over- and undersegmentation.

3.3. Quantitative Evaluation on BSD300. The comparison is based on four quantitative performance measures: the probabilistic Rand index (PRI) [26], variation of information (VoI) [27], global consistency error (GCE) [21], and Boundary Displacement Error (BDE) [28]. PRI compares the segmented result against a set of ground truths by evaluating the relationships between pairs of pixels as a function of the variability in the ground truth set. VoI defines the distance between two segmentations as the average conditional entropy of one segmentation given the other. GCE is a region-based measure that computes the consistency of segmented regions. BDE measures the average displacement error of boundary pixels between two boundary images. These indexes are widely used to compare the performance of segmentation methods. The values for five example images and four algorithms are shown in Table 1, and the average performance on BSD300 [21] is shown in Table 2. Under this comparison strategy, our algorithm provides boundaries closer to the manual expert segmentations. Since the number of bins of the combined histograms is a critical parameter, we also ran tests with varying bin numbers (Figure 9), which confirm that 64 bins are more appropriate for accurate segmentation.
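For reference, the PRI used in Tables 1 and 2 can be computed as the mean Rand index of a segmentation against the set of ground truths; the sketch below does this via label contingency tables and assumes the label maps are nonnegative integer arrays of equal shape.

```python
# Sketch of the PRI evaluation: mean Rand index over the ground truth set,
# computed from contingency tables. Names and conventions are ours.
import numpy as np

def rand_index(seg, gt):
    seg, gt = seg.ravel(), gt.ravel()
    n = seg.size
    # contingency table between the two labelings
    cont = np.zeros((seg.max() + 1, gt.max() + 1))
    np.add.at(cont, (seg, gt), 1)
    sum_ij = (cont * (cont - 1) / 2).sum()            # pairs together in both
    sum_a = (cont.sum(1) * (cont.sum(1) - 1) / 2).sum()
    sum_b = (cont.sum(0) * (cont.sum(0) - 1) / 2).sum()
    total = n * (n - 1) / 2
    return (total + 2 * sum_ij - sum_a - sum_b) / total

def pri(seg, ground_truths):
    return float(np.mean([rand_index(seg, g) for g in ground_truths]))
```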

Table 1: The PRI, VoI, GCE, and BDE values of five test images on four algorithms.

Image    Index   Proposed   SAS        CTM       JSEG
12084    PRI     0.7848     0.76155    0.7051    0.7026
         VoI     2.4184     2.1261     3.6425    4.3114
         GCE     0.2256     0.1753     0.1658    0.1209
         BDE     7.7028     6.8194     7.4629    9.2061
130066   PRI     0.8209     0.7286     0.7710    0.7457
         VoI     1.4075     2.2986     2.6194    3.6813
         GCE     0.1416     0.3001     0.1117    0.1226
         BDE     9.4618     18.6371    10.2823   13.3104
134052   PRI     0.7630     0.6454     0.6303    0.5833
         VoI     1.2913     2.0665     2.6110    3.9948
         GCE     0.1020     0.2263     0.0833    0.1302
         BDE     6.1061     11.2896    9.4271    15.1976
15062    PRI     0.8882     0.8973     0.8673    0.8705
         VoI     1.5428     1.4820     2.1843    2.6231
         GCE     0.2303     0.1959     0.1386    0.1075
         BDE     8.7102     21.2930    12.2526   13.9926
385028   PRI     0.9152     9.1108     0.9031    0.9036
         VoI     1.5868     3.6813     1.8808    2.2501
         GCE     0.1436     0.2236     0.0888    0.1072
         BDE     9.1108     9.1016     8.6668    8.6987

Table 2: The average values of PRI, VoI, GCE, and BDE on BSD300.

Index   SAS       JSEG      CTM       Proposed
PRI     0.8246    0.7754    0.7263    0.8377
VoI     1.7724    2.3217    2.1010    1.773
GCE     0.1985    0.2017    0.2033    0.1964
BDE     12.6351   14.4072   11.9245   10.8311

3.4. Boundary Precision Comparison on BSD500. Estrada and Jepson [29] proposed a thorough quantitative matching strategy to evaluate boundary extraction accuracy. The algorithms are evaluated using an efficient procedure for computing precision and recall with respect to human ground truth boundaries.


Figure 5: Comparison of four segmentation methods (rows: original images, proposed, SAS, CTM, and JSEG).

Figure 6: Merging curves of region number versus iteration number for test images 12084, 130066, 134052, 15062, and 385028 (region number on a logarithmic scale).

The evaluation strategy of precision and recall is reasonable and equitable as a measure of segmentation quality because it is not biased in favor of over- or undersegmentation. An excellent segmentation algorithm should produce low segmentation error and high boundary recall. Precision and recall are defined in terms of the matched boundary pixels between two sets of boundaries, S_source and S_target, where S_source denotes the boundaries extracted from segmentation results by an algorithm and S_target denotes the human segmentation boundaries provided by BSD500 [22]. A boundary pixel is counted as a true positive when the smallest distance between the extracted boundary and the ground truth is less than a threshold (ε = 5 pixels in this paper). The precision and recall measures are defined as follows:

\[ \mathrm{Precision}\big(S_{\mathrm{source}}, S_{\mathrm{target}}\big) = \frac{\mathrm{Matched}\big(S_{\mathrm{source}}, S_{\mathrm{target}}\big)}{\left|S_{\mathrm{source}}\right|}, \qquad \mathrm{Recall}\big(S_{\mathrm{source}}, S_{\mathrm{target}}\big) = \frac{\mathrm{Matched}\big(S_{\mathrm{source}}, S_{\mathrm{target}}\big)}{\left|S_{\mathrm{target}}\right|}. \tag{11} \]

The precision and recall measures provide a more reliable evaluation strategy for two reasons: (1) the boundaries are refined and duplicate comparisons of identical boundary pixels are avoided, and (2) the displacement tolerance can be varied for a comprehensive and flexible evaluation of segmentation performance.
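A minimal sketch of the boundary matching behind (11) is given below; it uses a distance transform with tolerance ε and is a simplification of the one-to-one matching procedure of [29].

```python
# Sketch of boundary precision/recall: a boundary pixel matches when a
# boundary pixel of the other map lies within eps pixels.
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_pr(source, target, eps=5):
    """source, target: boolean (H, W) boundary maps (algorithm vs. ground truth)."""
    # distance from every pixel to the nearest boundary pixel of the other map
    d_to_target = distance_transform_edt(~target)
    d_to_source = distance_transform_edt(~source)
    matched_src = np.logical_and(source, d_to_target <= eps).sum()
    matched_tgt = np.logical_and(target, d_to_source <= eps).sum()
    precision = matched_src / max(source.sum(), 1)
    recall = matched_tgt / max(target.sum(), 1)
    return precision, recall
```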


Figure 7: Merging curves of region number versus the parameter V for the test images (V from 0 to 0.16).

Figure 8: Some segmentation results and objectness maps (rows: original images, objectness map, segmentation results).

In this section we display the boundary precision-recall curves of our method together with five segmentation algorithms (MS [15], FH [2], JSEG [23], CTM [24], and SAS [25]) in Figure 10. These curves characterize the performance in a direct comparison of the segmentation quality of the different algorithms. Figure 10 shows that the boundaries produced by our method are closer to the ground truth.

4. Conclusion

In this paper we propose a novel hierarchical color image segmentation model based on region growing and merging. First, a growing procedure is designed to extract compact and complete superpixels from the TV-diffusion-filtered image. Second, we integrate the color, texture, and objectness information of superpixels into the merging procedure and


Figure 9: Segmentation results with varying numbers of histogram bins (original image; 8, 16, and 32 bins).

Figure 10: Boundary precision-recall curves on BSD500 with ε = 5 (methods: proposed, SAS, CTM, JSEG, FH, and MS).

develop effective merging rules for ideal segmentation. In addition, using a histogram-based color-texture distance metric, the merging strategy can obtain complete objects with smooth boundaries and locate the boundaries accurately. The subjective and objective experimental results on natural color images show good segmentation performance and demonstrate that the objectness feature gives the proposed method better discriminative ability on complex images. In the future we will focus on extending objectness to other kinds of segmentation strategies.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projects of the National Natural Science Foundation of China (U1261206), the National Natural Science Foundation of China (61572173 and 61602157), the Science and Technology Planning Project of Henan Province, China (162102210062), the Key Scientific Research Funds of the Henan Provincial Education Department for Higher Schools (15A520072), and the Doctoral Foundation of Henan Polytechnic University (B2016-37).

References

[1] J. Fan, D. K. Y. Yau, A. K. Elmagarmid, and W. G. Aref, "Automatic image segmentation by integrating color-edge extraction and seeded region growing," IEEE Transactions on Image Processing, vol. 10, no. 10, pp. 1454–1466, 2001.

[2] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.

[3] M. Mignotte, "Segmentation by fusion of histogram-based K-means clusters in different colour spaces," IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 780–787, 2008.

[4] C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243–3254, 2010.

[5] S. Gould and X. He, "Scene understanding by labeling pixels," Communications of the ACM, vol. 57, no. 11, pp. 68–77, 2014.

[6] T. Athanasiadis, P. Mylonas, Y. Avrithis, and S. Kollias, "Semantic image segmentation and object labeling," IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 3, pp. 298–311, 2007.

[7] S. Vicente, C. Rother, and V. Kolmogorov, "Object cosegmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 2217–2224, IEEE, Providence, RI, USA, June 2011.

[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 580–587, Columbus, Ohio, USA, June 2014.

[9] W. Tao, Y. Zhou, L. Liu, K. Li, K. Sun, and Z. Zhang, "Spatial adjacent bag of features with multiple superpixels for object segmentation and classification," Information Sciences, vol. 281, pp. 373–385, 2014.

[10] B. Alexe, T. Deselaers, and V. Ferrari, "Measuring the objectness of image windows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2189–2202, 2012.

[11] P. Jiang, H. Ling, J. Yu, and J. Peng, "Salient region detection by UFO: uniqueness, focusness and objectness," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1976–1983, Sydney, Australia, December 2013.

[12] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274–2282, 2012.

[13] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.

[14] L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583–598, 1991.

[15] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.

[16] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.

[17] M. Rousson, T. Brox, and R. Deriche, "Active unsupervised texture segmentation on a diffusion based feature space," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 699–704, Madison, Wis, USA, June 2003.

[18] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi, "TurboPixels: fast superpixels using geometric flows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2290–2297, 2009.

[19] J.-P. Braquelaire and L. Brun, "Comparison and optimization of methods of color image quantization," IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 1048–1052, 1997.

[20] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, "Global contrast based salient region detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 409–416, June 2011.

[21] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proceedings of the 8th IEEE International Conference on Computer Vision, pp. 416–423, Vancouver, Canada, July 2001.

[22] http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html

[23] Y. Deng and B. S. Manjunath, "Unsupervised segmentation of color-texture regions in images and video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800–810, 2001.

[24] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, "Unsupervised segmentation of natural images via lossy data compression," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212–225, 2008.

[25] Z. Li, X.-M. Wu, and S.-F. Chang, "Segmentation using superpixels: a bipartite graph partitioning approach," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 789–796, Providence, RI, USA, June 2012.

[26] C. Pantofaru and M. Hebert, "A comparison of image segmentation algorithms," Tech. Rep. CMU-RI-TR-05-40, Carnegie Mellon University, 2005.

[27] M. Meila, "Comparing clusterings: an axiomatic view," in Proceedings of the 22nd ACM International Conference on Machine Learning (ICML '05), pp. 577–584, New York, NY, USA, 2005.

[28] J. Freixenet, X. Munoz, D. Raba, J. Marti, and X. Cuff, "Yet another survey on image segmentation," in Computer Vision – ECCV 2002: 7th European Conference on Computer Vision, Copenhagen, Denmark, May 28–31, 2002, Proceedings, Part III, vol. 2352 of Lecture Notes in Computer Science, pp. 408–422, Springer, Berlin, Germany, 2002.

[29] F. J. Estrada and A. D. Jepson, "Benchmarking image segmentation algorithms," International Journal of Computer Vision, vol. 85, no. 2, pp. 167–181, 2009.



for computing homogeneous superpixels We locate theinitial seeds at the local minimum in gradient map ofsmoothed image in average grids at first The grids settingmethod is in [12] In the growing process both originalimage and filtered image are integrated into a combinedfeature F[RGBRFGFBF] and the similarity measure-ment between pixels 119883 and 119884 Sim(119883 119884) is calculated byEuclidean distanceThe growing procedure is limited to localregions (12119903) for uniform superpixels and 119903 is the averageradius of grids Let 1199041 1199042 119904119894 denote the initial seeds of thesuperpixels and119860 119894 denote the growing area corresponding to119904119894 The growing algorithm is thus outlined as follows

(1) Initialize the selected seed pixels as areas 119860 119894 andassign a label 119901119894 for each area 119860 119894

(2) Compute average 119865119873 of neighbor regions (12119903) 119865119873and 119865119894 of 119860 119894 check the neighbor pixels of 119860 119894 and ifSim(119873[119894] 119865119894) lt Sim(119873[119894] 119865119873) merge neighbor pixels119873[119894] to 119860 119894 until no pixels can be labeled

(3) Connect all scatter regions lt20 pixels to adjacent andsimilar regions

We have shown some superpixels of test images inFigure 2 Since precise boundary is necessary for successfulwork we adopt a standard measure of boundary recall Theaverage performance of boundaries precision with regard

to superpixels numbers on BSD500 is shown in Figure 3the smallest distance between ground truth and superpixelsboundaries is 2 pixels in the experimentWe can see thatmoresuperpixels are conducive to more accurate boundaries and900 superpixels are big enough for merging segmentation inthe next stage

22 Combined Features and Superpixels Merging Reliableregion merging technique is critical for hierarchical seg-mentation In this section the similarity measurement ofregions and corresponding stopping criteria are proposedbased on the color-texture features and objectness of imageThe merging process begins from the primitive superpixelsof image until termination criterion is achieved and thesegmentation is finished

To improve the segmentation accuracy and dependabilitywe define combined feature CF(HisObj) to describe theoversegmentation regionsThis feature integrates both color-texture information and objectness values His is a kindof histograms containing color information and local colordistribution feature of pixels As high-level visual informa-tion Obj is the objectness value or probability of a regionbelonging to an identifiable object

221 Color-Texture Histograms The number of colors thatcan be discriminated by eyes is limited no more than 60

Mathematical Problems in Engineering 5

200 400 600 800 10000Number of segments

0

01

02

03

04

05

06

07

08

09

1

Boun

dary

reca

ll

Figure 3 Boundary precision with regard to superpixel density

kindsTherefore a quantized image is introduced to computethe color distribution of superpixels for segmentation and animage is quantized to 64 colors by the minimum variancemethod that leads to least distortion as well as reducesthe computing time [19] After quantization the connectionproperty of pixels in superpixels of same colors is computedas the texture features of superpixels We assume Con asthe connection relation between pixel 119862119901 and its neighborpixel Neigh(119895) and eight-neighbor pixels Neigh(119895 = 0ndash7) arecoded by (20minus7 lowast Con119895) as local color pattern (LCP) If colordifference 119862119889 between Neigh(119895) and 119862119901 is greater than 119862119889th(standard deviation of all pixels in quantized image) thenCon119895 = 0 otherwise Con119895 = 1 After coding all pixels sumup all LCP values of pixels with same color in the superpixelas color distribution weights for corresponding color binsThus the composite color-texture histograms feature (CF)is computed by multiplying the probability of quantitativecolors and pixel distribution in the superpixels The bins ofcolor-texture histograms are defined as follows

Bin (119894) = 119862Bin times 119862LCP 119894 = 1 2 3 64 (3)

222 Objectness Based on Sparse Sampling of Regions Asmentioned in [10] the objectness is an attribute of pixelscomputed with multicues for a set of sample slide windowsand the objectness map is biased towards rectangle intensityresponse It seems that objectness has no ability to discrim-inate regions but it provides a potential and useful value ofall pixels in the rectangle Jiang et al [11] proposed a novelmethod to assign objectness value to every pixel based on thedefinition of objectness

Inspired by the objectness computing method proposedby Jiang et al [11] we propose an improved objectnenss com-puting method based on sparse sampling of slide windowsfor all pixels We also employ four cues for estimation ofobjectness characteristics of all pixels in an image the mul-tiscale saliency color contrast edge density and superpixelsstraddling The multiscale saliency (MS cues) in the originalpaper [10] is calculated based on the residual spectrum of

image This method highlights fewer regions of an imageand it is not suitable for segmentation To expand the globalsignificance of objectness we use histogram-based contrast(HC) [20] salient map as the MS cues The histogram-basedcontrast (HC) method is to define the saliency values forimage pixels using color statistics of the input image In theHC model a pixel is defined by its color contrast to all otherpixels in the image and sped up by sparse histograms Thesaliency [20] value of pixel 119868119909119910 in image 119868 is defined as

119878 (119868(119909119910)) = 119899sum119895=1

119891119895119863(119888(119909119910) 119888119895) (4)

where 119888(119909119910) is the color value of pixel 119868(119909119910) 119899 is the numberof pixel colors after histogram quantization and 119891119895 is thestatistic value of pixel color 119888119895 that appears in image 119868 In orderto reduce the noise caused by the randomness of histogramquantization the saliency results are smoothed by the averagesaliency of the nearest 1198994 neighbor colors

The color contrast (CC) is a measure of the dissimilarityof a window to its immediate nearby surrounding area andthe surroundings edge density (ED) measures the density ofedges near the window borders superpixels straddling (SS) isa cue to estimate whether a window covers an object Thesethree cues are specifically described in [10] With the above-mentioned four cues the objectness score of test window119908 is defined by the Naive Bayes posterior probability in[10]

119875119908 (obj | C) = 119901 (C | obj) 119901 (obj)119901 (C) (5)

where C is the combined cues set and 119901(obj) is thepriors estimation value of the objectness All the specificdefinitions of 119901(Cobj) 119901(obj) and 119901(C) are described in[10]

In the objectness computing methods [10 11] the sam-pling windows are selected randomly by scales for calculationof objectness and 100000 windows are sampled for trainingThen they compute the other cues inC with these windows

In this paper we try to sample less windows that mostlikely contain objects Thus presegment results can provideregions for locating object windows with higher probabilityWe calculate all the circumscribed rectangular windows ofthe presegmentation regions resulting fromMean-Shift (hs =10 hr = 10 and minarea = 100) and put all these vertices ofcircumscribed rectangular windows as the candidates vertexof slidewindowsNext we select all the upper left vertices andlower right corner vertices to compose two sets respectivelyleft-up (LU) and right-down (RD)

In order to find the most likely sampling windowsincluding real objects or homogeneous semantic regions wechoose arbitrary vertex 119881LU from LU to combine with allvertices in RD whose coordinate values are less than thecoordinates of 119881LU These coupled vertices can provide morereasonable coordinates of sampling windows that containobjects All the sample windows can be seen in Figure 4 Wecalculate all the objectness value of sampling windows by (5)

6 Mathematical Problems in Engineering

ObjectnessObjectness of windowsOriginal image Mean-Shift+sample windows

Figure 4 Objectness computing process

and sum them by pixels locations as pixel-wise objectnesswith

119874119901 (119909 119910) = 119908isin119882sum119901(119909119910)isin119908

119875 (119908 (119909 119910)) (6)

where 119882 is the sample windows set 119908 is the single windowin 119882 and 119901 is the score of sample window computed by(5) The pixel-wise objectness cannot represent the similarityof regions so we compute the average objectness values ofMean-Shift segment regions as new objectness value of allpixels and the objectness of superpixel is the average valueof all pixels in it

Obj119901 (119909 119910) = AVG119901isin119877119894

119874119901 (119909 119910) (7)

223 Merging Rules The regions merging method and stopcriteria are crucial for the segmentation results We estab-lished adjacent table and similar matrix for all superpixels tocarry out the merging procedure and the metric of similaritybetween two regions can be calculated by the followingformula

Dis (119883 119884) = 120591 lowast (120594 (His (119883) His (119884))) + (1 minus 120591)lowast 1003816100381610038161003816Obj (119883 minus 119884)1003816100381610038161003816 (8)

The fixed weight assignment is not well adapted to obtainthe real object regions by merging process so we definealterable parameter 120591 using two types of features to regulatethe similarity computing (1) fixed weight assignment is notwell adapted to obtain the real object regions by mergingprocess (2) when the feature change is greater the weightshould be set smaller in the similarity computing otherwiseit will lead to overmerging Therefore we define a balanceparameter using two types of features to regulate the simi-larity computing

120591 = Δ (Obj)119877 (Obj) times Δ (His)2120583 + 120590 (His (Superpixels)) (9)

where Δ is the difference of given variables 119877 is the rangeof Obj 120583 is the mean value of His features of superpixelsand 120590 is the standard deviation of HisObj of current mergedsuperpixels Here we set the threshold based on 119881 to controlthe merging process If the similarity between A and B isgreater than 119881 the adjacent merges two regions A and Bupdates adjacent relations and labels and checks next regions

until no pair of regions satisfies the merging criterion 119881 isthe combined standard deviation of all combined features(HisObj) of current merged superpixels and it is defined in

119881 = 120591 lowast 120590 (His (Superpixels)) + (1 minus 120591)lowast 120590 (Obj (Superpixels)) (10)

The description of the merging strategy is displayed inAlgorithm 1 Each time after merging the combined featureCF(HisObj) of the new region is updated as follows firstlythe connection relations of all pixels are unchanged in thenew region then compute the statistics value for each bin ofquantitative colors and update of the histogram feature usingformula (3) of the new region Second Obj feature of the newregion is obtained by computing average objectness value ofall the pixels in the new region

3 Experimental Results

31 Dataset The proposed segmentation model has beentested on Berkeley Segmentation Datasets [21 22] andcompared with human ground truth segmentation resultsSpecifically the BSD dataset images were selected from theCorel Image Database in size of 481 times 321 The BerkeleyDataset andBenchmark iswidely used by the computer visionfor two reasons (1) it provides a large number of colorimages with complex scenes and (2) for each image thereis multiple ground truth segmentation which are providedfor the numerical evaluation of the testing segmentationalgorithms

32 Visual Analysis of Segmentation Results For a subjectivevisual comparison we selected 5 segmentation results of testimages and compared them with three classic algorithms(JSEG [23] CTM [24] and SAS [25]) The superpixelsnumber of our method for comparing is set to be 119870 = 900the JSEG [23] algorithm needs three predefined parametersquantized colors = 10 number of scales = 5 and mergingparameter = 078 the results for CTM [24] is (120582 = 01) theSAS [25] method is implemented with default parametersFrom the results shown in Figure 4 it is clear that theproposedmethod has performed better in detecting completeobjects from images (such as flowers zebra cheetah personand house in the test images) For all the five test imagesour method has shown clear and complete contours ofthe primary targets The SAS method is more accurate inhomogeneous region detection than other methods but it

Mathematical Problems in Engineering 7

Inputsuperpixels 119866(119871) adjacent table Sim(119866) of 119866(119871)

OutputThe merged result

(1) For each119873 node in 119866119873(119894) is neighbor region of119873(2) if Dis(119873119873(119894)) lt 119881 then(3) merged119873(119894) to119873 119871 = 119871 minus 1(4) change the region label of119873(119894) to119873(5) modified the histogram feature of two merged regions(6) update 119866(119871) Sim(119866) and 119881(7) end if(8) If no regions satisfied Dis(119873119873(119894)) lt 119881 end the merge process otherwise return to (1)(9) Merge meaningless and small areas to adjacent regions(10) Return relabeled image

Algorithm 1

perform less oversegmentation except in complex areas CTMperforms better in texture regions but lost in the completenessof the objects JSEG is also a texture-oriented algorithmwhich has the same weakness as CTM Thus three othersegmentation results have brought more over-segments andmeaningless regions from the test results in Figure 5 Theregion number versus iteration times in the merging processof some test images is shown in Figure 6 and the regionnumber versus parameter 119881 in the merging process of sometest images is shown in Figure 7

In Figure 8 we have displayed fourteen segmentationresults from test images These results include different typeof images such as animals buildings person vehicles andplanes with various texture featuresThe segmentation resultsshow that the proposed growing-merging framework cansegment the images into compact regions and restrain theover- or undersegmentation

33 Quantitative Evaluation on BSD300 The comparisonis based on the four quantitative performance measuresprobabilistic rand index (PRI) [26] variation of information(VoI) [27] global consistency error (GCE) [21] and BoundaryDisplacement Error (BDE) [28]The PRI Index compares thesegmented result against a set of ground truth by evaluatingthe relationships between pairs of pixels as a function ofvariability in the ground truth set VoI defines the distancebetween two types of segmentation as the average conditionalentropy of one type of segmentation given the other andGCE is a region-based evaluation method by computingconsistency of segmented regions BDE measures the aver-age displacement error of boundary pixels between twoboundaries images Many researchers have evaluated theseindexes to compare the performance of various methods inimage segmentation The test values of five example imageson four algorithms are shown in Table 1 and the averageperformance on BSD300 [21] are shown in Table 2 Withthe above comparison strategy our algorithm provided moreapproximate boundaries with manual expert segmentationresults The bins number of the combined histograms iscritical parameter for segmentation sowe have take some testexamples by varying bins in Figure 9 to verify 64 bins is moreproper for accurate segmentation

Table 1 The PRI VoI GCE and BDE value of four test images

Index Proposed SAS CTM JSEG

12084

PRI 07848 076155 07051 07026VoI 24184 21261 36425 43114GCE 02256 01753 01658 01209BDE 77028 68194 74629 92061

130066

PRI 08209 07286 07710 07457VoI 14075 22986 26194 36813GCE 01416 03001 01117 01226BDE 94618 186371 102823 133104

134052

PRI 07630 06454 06303 05833VoI 12913 20665 26110 39948GCE 01020 02263 00833 01302BDE 61061 112896 94271 151976

15062

PRI 08882 08973 08673 08705VoI 15428 14820 21843 26231GCE 02303 01959 01386 01075BDE 87102 212930 122526 139926

385028

PRI 09152 91108 09031 09036VoI 15868 36813 18808 22501GCE 01436 02236 00888 01072BDE 91108 91016 86668 86987

Table 2The average values of PRI VoI GCE and BDE for BSD300

SAS JSEG CTM ProposedPRI 08246 07754 07263 08377VOI 17724 23217 21010 1773GCE 01985 02017 02033 01964BDE 126351 144072 119245 108311

34 Boundary Precision Comparison on BSD500 Estradaand Jepson [29] have proposed a thorough-quantitative-evaluation matching strategy to evaluate the segment accu-racy for boundary extraction The algorithms are evaluatedusing an efficient algorithm for computing precision andrecall with regard to human ground truth boundaries The

8 Mathematical Problems in Engineering

Originalimages

Proposed

SAS

CTM

JSEG

Figure 5 Comparison of four segmentation methods (proposed SAS CTM and JSEG)

2 4 6 8 100Iteration number

1

10

100

1000

Regi

on n

umbe

r

12084130066134052

15062385028

Figure 6 Merging curves of iteration-region numbers

evaluation strategy of precision and recall is reasonableand equitable as measures of segmentation quality because

of not being biased in favor of over- or undersegmen-tation An excellent segmentation algorithm should bringlow segmented error and high boundary recall It definesprecision and recall to be proportional to the total number ofunmatched pixels between two types of segmentation 119878sourceand 119878target where 119878source is the boundaries extracted fromsegmentation results by computer algorithms and 119878target isthe boundaries of human segmentation provided by BSD500[22] A boundary pixel is identified as true position whenthe smallest distance between the extracted and ground truthis less than a threshold (120576 = 5 pixels in this paper) Theprecision recall and 119865-measures are defined as follows

Precision 119878source 119878target = Matched (119878source 119878target)1003816100381610038161003816119878source1003816100381610038161003816

Precision 119878source 119878target = Matched (119878source 119878target)10038161003816100381610038161003816119878target10038161003816100381610038161003816(11)

The precision and recall measures provide a more reliableevaluation strategy based on two reasons (1) it refinedboundaries and avoided duplicate comparison of identicalboundary pixels and (2) it provides dynamic displacementparameters for comprehensive and flexible evaluation ofsegmentation performance In this section we displayed the

Mathematical Problems in Engineering 9

124084130066134052

15062385052

0

100

200

300

400

500

600

700

Regi

on n

umbe

r

002 004 006 008 01 012 014 0160V

Figure 7 Merging curve of region number-119881 of test images in Figure 3

Originalimages

Objectnessmap

Segmentationresults

Figure 8 Some segmentation results and objectness map

boundaries precision and recall curveswith five segmentationalgorithms (MS [15] FH [2] JSEG [23] CTM [24] and SAS[25]) in Figure 10The ROC curves fully characterize the per-formance in a direct comparison of the segmentations qualityprovided by different algorithms Figure 10 shows that ourmethod provides more closely boundaries of segmentationswith the ground truth

4 Conclusion

In this paper we propose a novel hierarchical color imagesegmentation model based on region growing and mergingA proper growing procedure is designed to extract compactand complete superpixels with TV diffusion filtered imageat first Second we integrate color texture and objectnessinformation of superpixels into the merging procedure and

10 Mathematical Problems in Engineering

16 bins 32 bins8 bins Original image

Figure 9 Some segmentation results of varying bins

ProposedSASCTM

JSEGFHMS

01 02 03 04 05 06 07 08 09 10Recall

0

01

02

03

04

05

06

07

08

09

1

Prec

ision

Figure 10 Boundaries of ROC curves on BSD500 when 120576 = 5

develop effective merging rules for ideal segmentation Inaddition using a histogram-based color-texture distancemetric themerging strategy can obtain complete objects withsmooth boundaries and locate the boundaries accuratelyThe subjective and objective experimental results for naturecolor images show a good segmentation performance anddemonstrate a better discriminate ability with objectnessfeature for complex images of the proposed method Inthe future we will focus on the extension of objectness todifferent kinds of segmentation strategies

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projectsof National Natural Foundation China (U1261206) NationalNatural Science Foundation of China (61572173 and61602157) Science and Technology Planning Project of Hen-an Province China (162102210062) the Key Scientific Re-search Funds of Henan Provincial Education Departmentfor Higher School (15A520072) and Doctoral Foundation ofHenan Polytechnic University (B2016-37)

References

[1] J Fan D K Y Yau A K Elmagarmid and W G Aref ldquoAuto-matic image segmentation by integrating color-edge extractionand seeded region growingrdquo IEEE Transactions on ImageProcessing vol 10 no 10 pp 1454ndash1466 2001

[2] P F Felzenszwalb andD PHuttenlocher ldquoEfficient graph-basedimage segmentationrdquo International Journal of Computer Visionvol 59 no 2 pp 167ndash181 2004

[3] M Mignotte ldquoSegmentation by fusion of histogram-based K-means clusters indifferent colour spacesrdquo IEEE Transactions onImage Processing vol 17 no 5 pp 780ndash787 2008

[4] C Li C Xu C Gui and M D Fox ldquoDistance regularized levelset evolution and its application to image segmentationrdquo IEEETransactions on Image Processing vol 19 no 12 pp 3243ndash32542010

[5] S Gould and X He ldquoScene understanding by labeling pixelsrdquoCommunications of the ACM vol 57 no 11 pp 68ndash77 2014

[6] T Athanasiadis P Mylonas Y Avrithis and S Kollias ldquoSeman-tic image segmentation and object labelingrdquo IEEE Transactionson Circuits and Systems for Video Technology vol 17 no 3 pp298ndash311 2007

[7] S Vicente C Rother and V Kolmogorov ldquoObject cosegmen-tationrdquo in Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition (CVPR rsquo11) pp 2217ndash2224 IEEEProvidence RI USA June 2011

[8] R Girshick J Donahue T Darrell and J Malik ldquoRich fea-ture hierarchies for accurate object detection and semanticsegmentationrdquo in Proceedings of the 27th IEEE Conference on

Mathematical Problems in Engineering 11

Computer Vision and Pattern Recognition (CVPR rsquo14) pp 580ndash587 Columbus Ohio USA June 2014

[9] W Tao Y Zhou L Liu K Li K Sun and Z Zhang ldquoSpatialadjacent bag of features with multiple superpixels for objectsegmentation and classificationrdquo Information Sciences vol 281pp 373ndash385 2014

[10] B Alexe T Deselaers and V Ferrari ldquoMeasuring the objectnessof image windowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2189ndash2202 2012

[11] P Jiang H Ling J Yu and J Peng ldquoSalient region detectionby UFO uniqueness focusness and objectnessrdquo in Proceedingsof the 14th IEEE International Conference on Computer Vision(ICCV rsquo13) pp 1976ndash1983 Sydney Australia December 2013

[12] R Achanta A Shaji K Smith A Lucchi P Fua and SSusstrunk ldquoSLIC superpixels compared to state-of-the-artsuperpixel methodsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2274ndash2282 2012

[13] J Shi and J Malik ldquoNormalized cuts and image segmentationrdquoIEEE Transactions on Pattern Analysis andMachine Intelligencevol 22 no 8 pp 888ndash905 2000

[14] L Vincent and P Soille ldquoWatersheds in digital spaces anefficient algorithm based on immersion simulationsrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol13 no 6 pp 583ndash598 1991

[15] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[16] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990

[17] M Rousson T Brox and R Deriche ldquoActive unsupervisedtexture segmentation on a diffusion based feature spacerdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition pp 699ndash704 Madison Wis USA June2003

[18] A Levinshtein A Stere K N Kutulakos D J Fleet S JDickinson and K Siddiqi ldquoTurboPixels fast superpixels usinggeometric flowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 31 no 12 pp 2290ndash2297 2009

[19] J-P Braquelaire and L Brun ldquoComparison and optimizationof methods of color image quantizationrdquo IEEE Transactions onImage Processing vol 6 no 7 pp 1048ndash1052 1997

[20] M-M Cheng G-X Zhang N J Mitra X Huang and S-M Hu ldquoGlobal contrast based salient region detectionrdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition (CVPR rsquo11) pp 409ndash416 June 2011

[21] D Martin C Fowlkes D Tal and J Malik ldquoA databaseof human segmented natural images and its application toevaluating segmentation algorithms and measuring ecologicalstatisticsrdquo in Proceedings of the 8th IEEE International Confer-ence on Computer Vision pp 416ndash423 Vancouver Canada July2001

[22] httpwwweecsberkeleyeduResearchProjectsCSvisiongroupingresourceshtml

[23] Y Deng and B S Manjunath ldquoUnsupervised segmentation ofcolor-texture regions in images and videordquo IEEE Transactionson Pattern Analysis and Machine Intelligence vol 23 no 8 pp800ndash810 2001

[24] A Y Yang J Wright Y Ma and S S Sastry ldquoUnsupervisedsegmentation of natural images via lossy data compressionrdquo

Computer Vision and Image Understanding vol 110 no 2 pp212ndash225 2008

[25] Z Li X-M Wu and S-F Chang ldquoSegmentation using super-pixels a bipartite graph partitioning approachrdquo in Proceedingsof the 2012 IEEE Conference on Computer Vision and PatternRecognition (CVPR rsquo12) pp 789ndash796 Providence RI USA June2012

[26] C Pantofaru and M Hebert ldquoA comparison of image segmen-tation algorithmsrdquo Tech Rep CMU-RI-TR-05-40 CarnegieMellon University 2005

[27] M Meila ldquoComparing clusterings an axiomatic viewrdquo in Pro-ceedings of the 22nd ACM International Conference on MachineLearning (ICML rsquo05) pp 577ndash584 New York NY USA 2005

[28] J Freixenet X Munoz D Raba J Marti and X Cuff ldquoYetanother survey on image segmentationrdquo in Computer VisionmdashECCV 2002 7th European Conference on Computer VisionCopenhagen Denmark May 28ndash31 2002 Proceedings Part IIIvol 2352 of Lecture Notes in Computer Science pp 408ndash422Springer Berlin Germany 2002

[29] F J Estrada and A D Jepson ldquoBenchmarking image segmenta-tion algorithmsrdquo International Journal of Computer Vision vol85 no 2 pp 167ndash181 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 4: Research Article Objectness Supervised Merging Algorithm for …downloads.hindawi.com/journals/mpe/2016/3180357.pdf · 2019-07-30 · Research Article Objectness Supervised Merging

4 Mathematical Problems in Engineering

Figure 2 Some superpixels segmentation results of our method

for computing homogeneous superpixels We locate theinitial seeds at the local minimum in gradient map ofsmoothed image in average grids at first The grids settingmethod is in [12] In the growing process both originalimage and filtered image are integrated into a combinedfeature F[RGBRFGFBF] and the similarity measure-ment between pixels 119883 and 119884 Sim(119883 119884) is calculated byEuclidean distanceThe growing procedure is limited to localregions (12119903) for uniform superpixels and 119903 is the averageradius of grids Let 1199041 1199042 119904119894 denote the initial seeds of thesuperpixels and119860 119894 denote the growing area corresponding to119904119894 The growing algorithm is thus outlined as follows

(1) Initialize the selected seed pixels as areas A_i and assign a label p_i to each area A_i.

(2) Compute the average feature F_N of the neighboring region (1.2r) and the feature F_i of A_i; check the neighboring pixels of A_i, and if Sim(N[i], F_i) < Sim(N[i], F_N), merge the neighboring pixel N[i] into A_i; repeat until no pixels can be labeled.

(3) Connect all scattered regions of fewer than 20 pixels to adjacent similar regions.
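To make the growing procedure concrete, the following Python sketch implements steps (1)-(3) in a simplified form. It assumes the combined feature image F and the grid seeds are already available; the function name, the queue-based scan order, and the use of the 1.2r window mean as an approximation of the neighborhood feature F_N are illustrative choices, not the authors' implementation.

    import numpy as np
    from collections import deque

    def grow_superpixels(feat, seeds, radius, min_size=20):
        """Seeded region growing on a combined feature image.

        feat  : H x W x C array, e.g. stacked [R, G, B, R_F, G_F, B_F] channels.
        seeds : list of (row, col) seed coordinates placed on an average grid.
        radius: average grid radius r; growth is restricted to a 1.2*r window.
        """
        H, W = feat.shape[:2]
        labels = -np.ones((H, W), dtype=int)
        limit = int(round(1.2 * radius))
        queues, sums, counts = [], [], []

        # Step (1): initialize every seed as its own area A_i with label p_i.
        for p_i, (r, c) in enumerate(seeds):
            labels[r, c] = p_i
            queues.append(deque([(r, c)]))
            sums.append(feat[r, c].astype(float))
            counts.append(1)

        # Step (2): grow each area by absorbing neighboring pixels that are
        # closer (Euclidean distance) to the area mean F_i than to the local
        # neighborhood mean F_N (approximated here by the window mean).
        for p_i, (sr, sc) in enumerate(seeds):
            q = queues[p_i]
            r0, r1 = max(0, sr - limit), min(H, sr + limit + 1)
            c0, c1 = max(0, sc - limit), min(W, sc + limit + 1)
            F_N = feat[r0:r1, c0:c1].reshape(-1, feat.shape[2]).mean(axis=0)
            while q:
                r, c = q.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if not (r0 <= nr < r1 and c0 <= nc < c1) or labels[nr, nc] != -1:
                        continue
                    F_i = sums[p_i] / counts[p_i]
                    if np.linalg.norm(feat[nr, nc] - F_i) < np.linalg.norm(feat[nr, nc] - F_N):
                        labels[nr, nc] = p_i
                        sums[p_i] += feat[nr, nc]
                        counts[p_i] += 1
                        q.append((nr, nc))
        # Step (3), connecting scattered regions of fewer than min_size pixels
        # to adjacent similar regions, is omitted for brevity.
        return labels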

Some superpixels of test images are shown in Figure 2. Since precise boundaries are necessary for the subsequent merging to succeed, we adopt the standard measure of boundary recall. The average boundary precision with regard to the number of superpixels on BSD500 is shown in Figure 3; the smallest allowed distance between ground truth and superpixel boundaries is 2 pixels in this experiment. We can see that more superpixels are conducive to more accurate boundaries and that 900 superpixels are enough for the merging segmentation in the next stage.

2.2. Combined Features and Superpixels Merging. A reliable region merging technique is critical for hierarchical segmentation. In this section, the similarity measurement of regions and the corresponding stopping criteria are proposed based on the color-texture features and the objectness of the image. The merging process starts from the primitive superpixels of the image and continues until the termination criterion is reached, at which point the segmentation is finished.

To improve segmentation accuracy and dependability, we define a combined feature CF(His, Obj) to describe the oversegmented regions. This feature integrates both color-texture information and objectness values: His is a histogram containing the color information and local color distribution of pixels, and Obj, as high-level visual information, is the objectness value, that is, the probability that a region belongs to an identifiable object.

2.2.1. Color-Texture Histograms. The number of colors that can be discriminated by the human eye is limited, no more than 60 kinds.

Figure 3: Boundary precision with regard to superpixel density (x-axis: number of segments, 0 to 1000; y-axis: boundary recall, 0 to 1).

Therefore, a quantized image is introduced to compute the color distribution of superpixels for segmentation: an image is quantized to 64 colors by the minimum variance method, which leads to the least distortion and also reduces the computing time [19]. After quantization, the connection property of same-color pixels within a superpixel is computed as the texture feature of the superpixel. We denote by Con the connection relation between a pixel C_p and its neighboring pixel Neigh(j); the eight neighboring pixels Neigh(j), j = 0-7, are coded by 2^j · Con_j as the local color pattern (LCP). If the color difference C_d between Neigh(j) and C_p is greater than the threshold C_dth (the standard deviation of all pixels in the quantized image), then Con_j = 0; otherwise Con_j = 1. After coding all pixels, the LCP values of pixels with the same color in a superpixel are summed up as the color distribution weights of the corresponding color bins. Thus, the composite color-texture histogram feature is computed by multiplying the probability of the quantized colors and the pixel distribution in the superpixel. The bins of the color-texture histograms are defined as follows:

Bin(i) = C_Bin × C_LCP,  i = 1, 2, ..., 64.  (3)
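A minimal Python sketch of the color-texture histogram construction is given below. The quantized color indices and superpixel labels are assumed to be precomputed; the choice of C_dth as the standard deviation of the quantized image and the normalization of the final bins follow the description above, but the coding details are simplifications rather than the authors' exact implementation.

    import numpy as np

    def color_texture_histogram(quant, labels, region_id, n_colors=64):
        """Color-texture histogram of one superpixel, following the LCP idea.

        quant     : H x W array of quantized color indices (0 .. n_colors-1).
        labels    : H x W array of superpixel labels.
        region_id : label of the superpixel to describe.
        """
        H, W = quant.shape
        c_dth = quant.std()                      # threshold on color difference (assumption)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        lcp_sum = np.zeros(n_colors, dtype=float)
        count = np.zeros(n_colors, dtype=float)

        ys, xs = np.nonzero(labels == region_id)
        for y, x in zip(ys, xs):
            lcp = 0.0
            for j, (dy, dx) in enumerate(offsets):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    # Con_j = 1 when the color difference stays within C_dth.
                    con_j = 1 if abs(float(quant[ny, nx]) - float(quant[y, x])) <= c_dth else 0
                    lcp += (2 ** j) * con_j      # local color pattern code
            lcp_sum[quant[y, x]] += lcp          # accumulate LCP weight per color bin
            count[quant[y, x]] += 1.0

        # Bin(i) = C_Bin x C_LCP: color probability times LCP distribution (eq. (3)).
        prob = count / max(count.sum(), 1.0)
        lcp_norm = lcp_sum / max(lcp_sum.sum(), 1.0)
        bins = prob * lcp_norm
        return bins / max(bins.sum(), 1e-12)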

2.2.2. Objectness Based on Sparse Sampling of Regions. As mentioned in [10], objectness is an attribute of pixels computed with multiple cues over a set of sampled sliding windows, and the resulting objectness map is biased toward rectangular intensity responses. Objectness by itself cannot discriminate regions, but it provides a potentially useful value for all pixels inside the rectangle. Jiang et al. [11] proposed a method to assign an objectness value to every pixel based on this definition of objectness.

Inspired by the objectness computing method of Jiang et al. [11], we propose an improved objectness computing method based on sparse sampling of sliding windows for all pixels. We also employ four cues to estimate the objectness of all pixels in an image: multiscale saliency, color contrast, edge density, and superpixels straddling. The multiscale saliency (MS) cue in the original paper [10] is calculated from the residual spectrum of the

image. This method highlights only a few regions of an image and is therefore not suitable for segmentation. To expand the global significance of objectness, we use the histogram-based contrast (HC) [20] saliency map as the MS cue. The HC method defines saliency values for image pixels using the color statistics of the input image: a pixel's saliency is defined by its color contrast to all other pixels in the image, and the computation is sped up with sparse histograms. The saliency [20] value of pixel I_(x,y) in image I is defined as

S(I_(x,y)) = Σ_{j=1}^{n} f_j · D(c_(x,y), c_j),  (4)

where c_(x,y) is the color value of pixel I_(x,y), n is the number of pixel colors after histogram quantization, and f_j is the frequency with which color c_j appears in image I. To reduce the noise caused by the randomness of histogram quantization, the saliency values are smoothed by the average saliency of the nearest n/4 neighboring colors.
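The following is a minimal sketch of histogram-based contrast saliency in the spirit of [20], assuming a quantized color image and a table of representative colors; the nearest-neighbor smoothing over n/4 colors follows the description above, while the weighting scheme is an illustrative choice rather than the reference implementation.

    import numpy as np

    def hc_saliency(quant, color_table):
        """Histogram-based contrast saliency, a simplified sketch of equation (4).

        quant       : H x W array of quantized color indices (0 .. n-1).
        color_table : n x 3 array giving the representative color of each index.
        """
        n = color_table.shape[0]
        freq = np.bincount(quant.ravel(), minlength=n).astype(float)
        freq /= freq.sum()

        # Pairwise color distances D(c_i, c_j) between quantized colors.
        diff = color_table[:, None, :].astype(float) - color_table[None, :, :].astype(float)
        dist = np.sqrt((diff ** 2).sum(axis=2))

        # Per-color saliency: S(c_i) = sum_j f_j * D(c_i, c_j).
        sal = dist @ freq

        # Smooth each color's saliency with its n/4 nearest colors in color space.
        k = max(1, n // 4)
        smooth = np.empty_like(sal)
        for i in range(n):
            nearest = np.argsort(dist[i])[:k]
            w = dist[i, nearest].max() - dist[i, nearest]   # closer colors weigh more
            smooth[i] = np.average(sal[nearest], weights=w + 1e-12)

        # Map the per-color saliency back to pixels and normalize to [0, 1].
        pix = smooth[quant]
        return (pix - pix.min()) / (pix.max() - pix.min() + 1e-12)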

The color contrast (CC) cue measures the dissimilarity of a window to its immediate surrounding area, the edge density (ED) cue measures the density of edges near the window borders, and the superpixels straddling (SS) cue estimates whether a window covers an object as a whole. These three cues are described in detail in [10]. With the above four cues, the objectness score of a test window w is defined by the naive Bayes posterior probability from [10]:

P_w(obj | C) = p(C | obj) · p(obj) / p(C),  (5)

where C is the set of combined cues and p(obj) is the prior estimate of objectness. The specific definitions of p(C | obj), p(obj), and p(C) are given in [10].

In the objectness computing methods of [10, 11], the sampling windows are selected randomly over scales for the calculation of objectness, and 100,000 windows are sampled for training; the remaining cues in C are then computed on these windows.

In this paper, we instead try to sample fewer windows that are most likely to contain objects. Presegmentation results can provide regions for locating object windows with higher probability. We calculate the circumscribed rectangular windows of all presegmentation regions produced by Mean-Shift (hs = 10, hr = 10, and minarea = 100) and take the vertices of these circumscribed rectangles as candidate vertices of the sampling windows. Next, we collect all the upper-left vertices and all the lower-right vertices into two sets, left-up (LU) and right-down (RD), respectively.

To find the sampling windows most likely to contain real objects or homogeneous semantic regions, we pair an arbitrary vertex V_LU from LU with every vertex in RD that lies to the lower right of V_LU. These coupled vertices provide more reasonable coordinates of sampling windows that contain objects; the resulting sample windows are shown in Figure 4. We calculate the objectness values of all sampling windows by (5)


Figure 4: Objectness computing process (original image; Mean-Shift + sample windows; objectness of windows; objectness).

and sum them over pixel locations as the pixel-wise objectness:

O_p(x, y) = Σ_{w ∈ W, p(x,y) ∈ w} P(w(x, y)),  (6)

where W is the set of sample windows, w is a single window in W, and P(w(x, y)) is the objectness score of a sample window containing pixel (x, y), computed by (5). Because pixel-wise objectness cannot by itself represent the similarity of regions, we compute the average objectness value of each Mean-Shift segment as the new objectness value of all its pixels, and the objectness of a superpixel is the average value over all pixels in it:

Obj_p(x, y) = AVG_{p ∈ R_i} O_p(x, y).  (7)
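The sparse window sampling and the pixel-wise objectness accumulation of equations (6) and (7) can be sketched as follows. The window scores are assumed to come from an implementation of equation (5), which is not shown here; the box format and helper names are illustrative.

    import numpy as np
    from itertools import product

    def sample_windows(region_boxes):
        """Pair upper-left and lower-right corners of presegmentation boxes
        to obtain candidate sampling windows (a sketch of the LU/RD pairing)."""
        lu = [(x1, y1) for (x1, y1, x2, y2) in region_boxes]
        rd = [(x2, y2) for (x1, y1, x2, y2) in region_boxes]
        windows = []
        for (lx, ly), (rx, ry) in product(lu, rd):
            if rx > lx and ry > ly:            # RD corner must lie to the lower right
                windows.append((lx, ly, rx, ry))
        return windows

    def pixelwise_objectness(shape, windows, window_scores, region_labels):
        """Accumulate window scores per pixel (eq. (6)) and average them per
        Mean-Shift region (eq. (7)).

        shape         : (H, W) of the image.
        windows       : list of (x1, y1, x2, y2) boxes from sample_windows().
        window_scores : objectness score of each window, e.g. from equation (5);
                        how these scores are obtained is assumed, not shown.
        region_labels : H x W Mean-Shift region labels.
        """
        H, W = shape
        o_p = np.zeros((H, W), dtype=float)
        for (x1, y1, x2, y2), s in zip(windows, window_scores):
            o_p[y1:y2, x1:x2] += s             # every covered pixel accumulates the score

        obj = np.zeros_like(o_p)
        for r in np.unique(region_labels):
            mask = region_labels == r
            obj[mask] = o_p[mask].mean()       # region-averaged objectness
        return obj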

2.2.3. Merging Rules. The region merging method and the stopping criteria are crucial for the segmentation result. We establish an adjacency table and a similarity matrix for all superpixels to carry out the merging procedure, and the distance between two regions X and Y is calculated by the following formula:

Dis(X, Y) = τ · χ(His(X), His(Y)) + (1 − τ) · |Obj(X) − Obj(Y)|.  (8)

A fixed weight assignment is not well adapted to recovering the real object regions through the merging process, so we define an adaptive parameter τ from the two types of features to regulate the similarity computation. The rationale is twofold: (1) a fixed weight cannot adapt to different region pairs during merging, and (2) when a feature changes more strongly, its weight should be set smaller in the similarity computation; otherwise the process tends to overmerge. The balance parameter is therefore defined as

τ = [Δ(Obj) / R(Obj)] × [Δ(His) / (2μ + σ(His(Superpixels)))],  (9)

where Δ is the difference of the given feature between the two regions, R is the range of Obj, μ is the mean value of the His features of the superpixels, and σ is the standard deviation of (His, Obj) over the currently merged superpixels. We then use a threshold V to control the merging process: if two adjacent regions A and B are similar enough, that is, Dis(A, B) < V, they are merged, the adjacency relations and labels are updated, and the next region pair is checked, until no pair of regions satisfies the merging criterion. V is the combined standard deviation of the combined features (His, Obj) of the currently merged superpixels, defined as

V = τ · σ(His(Superpixels)) + (1 − τ) · σ(Obj(Superpixels)).  (10)

The merging strategy is summarized in Algorithm 1. Each time two regions are merged, the combined feature CF(His, Obj) of the new region is updated as follows. First, the connection relations of all pixels are kept unchanged in the new region; the statistics for each bin of quantized colors are then recomputed and the histogram feature of the new region is updated using formula (3). Second, the Obj feature of the new region is obtained by averaging the objectness values of all pixels in the new region.
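A simplified sketch of the merging loop driven by equations (8)-(10) is given below. The chi-square histogram distance, the global summary statistics, and the averaging used to update the merged feature are assumptions made for illustration; Algorithm 1 gives the authors' own outline.

    import numpy as np

    def chi_dist(h1, h2, eps=1e-12):
        # Chi-square-style distance between two normalized histograms; using it
        # for the chi term in equation (8) is an assumption of this sketch.
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def merge_superpixels(hists, objs, adjacency):
        """Greedy merging loop driven by equations (8)-(10).

        hists     : dict region_id -> normalized color-texture histogram His.
        objs      : dict region_id -> objectness value Obj.
        adjacency : dict region_id -> set of neighboring region ids.
        Returns a mapping from every original region id to its merged id.
        """
        parent = {r: r for r in hists}

        def find(r):
            while parent[r] != r:
                parent[r] = parent[parent[r]]
                r = parent[r]
            return r

        changed = True
        while changed:
            changed = False
            his_stack = np.stack([hists[r] for r in hists])
            obj_all = np.array([objs[r] for r in hists])
            mu_his, sd_his, sd_obj = his_stack.mean(), his_stack.std(), obj_all.std()
            r_obj = obj_all.max() - obj_all.min() + 1e-12
            for a in list(hists):
                if a not in hists:
                    continue
                for b0 in list(adjacency.get(a, ())):
                    b = find(b0)
                    if b == a or b not in hists:
                        continue
                    d_his = chi_dist(hists[a], hists[b])
                    d_obj = abs(objs[a] - objs[b])
                    # Equation (9): adaptive balance between histogram and objectness.
                    tau = (d_obj / r_obj) * (d_his / (2 * mu_his + sd_his + 1e-12))
                    tau = float(np.clip(tau, 0.0, 1.0))   # clipping is an added safeguard
                    # Equation (10): stop threshold from the mixed standard deviations.
                    v = tau * sd_his + (1 - tau) * sd_obj
                    # Equation (8): merge adjacent regions whose distance falls below V.
                    if tau * d_his + (1 - tau) * d_obj < v:
                        hists[a] = 0.5 * (hists[a] + hists[b])   # simplified CF update
                        objs[a] = 0.5 * (objs[a] + objs[b])
                        adjacency[a] = (adjacency.get(a, set()) | adjacency.get(b, set())) - {a, b}
                        parent[b] = a
                        hists.pop(b); objs.pop(b); adjacency.pop(b, None)
                        changed = True
        return {r: find(r) for r in parent}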

3. Experimental Results

3.1. Dataset. The proposed segmentation model has been tested on the Berkeley Segmentation Datasets [21, 22] and compared with human ground truth segmentations. The BSD images were selected from the Corel Image Database and have a size of 481 × 321. The Berkeley Dataset and Benchmark is widely used by the computer vision community for two reasons: (1) it provides a large number of color images with complex scenes, and (2) for each image multiple ground truth segmentations are provided for the numerical evaluation of segmentation algorithms.

3.2. Visual Analysis of Segmentation Results. For a subjective visual comparison, we selected segmentation results of five test images and compared them with three classic algorithms (JSEG [23], CTM [24], and SAS [25]). The superpixel number of our method is set to K = 900; the JSEG [23] algorithm needs three predefined parameters (quantized colors = 10, number of scales = 5, and merging parameter = 0.78); CTM [24] is run with λ = 0.1; the SAS [25] method is implemented with default parameters. From the results shown in Figure 5, it is clear that the proposed method performs better at detecting complete objects in the images (such as the flowers, zebra, cheetah, person, and house in the test images). For all five test images, our method produces clear and complete contours of the primary targets. The SAS method is more accurate in homogeneous region detection than the other methods, but it


Input: superpixels G(L); adjacency table; similarity matrix Sim(G) of G(L).

Output: the merged result.

(1) For each node N in G, let N(i) be a neighboring region of N.
(2)   If Dis(N, N(i)) < V then
(3)     merge N(i) into N; L = L − 1;
(4)     change the region label of N(i) to N;
(5)     modify the histogram features of the two merged regions;
(6)     update G(L), Sim(G), and V;
(7)   end if.
(8) If no pair of regions satisfies Dis(N, N(i)) < V, end the merging process; otherwise return to (1).
(9) Merge meaningless and small areas into adjacent regions.
(10) Return the relabeled image.

Algorithm 1

performs less oversegmentation except in complex areas. CTM performs better in texture regions but loses the completeness of the objects. JSEG is also a texture-oriented algorithm and shares the same weakness as CTM. Overall, the three other algorithms produce more over-segments and meaningless regions in the test results in Figure 5. The region number versus the number of merging iterations for some test images is shown in Figure 6, and the region number versus the parameter V during merging is shown in Figure 7.

In Figure 8, we display fourteen segmentation results on test images. These results cover different types of images, such as animals, buildings, persons, vehicles, and planes, with various texture features. The segmentation results show that the proposed growing-merging framework can segment the images into compact regions and restrain both over- and undersegmentation.

3.3. Quantitative Evaluation on BSD300. The comparison is based on four quantitative performance measures: the probabilistic rand index (PRI) [26], variation of information (VoI) [27], global consistency error (GCE) [21], and boundary displacement error (BDE) [28]. PRI compares the segmented result against a set of ground truths by evaluating the relationships between pairs of pixels as a function of the variability in the ground truth set. VoI defines the distance between two segmentations as the average conditional entropy of one segmentation given the other. GCE is a region-based evaluation method that computes the consistency of segmented regions. BDE measures the average displacement error of boundary pixels between two boundary images. Many researchers have used these indexes to compare the performance of segmentation methods. The values for five example images on the four algorithms are shown in Table 1, and the average performance on BSD300 [21] is shown in Table 2. Under this comparison strategy, our algorithm provides boundaries closer to the manual expert segmentations. The number of bins of the combined histograms is a critical parameter for segmentation, so we show test examples with varying bin numbers in Figure 9 to verify that 64 bins is more appropriate for accurate segmentation.
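As one example of these measures, the following is a Monte Carlo sketch of the probabilistic rand index; sampling pixel pairs rather than enumerating all of them, and the function name, are illustrative simplifications rather than the benchmark's implementation.

    import numpy as np

    def probabilistic_rand_index(seg, ground_truths, n_pairs=100000, seed=0):
        """Monte Carlo estimate of PRI between a segmentation (H x W label map)
        and a list of ground truth label maps of the same shape."""
        rng = np.random.default_rng(seed)
        H, W = seg.shape
        idx = rng.integers(0, H * W, size=(n_pairs, 2))
        i, j = idx[:, 0], idx[:, 1]
        keep = i != j                       # discard degenerate pairs
        i, j = i[keep], j[keep]
        s = seg.ravel()
        c_ij = (s[i] == s[j]).astype(float)                         # same label in the test segmentation
        p_ij = np.mean([(g.ravel()[i] == g.ravel()[j]).astype(float)
                        for g in ground_truths], axis=0)            # agreement probability in the ground truths
        return float(np.mean(c_ij * p_ij + (1.0 - c_ij) * (1.0 - p_ij)))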

Table 1: The PRI, VoI, GCE, and BDE values of five test images on four algorithms.

Image    Index  Proposed  SAS       CTM       JSEG
12084    PRI    0.7848    0.76155   0.7051    0.7026
         VoI    2.4184    2.1261    3.6425    4.3114
         GCE    0.2256    0.1753    0.1658    0.1209
         BDE    7.7028    6.8194    7.4629    9.2061
130066   PRI    0.8209    0.7286    0.7710    0.7457
         VoI    1.4075    2.2986    2.6194    3.6813
         GCE    0.1416    0.3001    0.1117    0.1226
         BDE    9.4618    18.6371   10.2823   13.3104
134052   PRI    0.7630    0.6454    0.6303    0.5833
         VoI    1.2913    2.0665    2.6110    3.9948
         GCE    0.1020    0.2263    0.0833    0.1302
         BDE    6.1061    11.2896   9.4271    15.1976
15062    PRI    0.8882    0.8973    0.8673    0.8705
         VoI    1.5428    1.4820    2.1843    2.6231
         GCE    0.2303    0.1959    0.1386    0.1075
         BDE    8.7102    21.2930   12.2526   13.9926
385028   PRI    0.9152    0.91108   0.9031    0.9036
         VoI    1.5868    3.6813    1.8808    2.2501
         GCE    0.1436    0.2236    0.0888    0.1072
         BDE    9.1108    9.1016    8.6668    8.6987

Table 2: The average values of PRI, VoI, GCE, and BDE on BSD300.

Index    SAS       JSEG      CTM       Proposed
PRI      0.8246    0.7754    0.7263    0.8377
VoI      1.7724    2.3217    2.1010    1.7730
GCE      0.1985    0.2017    0.2033    0.1964
BDE      12.6351   14.4072   11.9245   10.8311

3.4. Boundary Precision Comparison on BSD500. Estrada and Jepson [29] proposed a thorough quantitative matching strategy to evaluate segmentation accuracy in terms of boundary extraction. Algorithms are evaluated using an efficient procedure for computing precision and recall with respect to human ground truth boundaries. The


Figure 5: Comparison of four segmentation methods (rows: original images; proposed; SAS; CTM; JSEG).

Figure 6: Merging curves of region number versus iteration number for test images 12084, 130066, 134052, 15062, and 385028 (x-axis: iteration number, 0 to 10; y-axis: region number, 1 to 1000, log scale).

evaluation strategy of precision and recall is reasonable and equitable as a measure of segmentation quality because it is not biased in favor of over- or undersegmentation. An excellent segmentation algorithm should give low segmentation error and high boundary recall. The strategy defines precision and recall in terms of the number of matched boundary pixels between two segmentations S_source and S_target, where S_source contains the boundaries extracted from the segmentation results produced by an algorithm and S_target contains the boundaries of the human segmentations provided by BSD500 [22]. A boundary pixel is identified as a true positive when the smallest distance between the extracted boundary and the ground truth is less than a threshold (ε = 5 pixels in this paper). Precision and recall are defined as follows:

Precision(S_source, S_target) = Matched(S_source, S_target) / |S_source|,

Recall(S_source, S_target) = Matched(S_source, S_target) / |S_target|.  (11)

This precision and recall strategy provides a more reliable evaluation for two reasons: (1) it refines the boundaries and avoids duplicate comparisons of identical boundary pixels, and (2) it provides adjustable displacement tolerances for a comprehensive and flexible evaluation of segmentation performance.
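As an illustration of the boundary matching behind equation (11), the following sketch computes tolerance-based precision and recall with a distance transform; this shortcut is an assumption and not the benchmark code of [29].

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def boundary_precision_recall(source, target, eps=5):
        """Boundary precision and recall with a distance tolerance (cf. eq. (11)).

        source : H x W boolean map of boundaries extracted by an algorithm.
        target : H x W boolean map of human ground truth boundaries.
        eps    : matching tolerance in pixels (5 in the paper's experiments).
        A boundary pixel counts as matched if the nearest pixel of the other
        map lies within eps; this simplifies the matching procedure of [29].
        """
        dist_to_target = distance_transform_edt(~target)   # distance to nearest GT boundary pixel
        dist_to_source = distance_transform_edt(~source)   # distance to nearest extracted boundary pixel

        matched_src = np.logical_and(source, dist_to_target <= eps).sum()
        matched_tgt = np.logical_and(target, dist_to_source <= eps).sum()

        precision = matched_src / max(source.sum(), 1)
        recall = matched_tgt / max(target.sum(), 1)
        return precision, recall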


Figure 7: Merging curves of region number versus the threshold V for the test images in Figure 6 (x-axis: V, 0 to 0.16; y-axis: region number, 0 to 700).

Figure 8: Some segmentation results and objectness maps (rows: original images; objectness maps; segmentation results).

In Figure 10, we display the boundary precision-recall curves of the proposed method together with five segmentation algorithms (MS [15], FH [2], JSEG [23], CTM [24], and SAS [25]). These curves fully characterize the performance in a direct comparison of the segmentation quality provided by the different algorithms. Figure 10 shows that our method produces segmentation boundaries that are closer to the ground truth.

4. Conclusion

In this paper, we propose a novel hierarchical color image segmentation model based on region growing and merging. First, a growing procedure is designed to extract compact and complete superpixels from the TV-diffusion-filtered image. Second, we integrate the color texture and objectness information of superpixels into the merging procedure and


Figure 9: Some segmentation results with varying numbers of bins (original image; 8 bins; 16 bins; 32 bins).

Figure 10: Boundary ROC curves on BSD500 when ε = 5 (curves: Proposed, SAS, CTM, JSEG, FH, MS; x-axis: recall, 0 to 1; y-axis: precision, 0 to 1).

develop effective merging rules for ideal segmentation. In addition, using the histogram-based color-texture distance metric, the merging strategy can obtain complete objects with smooth boundaries and locate those boundaries accurately. The subjective and objective experimental results on natural color images show good segmentation performance and demonstrate the better discriminative ability that the objectness feature gives the proposed method on complex images. In the future, we will focus on extending objectness to different kinds of segmentation strategies.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projects of the National Natural Science Foundation of China (U1261206), the National Natural Science Foundation of China (61572173 and 61602157), the Science and Technology Planning Project of Henan Province, China (162102210062), the Key Scientific Research Funds of the Henan Provincial Education Department for Higher Schools (15A520072), and the Doctoral Foundation of Henan Polytechnic University (B2016-37).

References

[1] J. Fan, D. K. Y. Yau, A. K. Elmagarmid, and W. G. Aref, "Automatic image segmentation by integrating color-edge extraction and seeded region growing," IEEE Transactions on Image Processing, vol. 10, no. 10, pp. 1454-1466, 2001.

[2] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167-181, 2004.

[3] M. Mignotte, "Segmentation by fusion of histogram-based K-means clusters in different colour spaces," IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 780-787, 2008.

[4] C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243-3254, 2010.

[5] S. Gould and X. He, "Scene understanding by labeling pixels," Communications of the ACM, vol. 57, no. 11, pp. 68-77, 2014.

[6] T. Athanasiadis, P. Mylonas, Y. Avrithis, and S. Kollias, "Semantic image segmentation and object labeling," IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 3, pp. 298-311, 2007.

[7] S. Vicente, C. Rother, and V. Kolmogorov, "Object cosegmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 2217-2224, IEEE, Providence, RI, USA, June 2011.

[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 580-587, Columbus, Ohio, USA, June 2014.

[9] W. Tao, Y. Zhou, L. Liu, K. Li, K. Sun, and Z. Zhang, "Spatial adjacent bag of features with multiple superpixels for object segmentation and classification," Information Sciences, vol. 281, pp. 373-385, 2014.

[10] B. Alexe, T. Deselaers, and V. Ferrari, "Measuring the objectness of image windows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2189-2202, 2012.

[11] P. Jiang, H. Ling, J. Yu, and J. Peng, "Salient region detection by UFO: uniqueness, focusness and objectness," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1976-1983, Sydney, Australia, December 2013.

[12] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274-2282, 2012.

[13] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.

[14] L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583-598, 1991.

[15] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.

[16] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990.

[17] M. Rousson, T. Brox, and R. Deriche, "Active unsupervised texture segmentation on a diffusion based feature space," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 699-704, Madison, Wis, USA, June 2003.

[18] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi, "TurboPixels: fast superpixels using geometric flows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2290-2297, 2009.

[19] J.-P. Braquelaire and L. Brun, "Comparison and optimization of methods of color image quantization," IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 1048-1052, 1997.

[20] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, "Global contrast based salient region detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 409-416, June 2011.

[21] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proceedings of the 8th IEEE International Conference on Computer Vision, pp. 416-423, Vancouver, Canada, July 2001.

[22] http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html

[23] Y. Deng and B. S. Manjunath, "Unsupervised segmentation of color-texture regions in images and video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800-810, 2001.

[24] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, "Unsupervised segmentation of natural images via lossy data compression," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212-225, 2008.

[25] Z. Li, X.-M. Wu, and S.-F. Chang, "Segmentation using superpixels: a bipartite graph partitioning approach," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 789-796, Providence, RI, USA, June 2012.

[26] C. Pantofaru and M. Hebert, "A comparison of image segmentation algorithms," Tech. Rep. CMU-RI-TR-05-40, Carnegie Mellon University, 2005.

[27] M. Meila, "Comparing clusterings: an axiomatic view," in Proceedings of the 22nd ACM International Conference on Machine Learning (ICML '05), pp. 577-584, New York, NY, USA, 2005.

[28] J. Freixenet, X. Munoz, D. Raba, J. Marti, and X. Cuff, "Yet another survey on image segmentation," in Computer Vision—ECCV 2002: 7th European Conference on Computer Vision, Copenhagen, Denmark, May 28-31, 2002, Proceedings, Part III, vol. 2352 of Lecture Notes in Computer Science, pp. 408-422, Springer, Berlin, Germany, 2002.

[29] F. J. Estrada and A. D. Jepson, "Benchmarking image segmentation algorithms," International Journal of Computer Vision, vol. 85, no. 2, pp. 167-181, 2009.


Page 5: Research Article Objectness Supervised Merging Algorithm for …downloads.hindawi.com/journals/mpe/2016/3180357.pdf · 2019-07-30 · Research Article Objectness Supervised Merging

Mathematical Problems in Engineering 5

200 400 600 800 10000Number of segments

0

01

02

03

04

05

06

07

08

09

1

Boun

dary

reca

ll

Figure 3 Boundary precision with regard to superpixel density

kindsTherefore a quantized image is introduced to computethe color distribution of superpixels for segmentation and animage is quantized to 64 colors by the minimum variancemethod that leads to least distortion as well as reducesthe computing time [19] After quantization the connectionproperty of pixels in superpixels of same colors is computedas the texture features of superpixels We assume Con asthe connection relation between pixel 119862119901 and its neighborpixel Neigh(119895) and eight-neighbor pixels Neigh(119895 = 0ndash7) arecoded by (20minus7 lowast Con119895) as local color pattern (LCP) If colordifference 119862119889 between Neigh(119895) and 119862119901 is greater than 119862119889th(standard deviation of all pixels in quantized image) thenCon119895 = 0 otherwise Con119895 = 1 After coding all pixels sumup all LCP values of pixels with same color in the superpixelas color distribution weights for corresponding color binsThus the composite color-texture histograms feature (CF)is computed by multiplying the probability of quantitativecolors and pixel distribution in the superpixels The bins ofcolor-texture histograms are defined as follows

Bin (119894) = 119862Bin times 119862LCP 119894 = 1 2 3 64 (3)

222 Objectness Based on Sparse Sampling of Regions Asmentioned in [10] the objectness is an attribute of pixelscomputed with multicues for a set of sample slide windowsand the objectness map is biased towards rectangle intensityresponse It seems that objectness has no ability to discrim-inate regions but it provides a potential and useful value ofall pixels in the rectangle Jiang et al [11] proposed a novelmethod to assign objectness value to every pixel based on thedefinition of objectness

Inspired by the objectness computing method proposedby Jiang et al [11] we propose an improved objectnenss com-puting method based on sparse sampling of slide windowsfor all pixels We also employ four cues for estimation ofobjectness characteristics of all pixels in an image the mul-tiscale saliency color contrast edge density and superpixelsstraddling The multiscale saliency (MS cues) in the originalpaper [10] is calculated based on the residual spectrum of

image This method highlights fewer regions of an imageand it is not suitable for segmentation To expand the globalsignificance of objectness we use histogram-based contrast(HC) [20] salient map as the MS cues The histogram-basedcontrast (HC) method is to define the saliency values forimage pixels using color statistics of the input image In theHC model a pixel is defined by its color contrast to all otherpixels in the image and sped up by sparse histograms Thesaliency [20] value of pixel 119868119909119910 in image 119868 is defined as

119878 (119868(119909119910)) = 119899sum119895=1

119891119895119863(119888(119909119910) 119888119895) (4)

where 119888(119909119910) is the color value of pixel 119868(119909119910) 119899 is the numberof pixel colors after histogram quantization and 119891119895 is thestatistic value of pixel color 119888119895 that appears in image 119868 In orderto reduce the noise caused by the randomness of histogramquantization the saliency results are smoothed by the averagesaliency of the nearest 1198994 neighbor colors

The color contrast (CC) is a measure of the dissimilarityof a window to its immediate nearby surrounding area andthe surroundings edge density (ED) measures the density ofedges near the window borders superpixels straddling (SS) isa cue to estimate whether a window covers an object Thesethree cues are specifically described in [10] With the above-mentioned four cues the objectness score of test window119908 is defined by the Naive Bayes posterior probability in[10]

119875119908 (obj | C) = 119901 (C | obj) 119901 (obj)119901 (C) (5)

where C is the combined cues set and 119901(obj) is thepriors estimation value of the objectness All the specificdefinitions of 119901(Cobj) 119901(obj) and 119901(C) are described in[10]

In the objectness computing methods [10 11] the sam-pling windows are selected randomly by scales for calculationof objectness and 100000 windows are sampled for trainingThen they compute the other cues inC with these windows

In this paper we try to sample less windows that mostlikely contain objects Thus presegment results can provideregions for locating object windows with higher probabilityWe calculate all the circumscribed rectangular windows ofthe presegmentation regions resulting fromMean-Shift (hs =10 hr = 10 and minarea = 100) and put all these vertices ofcircumscribed rectangular windows as the candidates vertexof slidewindowsNext we select all the upper left vertices andlower right corner vertices to compose two sets respectivelyleft-up (LU) and right-down (RD)

In order to find the most likely sampling windowsincluding real objects or homogeneous semantic regions wechoose arbitrary vertex 119881LU from LU to combine with allvertices in RD whose coordinate values are less than thecoordinates of 119881LU These coupled vertices can provide morereasonable coordinates of sampling windows that containobjects All the sample windows can be seen in Figure 4 Wecalculate all the objectness value of sampling windows by (5)

6 Mathematical Problems in Engineering

ObjectnessObjectness of windowsOriginal image Mean-Shift+sample windows

Figure 4 Objectness computing process

and sum them by pixels locations as pixel-wise objectnesswith

119874119901 (119909 119910) = 119908isin119882sum119901(119909119910)isin119908

119875 (119908 (119909 119910)) (6)

where 119882 is the sample windows set 119908 is the single windowin 119882 and 119901 is the score of sample window computed by(5) The pixel-wise objectness cannot represent the similarityof regions so we compute the average objectness values ofMean-Shift segment regions as new objectness value of allpixels and the objectness of superpixel is the average valueof all pixels in it

Obj119901 (119909 119910) = AVG119901isin119877119894

119874119901 (119909 119910) (7)

223 Merging Rules The regions merging method and stopcriteria are crucial for the segmentation results We estab-lished adjacent table and similar matrix for all superpixels tocarry out the merging procedure and the metric of similaritybetween two regions can be calculated by the followingformula

Dis (119883 119884) = 120591 lowast (120594 (His (119883) His (119884))) + (1 minus 120591)lowast 1003816100381610038161003816Obj (119883 minus 119884)1003816100381610038161003816 (8)

The fixed weight assignment is not well adapted to obtainthe real object regions by merging process so we definealterable parameter 120591 using two types of features to regulatethe similarity computing (1) fixed weight assignment is notwell adapted to obtain the real object regions by mergingprocess (2) when the feature change is greater the weightshould be set smaller in the similarity computing otherwiseit will lead to overmerging Therefore we define a balanceparameter using two types of features to regulate the simi-larity computing

120591 = Δ (Obj)119877 (Obj) times Δ (His)2120583 + 120590 (His (Superpixels)) (9)

where Δ is the difference of given variables 119877 is the rangeof Obj 120583 is the mean value of His features of superpixelsand 120590 is the standard deviation of HisObj of current mergedsuperpixels Here we set the threshold based on 119881 to controlthe merging process If the similarity between A and B isgreater than 119881 the adjacent merges two regions A and Bupdates adjacent relations and labels and checks next regions

until no pair of regions satisfies the merging criterion 119881 isthe combined standard deviation of all combined features(HisObj) of current merged superpixels and it is defined in

119881 = 120591 lowast 120590 (His (Superpixels)) + (1 minus 120591)lowast 120590 (Obj (Superpixels)) (10)

The description of the merging strategy is displayed inAlgorithm 1 Each time after merging the combined featureCF(HisObj) of the new region is updated as follows firstlythe connection relations of all pixels are unchanged in thenew region then compute the statistics value for each bin ofquantitative colors and update of the histogram feature usingformula (3) of the new region Second Obj feature of the newregion is obtained by computing average objectness value ofall the pixels in the new region

3 Experimental Results

31 Dataset The proposed segmentation model has beentested on Berkeley Segmentation Datasets [21 22] andcompared with human ground truth segmentation resultsSpecifically the BSD dataset images were selected from theCorel Image Database in size of 481 times 321 The BerkeleyDataset andBenchmark iswidely used by the computer visionfor two reasons (1) it provides a large number of colorimages with complex scenes and (2) for each image thereis multiple ground truth segmentation which are providedfor the numerical evaluation of the testing segmentationalgorithms

32 Visual Analysis of Segmentation Results For a subjectivevisual comparison we selected 5 segmentation results of testimages and compared them with three classic algorithms(JSEG [23] CTM [24] and SAS [25]) The superpixelsnumber of our method for comparing is set to be 119870 = 900the JSEG [23] algorithm needs three predefined parametersquantized colors = 10 number of scales = 5 and mergingparameter = 078 the results for CTM [24] is (120582 = 01) theSAS [25] method is implemented with default parametersFrom the results shown in Figure 4 it is clear that theproposedmethod has performed better in detecting completeobjects from images (such as flowers zebra cheetah personand house in the test images) For all the five test imagesour method has shown clear and complete contours ofthe primary targets The SAS method is more accurate inhomogeneous region detection than other methods but it

Mathematical Problems in Engineering 7

Inputsuperpixels 119866(119871) adjacent table Sim(119866) of 119866(119871)

OutputThe merged result

(1) For each119873 node in 119866119873(119894) is neighbor region of119873(2) if Dis(119873119873(119894)) lt 119881 then(3) merged119873(119894) to119873 119871 = 119871 minus 1(4) change the region label of119873(119894) to119873(5) modified the histogram feature of two merged regions(6) update 119866(119871) Sim(119866) and 119881(7) end if(8) If no regions satisfied Dis(119873119873(119894)) lt 119881 end the merge process otherwise return to (1)(9) Merge meaningless and small areas to adjacent regions(10) Return relabeled image

Algorithm 1

perform less oversegmentation except in complex areas CTMperforms better in texture regions but lost in the completenessof the objects JSEG is also a texture-oriented algorithmwhich has the same weakness as CTM Thus three othersegmentation results have brought more over-segments andmeaningless regions from the test results in Figure 5 Theregion number versus iteration times in the merging processof some test images is shown in Figure 6 and the regionnumber versus parameter 119881 in the merging process of sometest images is shown in Figure 7

In Figure 8 we have displayed fourteen segmentationresults from test images These results include different typeof images such as animals buildings person vehicles andplanes with various texture featuresThe segmentation resultsshow that the proposed growing-merging framework cansegment the images into compact regions and restrain theover- or undersegmentation

33 Quantitative Evaluation on BSD300 The comparisonis based on the four quantitative performance measuresprobabilistic rand index (PRI) [26] variation of information(VoI) [27] global consistency error (GCE) [21] and BoundaryDisplacement Error (BDE) [28]The PRI Index compares thesegmented result against a set of ground truth by evaluatingthe relationships between pairs of pixels as a function ofvariability in the ground truth set VoI defines the distancebetween two types of segmentation as the average conditionalentropy of one type of segmentation given the other andGCE is a region-based evaluation method by computingconsistency of segmented regions BDE measures the aver-age displacement error of boundary pixels between twoboundaries images Many researchers have evaluated theseindexes to compare the performance of various methods inimage segmentation The test values of five example imageson four algorithms are shown in Table 1 and the averageperformance on BSD300 [21] are shown in Table 2 Withthe above comparison strategy our algorithm provided moreapproximate boundaries with manual expert segmentationresults The bins number of the combined histograms iscritical parameter for segmentation sowe have take some testexamples by varying bins in Figure 9 to verify 64 bins is moreproper for accurate segmentation

Table 1 The PRI VoI GCE and BDE value of four test images

Index Proposed SAS CTM JSEG

12084

PRI 07848 076155 07051 07026VoI 24184 21261 36425 43114GCE 02256 01753 01658 01209BDE 77028 68194 74629 92061

130066

PRI 08209 07286 07710 07457VoI 14075 22986 26194 36813GCE 01416 03001 01117 01226BDE 94618 186371 102823 133104

134052

PRI 07630 06454 06303 05833VoI 12913 20665 26110 39948GCE 01020 02263 00833 01302BDE 61061 112896 94271 151976

15062

PRI 08882 08973 08673 08705VoI 15428 14820 21843 26231GCE 02303 01959 01386 01075BDE 87102 212930 122526 139926

385028

PRI 09152 91108 09031 09036VoI 15868 36813 18808 22501GCE 01436 02236 00888 01072BDE 91108 91016 86668 86987

Table 2The average values of PRI VoI GCE and BDE for BSD300

SAS JSEG CTM ProposedPRI 08246 07754 07263 08377VOI 17724 23217 21010 1773GCE 01985 02017 02033 01964BDE 126351 144072 119245 108311

34 Boundary Precision Comparison on BSD500 Estradaand Jepson [29] have proposed a thorough-quantitative-evaluation matching strategy to evaluate the segment accu-racy for boundary extraction The algorithms are evaluatedusing an efficient algorithm for computing precision andrecall with regard to human ground truth boundaries The

8 Mathematical Problems in Engineering

Originalimages

Proposed

SAS

CTM

JSEG

Figure 5 Comparison of four segmentation methods (proposed SAS CTM and JSEG)

2 4 6 8 100Iteration number

1

10

100

1000

Regi

on n

umbe

r

12084130066134052

15062385028

Figure 6 Merging curves of iteration-region numbers

evaluation strategy of precision and recall is reasonableand equitable as measures of segmentation quality because

of not being biased in favor of over- or undersegmen-tation An excellent segmentation algorithm should bringlow segmented error and high boundary recall It definesprecision and recall to be proportional to the total number ofunmatched pixels between two types of segmentation 119878sourceand 119878target where 119878source is the boundaries extracted fromsegmentation results by computer algorithms and 119878target isthe boundaries of human segmentation provided by BSD500[22] A boundary pixel is identified as true position whenthe smallest distance between the extracted and ground truthis less than a threshold (120576 = 5 pixels in this paper) Theprecision recall and 119865-measures are defined as follows

Precision 119878source 119878target = Matched (119878source 119878target)1003816100381610038161003816119878source1003816100381610038161003816

Precision 119878source 119878target = Matched (119878source 119878target)10038161003816100381610038161003816119878target10038161003816100381610038161003816(11)

The precision and recall measures provide a more reliableevaluation strategy based on two reasons (1) it refinedboundaries and avoided duplicate comparison of identicalboundary pixels and (2) it provides dynamic displacementparameters for comprehensive and flexible evaluation ofsegmentation performance In this section we displayed the

Mathematical Problems in Engineering 9

124084130066134052

15062385052

0

100

200

300

400

500

600

700

Regi

on n

umbe

r

002 004 006 008 01 012 014 0160V

Figure 7 Merging curve of region number-119881 of test images in Figure 3

Originalimages

Objectnessmap

Segmentationresults

Figure 8 Some segmentation results and objectness map

boundaries precision and recall curveswith five segmentationalgorithms (MS [15] FH [2] JSEG [23] CTM [24] and SAS[25]) in Figure 10The ROC curves fully characterize the per-formance in a direct comparison of the segmentations qualityprovided by different algorithms Figure 10 shows that ourmethod provides more closely boundaries of segmentationswith the ground truth

4 Conclusion

In this paper we propose a novel hierarchical color imagesegmentation model based on region growing and mergingA proper growing procedure is designed to extract compactand complete superpixels with TV diffusion filtered imageat first Second we integrate color texture and objectnessinformation of superpixels into the merging procedure and

10 Mathematical Problems in Engineering

16 bins 32 bins8 bins Original image

Figure 9 Some segmentation results of varying bins

ProposedSASCTM

JSEGFHMS

01 02 03 04 05 06 07 08 09 10Recall

0

01

02

03

04

05

06

07

08

09

1

Prec

ision

Figure 10 Boundaries of ROC curves on BSD500 when 120576 = 5

develop effective merging rules for ideal segmentation Inaddition using a histogram-based color-texture distancemetric themerging strategy can obtain complete objects withsmooth boundaries and locate the boundaries accuratelyThe subjective and objective experimental results for naturecolor images show a good segmentation performance anddemonstrate a better discriminate ability with objectnessfeature for complex images of the proposed method Inthe future we will focus on the extension of objectness todifferent kinds of segmentation strategies

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projectsof National Natural Foundation China (U1261206) NationalNatural Science Foundation of China (61572173 and61602157) Science and Technology Planning Project of Hen-an Province China (162102210062) the Key Scientific Re-search Funds of Henan Provincial Education Departmentfor Higher School (15A520072) and Doctoral Foundation ofHenan Polytechnic University (B2016-37)

References

[1] J Fan D K Y Yau A K Elmagarmid and W G Aref ldquoAuto-matic image segmentation by integrating color-edge extractionand seeded region growingrdquo IEEE Transactions on ImageProcessing vol 10 no 10 pp 1454ndash1466 2001

[2] P F Felzenszwalb andD PHuttenlocher ldquoEfficient graph-basedimage segmentationrdquo International Journal of Computer Visionvol 59 no 2 pp 167ndash181 2004

[3] M Mignotte ldquoSegmentation by fusion of histogram-based K-means clusters indifferent colour spacesrdquo IEEE Transactions onImage Processing vol 17 no 5 pp 780ndash787 2008

[4] C Li C Xu C Gui and M D Fox ldquoDistance regularized levelset evolution and its application to image segmentationrdquo IEEETransactions on Image Processing vol 19 no 12 pp 3243ndash32542010

[5] S Gould and X He ldquoScene understanding by labeling pixelsrdquoCommunications of the ACM vol 57 no 11 pp 68ndash77 2014

[6] T Athanasiadis P Mylonas Y Avrithis and S Kollias ldquoSeman-tic image segmentation and object labelingrdquo IEEE Transactionson Circuits and Systems for Video Technology vol 17 no 3 pp298ndash311 2007

[7] S Vicente C Rother and V Kolmogorov ldquoObject cosegmen-tationrdquo in Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition (CVPR rsquo11) pp 2217ndash2224 IEEEProvidence RI USA June 2011

[8] R Girshick J Donahue T Darrell and J Malik ldquoRich fea-ture hierarchies for accurate object detection and semanticsegmentationrdquo in Proceedings of the 27th IEEE Conference on

Mathematical Problems in Engineering 11

Computer Vision and Pattern Recognition (CVPR rsquo14) pp 580ndash587 Columbus Ohio USA June 2014

[9] W Tao Y Zhou L Liu K Li K Sun and Z Zhang ldquoSpatialadjacent bag of features with multiple superpixels for objectsegmentation and classificationrdquo Information Sciences vol 281pp 373ndash385 2014

[10] B Alexe T Deselaers and V Ferrari ldquoMeasuring the objectnessof image windowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2189ndash2202 2012

[11] P Jiang H Ling J Yu and J Peng ldquoSalient region detectionby UFO uniqueness focusness and objectnessrdquo in Proceedingsof the 14th IEEE International Conference on Computer Vision(ICCV rsquo13) pp 1976ndash1983 Sydney Australia December 2013

[12] R Achanta A Shaji K Smith A Lucchi P Fua and SSusstrunk ldquoSLIC superpixels compared to state-of-the-artsuperpixel methodsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2274ndash2282 2012

[13] J Shi and J Malik ldquoNormalized cuts and image segmentationrdquoIEEE Transactions on Pattern Analysis andMachine Intelligencevol 22 no 8 pp 888ndash905 2000

[14] L Vincent and P Soille ldquoWatersheds in digital spaces anefficient algorithm based on immersion simulationsrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol13 no 6 pp 583ndash598 1991

[15] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[16] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990

[17] M Rousson T Brox and R Deriche ldquoActive unsupervisedtexture segmentation on a diffusion based feature spacerdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition pp 699ndash704 Madison Wis USA June2003

[18] A Levinshtein A Stere K N Kutulakos D J Fleet S JDickinson and K Siddiqi ldquoTurboPixels fast superpixels usinggeometric flowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 31 no 12 pp 2290ndash2297 2009

[19] J-P Braquelaire and L Brun ldquoComparison and optimizationof methods of color image quantizationrdquo IEEE Transactions onImage Processing vol 6 no 7 pp 1048ndash1052 1997

[20] M-M Cheng G-X Zhang N J Mitra X Huang and S-M Hu ldquoGlobal contrast based salient region detectionrdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition (CVPR rsquo11) pp 409ndash416 June 2011

[21] D Martin C Fowlkes D Tal and J Malik ldquoA databaseof human segmented natural images and its application toevaluating segmentation algorithms and measuring ecologicalstatisticsrdquo in Proceedings of the 8th IEEE International Confer-ence on Computer Vision pp 416ndash423 Vancouver Canada July2001

[22] httpwwweecsberkeleyeduResearchProjectsCSvisiongroupingresourceshtml

[23] Y Deng and B S Manjunath ldquoUnsupervised segmentation ofcolor-texture regions in images and videordquo IEEE Transactionson Pattern Analysis and Machine Intelligence vol 23 no 8 pp800ndash810 2001

[24] A Y Yang J Wright Y Ma and S S Sastry ldquoUnsupervisedsegmentation of natural images via lossy data compressionrdquo

Computer Vision and Image Understanding vol 110 no 2 pp212ndash225 2008

[25] Z Li X-M Wu and S-F Chang ldquoSegmentation using super-pixels a bipartite graph partitioning approachrdquo in Proceedingsof the 2012 IEEE Conference on Computer Vision and PatternRecognition (CVPR rsquo12) pp 789ndash796 Providence RI USA June2012

[26] C Pantofaru and M Hebert ldquoA comparison of image segmen-tation algorithmsrdquo Tech Rep CMU-RI-TR-05-40 CarnegieMellon University 2005

[27] M Meila ldquoComparing clusterings an axiomatic viewrdquo in Pro-ceedings of the 22nd ACM International Conference on MachineLearning (ICML rsquo05) pp 577ndash584 New York NY USA 2005

[28] J Freixenet X Munoz D Raba J Marti and X Cuff ldquoYetanother survey on image segmentationrdquo in Computer VisionmdashECCV 2002 7th European Conference on Computer VisionCopenhagen Denmark May 28ndash31 2002 Proceedings Part IIIvol 2352 of Lecture Notes in Computer Science pp 408ndash422Springer Berlin Germany 2002

[29] F J Estrada and A D Jepson ldquoBenchmarking image segmenta-tion algorithmsrdquo International Journal of Computer Vision vol85 no 2 pp 167ndash181 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 6: Research Article Objectness Supervised Merging Algorithm for …downloads.hindawi.com/journals/mpe/2016/3180357.pdf · 2019-07-30 · Research Article Objectness Supervised Merging

6 Mathematical Problems in Engineering

ObjectnessObjectness of windowsOriginal image Mean-Shift+sample windows

Figure 4 Objectness computing process

and sum them by pixels locations as pixel-wise objectnesswith

119874119901 (119909 119910) = 119908isin119882sum119901(119909119910)isin119908

119875 (119908 (119909 119910)) (6)

where 119882 is the sample windows set 119908 is the single windowin 119882 and 119901 is the score of sample window computed by(5) The pixel-wise objectness cannot represent the similarityof regions so we compute the average objectness values ofMean-Shift segment regions as new objectness value of allpixels and the objectness of superpixel is the average valueof all pixels in it

Obj119901 (119909 119910) = AVG119901isin119877119894

119874119901 (119909 119910) (7)

223 Merging Rules The regions merging method and stopcriteria are crucial for the segmentation results We estab-lished adjacent table and similar matrix for all superpixels tocarry out the merging procedure and the metric of similaritybetween two regions can be calculated by the followingformula

Dis (119883 119884) = 120591 lowast (120594 (His (119883) His (119884))) + (1 minus 120591)lowast 1003816100381610038161003816Obj (119883 minus 119884)1003816100381610038161003816 (8)

The fixed weight assignment is not well adapted to obtainthe real object regions by merging process so we definealterable parameter 120591 using two types of features to regulatethe similarity computing (1) fixed weight assignment is notwell adapted to obtain the real object regions by mergingprocess (2) when the feature change is greater the weightshould be set smaller in the similarity computing otherwiseit will lead to overmerging Therefore we define a balanceparameter using two types of features to regulate the simi-larity computing

120591 = Δ (Obj)119877 (Obj) times Δ (His)2120583 + 120590 (His (Superpixels)) (9)

where $\Delta$ denotes the difference of the given variables, $R$ is the range of Obj, $\mu$ is the mean value of the His features of the superpixels, and $\sigma$ is the standard deviation of His or Obj over the currently merged superpixels. We set a threshold $V$ to control the merging process: if two adjacent regions A and B satisfy the merging criterion $\mathrm{Dis}(A,B)<V$, the two regions are merged, the adjacency relations and labels are updated, and the next pair of regions is checked, until no pair of regions satisfies the criterion. $V$ is the mixed standard deviation of all combined features (His, Obj) of the currently merged superpixels and is defined in (10):

$$V=\tau\cdot\sigma\bigl(\mathrm{His}(\mathrm{Superpixels})\bigr)+(1-\tau)\cdot\sigma\bigl(\mathrm{Obj}(\mathrm{Superpixels})\bigr).\tag{10}$$
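To make (8)-(10) concrete, the following sketch computes the adaptive weight τ, the combined distance Dis, and the threshold V for a pair of superpixels. It assumes normalized color-texture histograms and scalar objectness values per superpixel, and it uses the chi-square distance as the histogram difference Δ(His); these choices and the variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def adaptive_tau(his_x, his_y, obj_x, obj_y, all_his, obj_range):
    """Balance parameter tau of (9): larger feature changes yield smaller weights."""
    d_obj = abs(obj_x - obj_y)          # Delta(Obj)
    d_his = chi_square(his_x, his_y)    # Delta(His), taken here as a chi-square difference
    mu = np.mean(all_his)               # mean of the superpixel histogram features
    sigma_his = np.std(all_his)         # their standard deviation
    return (d_obj / obj_range) * (d_his / (2.0 * mu + sigma_his))

def region_distance(tau, his_x, his_y, obj_x, obj_y):
    """Combined color-texture/objectness distance of (8)."""
    return tau * chi_square(his_x, his_y) + (1.0 - tau) * abs(obj_x - obj_y)

def stop_threshold(tau, all_his, all_obj):
    """Merging threshold V of (10): mixed standard deviation of the combined features."""
    return tau * np.std(all_his) + (1.0 - tau) * np.std(all_obj)
```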

The merging strategy is described in Algorithm 1. After each merge, the combined feature CF(His, Obj) of the new region is updated as follows. First, the connection relations of all pixels in the new region remain unchanged; the statistics of each quantized color bin are then recomputed, and the histogram feature of the new region is updated using formula (3). Second, the Obj feature of the new region is obtained by averaging the objectness values of all pixels in the new region.

3. Experimental Results

3.1. Dataset. The proposed segmentation model has been tested on the Berkeley Segmentation Datasets [21, 22] and compared with human ground-truth segmentations. Specifically, the BSD images were selected from the Corel Image Database and have a size of 481 × 321. The Berkeley Dataset and Benchmark is widely used by the computer vision community for two reasons: (1) it provides a large number of color images with complex scenes, and (2) for each image, multiple ground-truth segmentations are provided for the numerical evaluation of segmentation algorithms.

3.2. Visual Analysis of Segmentation Results. For a subjective visual comparison, we selected 5 segmentation results of test images and compared them with three classic algorithms (JSEG [23], CTM [24], and SAS [25]). The superpixel number of our method is set to K = 900; the JSEG [23] algorithm needs three predefined parameters (quantized colors = 10, number of scales = 5, and merging parameter = 0.78); the CTM [24] result uses λ = 0.1; the SAS [25] method is implemented with default parameters. From the results shown in Figure 5, it is clear that the proposed method performs better in detecting complete objects (such as the flowers, zebra, cheetah, person, and house in the test images). For all five test images, our method shows clear and complete contours of the primary targets. The SAS method is more accurate in homogeneous region detection than the other methods, but it produces oversegmentation in complex areas.


Input: superpixels G(L); adjacency table; similarity matrix Sim(G) of G(L)
Output: the merged result
(1) For each node N in G, let N(i) be a neighbor region of N
(2)   if Dis(N, N(i)) < V then
(3)     merge N(i) into N; L = L − 1
(4)     change the region label of N(i) to N
(5)     modify the histogram feature of the two merged regions
(6)     update G(L), Sim(G), and V
(7)   end if
(8) If no regions satisfy Dis(N, N(i)) < V, end the merging process; otherwise return to (1)
(9) Merge meaningless and small regions into adjacent regions
(10) Return the relabeled image

Algorithm 1
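The loop below is a compact sketch of Algorithm 1: it scans adjacent region pairs, merges any pair whose distance falls below V, updates the features and adjacency relations, and stops when no pair satisfies the criterion. The feature update uses a pixel-count-weighted average as an approximation of the histogram and objectness recomputation described above, and the dictionary-based bookkeeping is a simplification for illustration rather than the authors' implementation; the final cleanup of small regions (steps (9)-(10)) is omitted.

```python
def merge_regions(pixels, histograms, objectness, adjacency, distance_fn, threshold_fn):
    """Greedy region merging following the structure of Algorithm 1.

    pixels     : dict region_id -> set of pixel indices
    histograms : dict region_id -> combined color-texture histogram (numpy array)
    objectness : dict region_id -> average objectness value
    adjacency  : dict region_id -> set of neighboring region ids
    distance_fn(h1, h2, o1, o2) and threshold_fn(histograms, objectness) wrap (8)-(10).
    """
    merged = True
    while merged:                                      # step (8): stop when no pair qualifies
        merged = False
        V = threshold_fn(histograms, objectness)       # recompute the stopping threshold
        for n in list(adjacency):
            if n not in adjacency:                     # region absorbed earlier in this pass
                continue
            for m in list(adjacency[n]):
                if distance_fn(histograms[n], histograms[m],
                               objectness[n], objectness[m]) < V:
                    # Steps (3)-(6): merge m into n, update features and relations.
                    w_n, w_m = len(pixels[n]), len(pixels[m])
                    pixels[n] |= pixels.pop(m)
                    hist_m, obj_m = histograms.pop(m), objectness.pop(m)
                    histograms[n] = (w_n * histograms[n] + w_m * hist_m) / (w_n + w_m)
                    objectness[n] = (w_n * objectness[n] + w_m * obj_m) / (w_n + w_m)
                    for k in adjacency.pop(m):
                        adjacency[k].discard(m)
                        if k != n:
                            adjacency[k].add(n)
                            adjacency[n].add(k)
                    adjacency[n].discard(m)
                    merged = True
                    break
    return pixels
```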

CTM performs better in texture regions but loses the completeness of objects. JSEG is also a texture-oriented algorithm and shares the same weakness as CTM. Thus, the three other algorithms produce more oversegmented and meaningless regions on the test images in Figure 5. The region number versus the number of merging iterations for several test images is shown in Figure 6, and the region number versus the parameter V during the merging process is shown in Figure 7.

In Figure 8, we display fourteen segmentation results of test images. These results include different types of images, such as animals, buildings, persons, vehicles, and planes, with various texture features. The segmentation results show that the proposed growing-merging framework can segment the images into compact regions and restrain both over- and undersegmentation.

3.3. Quantitative Evaluation on BSD300. The comparison is based on four quantitative performance measures: the probabilistic Rand index (PRI) [26], the variation of information (VoI) [27], the global consistency error (GCE) [21], and the Boundary Displacement Error (BDE) [28]. The PRI compares the segmented result against a set of ground truths by evaluating the relationships between pairs of pixels as a function of the variability in the ground-truth set. VoI defines the distance between two segmentations as the average conditional entropy of one segmentation given the other, and GCE is a region-based evaluation that computes the consistency of segmented regions. BDE measures the average displacement error of boundary pixels between two boundary images. Many researchers have used these indexes to compare the performance of various image segmentation methods. The values for five example images and four algorithms are shown in Table 1, and the average performance on BSD300 [21] is shown in Table 2. Under this comparison strategy, our algorithm provides boundaries that are closer to the manual expert segmentations. The number of bins of the combined histograms is a critical parameter for segmentation, so we test some examples with varying bin numbers in Figure 9 to verify that 64 bins is more appropriate for accurate segmentation.
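For reference, the probabilistic Rand index reported in Tables 1 and 2 can be written down directly from its pairwise definition. The sketch below is the naive O(N²) formulation over all pixel pairs, suitable only for small label maps; practical implementations compute the same quantity from label co-occurrence (contingency) tables. It is a generic PRI sketch, not code from the compared toolboxes.

```python
import numpy as np
from itertools import combinations

def probabilistic_rand_index(seg, ground_truths):
    """Naive PRI between a test segmentation and a set of ground-truth label maps.

    seg           : 2-D integer label map produced by an algorithm
    ground_truths : list of 2-D integer label maps (human segmentations)
    """
    s = seg.ravel()
    gts = [g.ravel() for g in ground_truths]
    n = s.size
    total = 0.0
    for i, j in combinations(range(n), 2):
        c_ij = float(s[i] == s[j])                    # same label in the test segmentation?
        p_ij = np.mean([g[i] == g[j] for g in gts])   # agreement probability in the ground truths
        total += c_ij * p_ij + (1.0 - c_ij) * (1.0 - p_ij)
    return total / (n * (n - 1) / 2.0)
```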

Table 1: The PRI, VoI, GCE, and BDE values of five test images.

Image    Index   Proposed   SAS        CTM       JSEG
12084    PRI     0.7848     0.76155    0.7051    0.7026
         VoI     2.4184     2.1261     3.6425    4.3114
         GCE     0.2256     0.1753     0.1658    0.1209
         BDE     7.7028     6.8194     7.4629    9.2061
130066   PRI     0.8209     0.7286     0.7710    0.7457
         VoI     1.4075     2.2986     2.6194    3.6813
         GCE     0.1416     0.3001     0.1117    0.1226
         BDE     9.4618     18.6371    10.2823   13.3104
134052   PRI     0.7630     0.6454     0.6303    0.5833
         VoI     1.2913     2.0665     2.6110    3.9948
         GCE     0.1020     0.2263     0.0833    0.1302
         BDE     6.1061     11.2896    9.4271    15.1976
15062    PRI     0.8882     0.8973     0.8673    0.8705
         VoI     1.5428     1.4820     2.1843    2.6231
         GCE     0.2303     0.1959     0.1386    0.1075
         BDE     8.7102     21.2930    12.2526   13.9926
385028   PRI     0.9152     0.91108    0.9031    0.9036
         VoI     1.5868     3.6813     1.8808    2.2501
         GCE     0.1436     0.2236     0.0888    0.1072
         BDE     9.1108     9.1016     8.6668    8.6987

Table 2: The average values of PRI, VoI, GCE, and BDE on BSD300.

Index   SAS        JSEG       CTM        Proposed
PRI     0.8246     0.7754     0.7263     0.8377
VoI     1.7724     2.3217     2.1010     1.7730
GCE     0.1985     0.2017     0.2033     0.1964
BDE     12.6351    14.4072    11.9245    10.8311

3.4. Boundary Precision Comparison on BSD500. Estrada and Jepson [29] proposed a thorough quantitative matching strategy to evaluate the accuracy of boundary extraction. The algorithms are evaluated using an efficient algorithm for computing precision and recall with respect to human ground-truth boundaries.


Figure 5: Comparison of four segmentation methods (rows: original images, proposed, SAS, CTM, and JSEG).

Figure 6: Merging curves of region number versus iteration number for test images 12084, 130066, 134052, 15062, and 385028.

The precision-recall evaluation strategy is reasonable and equitable as a measure of segmentation quality because it is not biased in favor of over- or undersegmentation. An excellent segmentation algorithm should yield low segmentation error and high boundary recall. Precision and recall are defined to be proportional to the number of matched boundary pixels between two boundary sets, $S_{\mathrm{source}}$ and $S_{\mathrm{target}}$, where $S_{\mathrm{source}}$ denotes the boundaries extracted from the segmentation results of an algorithm and $S_{\mathrm{target}}$ denotes the human-annotated boundaries provided by BSD500 [22]. A boundary pixel is identified as a true positive when the smallest distance between the extracted pixel and the ground truth is less than a threshold ($\varepsilon = 5$ pixels in this paper). The precision and recall measures are defined as follows:

$$\mathrm{Precision}\bigl(S_{\mathrm{source}},S_{\mathrm{target}}\bigr)=\frac{\mathrm{Matched}\bigl(S_{\mathrm{source}},S_{\mathrm{target}}\bigr)}{\bigl|S_{\mathrm{source}}\bigr|},\qquad
\mathrm{Recall}\bigl(S_{\mathrm{source}},S_{\mathrm{target}}\bigr)=\frac{\mathrm{Matched}\bigl(S_{\mathrm{source}},S_{\mathrm{target}}\bigr)}{\bigl|S_{\mathrm{target}}\bigr|}.\tag{11}$$

The precision and recall measures provide a more reliable evaluation strategy for two reasons: (1) the boundaries are refined and duplicate comparisons of identical boundary pixels are avoided; (2) the displacement parameter can be varied for a comprehensive and flexible evaluation of segmentation performance.
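A minimal sketch of the boundary matching behind (11) is given below. It marks a boundary pixel as matched when a boundary pixel of the other map lies within ε pixels and then computes precision, recall, and an F-measure. This simple distance-threshold matching is an assumption for illustration; the exact matching algorithm of [29] enforces additional constraints.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_precision_recall(source_boundary, target_boundary, eps=5.0):
    """Boundary precision/recall in the spirit of (11).

    source_boundary : binary map of boundaries extracted from an algorithm's segmentation
    target_boundary : binary map of human ground-truth boundaries (same shape)
    eps             : matching tolerance in pixels (epsilon = 5 in the paper)
    """
    src = source_boundary.astype(bool)
    tgt = target_boundary.astype(bool)

    # Distance from every pixel to the nearest boundary pixel of the other map.
    dist_to_target = distance_transform_edt(~tgt)
    dist_to_source = distance_transform_edt(~src)

    matched_source = np.sum(src & (dist_to_target <= eps))   # matched pixels of S_source
    matched_target = np.sum(tgt & (dist_to_source <= eps))   # matched pixels of S_target

    precision = matched_source / max(src.sum(), 1)
    recall = matched_target / max(tgt.sum(), 1)
    f_measure = 2.0 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f_measure
```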


Figure 7: Merging curves of region number versus the threshold V for the test images in Figure 3.

Figure 8: Some segmentation results and objectness maps (rows: original images, objectness maps, segmentation results).

In this section, we display the boundary precision-recall curves of the proposed method and five segmentation algorithms (MS [15], FH [2], JSEG [23], CTM [24], and SAS [25]) in Figure 10. These curves fully characterize the performance and allow a direct comparison of the segmentation quality provided by the different algorithms. Figure 10 shows that the boundaries produced by our method are closer to the ground truth.

4. Conclusion

In this paper, we propose a novel hierarchical color image segmentation model based on region growing and merging. First, a dedicated growing procedure is designed to extract compact and complete superpixels from the TV-diffusion-filtered image. Second, we integrate the color, texture, and objectness information of superpixels into the merging procedure.


Figure 9: Some segmentation results with varying numbers of bins (original image; 8, 16, and 32 bins).

Figure 10: Boundary precision-recall curves on BSD500 with ε = 5 (proposed, SAS, CTM, JSEG, FH, and MS).

We further develop effective merging rules toward ideal segmentation. In addition, using a histogram-based color-texture distance metric, the merging strategy obtains complete objects with smooth boundaries and locates the boundaries accurately. The subjective and objective experimental results on natural color images show good segmentation performance and demonstrate the better discriminative ability that the objectness feature gives the proposed method on complex images. In the future, we will focus on extending objectness to other kinds of segmentation strategies.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projects of the National Natural Science Foundation of China (U1261206), the National Natural Science Foundation of China (61572173 and 61602157), the Science and Technology Planning Project of Henan Province, China (162102210062), the Key Scientific Research Funds of the Henan Provincial Education Department for Higher Schools (15A520072), and the Doctoral Foundation of Henan Polytechnic University (B2016-37).

References

[1] J. Fan, D. K. Y. Yau, A. K. Elmagarmid, and W. G. Aref, "Automatic image segmentation by integrating color-edge extraction and seeded region growing," IEEE Transactions on Image Processing, vol. 10, no. 10, pp. 1454-1466, 2001.

[2] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167-181, 2004.

[3] M. Mignotte, "Segmentation by fusion of histogram-based K-means clusters in different colour spaces," IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 780-787, 2008.

[4] C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243-3254, 2010.

[5] S. Gould and X. He, "Scene understanding by labeling pixels," Communications of the ACM, vol. 57, no. 11, pp. 68-77, 2014.

[6] T. Athanasiadis, P. Mylonas, Y. Avrithis, and S. Kollias, "Semantic image segmentation and object labeling," IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 3, pp. 298-311, 2007.

[7] S. Vicente, C. Rother, and V. Kolmogorov, "Object cosegmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 2217-2224, IEEE, Providence, RI, USA, June 2011.

[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 580-587, Columbus, Ohio, USA, June 2014.

[9] W. Tao, Y. Zhou, L. Liu, K. Li, K. Sun, and Z. Zhang, "Spatial adjacent bag of features with multiple superpixels for object segmentation and classification," Information Sciences, vol. 281, pp. 373-385, 2014.

[10] B. Alexe, T. Deselaers, and V. Ferrari, "Measuring the objectness of image windows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2189-2202, 2012.

[11] P. Jiang, H. Ling, J. Yu, and J. Peng, "Salient region detection by UFO: uniqueness, focusness and objectness," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1976-1983, Sydney, Australia, December 2013.

[12] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274-2282, 2012.

[13] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.

[14] L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583-598, 1991.

[15] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.

[16] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990.

[17] M. Rousson, T. Brox, and R. Deriche, "Active unsupervised texture segmentation on a diffusion based feature space," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 699-704, Madison, Wis, USA, June 2003.

[18] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi, "TurboPixels: fast superpixels using geometric flows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2290-2297, 2009.

[19] J.-P. Braquelaire and L. Brun, "Comparison and optimization of methods of color image quantization," IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 1048-1052, 1997.

[20] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, "Global contrast based salient region detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 409-416, June 2011.

[21] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proceedings of the 8th IEEE International Conference on Computer Vision, pp. 416-423, Vancouver, Canada, July 2001.

[22] http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html

[23] Y. Deng and B. S. Manjunath, "Unsupervised segmentation of color-texture regions in images and video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800-810, 2001.

[24] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, "Unsupervised segmentation of natural images via lossy data compression," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212-225, 2008.

[25] Z. Li, X.-M. Wu, and S.-F. Chang, "Segmentation using superpixels: a bipartite graph partitioning approach," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 789-796, Providence, RI, USA, June 2012.

[26] C. Pantofaru and M. Hebert, "A comparison of image segmentation algorithms," Tech. Rep. CMU-RI-TR-05-40, Carnegie Mellon University, 2005.

[27] M. Meila, "Comparing clusterings: an axiomatic view," in Proceedings of the 22nd ACM International Conference on Machine Learning (ICML '05), pp. 577-584, New York, NY, USA, 2005.

[28] J. Freixenet, X. Munoz, D. Raba, J. Marti, and X. Cuff, "Yet another survey on image segmentation," in Computer Vision — ECCV 2002: 7th European Conference on Computer Vision, Copenhagen, Denmark, May 28-31, 2002, Proceedings, Part III, vol. 2352 of Lecture Notes in Computer Science, pp. 408-422, Springer, Berlin, Germany, 2002.

[29] F. J. Estrada and A. D. Jepson, "Benchmarking image segmentation algorithms," International Journal of Computer Vision, vol. 85, no. 2, pp. 167-181, 2009.


Page 7: Research Article Objectness Supervised Merging Algorithm for …downloads.hindawi.com/journals/mpe/2016/3180357.pdf · 2019-07-30 · Research Article Objectness Supervised Merging

Mathematical Problems in Engineering 7

Inputsuperpixels 119866(119871) adjacent table Sim(119866) of 119866(119871)

OutputThe merged result

(1) For each119873 node in 119866119873(119894) is neighbor region of119873(2) if Dis(119873119873(119894)) lt 119881 then(3) merged119873(119894) to119873 119871 = 119871 minus 1(4) change the region label of119873(119894) to119873(5) modified the histogram feature of two merged regions(6) update 119866(119871) Sim(119866) and 119881(7) end if(8) If no regions satisfied Dis(119873119873(119894)) lt 119881 end the merge process otherwise return to (1)(9) Merge meaningless and small areas to adjacent regions(10) Return relabeled image

Algorithm 1

perform less oversegmentation except in complex areas CTMperforms better in texture regions but lost in the completenessof the objects JSEG is also a texture-oriented algorithmwhich has the same weakness as CTM Thus three othersegmentation results have brought more over-segments andmeaningless regions from the test results in Figure 5 Theregion number versus iteration times in the merging processof some test images is shown in Figure 6 and the regionnumber versus parameter 119881 in the merging process of sometest images is shown in Figure 7

In Figure 8 we have displayed fourteen segmentationresults from test images These results include different typeof images such as animals buildings person vehicles andplanes with various texture featuresThe segmentation resultsshow that the proposed growing-merging framework cansegment the images into compact regions and restrain theover- or undersegmentation

33 Quantitative Evaluation on BSD300 The comparisonis based on the four quantitative performance measuresprobabilistic rand index (PRI) [26] variation of information(VoI) [27] global consistency error (GCE) [21] and BoundaryDisplacement Error (BDE) [28]The PRI Index compares thesegmented result against a set of ground truth by evaluatingthe relationships between pairs of pixels as a function ofvariability in the ground truth set VoI defines the distancebetween two types of segmentation as the average conditionalentropy of one type of segmentation given the other andGCE is a region-based evaluation method by computingconsistency of segmented regions BDE measures the aver-age displacement error of boundary pixels between twoboundaries images Many researchers have evaluated theseindexes to compare the performance of various methods inimage segmentation The test values of five example imageson four algorithms are shown in Table 1 and the averageperformance on BSD300 [21] are shown in Table 2 Withthe above comparison strategy our algorithm provided moreapproximate boundaries with manual expert segmentationresults The bins number of the combined histograms iscritical parameter for segmentation sowe have take some testexamples by varying bins in Figure 9 to verify 64 bins is moreproper for accurate segmentation

Table 1 The PRI VoI GCE and BDE value of four test images

Index Proposed SAS CTM JSEG

12084

PRI 07848 076155 07051 07026VoI 24184 21261 36425 43114GCE 02256 01753 01658 01209BDE 77028 68194 74629 92061

130066

PRI 08209 07286 07710 07457VoI 14075 22986 26194 36813GCE 01416 03001 01117 01226BDE 94618 186371 102823 133104

134052

PRI 07630 06454 06303 05833VoI 12913 20665 26110 39948GCE 01020 02263 00833 01302BDE 61061 112896 94271 151976

15062

PRI 08882 08973 08673 08705VoI 15428 14820 21843 26231GCE 02303 01959 01386 01075BDE 87102 212930 122526 139926

385028

PRI 09152 91108 09031 09036VoI 15868 36813 18808 22501GCE 01436 02236 00888 01072BDE 91108 91016 86668 86987

Table 2The average values of PRI VoI GCE and BDE for BSD300

SAS JSEG CTM ProposedPRI 08246 07754 07263 08377VOI 17724 23217 21010 1773GCE 01985 02017 02033 01964BDE 126351 144072 119245 108311

34 Boundary Precision Comparison on BSD500 Estradaand Jepson [29] have proposed a thorough-quantitative-evaluation matching strategy to evaluate the segment accu-racy for boundary extraction The algorithms are evaluatedusing an efficient algorithm for computing precision andrecall with regard to human ground truth boundaries The

8 Mathematical Problems in Engineering

Originalimages

Proposed

SAS

CTM

JSEG

Figure 5 Comparison of four segmentation methods (proposed SAS CTM and JSEG)

2 4 6 8 100Iteration number

1

10

100

1000

Regi

on n

umbe

r

12084130066134052

15062385028

Figure 6 Merging curves of iteration-region numbers

evaluation strategy of precision and recall is reasonableand equitable as measures of segmentation quality because

of not being biased in favor of over- or undersegmen-tation An excellent segmentation algorithm should bringlow segmented error and high boundary recall It definesprecision and recall to be proportional to the total number ofunmatched pixels between two types of segmentation 119878sourceand 119878target where 119878source is the boundaries extracted fromsegmentation results by computer algorithms and 119878target isthe boundaries of human segmentation provided by BSD500[22] A boundary pixel is identified as true position whenthe smallest distance between the extracted and ground truthis less than a threshold (120576 = 5 pixels in this paper) Theprecision recall and 119865-measures are defined as follows

Precision 119878source 119878target = Matched (119878source 119878target)1003816100381610038161003816119878source1003816100381610038161003816

Precision 119878source 119878target = Matched (119878source 119878target)10038161003816100381610038161003816119878target10038161003816100381610038161003816(11)

The precision and recall measures provide a more reliableevaluation strategy based on two reasons (1) it refinedboundaries and avoided duplicate comparison of identicalboundary pixels and (2) it provides dynamic displacementparameters for comprehensive and flexible evaluation ofsegmentation performance In this section we displayed the

Mathematical Problems in Engineering 9

124084130066134052

15062385052

0

100

200

300

400

500

600

700

Regi

on n

umbe

r

002 004 006 008 01 012 014 0160V

Figure 7 Merging curve of region number-119881 of test images in Figure 3

Originalimages

Objectnessmap

Segmentationresults

Figure 8 Some segmentation results and objectness map

boundaries precision and recall curveswith five segmentationalgorithms (MS [15] FH [2] JSEG [23] CTM [24] and SAS[25]) in Figure 10The ROC curves fully characterize the per-formance in a direct comparison of the segmentations qualityprovided by different algorithms Figure 10 shows that ourmethod provides more closely boundaries of segmentationswith the ground truth

4 Conclusion

In this paper we propose a novel hierarchical color imagesegmentation model based on region growing and mergingA proper growing procedure is designed to extract compactand complete superpixels with TV diffusion filtered imageat first Second we integrate color texture and objectnessinformation of superpixels into the merging procedure and

10 Mathematical Problems in Engineering

16 bins 32 bins8 bins Original image

Figure 9 Some segmentation results of varying bins

ProposedSASCTM

JSEGFHMS

01 02 03 04 05 06 07 08 09 10Recall

0

01

02

03

04

05

06

07

08

09

1

Prec

ision

Figure 10 Boundaries of ROC curves on BSD500 when 120576 = 5

develop effective merging rules for ideal segmentation Inaddition using a histogram-based color-texture distancemetric themerging strategy can obtain complete objects withsmooth boundaries and locate the boundaries accuratelyThe subjective and objective experimental results for naturecolor images show a good segmentation performance anddemonstrate a better discriminate ability with objectnessfeature for complex images of the proposed method Inthe future we will focus on the extension of objectness todifferent kinds of segmentation strategies

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projectsof National Natural Foundation China (U1261206) NationalNatural Science Foundation of China (61572173 and61602157) Science and Technology Planning Project of Hen-an Province China (162102210062) the Key Scientific Re-search Funds of Henan Provincial Education Departmentfor Higher School (15A520072) and Doctoral Foundation ofHenan Polytechnic University (B2016-37)

References

[1] J Fan D K Y Yau A K Elmagarmid and W G Aref ldquoAuto-matic image segmentation by integrating color-edge extractionand seeded region growingrdquo IEEE Transactions on ImageProcessing vol 10 no 10 pp 1454ndash1466 2001

[2] P F Felzenszwalb andD PHuttenlocher ldquoEfficient graph-basedimage segmentationrdquo International Journal of Computer Visionvol 59 no 2 pp 167ndash181 2004

[3] M Mignotte ldquoSegmentation by fusion of histogram-based K-means clusters indifferent colour spacesrdquo IEEE Transactions onImage Processing vol 17 no 5 pp 780ndash787 2008

[4] C Li C Xu C Gui and M D Fox ldquoDistance regularized levelset evolution and its application to image segmentationrdquo IEEETransactions on Image Processing vol 19 no 12 pp 3243ndash32542010

[5] S Gould and X He ldquoScene understanding by labeling pixelsrdquoCommunications of the ACM vol 57 no 11 pp 68ndash77 2014

[6] T Athanasiadis P Mylonas Y Avrithis and S Kollias ldquoSeman-tic image segmentation and object labelingrdquo IEEE Transactionson Circuits and Systems for Video Technology vol 17 no 3 pp298ndash311 2007

[7] S Vicente C Rother and V Kolmogorov ldquoObject cosegmen-tationrdquo in Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition (CVPR rsquo11) pp 2217ndash2224 IEEEProvidence RI USA June 2011

[8] R Girshick J Donahue T Darrell and J Malik ldquoRich fea-ture hierarchies for accurate object detection and semanticsegmentationrdquo in Proceedings of the 27th IEEE Conference on

Mathematical Problems in Engineering 11

Computer Vision and Pattern Recognition (CVPR rsquo14) pp 580ndash587 Columbus Ohio USA June 2014

[9] W Tao Y Zhou L Liu K Li K Sun and Z Zhang ldquoSpatialadjacent bag of features with multiple superpixels for objectsegmentation and classificationrdquo Information Sciences vol 281pp 373ndash385 2014

[10] B Alexe T Deselaers and V Ferrari ldquoMeasuring the objectnessof image windowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2189ndash2202 2012

[11] P Jiang H Ling J Yu and J Peng ldquoSalient region detectionby UFO uniqueness focusness and objectnessrdquo in Proceedingsof the 14th IEEE International Conference on Computer Vision(ICCV rsquo13) pp 1976ndash1983 Sydney Australia December 2013

[12] R Achanta A Shaji K Smith A Lucchi P Fua and SSusstrunk ldquoSLIC superpixels compared to state-of-the-artsuperpixel methodsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2274ndash2282 2012

[13] J Shi and J Malik ldquoNormalized cuts and image segmentationrdquoIEEE Transactions on Pattern Analysis andMachine Intelligencevol 22 no 8 pp 888ndash905 2000

[14] L Vincent and P Soille ldquoWatersheds in digital spaces anefficient algorithm based on immersion simulationsrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol13 no 6 pp 583ndash598 1991

[15] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[16] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990

[17] M Rousson T Brox and R Deriche ldquoActive unsupervisedtexture segmentation on a diffusion based feature spacerdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition pp 699ndash704 Madison Wis USA June2003

[18] A Levinshtein A Stere K N Kutulakos D J Fleet S JDickinson and K Siddiqi ldquoTurboPixels fast superpixels usinggeometric flowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 31 no 12 pp 2290ndash2297 2009

[19] J-P Braquelaire and L Brun ldquoComparison and optimizationof methods of color image quantizationrdquo IEEE Transactions onImage Processing vol 6 no 7 pp 1048ndash1052 1997

[20] M-M Cheng G-X Zhang N J Mitra X Huang and S-M Hu ldquoGlobal contrast based salient region detectionrdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition (CVPR rsquo11) pp 409ndash416 June 2011

[21] D Martin C Fowlkes D Tal and J Malik ldquoA databaseof human segmented natural images and its application toevaluating segmentation algorithms and measuring ecologicalstatisticsrdquo in Proceedings of the 8th IEEE International Confer-ence on Computer Vision pp 416ndash423 Vancouver Canada July2001

[22] httpwwweecsberkeleyeduResearchProjectsCSvisiongroupingresourceshtml

[23] Y Deng and B S Manjunath ldquoUnsupervised segmentation ofcolor-texture regions in images and videordquo IEEE Transactionson Pattern Analysis and Machine Intelligence vol 23 no 8 pp800ndash810 2001

[24] A Y Yang J Wright Y Ma and S S Sastry ldquoUnsupervisedsegmentation of natural images via lossy data compressionrdquo

Computer Vision and Image Understanding vol 110 no 2 pp212ndash225 2008

[25] Z Li X-M Wu and S-F Chang ldquoSegmentation using super-pixels a bipartite graph partitioning approachrdquo in Proceedingsof the 2012 IEEE Conference on Computer Vision and PatternRecognition (CVPR rsquo12) pp 789ndash796 Providence RI USA June2012

[26] C Pantofaru and M Hebert ldquoA comparison of image segmen-tation algorithmsrdquo Tech Rep CMU-RI-TR-05-40 CarnegieMellon University 2005

[27] M Meila ldquoComparing clusterings an axiomatic viewrdquo in Pro-ceedings of the 22nd ACM International Conference on MachineLearning (ICML rsquo05) pp 577ndash584 New York NY USA 2005

[28] J Freixenet X Munoz D Raba J Marti and X Cuff ldquoYetanother survey on image segmentationrdquo in Computer VisionmdashECCV 2002 7th European Conference on Computer VisionCopenhagen Denmark May 28ndash31 2002 Proceedings Part IIIvol 2352 of Lecture Notes in Computer Science pp 408ndash422Springer Berlin Germany 2002

[29] F J Estrada and A D Jepson ldquoBenchmarking image segmenta-tion algorithmsrdquo International Journal of Computer Vision vol85 no 2 pp 167ndash181 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 8: Research Article Objectness Supervised Merging Algorithm for …downloads.hindawi.com/journals/mpe/2016/3180357.pdf · 2019-07-30 · Research Article Objectness Supervised Merging

8 Mathematical Problems in Engineering

Originalimages

Proposed

SAS

CTM

JSEG

Figure 5 Comparison of four segmentation methods (proposed SAS CTM and JSEG)

2 4 6 8 100Iteration number

1

10

100

1000

Regi

on n

umbe

r

12084130066134052

15062385028

Figure 6 Merging curves of iteration-region numbers

evaluation strategy of precision and recall is reasonableand equitable as measures of segmentation quality because

of not being biased in favor of over- or undersegmen-tation An excellent segmentation algorithm should bringlow segmented error and high boundary recall It definesprecision and recall to be proportional to the total number ofunmatched pixels between two types of segmentation 119878sourceand 119878target where 119878source is the boundaries extracted fromsegmentation results by computer algorithms and 119878target isthe boundaries of human segmentation provided by BSD500[22] A boundary pixel is identified as true position whenthe smallest distance between the extracted and ground truthis less than a threshold (120576 = 5 pixels in this paper) Theprecision recall and 119865-measures are defined as follows

Precision 119878source 119878target = Matched (119878source 119878target)1003816100381610038161003816119878source1003816100381610038161003816

Precision 119878source 119878target = Matched (119878source 119878target)10038161003816100381610038161003816119878target10038161003816100381610038161003816(11)

The precision and recall measures provide a more reliableevaluation strategy based on two reasons (1) it refinedboundaries and avoided duplicate comparison of identicalboundary pixels and (2) it provides dynamic displacementparameters for comprehensive and flexible evaluation ofsegmentation performance In this section we displayed the

Mathematical Problems in Engineering 9

124084130066134052

15062385052

0

100

200

300

400

500

600

700

Regi

on n

umbe

r

002 004 006 008 01 012 014 0160V

Figure 7 Merging curve of region number-119881 of test images in Figure 3

Originalimages

Objectnessmap

Segmentationresults

Figure 8 Some segmentation results and objectness map

boundaries precision and recall curveswith five segmentationalgorithms (MS [15] FH [2] JSEG [23] CTM [24] and SAS[25]) in Figure 10The ROC curves fully characterize the per-formance in a direct comparison of the segmentations qualityprovided by different algorithms Figure 10 shows that ourmethod provides more closely boundaries of segmentationswith the ground truth

4 Conclusion

In this paper we propose a novel hierarchical color imagesegmentation model based on region growing and mergingA proper growing procedure is designed to extract compactand complete superpixels with TV diffusion filtered imageat first Second we integrate color texture and objectnessinformation of superpixels into the merging procedure and

10 Mathematical Problems in Engineering

16 bins 32 bins8 bins Original image

Figure 9 Some segmentation results of varying bins

ProposedSASCTM

JSEGFHMS

01 02 03 04 05 06 07 08 09 10Recall

0

01

02

03

04

05

06

07

08

09

1

Prec

ision

Figure 10 Boundaries of ROC curves on BSD500 when 120576 = 5

develop effective merging rules for ideal segmentation Inaddition using a histogram-based color-texture distancemetric themerging strategy can obtain complete objects withsmooth boundaries and locate the boundaries accuratelyThe subjective and objective experimental results for naturecolor images show a good segmentation performance anddemonstrate a better discriminate ability with objectnessfeature for complex images of the proposed method Inthe future we will focus on the extension of objectness todifferent kinds of segmentation strategies

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projectsof National Natural Foundation China (U1261206) NationalNatural Science Foundation of China (61572173 and61602157) Science and Technology Planning Project of Hen-an Province China (162102210062) the Key Scientific Re-search Funds of Henan Provincial Education Departmentfor Higher School (15A520072) and Doctoral Foundation ofHenan Polytechnic University (B2016-37)

References

[1] J Fan D K Y Yau A K Elmagarmid and W G Aref ldquoAuto-matic image segmentation by integrating color-edge extractionand seeded region growingrdquo IEEE Transactions on ImageProcessing vol 10 no 10 pp 1454ndash1466 2001

[2] P F Felzenszwalb andD PHuttenlocher ldquoEfficient graph-basedimage segmentationrdquo International Journal of Computer Visionvol 59 no 2 pp 167ndash181 2004

[3] M Mignotte ldquoSegmentation by fusion of histogram-based K-means clusters indifferent colour spacesrdquo IEEE Transactions onImage Processing vol 17 no 5 pp 780ndash787 2008

[4] C Li C Xu C Gui and M D Fox ldquoDistance regularized levelset evolution and its application to image segmentationrdquo IEEETransactions on Image Processing vol 19 no 12 pp 3243ndash32542010

[5] S Gould and X He ldquoScene understanding by labeling pixelsrdquoCommunications of the ACM vol 57 no 11 pp 68ndash77 2014

[6] T Athanasiadis P Mylonas Y Avrithis and S Kollias ldquoSeman-tic image segmentation and object labelingrdquo IEEE Transactionson Circuits and Systems for Video Technology vol 17 no 3 pp298ndash311 2007

[7] S Vicente C Rother and V Kolmogorov ldquoObject cosegmen-tationrdquo in Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition (CVPR rsquo11) pp 2217ndash2224 IEEEProvidence RI USA June 2011

[8] R Girshick J Donahue T Darrell and J Malik ldquoRich fea-ture hierarchies for accurate object detection and semanticsegmentationrdquo in Proceedings of the 27th IEEE Conference on

Mathematical Problems in Engineering 11

Computer Vision and Pattern Recognition (CVPR rsquo14) pp 580ndash587 Columbus Ohio USA June 2014

[9] W Tao Y Zhou L Liu K Li K Sun and Z Zhang ldquoSpatialadjacent bag of features with multiple superpixels for objectsegmentation and classificationrdquo Information Sciences vol 281pp 373ndash385 2014

[10] B Alexe T Deselaers and V Ferrari ldquoMeasuring the objectnessof image windowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2189ndash2202 2012

[11] P Jiang H Ling J Yu and J Peng ldquoSalient region detectionby UFO uniqueness focusness and objectnessrdquo in Proceedingsof the 14th IEEE International Conference on Computer Vision(ICCV rsquo13) pp 1976ndash1983 Sydney Australia December 2013

[12] R Achanta A Shaji K Smith A Lucchi P Fua and SSusstrunk ldquoSLIC superpixels compared to state-of-the-artsuperpixel methodsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2274ndash2282 2012

[13] J Shi and J Malik ldquoNormalized cuts and image segmentationrdquoIEEE Transactions on Pattern Analysis andMachine Intelligencevol 22 no 8 pp 888ndash905 2000

[14] L Vincent and P Soille ldquoWatersheds in digital spaces anefficient algorithm based on immersion simulationsrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol13 no 6 pp 583ndash598 1991

[15] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[16] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990

[17] M Rousson T Brox and R Deriche ldquoActive unsupervisedtexture segmentation on a diffusion based feature spacerdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition pp 699ndash704 Madison Wis USA June2003

[18] A Levinshtein A Stere K N Kutulakos D J Fleet S JDickinson and K Siddiqi ldquoTurboPixels fast superpixels usinggeometric flowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 31 no 12 pp 2290ndash2297 2009

[19] J-P Braquelaire and L Brun ldquoComparison and optimizationof methods of color image quantizationrdquo IEEE Transactions onImage Processing vol 6 no 7 pp 1048ndash1052 1997

[20] M-M Cheng G-X Zhang N J Mitra X Huang and S-M Hu ldquoGlobal contrast based salient region detectionrdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition (CVPR rsquo11) pp 409ndash416 June 2011

[21] D Martin C Fowlkes D Tal and J Malik ldquoA databaseof human segmented natural images and its application toevaluating segmentation algorithms and measuring ecologicalstatisticsrdquo in Proceedings of the 8th IEEE International Confer-ence on Computer Vision pp 416ndash423 Vancouver Canada July2001

[22] httpwwweecsberkeleyeduResearchProjectsCSvisiongroupingresourceshtml

[23] Y Deng and B S Manjunath ldquoUnsupervised segmentation ofcolor-texture regions in images and videordquo IEEE Transactionson Pattern Analysis and Machine Intelligence vol 23 no 8 pp800ndash810 2001

[24] A Y Yang J Wright Y Ma and S S Sastry ldquoUnsupervisedsegmentation of natural images via lossy data compressionrdquo

Computer Vision and Image Understanding vol 110 no 2 pp212ndash225 2008

[25] Z Li X-M Wu and S-F Chang ldquoSegmentation using super-pixels a bipartite graph partitioning approachrdquo in Proceedingsof the 2012 IEEE Conference on Computer Vision and PatternRecognition (CVPR rsquo12) pp 789ndash796 Providence RI USA June2012

[26] C Pantofaru and M Hebert ldquoA comparison of image segmen-tation algorithmsrdquo Tech Rep CMU-RI-TR-05-40 CarnegieMellon University 2005

[27] M Meila ldquoComparing clusterings an axiomatic viewrdquo in Pro-ceedings of the 22nd ACM International Conference on MachineLearning (ICML rsquo05) pp 577ndash584 New York NY USA 2005

[28] J Freixenet X Munoz D Raba J Marti and X Cuff ldquoYetanother survey on image segmentationrdquo in Computer VisionmdashECCV 2002 7th European Conference on Computer VisionCopenhagen Denmark May 28ndash31 2002 Proceedings Part IIIvol 2352 of Lecture Notes in Computer Science pp 408ndash422Springer Berlin Germany 2002

[29] F J Estrada and A D Jepson ldquoBenchmarking image segmenta-tion algorithmsrdquo International Journal of Computer Vision vol85 no 2 pp 167ndash181 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 9: Research Article Objectness Supervised Merging Algorithm for …downloads.hindawi.com/journals/mpe/2016/3180357.pdf · 2019-07-30 · Research Article Objectness Supervised Merging

Mathematical Problems in Engineering 9

124084130066134052

15062385052

0

100

200

300

400

500

600

700

Regi

on n

umbe

r

002 004 006 008 01 012 014 0160V

Figure 7 Merging curve of region number-119881 of test images in Figure 3

Originalimages

Objectnessmap

Segmentationresults

Figure 8 Some segmentation results and objectness map

boundaries precision and recall curveswith five segmentationalgorithms (MS [15] FH [2] JSEG [23] CTM [24] and SAS[25]) in Figure 10The ROC curves fully characterize the per-formance in a direct comparison of the segmentations qualityprovided by different algorithms Figure 10 shows that ourmethod provides more closely boundaries of segmentationswith the ground truth

4 Conclusion

In this paper we propose a novel hierarchical color imagesegmentation model based on region growing and mergingA proper growing procedure is designed to extract compactand complete superpixels with TV diffusion filtered imageat first Second we integrate color texture and objectnessinformation of superpixels into the merging procedure and

10 Mathematical Problems in Engineering

16 bins 32 bins8 bins Original image

Figure 9 Some segmentation results of varying bins

ProposedSASCTM

JSEGFHMS

01 02 03 04 05 06 07 08 09 10Recall

0

01

02

03

04

05

06

07

08

09

1

Prec

ision

Figure 10 Boundaries of ROC curves on BSD500 when 120576 = 5

develop effective merging rules for ideal segmentation Inaddition using a histogram-based color-texture distancemetric themerging strategy can obtain complete objects withsmooth boundaries and locate the boundaries accuratelyThe subjective and objective experimental results for naturecolor images show a good segmentation performance anddemonstrate a better discriminate ability with objectnessfeature for complex images of the proposed method Inthe future we will focus on the extension of objectness todifferent kinds of segmentation strategies

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projectsof National Natural Foundation China (U1261206) NationalNatural Science Foundation of China (61572173 and61602157) Science and Technology Planning Project of Hen-an Province China (162102210062) the Key Scientific Re-search Funds of Henan Provincial Education Departmentfor Higher School (15A520072) and Doctoral Foundation ofHenan Polytechnic University (B2016-37)

References

[1] J Fan D K Y Yau A K Elmagarmid and W G Aref ldquoAuto-matic image segmentation by integrating color-edge extractionand seeded region growingrdquo IEEE Transactions on ImageProcessing vol 10 no 10 pp 1454ndash1466 2001

[2] P F Felzenszwalb andD PHuttenlocher ldquoEfficient graph-basedimage segmentationrdquo International Journal of Computer Visionvol 59 no 2 pp 167ndash181 2004

[3] M Mignotte ldquoSegmentation by fusion of histogram-based K-means clusters indifferent colour spacesrdquo IEEE Transactions onImage Processing vol 17 no 5 pp 780ndash787 2008

[4] C Li C Xu C Gui and M D Fox ldquoDistance regularized levelset evolution and its application to image segmentationrdquo IEEETransactions on Image Processing vol 19 no 12 pp 3243ndash32542010

[5] S Gould and X He ldquoScene understanding by labeling pixelsrdquoCommunications of the ACM vol 57 no 11 pp 68ndash77 2014

[6] T Athanasiadis P Mylonas Y Avrithis and S Kollias ldquoSeman-tic image segmentation and object labelingrdquo IEEE Transactionson Circuits and Systems for Video Technology vol 17 no 3 pp298ndash311 2007

[7] S Vicente C Rother and V Kolmogorov ldquoObject cosegmen-tationrdquo in Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition (CVPR rsquo11) pp 2217ndash2224 IEEEProvidence RI USA June 2011

[8] R Girshick J Donahue T Darrell and J Malik ldquoRich fea-ture hierarchies for accurate object detection and semanticsegmentationrdquo in Proceedings of the 27th IEEE Conference on

Mathematical Problems in Engineering 11

Computer Vision and Pattern Recognition (CVPR rsquo14) pp 580ndash587 Columbus Ohio USA June 2014

[9] W Tao Y Zhou L Liu K Li K Sun and Z Zhang ldquoSpatialadjacent bag of features with multiple superpixels for objectsegmentation and classificationrdquo Information Sciences vol 281pp 373ndash385 2014

[10] B Alexe T Deselaers and V Ferrari ldquoMeasuring the objectnessof image windowsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2189ndash2202 2012

[11] P Jiang H Ling J Yu and J Peng ldquoSalient region detectionby UFO uniqueness focusness and objectnessrdquo in Proceedingsof the 14th IEEE International Conference on Computer Vision(ICCV rsquo13) pp 1976ndash1983 Sydney Australia December 2013

[12] R Achanta A Shaji K Smith A Lucchi P Fua and SSusstrunk ldquoSLIC superpixels compared to state-of-the-artsuperpixel methodsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 34 no 11 pp 2274ndash2282 2012

[13] J Shi and J Malik ldquoNormalized cuts and image segmentationrdquoIEEE Transactions on Pattern Analysis andMachine Intelligencevol 22 no 8 pp 888ndash905 2000

[14] L Vincent and P Soille ldquoWatersheds in digital spaces anefficient algorithm based on immersion simulationsrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol13 no 6 pp 583ndash598 1991

[15] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[16] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990


Figure 9: Segmentation results obtained with varying histogram bin counts (8, 16, and 32 bins), shown alongside the original images.

Figure 10: Boundary precision-recall curves on BSD500 with matching tolerance ε = 5, comparing the proposed method with SAS, CTM, JSEG, FH, and MS (recall on the horizontal axis, precision on the vertical axis, both from 0 to 1).
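For readers who wish to reproduce boundary curves of this kind, the sketch below shows one common way to compute boundary precision and recall with a pixel-distance tolerance ε. It is a minimal illustration and not the authors' evaluation code; the function name boundary_precision_recall and the use of a Euclidean distance transform are our assumptions.

```python
# Minimal sketch (not the authors' evaluation code): boundary precision/recall
# with a pixel-distance tolerance eps, as used for curves like Figure 10.
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_precision_recall(pred_edges, gt_edges, eps=5):
    """pred_edges, gt_edges: boolean 2-D arrays marking boundary pixels.
    A predicted boundary pixel counts as correct if a ground-truth boundary
    pixel lies within eps pixels; recall is computed symmetrically."""
    pred = np.asarray(pred_edges, dtype=bool)
    gt = np.asarray(gt_edges, dtype=bool)

    # Distance from every pixel to the nearest ground-truth / predicted boundary.
    dist_to_gt = distance_transform_edt(~gt)
    dist_to_pred = distance_transform_edt(~pred)

    matched_pred = dist_to_gt[pred] <= eps   # predicted pixels near some GT pixel
    matched_gt = dist_to_pred[gt] <= eps     # GT pixels near some predicted pixel

    precision = matched_pred.mean() if matched_pred.size else 0.0
    recall = matched_gt.mean() if matched_gt.size else 0.0
    return precision, recall
```

Sweeping a boundary-strength threshold on the predicted edge map and recomputing (precision, recall) at each threshold traces out a curve of the kind plotted in Figure 10.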

develop effective merging rules for ideal segmentation. In addition, by using a histogram-based color-texture distance metric, the merging strategy can obtain complete objects with smooth boundaries and locate those boundaries accurately. The subjective and objective experimental results on natural color images show good segmentation performance and demonstrate that the objectness feature gives the proposed method a stronger discriminative ability on complex images. In future work, we will focus on extending the objectness feature to other kinds of segmentation strategies.
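To make the ingredients of the merging step concrete, the following is a hedged sketch of a combined color-texture histogram descriptor and a histogram distance that could drive greedy merging of adjacent superpixels. The bin counts, the concatenation scheme, and the Bhattacharyya-style distance are illustrative assumptions rather than the paper's exact definitions.

```python
# Illustrative sketch only (assumed details, not the paper's exact formulas):
# a combined color-texture histogram per region and a histogram distance
# suitable for comparing adjacent superpixels during merging.
import numpy as np

def region_histogram(color_labels, texture_labels, n_color=16, n_texture=8):
    """Concatenate normalized color and texture label histograms of one region.
    color_labels, texture_labels: 1-D integer arrays of per-pixel quantized labels."""
    h_color = np.bincount(color_labels, minlength=n_color).astype(float)
    h_texture = np.bincount(texture_labels, minlength=n_texture).astype(float)
    return np.concatenate([h_color / max(h_color.sum(), 1.0),
                           h_texture / max(h_texture.sum(), 1.0)])

def histogram_distance(h1, h2):
    """Bhattacharyya-style distance between two concatenated histograms.
    Each half sums to 1, so the coefficient is divided by 2 to stay in [0, 1]."""
    bc = np.sum(np.sqrt(h1 * h2)) / 2.0
    return np.sqrt(max(1.0 - bc, 0.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = region_histogram(rng.integers(0, 16, 500), rng.integers(0, 8, 500))
    b = region_histogram(rng.integers(0, 16, 400), rng.integers(0, 8, 400))
    print(histogram_distance(a, b))
```

In a greedy merging loop, the most similar adjacent region pair (optionally reweighted by the objectness feature) would be merged first, and merging would stop once a criterion such as the mixed standard deviation described in the paper is reached.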

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Haifeng Sima acknowledges the support of the Key Projects of the National Natural Foundation of China (U1261206), the National Natural Science Foundation of China (61572173 and 61602157), the Science and Technology Planning Project of Henan Province, China (162102210062), the Key Scientific Research Funds of Henan Provincial Education Department for Higher School (15A520072), and the Doctoral Foundation of Henan Polytechnic University (B2016-37).

References

[1] J. Fan, D. K. Y. Yau, A. K. Elmagarmid, and W. G. Aref, "Automatic image segmentation by integrating color-edge extraction and seeded region growing," IEEE Transactions on Image Processing, vol. 10, no. 10, pp. 1454–1466, 2001.

[2] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.

[3] M. Mignotte, "Segmentation by fusion of histogram-based K-means clusters in different colour spaces," IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 780–787, 2008.

[4] C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243–3254, 2010.

[5] S. Gould and X. He, "Scene understanding by labeling pixels," Communications of the ACM, vol. 57, no. 11, pp. 68–77, 2014.

[6] T. Athanasiadis, P. Mylonas, Y. Avrithis, and S. Kollias, "Semantic image segmentation and object labeling," IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 3, pp. 298–311, 2007.

[7] S. Vicente, C. Rother, and V. Kolmogorov, "Object cosegmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 2217–2224, IEEE, Providence, RI, USA, June 2011.

[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 580–587, Columbus, Ohio, USA, June 2014.

[9] W. Tao, Y. Zhou, L. Liu, K. Li, K. Sun, and Z. Zhang, "Spatial adjacent bag of features with multiple superpixels for object segmentation and classification," Information Sciences, vol. 281, pp. 373–385, 2014.

[10] B. Alexe, T. Deselaers, and V. Ferrari, "Measuring the objectness of image windows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2189–2202, 2012.

[11] P. Jiang, H. Ling, J. Yu, and J. Peng, "Salient region detection by UFO: uniqueness, focusness and objectness," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1976–1983, Sydney, Australia, December 2013.

[12] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274–2282, 2012.

[13] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.

[14] L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583–598, 1991.

[15] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.

[16] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.

[17] M. Rousson, T. Brox, and R. Deriche, "Active unsupervised texture segmentation on a diffusion based feature space," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 699–704, Madison, Wis, USA, June 2003.

[18] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi, "TurboPixels: fast superpixels using geometric flows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2290–2297, 2009.

[19] J.-P. Braquelaire and L. Brun, "Comparison and optimization of methods of color image quantization," IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 1048–1052, 1997.

[20] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, "Global contrast based salient region detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 409–416, June 2011.

[21] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proceedings of the 8th IEEE International Conference on Computer Vision, pp. 416–423, Vancouver, Canada, July 2001.

[22] http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html.

[23] Y. Deng and B. S. Manjunath, "Unsupervised segmentation of color-texture regions in images and video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800–810, 2001.

[24] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, "Unsupervised segmentation of natural images via lossy data compression," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212–225, 2008.

[25] Z. Li, X.-M. Wu, and S.-F. Chang, "Segmentation using superpixels: a bipartite graph partitioning approach," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 789–796, Providence, RI, USA, June 2012.

[26] C. Pantofaru and M. Hebert, "A comparison of image segmentation algorithms," Tech. Rep. CMU-RI-TR-05-40, Carnegie Mellon University, 2005.

[27] M. Meila, "Comparing clusterings: an axiomatic view," in Proceedings of the 22nd ACM International Conference on Machine Learning (ICML '05), pp. 577–584, New York, NY, USA, 2005.

[28] J. Freixenet, X. Muñoz, D. Raba, J. Martí, and X. Cufí, "Yet another survey on image segmentation," in Computer Vision—ECCV 2002: 7th European Conference on Computer Vision, Copenhagen, Denmark, May 28–31, 2002, Proceedings, Part III, vol. 2352 of Lecture Notes in Computer Science, pp. 408–422, Springer, Berlin, Germany, 2002.

[29] F. J. Estrada and A. D. Jepson, "Benchmarking image segmentation algorithms," International Journal of Computer Vision, vol. 85, no. 2, pp. 167–181, 2009.
