Transcript
Page 1: Image reduction using means on discrete product lattices.bak

1070 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 3, MARCH 2012

Image Reduction Using Means on Discrete Product Lattices

Gleb Beliakov, Senior Member, IEEE, Humberto Bustince, Member, IEEE, and Daniel Paternain

Abstract—We investigate the problem of averaging values on lattices and, in particular, on discrete product lattices. This problem arises in image processing when several color values given in RGB, HSL, or another coding scheme need to be combined. We show how the arithmetic mean and the median can be constructed by minimizing appropriate penalties, and we discuss which of them coincide with the Cartesian product of the standard mean and the median. We apply these functions in image processing. We present three algorithms for color image reduction based on minimizing penalty functions on discrete product lattices.

Index Terms—Aggregation operators, image reduction, mean, median, penalty functions.

I. INTRODUCTION

THE NEED to aggregate several inputs into a single representative output frequently arises in many practical applications. In image processing, it is often necessary to average the values of several neighboring pixels (to reduce the image size or apply a filter) or to average pixel values in two different but related images (e.g., in stereovision [1]). When the images are in color, i.e., typically coded as discrete RGB, CMY, or HSL values, it is customary to average the values in the respective channels. It is not immediately clear whether this is appropriate and what the other ways to average color values are.

In this paper, we study averaging on product lattices (RGB or another color coding scheme is an example of a product lattice). We note previous works related to triangular norms on posets and lattices [2], [3] and on discrete chains [4]. Our setting is different, as we do not deal with associativity of aggregation operations but, in contrast, require averaging behavior.

We focus on a large class of averages based on minimizing a penalty function [5]–[8]. We show that, with an appropriately chosen class of penalties, the resulting penalty-based functions are monotone and idempotent. We also show that the averages

Manuscript received September 15, 2010; revised July 17, 2011; accepted September 04, 2011. Date of publication September 15, 2011; date of current version February 17, 2012. This work was supported in part by the Government of Spain under Grant TIN2010-15055. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Rick P. Millane.

G. Beliakov is with the School of Information Technology, Deakin University, Burwood 3125, Australia (e-mail: [email protected]).

H. Bustince and D. Paternain are with the Department of Automatics and Computation, Public University of Navarra, 31006 Pamplona, Spain (e-mail: [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available onlineat http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIP.2011.2168412

over a product lattice are, in general, different from the Cartesian products of the averages. This has implications for the methods of color image reduction.

We recall the problem of image reduction for grayscale images, and we justify the importance of penalty functions. We prove that, when we reconstruct a reduced image, the error with respect to the original image may be determined by the reduction method that has been employed.

We present three new color image reduction algorithms that are based on minimizing a penalty function defined over product lattices. We carry out an experimental study in which we compare the proposed algorithms with alternative methods from the literature, and we analyze the stability of the algorithms with respect to noise in the images.

The structure of this paper is as follows. In Section II, we provide preliminary definitions. In Section III, we give the definitions of penalty-based aggregation functions defined on product lattices, and we present the problem of image reduction. We discuss solutions to the resulting optimization problems in Section IV. In Section V, we present the color image reduction algorithms, and we present an experimental study in Section VI. Conclusions are presented in Section VII.

II. PRELIMINARIES

A. Aggregation Functions

The research effort concerning aggregation functions, their behavior, and properties has been disseminated throughout various fields, including decision making, knowledge-based systems, artificial intelligence, and image processing. Recent works providing a comprehensive overview include [9]–[13].

Definition 1: A function $f\colon [0,1]^n \to [0,1]$ is called an aggregation function if it is monotonically nondecreasing in each variable and satisfies $f(\mathbf{0}) = 0$ and $f(\mathbf{1}) = 1$, with $\mathbf{0} = (0, \dots, 0)$ and $\mathbf{1} = (1, \dots, 1)$, respectively.

Definition 2: The aggregation function $f$ is called averaging if it is bounded by the minimum and the maximum of its arguments

$\min(x_1, \dots, x_n) \le f(x_1, \dots, x_n) \le \max(x_1, \dots, x_n).$

It is immediate that averaging aggregation functions are idempotent (i.e., $f(t, \dots, t) = t$) and (because of monotonicity) vice versa. Then clearly, the boundary conditions $f(\mathbf{0}) = 0$ and $f(\mathbf{1}) = 1$ are satisfied.

1057-7149/$26.00 © 2011 IEEE


Well-known examples of averaging functions are the arithmetic mean and the median. It is known that the arithmetic mean and the median are solutions to simple optimization problems in which a measure of disagreement between the inputs is minimized (see [5]–[7], [10], [14]). The main motivation is the following: Let $x_1, \dots, x_n$ be the inputs and $y$ be the output. If all the inputs coincide, $x_1 = \dots = x_n = t$, then the output is $y = t$, and we have a unanimous vote. If some input $x_i \ne y$, then we impose a “penalty” for this disagreement. The larger the disagreement and the more inputs disagree with the output, the larger (in general) is the penalty. We look for an aggregated value that minimizes the penalty.

Thus, we need to define a suitable measure of disagreement or dissimilarity.

Definition 3: Let $P\colon [0,1]^{n+1} \to [0, \infty)$ be a penalty function with the properties:

1) $P(\mathbf{x}, y) \ge 0$ for all $\mathbf{x}$, $y$;

2) $P(\mathbf{x}, y) = 0$ if all $x_i = y$;

3) $P(\mathbf{x}, y)$ is quasi-convex in $y$ for any $\mathbf{x}$.

The penalty-based function is

$f(\mathbf{x}) = \arg\min_y P(\mathbf{x}, y)$

if $y$ is the unique minimizer, and $f(\mathbf{x}) = (a + b)/2$ if the set of minimizers is the interval $(a, b)$.

Remark 1: $P(\mathbf{x}, y)$ is quasi-convex in $y$ if $P(\mathbf{x}, \lambda y_1 + (1 - \lambda) y_2) \le \max\{P(\mathbf{x}, y_1), P(\mathbf{x}, y_2)\}$ for all $\lambda \in [0, 1]$ and all $y_1$, $y_2$ within its domain.

In [5], it was shown that any averaging aggregation function can be represented as a penalty-based function. Further, the classical means, such as the arithmetic mean and the median, are represented via the following penalty functions. The arithmetic mean is the solution to

$\min_y \sum_{i=1}^{n} (x_i - y)^2$

whereas the median is a solution to

$\min_y \sum_{i=1}^{n} |x_i - y|.$

In this paper, we will deal with penalty-based functions defined on discrete lattices rather than the interval $[0, 1]$.
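These two penalties can be illustrated with a small sketch (not code from the paper): minimizing them over a discrete set of candidate outputs recovers the arithmetic mean and the median.

```python
def penalty_min(xs, penalty, candidates):
    """Return the candidate y minimizing the total penalty sum_i penalty(x_i, y)."""
    return min(candidates, key=lambda y: sum(penalty(x, y) for x in xs))

xs = [2, 3, 7]
# squared-difference penalty -> the (rounded) arithmetic mean, here 4
mean_like = penalty_min(xs, lambda x, y: (x - y) ** 2, range(256))
# absolute-difference penalty -> the median, here 3
median_like = penalty_min(xs, lambda x, y: abs(x - y), range(256))
```

The helper name `penalty_min` is ours; the construction itself is exactly Definition 3 restricted to integer candidates.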

B. Lattices

Definition 4: Let $L$ be a set. Lattice $(L, \preceq, \wedge, \vee)$ is a poset with the partial order $\preceq$ on $L$ and meet and join operations $\wedge$ and $\vee$, respectively, if every pair of elements from $L$ has both meet and join.

Definition 5: Let $S$ be a poset. A chain in $S$ is a totally ordered subset of $S$. The length of a chain is its cardinality.

Definition 6: If $(L_1, \preceq_1, \wedge_1, \vee_1)$ and $(L_2, \preceq_2, \wedge_2, \vee_2)$ are two lattices, their Cartesian product is lattice $(L_1 \times L_2, \preceq, \wedge, \vee)$ with $\preceq$ defined by

$(a_1, a_2) \preceq (b_1, b_2)$ iff $a_1 \preceq_1 b_1$ and $a_2 \preceq_2 b_2$

and with $\wedge$ and $\vee$ defined componentwise.

We will deal with Cartesian products of finite chains, which are precisely the type of a product lattice representing colors in image processing, with the length of each chain typically being 256. We note that all finite chains of the same length are isomorphic to each other; hence, we can represent them as nonnegative integers and elements of product lattices as tuples of nonnegative integers $x = (x_1, \dots, x_m)$ and $y = (y_1, \dots, y_m)$.

Definition 7: Let $f_1$ and $f_2$ be two aggregation functions defined on sets $S_1$ and $S_2$, respectively. The Cartesian product of aggregation functions $f_1 \times f_2$ is defined by

$(f_1 \times f_2)((a_1, b_1), \dots, (a_n, b_n)) = (f_1(a_1, \dots, a_n), f_2(b_1, \dots, b_n)).$
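Definition 7 can be sketched as follows (illustrative code; here `statistics.median` plays the role of both $f_1$ and $f_2$):

```python
import statistics

def cartesian_product_agg(f1, f2):
    """Cartesian product of two aggregation functions:
    each coordinate of the tuples is aggregated separately."""
    def agg(pairs):
        first, second = zip(*pairs)   # split the tuples into the two coordinates
        return (f1(first), f2(second))
    return agg

med_x_med = cartesian_product_agg(statistics.median, statistics.median)
med_x_med([(10, 10), (8, 0), (3, 2)])   # -> (8, 2)
```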

C. Image Reduction

Image reduction consists of reducing the dimension of the image while keeping as much information as possible. Image reduction can be used to accelerate computations on an image or just to reduce the cost of its storage or transmission.

There exist several methods for image reduction in the literature. Some of them consider the image to be reduced in a global way [15]–[17] or in a transform domain [18]. Other widely used methods act locally over pieces (blocks) of the image [19], [20]. The division of the image into blocks of small size allows one to design simple reduction algorithms.

In this paper, we consider an image of $n \times m$ pixels as a set of elements arranged in $n$ rows and $m$ columns. Each element of a grayscale image is represented by $q_{ij}$, with $i \in \{1, \dots, n\}$ and $j \in \{1, \dots, m\}$. Element $q_{ij}$ has a value between 0 and 255.

If we consider a color image in the RGB reference system, each element of the image is denoted by $q_{ij} = (R_{ij}, G_{ij}, B_{ij})$. Each color component will also have a value between 0 and 255.

A typical local image reduction algorithm is presented as follows.

Input: Image $A$ of dimension $n \times m$

Output: Reduced image $B$ of dimension $(n/k) \times (m/k)$

1: Divide the image into disjoint blocks of dimension $k \times k$. If $n$ or $m$ are not multiples of $k$, eliminate the smallest number of rows and/or columns to satisfy this condition.

2: Choose an averaging function $f$.

3: for each block in $A$ do

4: Calculate $f$ over the pixel values of the block.

5: Place the result in the corresponding pixel of the reduced image (see Fig. 1).

6: end for

In Fig. 2, we show three reduced images obtained from the original image Lena using the following aggregation functions in step 2 of the previous algorithm: the geometric mean (b), the arithmetic mean (c), and the median (d).
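The algorithm above can be sketched in plain Python (illustrative; `reduce_image` and its block handling are our names, not the paper's):

```python
import statistics

def reduce_image(img, k, agg=statistics.mean):
    """Reduce an image (a list of rows) by aggregating disjoint k x k blocks."""
    n, m = len(img), len(img[0])
    n, m = n - n % k, m - m % k        # drop trailing rows/columns so k divides both
    return [
        [agg([img[r][c] for r in range(i, i + k) for c in range(j, j + k)])
         for j in range(0, m, k)]
        for i in range(0, n, k)
    ]

img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [5, 5, 0, 0],
       [5, 5, 0, 0]]
reduce_image(img, 2)   # compares equal to [[0, 10], [5, 0]]
```

Passing `statistics.median` or any other averaging function as `agg` reproduces the variants of Fig. 2.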

http://ieeexploreprojects.blogspot.com

Page 3: Image reduction using means on discrete product lattices.bak

1072 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 3, MARCH 2012

Fig. 1. Scheme of the reduction algorithm.

Fig. 2. (a) Original image Lena and reductions by the grayscale image reduction algorithm with (b) the geometric mean, (c) the arithmetic mean, and (d) the median.

TABLE I
MSE AND MAE BETWEEN THE RECONSTRUCTED IMAGES OF FIG. 2 AND THE ORIGINAL LENA IMAGE

There exist a number of methods to determine which is the best reduction. Among the most frequently used methods are the following:

1) Magnify the reduced image to the dimensions of the original image.

2) Measure the error between the reconstructed image and the original image.

There exist different image magnification methods that will influence the final result [21], [22]. However, in this paper, we do not consider this problem. We focus on the influence of the choice of the measure of error in the second point.

For simplicity, we consider the following reconstruction method: for each pixel of the reduced image, build a new block of dimension $k \times k$ whose elements have the same value as that pixel.

Next, we show that, once the reduction and magnification methods are fixed, the difference between the original image and the reduced (and then magnified) image may be determined by the aggregation function used in the reduction algorithm.

We measure the error in the reconstructed images by using the following expressions to compare the two images $A$ and $B$ of dimension $n \times m$, i.e., the MSE and the mean absolute error (MAE), as follows:

$\mathrm{MSE}(A, B) = \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} (A_{ij} - B_{ij})^2$

$\mathrm{MAE}(A, B) = \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} |A_{ij} - B_{ij}|.$
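These two measures transcribe directly to code (a minimal sketch; images are lists of rows):

```python
def mse(A, B):
    """Mean squared error between two images of equal dimension."""
    n, m = len(A), len(A[0])
    return sum((A[i][j] - B[i][j]) ** 2 for i in range(n) for j in range(m)) / (n * m)

def mae(A, B):
    """Mean absolute error between two images of equal dimension."""
    n, m = len(A), len(A[0])
    return sum(abs(A[i][j] - B[i][j]) for i in range(n) for j in range(m)) / (n * m)

A = [[0, 2], [4, 6]]
B = [[1, 2], [4, 4]]
mse(A, B)   # -> 1.25
mae(A, B)   # -> 0.75
```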

Notice that, from the results in Table I, we have the following.

1) If we take MSE, then the best reduction is obtained using the arithmetic mean.

2) If we take MAE, then the best reduction is obtained using the median.

Observe that these two facts agree with Section II-A and justify the study of penalty functions for image reduction. We are interested in color images; hence our interest in penalty functions defined over product lattices.

III. MAIN DEFINITIONS

Following the representations of the arithmetic mean and the median as penalty-based aggregation functions, we now define similar constructions on lattices.

Definition 8: Let $L = C_1 \times \dots \times C_m$ be a product of finite chains. The distance $d(x, y)$ between $x$ and $y$ is defined as the length of the maximal chain with the least element $x \wedge y$ and the greatest element $x \vee y$ minus 1, i.e.,

$d(x, y) = \mathrm{length}(C_{x \wedge y,\, x \vee y}) - 1$

where $C_{x \wedge y,\, x \vee y}$ denotes a maximal chain from $x \wedge y$ to $x \vee y$.

This distance is called the geodesic distance since it corresponds to the smallest number of edges between vertices $x$ and $y$ in the covering graph of $L$.

Remark 2: We note that all maximal chains with the least element $x \wedge y$ and the greatest element $x \vee y$ on a product lattice in Definition 8 have the same length. This definition is equivalent to the following:

$d(x, y) = \sum_{j=1}^{m} d_j(x_j, y_j) = \sum_{j=1}^{m} |x_j - y_j|$

where $d_j$ is the distance in the $j$th chain in the product of $m$ chains.
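With chains represented as nonnegative integers, Remark 2 makes the geodesic distance a one-liner (a direct transcription):

```python
def geodesic(x, y):
    """Geodesic distance on a product of finite chains:
    the sum of per-chain (absolute) differences."""
    return sum(abs(a - b) for a, b in zip(x, y))

geodesic((10, 10), (8, 3))   # |10-8| + |10-3| = 9
```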

Definition 9: Let $L$ be a product of finite chains. Consider elements $x_1, \dots, x_n \in L$ that need to be averaged. Let the penalty function be $P\colon L^{n+1} \to [0, \infty)$. The penalty-based function on $L$ is given by

$F(x_1, \dots, x_n) = \arg\min_{y \in L} P(x_1, \dots, x_n, y).$


Remark 3: $P$ is quasi-convex in $y$, as in Definition 3. However, $y$ is now a tuple rather than an element of a chain. To accommodate this in the definition of a quasi-convex function, we use the following: We remind that a function $f\colon \mathbb{R}^m \to \mathbb{R}$ is quasi-convex if all its level sets are convex. We call a function defined on $L$ quasi-convex if its extension to $\mathrm{conv}(L)$ is quasi-convex, where $\mathrm{conv}(L)$ is the smallest convex set containing $L$. Similarly, a function on $L$ is convex if its extension is convex.

The minimum in Definition 9 always exists because $L$ is finite. There can be several minimizers; in this case, one can take any minimizer. A convenient rule is to take the largest minimizer according to a total order, e.g., lexicographical. Finally, $F$ is not necessarily monotone, i.e., it is not necessarily an aggregation function.

Theorem 1: The function $F$ in Definition 9 is an averaging (and hence idempotent) function.

Proof: Clearly, $\bigwedge_i x_i \preceq F(x_1, \dots, x_n) \preceq \bigvee_i x_i$ because, for any $y$ with $y \prec \bigwedge_i x_i$, we have $P(x_1, \dots, x_n, \bigwedge_i x_i) \le P(x_1, \dots, x_n, y)$ by quasi-convexity, and similarly at the other end.

A special case of penalty-based functions, called dissimilarity functions, was considered in [8] (see also [23], [24]), where the penalty is given by

$P(\mathbf{x}, y) = \sum_{i=1}^{n} c(x_i - y) \qquad (1)$

where $c$ is a convex function with the unique minimum $c(0) = 0$. In this case, the penalty-based function is monotone, i.e., an aggregation function. By adapting this definition to our case, we have the following result.

Theorem 2: Function $F$ in Definition 9, with $P$ given by

$P(x_1, \dots, x_n, y) = \sum_{i=1}^{n} c(d(x_i, y))$

where $c$ is convex and increasing on $[0, \infty)$ with $c(0) = 0$, is an averaging aggregation function on a product lattice.

Proof: We only need to prove monotonicity; the proof is similar to that in [8] (see also [5]) and is adapted here to product lattices. The key property is that the convex function $c$, increasing on $[0, \infty)$, satisfies $c(t+1) - c(t) \ge c(s+1) - c(s)$ for $t \ge s \ge 0$. Consider inputs $\mathbf{x}$ and $\mathbf{x}'$ that coincide in all components except one pair, for which $x'_k$ covers $x_k$ (one step up a single chain). When $x_k$ is replaced by $x'_k$, each distance $d(x_k, z)$ changes by exactly 1 (up or down, depending on the position of $z$), and the above property of $c$ implies that, for $z \preceq y$, the penalty at $y$ increases by no more than the penalty at $z$. Hence, $P(\mathbf{x}', y) - P(\mathbf{x}', z) \le P(\mathbf{x}, y) - P(\mathbf{x}, z)$, so a minimizer for $\mathbf{x}$ cannot be displaced downward by the increase in $x_k$, i.e., $F(\mathbf{x}) \preceq F(\mathbf{x}')$.

Remark 4: One can use distinct convex functions $c_i$, $i = 1, \dots, n$, in (1) rather than the common function $c$, and the result of Theorem 2 holds. In particular, an interesting case is $c_i = w_i c$, with $w_i \ge 0$ and $\sum_{i=1}^{n} w_i = 1$, which gives rise to weighted means and medians.

In the following, we provide definitions for some specific instances of penalty-based aggregation, which are based on the analogs of the classical means. In all cases, we have penalties of form (1); therefore, Theorem 2 applies.

Definition 10: Let $P$ be as follows:

1) $c(t) = t^2$; hence, $P(\mathbf{x}, y) = \sum_{i=1}^{n} d(x_i, y)^2$. Then, the resulting penalty-based aggregation function is the arithmetic mean.

2) $c_i(t) = w_i t^2$; hence, $P(\mathbf{x}, y) = \sum_{i=1}^{n} w_i d(x_i, y)^2$. Let $\mathbf{w}$ be a weighting vector, with $w_i \ge 0$ and $\sum_{i=1}^{n} w_i = 1$. Then, the resulting penalty-based aggregation function is a weighted arithmetic mean.

3) $c(t) = t$; hence, $P(\mathbf{x}, y) = \sum_{i=1}^{n} d(x_i, y)$. Then, the resulting penalty-based aggregation function is the median.

4) $c_i(t) = w_i t$; hence, $P(\mathbf{x}, y) = \sum_{i=1}^{n} w_i d(x_i, y)$. Then, the resulting penalty-based aggregation function is a weighted median (the definitions of the weighted medians can be found in [5], [6], [9]).

IV. SOLUTION TO PENALTY MINIMIZATION PROBLEMS

Consider now the issue of obtaining solutions to the minimization problem in Definition 9. First, consider the arithmetic mean. We have the following problem:

$\min_{y \in L} \sum_{i=1}^{n} \Big( \sum_{j=1}^{m} |x_i^j - y^j| \Big)^2 \qquad (2)$

where $x_i^j$ denotes the $j$th component of the $i$th tuple $x_i$. We note that this problem is convex in $y$. We also note that the solution is different from the Cartesian product of the means, as the following example illustrates, and the differences are not just due to the rounding problem.

Example 1: Let $L$ be the product of two chains $\{0, 1, \dots, 10\}$. Take the mean of (10, 10), (8, 0), and (3, 2). The Cartesian product of means gives (7, 4) with the objective value 142. The solutions to the minimization problem are (9, 2) with objective value 126 and (8, 3) with the same objective value.

While we could not obtain a closed-form solution, we note that, starting from any $y \in L$ and, in particular, starting from the Cartesian product of the means or the medians, and performing coordinate descent (because of the convexity of the objective), one can reach the minimum algorithmically.
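Example 1 can be verified by brute force (a sketch; the penalty is (2), the sum of squared geodesic distances):

```python
from itertools import product

def lattice_mean_penalty(xs, y):
    """Penalty (2): sum over the inputs of the squared geodesic distance to y."""
    return sum(sum(abs(a - b) for a, b in zip(x, y)) ** 2 for x in xs)

xs = [(10, 10), (8, 0), (3, 2)]
lattice_mean_penalty(xs, (7, 4))    # Cartesian product of means -> objective 142

# exhaustive search over the 11 x 11 product lattice:
best = min(product(range(11), repeat=2),
           key=lambda y: lattice_mean_penalty(xs, y))
lattice_mean_penalty(xs, best)      # -> 126, attained at (8, 3) and (9, 2)
```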

Consider now the median. For the median, we have the following problem:

$\min_{y \in L} \sum_{i=1}^{n} \sum_{j=1}^{m} |x_i^j - y^j|. \qquad (3)$

Each term in the inner sum in the latter expression depends on $y^j$ only; thus, the solution to the problem can be obtained by solving $m$ separate problems

$\min_{y^j} \sum_{i=1}^{n} |x_i^j - y^j|, \qquad j = 1, \dots, m.$


The solution to each of these problems is the median function. Hence, the minimum in (3) is achieved at $y = (\mathrm{med}(x_1^1, \dots, x_n^1), \dots, \mathrm{med}(x_1^m, \dots, x_n^m))$, i.e., the result is the Cartesian product of the medians. It is not difficult to confirm that the same conclusion is also valid for weighted medians.

There are several interesting examples of penalty functions presented in [5], which give rise to their analogs defined on product lattices. None of them results in a Cartesian product of the respective aggregation functions though.

V. APPLICATION IN IMAGE REDUCTION

In this section, we consider a practical application of aggregation on product lattices to color image processing. We present three color image reduction algorithms based on the minimization of penalty functions that are not built as Cartesian products of the corresponding aggregation functions. The first two algorithms are approximate: they provide putative solutions to the penalty minimization problem, chosen from smaller subsets of alternatives. The rationale here is computational efficiency. The third algorithm finds the actual solution to the penalty minimization problem using the approach in Section IV. We compare the accuracy and running times of all algorithms.

A. First Algorithm for Image Reduction

1) Algorithm: In the first color image reduction algorithm, we fix a number $M$ of different averaging aggregation functions. We apply the aggregation functions to each of the blocks in the image (componentwise), obtaining $M$ possible pixels in the reduced image. We select the pixel that minimizes a fixed penalty function $P$. A diagram of Algorithm 1 can be found in Fig. 3.

Algorithm 1 First color image reduction algorithm

Input: Image $A$ of dimension $n \times m$

Output: Reduced image $B$ of dimension $(n/k) \times (m/k)$

1: Divide the image in disjoint blocks of dimension $k \times k$. If $n$ or $m$ are not multiples of $k$, eliminate the necessary number of rows and/or columns to satisfy this condition.

2: Choose the penalty function $P$.

3: Take $M$ averaging aggregation functions $f_1, \dots, f_M$.

4: for each block in $A$ do

5: Apply to each pixel in each block (in the three channels R, G, and B) the aggregation functions, obtaining the candidates

$y_t = (f_t(R), f_t(G), f_t(B)), \qquad t = 1, \dots, M.$

Fig. 3. Diagram of Algorithm 1.

6: Calculate penalties $P$ for each candidate $y_t$, with $t = 1, \dots, M$.

7: Assign the value $y_t$ with the smallest penalty to the corresponding pixel of the reduced image.

8: end for

We illustrate Algorithm 1 on the following example. We reduce a block of dimension 3 × 3. We take five different aggregation functions: minimum (min), geometric mean (geom), arithmetic mean (arith), median (med), and maximum (max).

Example 2: We consider the block of an image shown at thebottom of the following page.

Suppose that the following penalty function (corresponding to the arithmetic mean) is fixed:

$P(x_1, \dots, x_9, y) = \sum_{i=1}^{9} \Big( \sum_{j=1}^{3} |x_i^j - y^j| \Big)^2.$


We apply the aggregation functions to the elements of the block componentwise, obtaining the five candidate values.

For this block of the image, we take the candidate with the smallest value of penalty for this block. Note that, although the penalty function corresponds to the arithmetic mean, the solution is not the Cartesian product of arithmetic means, which is consistent with the argument in Section IV.
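Algorithm 1 on a single block can be sketched as follows (illustrative names; the five aggregations are those of Example 2, and the penalty is (2)):

```python
import statistics
from math import prod

def geom_mean(v):
    return round(prod(v) ** (1 / len(v)))

AGGS = {
    "min": min,
    "geom": geom_mean,
    "arith": lambda v: round(statistics.mean(v)),
    "med": lambda v: round(statistics.median(v)),
    "max": max,
}

def penalty(pixels, y):
    # penalty (2): sum of squared geodesic distances on the RGB lattice
    return sum(sum(abs(a - b) for a, b in zip(p, y)) ** 2 for p in pixels)

def algorithm1_block(pixels):
    """Apply each aggregation channel-wise; keep the candidate of smallest penalty."""
    candidates = {name: tuple(agg([p[c] for p in pixels]) for c in range(3))
                  for name, agg in AGGS.items()}
    return min(candidates.items(), key=lambda kv: penalty(pixels, kv[1]))

name, y = algorithm1_block([(255, 0, 0), (250, 10, 5), (0, 255, 255)])
```

The function returns both the winning aggregation's name (the statistic tallied in Table II) and the chosen pixel.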

In Fig. 4, we illustrate Algorithm 1 on two color images in RGB [images (a) and (c)] in the same setting as in Example 2. In Table II, we show the frequency of choosing each of the aggregation functions. Notice that the biggest percentage corresponds to taking the arithmetic mean as the aggregation function.

Remark 5: Observe that, if all the values of the color components are the same, we can take any averaging function because they are all idempotent (column Any in Table II).

2) Reaction to Noise: Now we want to analyze how Algorithm 1 behaves when images have been altered with impulsive noise of the salt-and-pepper type, frequent in practice. We modify the images in Fig. 4, adding 5%, 10%, 20%, and 30% noise density. In Table III, we show the frequency of choosing each aggregation function in the setting of Example 2. The results are illustrated in Fig. 5. In the first row, we show the original images with noise. In the second row, we show the images obtained by applying Algorithm 1 and a simple reduction algorithm of subsampling.

Notice that, when the amount of salt-and-pepper noise in the images increases, the frequency of choosing the median also increases. This is shown in Fig. 6. On the horizontal axis, we show the percentage of pixels affected by noise. On the vertical axis, we show the percentage of times that each aggregation function is selected by Algorithm 1. The larger the impulsive noise is, the more often the median is selected instead of the arithmetic mean.

As the median is taken most frequently over each block of the image, Algorithm 1 allows one to discard the impulsive noise. This is explained by the fact that the median is not affected by the extremal values that are taken by the corrupted pixels.

Fig. 4. (a) and (c) Original color images. (b) and (d) Reduced images applying Algorithm 1.

The main advantage of Algorithm 1 is that it makes it unnecessary to use an ad hoc filter prior to the image reduction in order to eliminate this kind of noise.

B. Second Algorithm for Image Reduction

We see that Algorithm 1 does not ensure that we select the global minimizer of the penalty function by trying $M$ distinct


TABLE II
FREQUENCY OF CHOOSING AGGREGATION FUNCTIONS BY ALGORITHM 1 IN IMAGES (A) AND (C) OF FIG. 4

TABLE III
FREQUENCY OF CHOOSING AGGREGATION FUNCTIONS BY ALGORITHM 1 WHEN IMAGES (A) AND (C) OF FIG. 4 ARE AFFECTED BY SALT-AND-PEPPER NOISE

Fig. 5. (a) and (c) Original images with 20% of impulsive noise; (b1) and (d1) reductions applying Algorithm 1; and (b2) and (d2) the subsampling algorithm.

aggregation functions. The second proposed algorithm improves on that. We repeat steps 1–5 of Algorithm 1. Once we

Fig. 6. Frequency of aggregation functions as a function of the intensity of the salt-and-pepper noise of original images (a) and (c) of Fig. 4.

have the candidates $y_1, \dots, y_M$, Algorithm 2 is based on the calculation of all the possible combinations of their R, G, and B components (there are $M^3$ such combinations) in the following way:

$z_{(t_1, t_2, t_3)} = (y_{t_1}^R, y_{t_2}^G, y_{t_3}^B), \qquad t_1, t_2, t_3 \in \{1, \dots, M\}.$

Notice that the possible outputs of Algorithm 1 are a subset of the possible outputs of Algorithm 2. We analyze under which conditions the solutions of Algorithm 2 differ from those of Algorithm 1. In these cases, the value of the same penalty function with respect to the chosen output will be less than the value calculated in Algorithm 1. In Fig. 7, we show a diagram of Algorithm 2.

Algorithm 2 Second color image reduction algorithm

Input: Image $A$ of dimension $n \times m$

Output: Reduced image $B$ of dimension $(n/k) \times (m/k)$

1: Divide the image in disjoint blocks of dimension $k \times k$. If $n$ or $m$ are not multiples of $k$, eliminate the necessary number of rows and/or columns to satisfy this condition.

2: Choose the penalty function $P$.


Fig. 7. Diagram of Algorithm 2.

3: Take $M$ averaging aggregation functions $f_1, \dots, f_M$.

4: for each block in $A$ do

5: Apply to each pixel in each block (in the three channels R, G, and B) the aggregation functions, obtaining candidates $y_1, \dots, y_M$.

6: Calculate the $M^3$ combinations of the values obtained in the previous step, as follows:

$z_{(t_1, t_2, t_3)} = (y_{t_1}^R, y_{t_2}^G, y_{t_3}^B)$

where $t_1, t_2, t_3 \in \{1, \dots, M\}$.

7: Calculate penalties $P$ for each combination.

8: Assign the value with the smallest penalty to the corresponding pixel of the reduced image.

9: end for
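The combination step of Algorithm 2 can be sketched as follows (illustrative; with $M$ aggregations the search runs over $M^3$ candidates instead of Algorithm 1's $M$):

```python
from itertools import product
import statistics

def penalty(pixels, y):
    # penalty (2): sum of squared geodesic distances on the RGB lattice
    return sum(sum(abs(a - b) for a, b in zip(p, y)) ** 2 for p in pixels)

def algorithm2_block(pixels, aggs):
    """Mix channel results across aggregations: every candidate takes its R, G,
    and B values from possibly different aggregation functions."""
    per_channel = [[agg([p[c] for p in pixels]) for agg in aggs] for c in range(3)]
    return min(product(*per_channel), key=lambda y: penalty(pixels, y))

aggs = [min, max, lambda v: round(statistics.mean(v))]
algorithm2_block([(10, 10, 10), (8, 0, 0), (3, 2, 2)], aggs)   # -> (10, 4, 4)
```

On this toy block the winner mixes channels: the R value comes from max while G and B come from the arithmetic mean, which no single channel-wise aggregation (Algorithm 1) could produce.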

In Fig. 8, we apply Algorithm 2 in the same setting as Example 2. The first image [image (a)] is a synthetic one with small variations of color. The second image [image (b)] is a texture image with large variations of intensity. By analyzing the results, we observe that, when we apply Algorithms 1 and 2 to

Fig. 8. (a), (b) Original images. (c), (d) Color image reductions obtained by Algorithm 2.

image (a), we obtain the same results. However, when we apply them to image (b), around 60% of the pixels are different.

Hence, when dealing with large intensity changes, the solutions given by Algorithm 2 provide smaller values of penalty $P$. However, the computational cost of this algorithm is higher: Algorithm 2 evaluates $M^3$ combinations, so its running time grows cubically in the number $M$ of aggregations, whereas, for Algorithm 1, it increases linearly. This prompted us to develop another algorithm improving on Algorithms 1 and 2.

C. Third Algorithm for Image Reduction

The third reduction algorithm aims at identifying the global minimum of penalty function $P$ for each block of the image. It is based on coordinate descent, as outlined in Section IV. The idea of the algorithm is the following: First, we initialize the value of $y = (y^1, y^2, y^3)$. Then, for the first component, the goal of the coordinate descent is to find the value $y^1$ such that $P$ is a minimum in the first coordinate.

We apply the same process of coordinate descent to the second and the third components. Once we have the new value of $y$, we repeat the same process (minimization over the three components) until the value of $y$ remains the same for two consecutive iterations. Then, $y$ is the value that minimizes the penalty function $P$. In Fig. 9, we show a diagram of Algorithm 3.

Algorithm 3 Third color image reduction algorithm

Input: Image $A$ of dimension $n \times m$

Output: Reduced image $B$ of dimension $(n/k) \times (m/k)$

1: Divide the image in disjoint blocks of dimension $k \times k$. If $n$ or $m$ are not multiples of $k$, eliminate the necessary number of rows and/or columns to satisfy this condition.

2: Choose the penalty function $P$.

3: for each block in $A$ do


4: Calculate $y$, the block value minimizing $P$, by means of the coordinate descent algorithm (Algorithm 4).

5: Assign value $y$ to the corresponding pixel of the reduced image.

6: end for

Algorithm 4 Coordinate descent algorithm

Input: pixels $x_1, \dots, x_n$ of the block and an initial value $y = (y^1, y^2, y^3)$

Output: value $y$ minimizing the penalty $P$

1: repeat

2: for each component $c \in \{1, 2, 3\}$ do

3: if $P$ decreases when $y^c$ is increased by one then

4: repeat $y^c \leftarrow y^c + 1$ until $P$ no longer decreases

5: else if $P$ decreases when $y^c$ is decreased by one then

6: repeat $y^c \leftarrow y^c - 1$ until $P$ no longer decreases

7: end if

8: end for

9: until $y$ is no longer modified
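The coordinate descent of Algorithms 3 and 4 can be sketched as follows (illustrative; for brevity we use an exact line search along each chain instead of the unit steps of Algorithm 4, which reaches the same per-coordinate minimum):

```python
def coordinate_descent(pixels, y0, chain_len=256):
    """Minimize penalty (2) over a product of chains, one component at a time."""
    def penalty(y):
        return sum(sum(abs(a - b) for a, b in zip(p, y)) ** 2 for p in pixels)
    y = list(y0)
    improved = True
    while improved:
        improved = False
        for c in range(len(y)):
            # exact minimization along the c-th chain
            best = min(range(chain_len),
                       key=lambda v: penalty(y[:c] + [v] + y[c + 1:]))
            if best != y[c]:
                y[c] = best
                improved = True
    return tuple(y)

# Example 1 revisited: starting from the Cartesian product of means (7, 4)
coordinate_descent([(10, 10), (8, 0), (3, 2)], (7, 4), chain_len=11)   # -> (8, 3)
```

Starting from the Cartesian product of means, the descent reaches one of the two global minimizers of Example 1 in a single pass.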

Fig. 9. Diagram of Algorithm 3.

Fig. 10. Differences in each color component between reductions obtained by Algorithm 1 and by Algorithm 3.

As was the case with Algorithm 2, the largest difference between images reduced by Algorithms 1 and 3 can be found in the areas with a bigger variation of intensities. In Fig. 10, we show, for each color component, an image of normalized differences between the images reduced with Algorithm 1 and the same images reduced with Algorithm 3. In these images, lighter pixels correspond to a bigger difference between the images. Observe that light areas correspond to edges, i.e., areas with large changes of intensity.

VI. EXPERIMENTAL RESULTS

In this section, we present a formal comparative study of the performance of the algorithms proposed in this paper and some other image reduction algorithms from the literature. The other algorithms consider each color component separately, i.e., are based on the Cartesian product. We analyze 11 images in RGB of dimension 256 × 256 from http://decsai.ugr.es/cvg/dbimagenes/index.php. In Fig. 11, we show the first six of the 11 images considered.

We compare the three proposed reduction algorithms with a classical subsampling algorithm (Sub) (taking only one pixel from


Fig. 11. Original images used in the experimental study.

TABLE IV
MSE OF RECONSTRUCTIONS OF 11 ORIGINAL IMAGES USING ALGORITHMS 1–3, FUZZY TRANSFORM, AND SUBSAMPLING

TABLE V
SIM OF RECONSTRUCTIONS OF 11 ORIGINAL IMAGES USING ALGORITHMS 1–3, FUZZY TRANSFORM, AND SUBSAMPLING

each block, usually the central one) and a recent method based on the fuzzy transform (Trans) [15]. We take, as the penalty, the function in (2). To measure the accuracy of each method, we follow the same scheme presented in Section II-C: to enlarge the

TABLE VI
PEN OF RECONSTRUCTIONS OF 11 ORIGINAL IMAGES USING ALGORITHMS 1–3, FUZZY TRANSFORM, AND SUBSAMPLING

Fig. 12. Reduced images of (d) and (e) of Fig. 11 using Algorithms 1–3, the fuzzy transform, and subsampling.

reduced image to the original dimension and to compare the resulting image with the original image. The method for enlargement is presented in Section II-C. It is a very simple method with low computational cost. Moreover, this method allows us to compare visually the obtained images with their respective original images without changing the results obtained for the reduced images.


Fig. 13. Reconstructed images of Fig. 12 of Algorithm 1, Trans, and Sub reduction algorithms.

Fig. 14. Reconstructed images of Fig. 12 of Algorithm 1, Trans, and Sub reduction algorithms.

To measure the differences between the reconstructed and the original images, we use the error and similarity measures that appear most commonly in the literature (based on Cartesian products of similarities for each color) and a new measure


TABLE VII
MSE OF RECONSTRUCTIONS OF 11 IMAGES WITH SALT-AND-PEPPER NOISE USING ALGORITHMS 1–3, FUZZY TRANSFORM, AND SUBSAMPLING

based on the arithmetic mean in product lattices: the MSE, the similarity measure SIM presented in [25] and [26], and the error based on penalty (PEN), defined via the penalty function in (2).

In Tables IV–VI, we show the error in the reconstruction by using MSE, SIM, and PEN, respectively. Smaller values of MSE and PEN and larger values of SIM are better. The best results are obtained with the algorithms that we propose in this paper. Moreover, the results of the three algorithms are very similar to each other and improve by around 18% compared with the results of the fuzzy transform.

In Fig. 12, we visually show the results for two of the six images in Fig. 11 obtained by means of the five analyzed reduction methods. To observe the differences better, in Fig. 13, we show the images reconstructed to their original size obtained by means of Algorithm 1, the fuzzy transform, and subsampling.

A. Experiments With Impulsive Noise

We now consider the same images with salt-and-pepper noise. We calculated the MSE, SIM, and PEN of the reconstructed images (see Fig. 14). We have changed 10% of the pixels in the test images. We present the results in Tables VII–IX. In Fig. 15, we show the images obtained by the five considered reduction algorithms (Algorithms 1–3, Trans, and Sub) applied to the images of Fig. 11 with noise.

For images with impulsive noise, the three proposed algorithms provide the best results. In particular, the results of Algorithm 1 are very competitive. In Section V-A-2, we already saw that the number of times Algorithm 1 uses the median increases when salt-and-pepper noise is added, and the median is well known to suppress this kind of noise.
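A small illustration of why the median suppresses impulsive noise while the arithmetic mean does not (the block values are hypothetical):

```python
from statistics import mean, median

# A 3x3 block of nearly uniform gray values hit by one salt impulse.
block = [120, 121, 119, 120, 255, 120, 118, 121, 120]

# The arithmetic mean is dragged toward the outlier, whereas the median
# ignores it -- this is why Algorithm 1 selects the median more often
# on salt-and-pepper-corrupted blocks.
print(mean(block))    # ~134.9, visibly biased by the impulse
print(median(block))  # 120, the impulse is suppressed
```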

To analyze the algorithms under larger amounts of noise, in Fig. 16, we show the mean MSE, SIM, and PEN, respectively (vertical axis), of the reconstructions of the 11 images

TABLE VIII
SIM OF RECONSTRUCTIONS OF 11 IMAGES WITH SALT-AND-PEPPER NOISE USING ALGORITHMS 1–3, FUZZY TRANSFORM, AND SUBSAMPLING

TABLE IX
PEN OF RECONSTRUCTIONS OF 11 IMAGES WITH SALT-AND-PEPPER NOISE USING ALGORITHMS 1–3, FUZZY TRANSFORM, AND SUBSAMPLING

TABLE X
MEAN CPU TIMES OF THE REDUCTION ALGORITHMS

with noise levels of 5%, 10%, 20%, and 30% (horizontal axis). Notice that the results of the three algorithms are very similar to one another and remain much better than those of the fuzzy transform and subsampling methods, even as the amount of noise increases.

Finally, in Table X, we present the average running times of the three proposed algorithms and of the two methods we benchmark against. The algorithms were programmed in MATLAB. The lowest runtime obviously corresponds to the subsampling algorithm, because it requires no computation on the image; however, its results are poor.

In the three presented algorithms, the computational complexity obviously depends on the dimensions of the original image and on the size of the reduction block. However, there are differences between the three algorithms. In Algorithms 1 and 2, the number of mathematical operations (evaluations of the aggregation functions) is the same if we take the same value in step 3 of both algorithms. However, Algorithm 2 evaluates the penalty function many more times than Algorithm 1, which increases its cost. Observe that the lowest runtime corresponds to Algorithm 1, whereas the runtime of Algorithm 2 is very high. On the other hand, in Algorithm 3, the number of evaluations of the penalty function


Fig. 15. Reduced images of Fig. 11 with salt-and-pepper noise using Algorithms 1–3, the fuzzy transform, and subsampling.

Fig. 16. Mean MSE, SIM, and PEN of the 11 reconstructed images with 5%, 10%, 20%, and 30% salt-and-pepper noise density.

depends on the coordinate descent algorithm. In noiseless images, the algorithm is able to find the minimum of the penalty

TABLE XI
MEAN CPU TIMES OF ALGORITHM 3 WITH NOISY IMAGES AND DIFFERENT INITIALIZATIONS

function in two iterations. This makes its runtime smaller than that of Algorithm 2. Moreover, the initialization value strongly determines the runtime: an initialization close to the solution diminishes the required number of evaluations of the penalty function.

We now study the CPU times of the proposed algorithms for noisy images. We observe that the CPU times of Algorithm 3 increase, whereas the rest remain stable. If we add some extreme values (impulsive noise) to the data, then the number of iterations increases. In addition, step 1 of the coordinate descent algorithm (the choice of the initial value) is again an important determinant of the number of iterations. In Table XI, we show the CPU times of Algorithm 3 with three different initializations: 1) taking the Cartesian product of the arithmetic means; 2) taking the Cartesian product of the medians; and 3) taking the result of applying Algorithm 1. Notice that the lowest CPU times correspond to the third initialization. As we have illustrated before, the results of Algorithm 1 are good solutions (Tables VII–IX) and appropriate initializations for the coordinate descent algorithm.
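The coordinate descent step can be sketched as follows; a simple L1 dissimilarity stands in for the paper's penalty function, and all names are illustrative assumptions:

```python
# Sketch (not the paper's exact penalty): coordinate descent over a discrete
# RGB product lattice {0..255}^3.  Each pass optimizes one channel at a time
# by exhaustive search on the chain; we stop when a full pass changes nothing.

def penalty(y, data):
    """L1 dissimilarity of candidate y to the input pixels (an assumption
    standing in for the paper's penalty function)."""
    return sum(sum(abs(yc - xc) for yc, xc in zip(y, x)) for x in data)

def coordinate_descent(data, y0):
    """Minimize `penalty` starting from y0; return (minimizer, passes used)."""
    y = list(y0)
    passes = 0
    while True:
        passes += 1
        changed = False
        for c in range(3):  # one channel (coordinate) at a time
            best = min(range(256),
                       key=lambda v: penalty(y[:c] + [v] + y[c + 1:], data))
            if best != y[c]:
                y[c], changed = best, True
        if not changed:
            return tuple(y), passes
```

Because this particular penalty separates across channels, its minimizer is the componentwise median, and a run that starts at the solution only needs one verification pass; this mirrors the observation that an initialization close to the solution (e.g., the output of Algorithm 1) reduces the number of penalty evaluations.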

VII. CONCLUSION

Based on the representation of the classical mean and the median as solutions to minimization problems, we have defined the mean and the median on discrete product lattices in the same way. We have shown that the median becomes the Cartesian product of the medians defined on discrete chains, whereas the mean is not the Cartesian product of the respective means. We proved that penalty-based functions based on dissimilarities are monotone (i.e., aggregation functions) in the case of product lattices.

The main motivation for and application of this paper is the aggregation of colors in image processing. In this context, we have presented three image reduction algorithms based on aggregation by means of penalty functions. We have shown that, as in the case of grayscale images, the results obtained with different error measures may be determined by the aggregation function that is used.

We have compared the proposed algorithms with some of the most commonly used reduction methods and found that the proposed methods are superior for color image reduction. They are also robust with respect to impulsive noise in the images. We have studied the effect of noise on the proposed algorithms and found that they efficiently filter out non-Gaussian noise. The computational cost of the proposed methods is relatively small.

ACKNOWLEDGMENT

The authors would like to thank the editor and the anonymous reviewers for their valuable suggestions, which have greatly improved this paper.

REFERENCES

[1] M. Galar, J. Fernandez, G. Beliakov, and H. Bustince, "Interval-valued fuzzy sets applied to stereo matching of color images," IEEE Trans. Image Process., vol. 20, no. 7, pp. 1949–1961, Jul. 2011.

[2] B. De Baets and R. Mesiar, "Triangular norms on product lattices," Fuzzy Sets Syst., vol. 104, no. 1, pp. 61–75, May 1999.

[3] S. Jenei and B. De Baets, "On the direct decomposability of t-norms on product lattices," Fuzzy Sets Syst., vol. 139, no. 3, pp. 699–707, Nov. 2003.

[4] G. Mayor and J. Monreal, "Additive generators of discrete conjunctive aggregation operations," IEEE Trans. Fuzzy Syst., vol. 15, no. 6, pp. 1046–1052, Dec. 2007.

[5] T. Calvo and G. Beliakov, "Aggregation functions based on penalties," Fuzzy Sets Syst., vol. 161, no. 10, pp. 1420–1436, May 2010.

[6] T. Calvo, R. Mesiar, and R. Yager, "Quantitative weights and aggregation," IEEE Trans. Fuzzy Syst., vol. 12, no. 1, pp. 62–69, Feb. 2004.

[7] R. Yager and A. Rybalov, "Understanding the median as a fusion operator," Int. J. Gen. Syst., vol. 26, no. 3, pp. 239–263, 1997.

[8] R. Mesiar, "Fuzzy set approach to the utility, preference relations, and aggregation operators," Eur. J. Oper. Res., vol. 176, no. 1, pp. 414–422, Jan. 2007.

[9] G. Beliakov, A. Pradera, and T. Calvo, Aggregation Functions: A Guide for Practitioners. Berlin, Germany: Springer-Verlag, 2007.

[10] V. Torra and Y. Narukawa, Modeling Decisions: Information Fusion and Aggregation Operators. Berlin, Germany: Springer-Verlag, 2007.

[11] T. Calvo, G. Mayor, and R. Mesiar, Eds., Aggregation Operators: New Trends and Applications. Heidelberg, Germany: Physica-Verlag, 2002.

[12] M. Grabisch, J.-L. Marichal, R. Mesiar, and E. Pap, Aggregation Functions. Cambridge, U.K.: Cambridge Univ. Press, 2009.

[13] H. Bustince, T. Calvo, B. De Baets, J. Fodor, R. Mesiar, J. Montero, D. Paternain, and A. Pradera, "A class of aggregation functions encompassing two-dimensional OWA operators," Inf. Sci., vol. 180, no. 10, pp. 1977–1989, May 2010.

[14] C. Gini, Le Medie. Milan, Italy: Unione Tipografico-Editorial Torinese, 1958 (Russian translation: Srednie Velichiny, Statistica, Moscow, 1970).

[15] I. Perfilieva, "Fuzzy transforms: Theory and applications," Fuzzy Sets Syst., vol. 157, no. 8, pp. 993–1023, Apr. 2006.

[16] H. Nobuhara, K. Hirota, S. Sessa, and W. Pedrycz, "Efficient decomposition methods of fuzzy relation and their application to image decomposition," Appl. Soft Comput., vol. 5, no. 4, pp. 399–408, Jul. 2005.

[17] H. Kirshner and M. Porat, "On the role of exponential splines in image interpolation," IEEE Trans. Image Process., vol. 18, no. 10, pp. 2198–2208, Oct. 2009.

[18] Y. Park and H. Park, "Arbitrary-ratio image resizing using fast DCT of composite length for DCT-based transcoder," IEEE Trans. Image Process., vol. 15, no. 2, pp. 494–500, Feb. 2006.

[19] F. Di Martino, V. Loia, I. Perfilieva, and S. Sessa, "An image coding/decoding method based on direct and inverse fuzzy transforms," Int. J. Approx. Reason., vol. 48, no. 1, pp. 110–131, Apr. 2008.

[20] V. Loia and S. Sessa, "Fuzzy relation equations for coding/decoding processes of images and videos," Inf. Sci., vol. 171, no. 1–3, pp. 145–172, Mar. 2005.

[21] A. Kanemura, S. Maeda, and S. Ishii, "Sparse Bayesian learning of filters for efficient image expansion," IEEE Trans. Image Process., vol. 19, no. 6, pp. 1480–1490, Jun. 2010.

[22] A. Jurio, M. Pagola, R. Mesiar, G. Beliakov, and H. Bustince, "Image magnification using interval information," IEEE Trans. Image Process., vol. 20, no. 11, pp. 3112–3123, Nov. 2011.

[23] H. Bustince, E. Barrenechea, and M. Pagola, "Restricted equivalence functions," Fuzzy Sets Syst., vol. 157, no. 17, pp. 2333–2346, Sep. 2006.

[24] H. Bustince, E. Barrenechea, and M. Pagola, "Relationship between restricted dissimilarity functions, restricted equivalence functions and normal e-n-functions: Image thresholding invariant," Pattern Recognit. Lett., vol. 29, no. 4, pp. 525–536, Mar. 2008.

[25] H. Bustince, V. Mohedano, E. Barrenechea, and M. Pagola, "Definition and construction of fuzzy DI-subsethood measures," Inf. Sci., vol. 176, no. 21, pp. 3190–3231, Nov. 2006.

[26] H. Bustince, M. Pagola, and E. Barrenechea, "Construction of fuzzy indexes from fuzzy DI-subsethood measures: Application to the global comparison of images," Inf. Sci., vol. 177, no. 3, pp. 906–929, Feb. 2007.

Gleb Beliakov (SM'08) received the Ph.D. degree in physics and mathematics from the Russian Peoples' Friendship University, Moscow, Russia, in 1992.

He was a Lecturer and a Research Fellow with the University of the Andes, the University of Melbourne, and the University of South Australia. He is currently an Associate Professor with the School of IT, Deakin University, Burwood, Australia. He is the author of a hundred research papers in the mentioned areas and a number of software packages. His research interests include fuzzy systems, aggregation operators, multivariate approximation, global optimization, decision support systems, and applications of fuzzy systems in healthcare.

Humberto Bustince (M'06) received the Ph.D. degree in mathematics from the Public University of Navarra, Pamplona, Spain, in 1994.

He is a Full Professor with the Department of Automatics and Computation, Public University of Navarra. He is the author of more than 80 papers in WoS. His research interests include fuzzy logic theory, extensions of fuzzy sets (e.g., type-2 fuzzy sets, interval-valued fuzzy sets, and Atanassov's intuitionistic fuzzy sets), fuzzy measures, aggregation operators, and fuzzy techniques for image processing.

Daniel Paternain received the M.Sc. degree in computer science from the Public University of Navarra, Pamplona, Spain, in 2008. He is currently working toward the Ph.D. degree with the Department of Automatics and Computation, Public University of Navarra.

He is currently a Teaching Assistant with the Department of Automatics and Computation, Public University of Navarra. His research interests include image processing, focusing on image reduction, and applications of aggregation functions, fuzzy sets, and extensions of fuzzy sets.
