

A Linear Programming Approach for Optimal Contrast-tone Mapping

Xiaolin Wu, Fellow, IEEE

Abstract—This paper proposes a novel algorithmic approach of image enhancement via optimal contrast-tone mapping (OCTM). In a fundamental departure from the current practice of histogram equalization for contrast enhancement, the proposed approach maximizes expected contrast gain subject to an upper limit on tone distortion and optionally to other constraints that suppress artifacts. The underlying contrast-tone optimization problem can be solved efficiently by linear programming. This new constrained optimization approach for image enhancement is general, and the user can add and fine-tune the constraints to achieve desired visual effects. Experimental results demonstrate clearly superior performance of the new approach over histogram equalization and its variants.

Index terms: contrast enhancement, tone reproduction, dynamic range, histogram equalization, linear programming.

I. INTRODUCTION

In most image and video applications it is human viewers that make the ultimate judgement of visual quality. They typically associate high image contrast with good image quality. Indeed, a notable progress in image display and generation (both acquisition and synthetic rendering) technologies is the increase of dynamic range and the associated image enhancement techniques [1].

The contrast of a raw image can be far less than ideal, due to various causes such as poor illumination conditions, low-quality inexpensive imaging sensors, user operation errors, media deterioration (e.g., old faded prints and films), etc. For improved human interpretation of image semantics and higher perceptual quality, contrast enhancement is often performed, and it has been an active research topic since the early days of digital image processing, consumer electronics and computer vision.

Contrast enhancement techniques can be classified into two approaches: context-sensitive (point-wise operators) and context-free (point operators). In the context-sensitive approach the contrast is defined in terms of the rate of change in intensity between neighboring pixels. The contrast is increased by directly altering the local waveform on a pixel-by-pixel basis. For instance, edge enhancement and high-boost filtering belong to the context-sensitive approach. Although intuitively appealing, the context-sensitive techniques are prone to artifacts such as ringing and magnified noise, and they cannot preserve the rank consistency of the altered intensity levels. The context-free contrast enhancement approach, on the other hand, does not adjust the local waveform on a pixel-by-pixel basis. Instead, the class of context-free contrast enhancement techniques adopts a statistical approach. They manipulate the histogram of the input image to separate the gray levels of higher probability further from the neighboring gray levels. In other words, the context-free techniques aim to increase the average difference between any two altered input gray levels. Compared with its context-sensitive counterpart, the context-free approach does not suffer from ringing artifacts and it can preserve the relative ordering of altered gray levels.

To appear in IEEE Transactions on Image Processing. Copyright (c) 2010 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected].

Xiaolin Wu is with the Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, Canada L8S 4K1. Email: [email protected].

This research is sponsored by the Natural Sciences and Engineering Research Council of Canada, and by Natural Science Foundation of China grant 60932006 and the 111 Project B07022.

Despite more than half a century of research on contrast enhancement, most published techniques are largely ad hoc. Due to the lack of a rigorous analytical approach to contrast enhancement, histogram equalization seems to be a folklore synonym for contrast enhancement in the literature and in textbooks of image processing and computer vision. The justification of histogram equalization as a contrast enhancement technique is heuristic, catering to an intuition: low contrast corresponds to a biased histogram and thus can be rectified by reallocating underused dynamic range of the output device to more probable pixel values. Although this intuition is backed up by empirical observations in many cases, the relationship between histogram and contrast has not been precisely quantified.

No mathematical basis exists for the uniformity or near uniformity of the processed histogram to be an objective of contrast enhancement in a general sense. On the contrary, histogram equalization can be detrimental to image interpretation if carried out mechanically without care. In the absence of proper constraints, histogram equalization can overshoot the gradient amplitude in some narrow intensity range(s) and flatten subtle smooth shades in other ranges. It can bring unacceptable distortions to image statistics such as average intensity, energy, and covariances, generating unnatural and incoherent 2D waveforms. To alleviate these shortcomings, a number of different techniques were proposed to modify the histogram equalization algorithm [2]–[7]. This line of investigation was initiated by Pisano et al. in their work on contrast-limited adaptive histogram equalization (CLAHE) [8]. Somewhat ironically, these authors had to limit contrast while pursuing contrast enhancement. Very recently, Arici et al. proposed to generate an intermediate histogram h in between the original input histogram hi and the uniform histogram u and then perform histogram equalization of h.


The in-between histogram h is computed by minimizing a weighted distance ‖h − hi‖ + λ‖h − u‖. The authors showed that undesirable side effects of histogram equalization can be suppressed by choosing the Lagrangian multiplier λ. This latest paper also gave a good synopsis of existing contrast enhancement techniques. We refer the reader to [9] for a survey of previous works, instead of paraphrasing them here.

Compared with the aforementioned works on histogram-based contrast enhancement techniques, this paper presents a more rigorous study of the problem. We reexamine contrast enhancement from a new perspective of optimal allocation of output dynamic range constrained by tone continuity. This brings about a more principled approach to image enhancement. Our critique of the current practice is that directly manipulating histograms for contrast enhancement was ill conceived. The histogram is an unwieldy, obscure proxy for contrast. The wide use of histogram equalization as a means of context-free contrast enhancement is apparently due to the lack of a proper mathematical formulation of the problem. To fill this void we define an expected (context-free) contrast gain of a transfer function. This relative measure of contrast takes on its base value of one if the input image is left unchanged (i.e., identity transfer function), and increases if a skewed histogram is made more uniform. However, perceptual image quality is more than the single aspect of high contrast. If the output dynamic range is less than that of the human visual system, which is the case for most display and printing technologies, context-free contrast enhancement will inevitably distort subtle tones. To balance tone subtlety against contrast enhancement we introduce a countermeasure of tone distortion. Based on the said measures of contrast gain and tone distortion, we formulate the problem of optimal contrast-tone mapping (OCTM) that aims to achieve high contrast and subtle tone reproduction at the same time, and propose a linear programming strategy to solve the underlying constrained optimization problem. In the OCTM formulation, the optimal transfer function for images of uniform histogram is the identity function. Although an image of uniform histogram cannot be further enhanced, histogram equalization does not produce OCTM solutions in general for arbitrary input histograms. Instead, the proposed linear programming-based OCTM algorithm can optimize the transfer function such that sharp contrast and subtle tone are best balanced according to application requirements and user preferences. The OCTM technique offers a greater and more precise control of visual effects than existing techniques of contrast enhancement. Common side effects of contrast enhancement, such as contours, shift of average intensity, over-exaggerated gradients, etc., can be effectively suppressed by imposing appropriate constraints in the linear programming framework.

In addition, in the OCTM framework, L input gray levels can be mapped to an arbitrary number Ł of output gray levels, allowing Ł to be equal to, less than, or greater than L. The OCTM technique is therefore suited to outputting conventional images on high dynamic range displays or high dynamic range images on conventional displays, with perceptual quality optimized for device characteristics and image contents. As such, OCTM can be a useful tool in high dynamic range imaging. Moreover, OCTM can be unified with Gamma correction.

Analogously to global and local histogram equalization, OCTM can be performed based on either global or local statistics. However, in order to make our technical developments in what follows concrete and focused, we will only discuss the problem of contrast enhancement over an entire image instead of adapting to local statistics of different subimages. All the results and observations can be readily extended to locally adaptive contrast enhancement.

The remainder of the paper is organized as follows. In the next section we introduce some new definitions related to the intuitive notions of contrast and tone, and propose the OCTM approach to image enhancement. In Section III, we develop a linear programming algorithm to solve the OCTM problem. In Section IV we discuss how to fine-tune output images according to application requirements or users' preferences within the proposed linear programming framework. Experimental results are reported in Section V; they demonstrate the versatility and superior visual quality of the new contrast enhancement technique.

II. CONTRAST AND TONE

Consider a gray scale image I of b bits with a histogram h of K non-zero entries, x0 < x1 < · · · < xK−1, 0 < K ≤ L = 2^b. Let pk be the probability of gray level xk, 0 ≤ k < K. We define the expected context-free contrast of I by

C(p) = p0(x1 − x0) + ∑_{1≤k<K} pk(xk − xk−1).    (1)

By this definition, the maximum contrast Cmax = L − 1 is achieved by a binary black-and-white image with x0 = 0, x1 = L − 1; the minimum contrast Cmin = 0 occurs when the image is constant. As long as the histogram of I is full, without holes, i.e., K = L and xk − xk−1 = 1 for 0 < k < L, we have C(p) = 1 regardless of the intensity distribution (p0, p1, · · · , pL−1). Likewise, if xk − xk−1 = d > 1 for 0 < k < K < L, then C(p) = d.
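As a quick sanity check of definition (1), the special cases above can be verified numerically. The following sketch (the function name is ours, not from the paper) computes C(p) from the non-zero histogram entries:

```python
import numpy as np

def expected_contrast(levels, probs):
    """Expected context-free contrast C(p) of definition (1).

    levels: sorted gray levels x_0 < x_1 < ... < x_{K-1} with
    non-zero probability; probs: their probabilities p_k.
    """
    x = np.asarray(levels, dtype=float)
    p = np.asarray(probs, dtype=float)
    gaps = np.diff(x)                  # x_k - x_{k-1} for k = 1..K-1
    # p_0 is paired with the gap (x_1 - x_0); p_k with (x_k - x_{k-1}).
    return p[0] * gaps[0] + np.dot(p[1:], gaps)
```

A full 8-bit histogram (K = L = 256, unit gaps) yields C(p) = 1 for any distribution, while the two-level image x0 = 0, x1 = 255 attains Cmax = 255, matching the text.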

Contrast enhancement is to increase the difference between two adjacent gray levels, and it is achieved by a remapping of input gray levels to output gray levels. Such a remapping is also necessary when reproducing a digital image of L gray levels on a device of Ł gray levels, L ≠ Ł. This process is an integer-to-integer transfer function

T : {0, 1, · · · , L − 1} → {0, 1, · · · , Ł − 1}    (2)

In order not to violate physical and psychovisual common sense, the transfer function T should be monotonically non-decreasing so that T does not reverse the order of intensities.¹ In other words, we must have T(j) ≥ T(i) if j > i, and hence any transfer function T has the form

T(i) = ∑_{0≤j≤i} sj,  0 ≤ i < L,
sj ∈ {0, 1, · · · , Ł − 1},
∑_{0≤j<L} sj < Ł.    (3)

¹This restriction may be relaxed in locally adaptive contrast enhancement. But in each locality the monotonicity should still be imposed.


where sj is the increment in output intensity for a unit step up in input level j (i.e., xj − xj−1 = 1), and the last inequality ensures that T(i) does not exceed the output dynamic range.
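Since T is determined entirely by the increment vector s, building the mapping is a one-line cumulative sum. A minimal sketch (helper name ours):

```python
import numpy as np

def transfer_from_increments(s):
    """Transfer function of (3): T(i) = sum of s_j over 0 <= j <= i.

    Monotonicity T(j) >= T(i) for j > i is automatic because s_j >= 0;
    the caller must keep sum(s) below the output dynamic range.
    """
    return np.cumsum(np.asarray(s))
```

For example, s = (0, 1, 1, ..., 1) reproduces the identity mapping, while concentrating the increments at busy gray levels widens the gaps between them.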

In (3), sj can be interpreted as the context-free contrast at level j, which is the rate of change in output intensity without considering the pixel context. Note that a transfer function is completely determined by the vector s = (s0, s1, · · · , sL−1), namely the set of contrasts at all L input gray levels. Having associated the transfer function T with context-free contrasts sj at different levels, we induce from (3) and definition (1) a natural measure of the expected contrast gain made by T:

G(s) = ∑_{0≤j<L} pj sj    (4)

where pj is the probability that a pixel in I has input gray level j. The above measure conveys the colloquial meaning of contrast enhancement. To see this, let us examine some special cases.

Proposition 1: The maximum contrast gain G(s) is achieved by sk = Ł − 1 for the k such that pk = max{pi | 0 ≤ i < L}, and sj = 0, j ≠ k.

Proof: Assume for contradiction that some sj = n > 0, j ≠ k, would achieve a higher contrast gain. Due to the constraint ∑_{0≤j<L} sj < Ł, sk is then at most Ł − 1 − n. But pj n + pk(Ł − 1 − n) ≤ pk(Ł − 1), refuting the assumption.

Proposition 1 reflects our intuition that the highest contrast is achieved when T makes a single-step (thresholding) black-to-white transition, converting the input image from gray scale to binary. The binary threshold is set at level k such that pk = max{pi | 0 ≤ i < L} to maximize the contrast gain.

One can preserve the average intensity while maximizing the contrast gain. The average-preserving maximum contrast gain is achieved by sk = Ł − 1, sj = 0, j ≠ k, such that ∑_{0≤j<k} pj ≈ ∑_{k≤j<L} pj. Namely, T(i) is the binary thresholding function at the average gray level.

If L = Ł (i.e., when the input and output dynamic ranges are the same), the identity transfer function T(i) = i, namely si = 1, 0 ≤ i < L, makes G(1) = 1 regardless of the gray level distribution of the input image. In our definition the unit contrast gain means a neutral contrast level without any enhancement. The notion of neutral contrast can be generalized to the cases when L ≠ Ł. We call τ = Ł/L the tone scale. In general, the transfer function

T(i) = ⌊((Ł − 1)/(L − 1)) i + 0.5⌋,  0 ≤ i < L    (5)

or equivalently si = τ, 0 ≤ i < L, corresponds to the state of neutral contrast G(τ1) = τ.
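The neutral mapping (5) is just a uniform stretch by the tone scale τ; a short sketch (function name ours):

```python
import numpy as np

def neutral_transfer(L_in, L_out):
    """Neutral-contrast mapping of (5): stretch inputs 0..L-1 uniformly
    onto outputs 0..L_out-1, i.e., a constant slope s_i = tau = L_out/L_in."""
    i = np.arange(L_in)
    return np.floor((L_out - 1) / (L_in - 1) * i + 0.5).astype(int)
```

With L = Ł this reduces to the identity mapping, the G(1) = 1 case discussed above.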

High contrast by itself does not equate to high image quality. Another important aspect of image fidelity is tone continuity. A single-minded approach of maximizing G(s) would likely produce over-exaggerated, unnatural visual effects, as revealed by Proposition 1. The resulting T(i) degenerates a continuous-tone image to a binary image. This maximizes the contrast of a particular gray level but completely ignores accurate tone reproduction. We begin our discussion of the trade-off between contrast and tone by stating the following simple and yet informative observation.

Proposition 2: The maximum of min{s0, s1, · · · , sL−1} is achieved if and only if si = τ, 0 ≤ i < L, i.e., G(τ1) = τ.

As stated above, the simple linear transfer function, i.e., doing nothing in the traditional sense of contrast enhancement, actually maximizes the minimum of the context-free contrasts si over all levels 0 ≤ i < L, and the neutral contrast gain G(τ1) = τ is the largest possible when satisfying this maxmin criterion.

In terms of visual effects, the reproduction of continuous tones demands that the transfer function meet the maxmin criterion of Proposition 2. The collapse of distinct gray levels into one tends to create contours or banding artifacts. With this consideration, we define the tone distortion of a transfer function T(i) by

D(s) = max_{0≤i,j<L} { j − i | T(i) = T(j); pi > 0, pj > 0 }    (6)

In the definition we account for the fact that the transfer function T(i) is not a one-to-one mapping in general. The smaller the tone distortion D(s), the smoother the tone reproduced by T(i). It is immediate from the definition that the smallest achievable tone distortion is min_s D(s) = 1/τ − 1. But since the dynamic range Ł of the output device is finite, the two visual quality criteria of high contrast and tone continuity are in mutual conflict. Therefore, mitigating this inherent conflict is a critical issue in designing contrast enhancement algorithms, one that is seemingly overlooked in the existing literature on the subject.
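Definition (6) can be evaluated directly by scanning the occupied input levels that collapse onto each output level. A sketch (function name ours):

```python
import numpy as np

def tone_distortion(T, p):
    """Tone distortion D of definition (6): the widest span j - i of
    occupied input levels (p_i > 0, p_j > 0) mapped by T onto a single
    output level."""
    T = np.asarray(T)
    occupied = np.flatnonzero(np.asarray(p) > 0)
    worst = 0
    for t in np.unique(T[occupied]):
        levels = occupied[T[occupied] == t]  # occupied levels sharing output t
        worst = max(worst, int(levels.max() - levels.min()))
    return worst
```

For a full 8-bit histogram, the identity map gives D = 0, the halving map T(i) = ⌊i/2⌋ gives D = 1 (the minimum 1/τ − 1 for τ = 1/2), and the binary threshold of Proposition 1 gives D = 127.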

Following the discussions above, the problem of contrast enhancement manifests itself as the following optimization problem

max_s {G(s) − λD(s)}    (7)

The OCTM objective function (7) aims for sharpness of high-frequency details and tone subtlety of smooth shades at the same time, using the Lagrangian multiplier λ to regulate the relative importance of the two mutually conflicting fidelity metrics.

Interestingly, the OCTM solution of (7) is s = 1 if the input histogram of an image I is uniform. It is easy to verify that G(s) = 1 for all s but D(s) = 0 when s = 1 for p0 = p1 = · · · = pL−1 = 1/L. In other words, no other transfer function can make any contrast gain over the identity transfer function T(i) = i (or s = 1), and at the same time the identity transfer function achieves the minimum tone distortion D(1) = min_s D(s) = 0. It follows that an image of uniform histogram cannot be further enhanced in OCTM, lending support to histogram equalization as a contrast enhancement technique. For a general input histogram, however, the transfer function of histogram equalization is not necessarily the OCTM solution, as we will appreciate in the following sections.

III. CONTRAST-TONE OPTIMIZATION BY LINEAR PROGRAMMING

To motivate the development of an algorithm for solving (7), it is useful to view contrast enhancement as an optimal resource allocation problem with constraints. The resource is the output dynamic range and the constraint is tone distortion. The achievable contrast gain G(s) and tone distortion D(s) are physically confined by the output dynamic range Ł of the output device. In (4) the optimization variables s0, s1, · · · , sL−1 represent an allocation of Ł available output intensity levels, each competing for a larger piece of dynamic range. While contrast enhancement necessarily invokes a competition for dynamic range (an insufficient resource), a highly skewed allocation of Ł output levels to L input levels can deprive some input gray levels of necessary representations, incurring tone distortion. This causes unwanted side effects, such as flattened subtle shades, unnatural contour bands, shifted average intensity, etc. Such artifacts were noticed by other researchers as drawbacks of the original histogram equalization algorithm, and they proposed a number of ad hoc techniques to alleviate these artifacts by reshaping the original histogram prior to the equalization process. In OCTM, however, the control of undesired side effects of contrast enhancement is realized by the use of constraints when maximizing the contrast gain G(s).

Since the tone distortion function D(s) is not linear in s, directly solving (7) is difficult. Instead, we rewrite (7) as the following constrained optimization problem:

max_s ∑_{0≤j<L} pj sj
subject to  (a) ∑_{0≤j<L} sj < Ł;
            (b) sj ≥ 0, 0 ≤ j < L;
            (c) ∑_{j≤i<j+d} si ≥ 1, 0 ≤ j < L − d.    (8)

In (8), constraint (a) confines the output intensity levels to the available dynamic range; constraints (b) ensure that the transfer function T(i) is monotonically non-decreasing; constraints (c) specify the maximum tone distortion allowed, where d is an upper bound D(s) ≤ d. The objective function and all the constraints are linear in s. The choice of d depends on the user's requirement on tone continuity. In our experiments, pleasing visual appearance is typically achieved by setting d to 2 or 3.

Computationally, the OCTM problem formulated in (8) is one of integer programming, because the transfer function T(i) is an integer-to-integer mapping, i.e., all components of s are integers. But we can relax the integer constraints on s and convert (8) to a linear programming problem. Under this relaxation, any linear programming solver can be used to solve the real-valued version of (8). The resulting real-valued solution s = (s0, s1, · · · , sL−1) can be easily converted to an integer-valued transfer function:

T(i) = ⌊∑_{0≤j≤i} sj + 0.5⌋,  0 ≤ i < L    (9)

For all practical considerations the proposed relaxation does not materially compromise optimality. As a beneficial side effect, the linear programming relaxation simplifies constraint (c) in (8), and allows the contrast-tone optimization problem to be stated as

max_s ∑_{0≤j<L} pj sj
subject to  ∑_{0≤j<L} sj < Ł;
            sj ≥ 1/d, 0 ≤ j < L.    (10)
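Program (10) maps directly onto an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog (our choice of solver, not the paper's implementation) and rounds the relaxed solution into an integer transfer function as in (9); the strict inequality sum < Ł is written as sum ≤ Ł − 1, appropriate for integer output levels:

```python
import numpy as np
from scipy.optimize import linprog

def octm_transfer(hist, L_out, d):
    """Relaxed OCTM program (10): maximize sum_j p_j s_j subject to
    sum_j s_j <= L_out - 1 and s_j >= 1/d, then round as in (9)."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    L = len(p)
    res = linprog(
        c=-p,                                  # linprog minimizes, so negate
        A_ub=np.ones((1, L)), b_ub=[L_out - 1.0],
        bounds=[(1.0 / d, None)] * L,
        method="highs",
    )
    # Integer transfer function of (9): round the running sums.
    return np.floor(np.cumsum(res.x) + 0.5).astype(int)
```

On a uniform histogram any feasible allocation is optimal; on a skewed histogram the solver gives every level its floor 1/d and spends the remaining range on the most probable level, the behavior predicted by Proposition 1 softened by the tone bound d.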

IV. FINE TUNING OF VISUAL EFFECTS

The proposed OCTM technique is general, and it can achieve desired visual effects by including additional constraints in (10). We demonstrate the generality and flexibility of the proposed linear programming framework for OCTM by some of its many possible applications.

The first example is the integration of Gamma correction into contrast-tone optimization. The optimized transfer function T can be made close to the Gamma transfer function by adding to (10) the following constraint

∑_{0≤i<L} | (Ł − 1)⁻¹ ∑_{0≤j≤i} sj − [i(L − 1)⁻¹]^γ | ≤ ∆    (11)

where γ is the Gamma parameter and ∆ controls the degree of closeness between the resulting transfer function and the Gamma mapping [i(L − 1)⁻¹]^γ.

In applications where the enhancement process must not change the average intensity of the input image by more than a certain amount ∆µ, the user can impose this restriction easily in (10) by adding another linear constraint

| (L/Ł) ∑_{0≤i<L} pi ∑_{0≤j≤i} sj − ∑_{0≤i<L} pi i | ≤ ∆µ    (12)
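Constraint (12) is linear in s once T(i) = ∑_{j≤i} sj is expanded: the coefficient of sj in the output average is a tail sum of the histogram, and the absolute value splits into two one-sided rows. A sketch extending (10) with this constraint (scipy-based, function name ours):

```python
import numpy as np
from scipy.optimize import linprog

def octm_mean_constrained(hist, L_out, d, delta_mu):
    """Program (10) plus the mean-intensity constraint (12).

    Expanding (L/L_out) * sum_i p_i T(i) shows the coefficient of s_j is
    (L/L_out) * sum_{i>=j} p_i; the |expr| <= delta_mu constraint becomes
    two linear rows.
    """
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    L = len(p)
    tail = (L / L_out) * np.cumsum(p[::-1])[::-1]   # coefficient of s_j
    mean_in = np.dot(p, np.arange(L))               # input average intensity
    A_ub = np.vstack([np.ones(L), tail, -tail])
    b_ub = np.array([L_out - 1.0,
                     delta_mu + mean_in,            #  expr <= delta_mu
                     delta_mu - mean_in])           # -expr <= delta_mu
    res = linprog(-p, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(1.0 / d, None)] * L, method="highs")
    return np.floor(np.cumsum(res.x) + 0.5).astype(int)
```

The sum-of-absolute-values constraint (11) can be linearized the same way with one auxiliary variable per term, so the full program (14) below also remains an ordinary LP.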

Besides the use of constraints in the linear programming framework, we can incorporate context-based or semantics-based fidelity criteria directly into the OCTM objective function. The contrast gain G(s) = ∑ pj sj depends only on the intensity distribution of the input image. We can augment G(s) by weighing in the semantic or perceptual importance wj of increasing the contrast at gray level j, 0 ≤ j < L.

In general, wj can be set up to reflect specific requirements of different applications. In medical imaging, for example, a physician may read an image of L gray levels on an Ł-level monitor, Ł < L, with a certain range of gray levels j ∈ [j0, j1] ⊂ [0, L) enhanced. Such a weighting function presents itself naturally if there is prior knowledge that the anatomy or lesion of interest falls into the intensity range [j0, j1] for a given imaging modality. To combine image statistics with domain knowledge and/or user preference, we introduce a weighted contrast gain function

Gw(s) = ∑_{0≤j<L} pj sj + λ ∑_{0≤j<L} wj sj    (13)

where λ is a Lagrangian multiplier to factor user-prioritized contrast into the objective function.

Summarizing all the discussions above, we finally present the following general linear programming framework for visual quality enhancement.


max_s ∑_{0≤j<L} (pj + λwj) sj
subject to  ∑_{0≤j<L} sj < Ł;
            sj ≥ 1/d, 0 ≤ j < L;
            ∑_{0≤i<L} | (Ł − 1)⁻¹ ∑_{0≤j≤i} sj − [i(L − 1)⁻¹]^γ | ≤ ∆;
            | (L/Ł) ∑_{0≤i<L} pi ∑_{0≤j≤i} sj − ∑_{0≤i<L} pi i | ≤ ∆µ.    (14)

In this paper we focus on global contrast-tone optimization. The OCTM technique can be applied separately to different image regions and hence made adaptive to local image statistics. The idea is similar to that of local histogram equalization. However, in locally adaptive histogram equalization [8], [10], each region is processed independently of the others, and a linear weighting scheme is typically used to fuse the results of neighboring blocks to prevent block effects. In contrast, the proposed linear programming approach can optimize the contrasts and tones of adjacent regions jointly while limiting the divergence of the transfer functions of these regions. The only drawback is the increase in complexity. Further investigations into locally adaptive OCTM are underway.

V. EXPERIMENTAL RESULTS

Fig. 1 through Fig. 4 present some sample images that are enhanced by the OCTM technique in comparison with those produced by conventional histogram equalization (HE). The transfer functions of both enhancement techniques are also plotted alongside the corresponding input histograms to show the different behaviors of the two techniques under different image statistics.

In image Beach (Fig. 1), the output of histogram equalization is too dark in overall appearance because the original histogram is skewed toward the bright range. But the OCTM method enhances the original image without introducing unacceptable distortion in average intensity. This is partially because of the constraint in linear programming that bounds the relative difference (< 20% in this instance) between the average intensities of the input and output images.

Fig. 2 compares the results of histogram equalization and the OCTM method when they are applied to a common portrait image. In this example histogram equalization overexposes the input image, causing the opposite side effect to that in image Beach, whereas the OCTM method obtains high contrast, tone continuity and small distortion in average intensity at the same time.

Fig. 3 shows an example where the user assigns higher weights wj in (14) to gray levels j ∈ (a, b), where (a, b) = (100, 150) is an intensity range of interest (brain matter in the head image). The improvement of OCTM over histogram equalization in this typical scenario of medical imaging is very significant.

In Fig. 4, the result of joint Gamma correction and contrast-tone optimization by the OCTM technique is shown, and compared with those at different stages of the separate Gamma correction and histogram equalization process. The image quality of OCTM is clearly superior to that of the separation method.

The new OCTM approach is also compared with the well-known contrast-limited adaptive histogram equalization (CLAHE) [8] in visual quality. CLAHE is considered to be one of the best contrast enhancement techniques, and it alleviates many of the problems of histogram equalization, such as over- or under-exposure and tone discontinuities. Fig. 5 through Fig. 8 are side-by-side comparisons of OCTM, CLAHE and HE. CLAHE is clearly superior to HE in perceptual quality, as well recognized in the existing literature and among practitioners, but it is somewhat inferior to OCTM in overall image quality, particularly in the balance of sharp details and subtle tones. In fact, the OCTM technique was assigned and implemented as a course project in one of the author's classes. There was a consensus on the superior subjective quality of OCTM over HE and its variants among more than one hundred students.

Finally, in Figs. 9 and 10, we empirically assess the sensitivity of OCTM to noise in comparison with histogram-based contrast enhancement methods. Since contrast enhancement tends to boost high-frequency signal components, it is difficult but highly desirable for a contrast enhancement algorithm to resist noise. Inspecting the output images of the different algorithms in Figs. 9 and 10, it should be apparent that OCTM is more resistant to noise than HE and CLAHE, because OCTM better balances the increase in contrast against the smoothness in tone.

VI. CONCLUSION

A new, general image enhancement technique of optimal contrast-tone mapping is proposed. The resulting OCTM problem can be solved efficiently by linear programming. The OCTM solution can increase image contrast while preserving tone continuity, two conflicting quality criteria that previous methods did not handle and balance as well.

REFERENCES

[1] E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, 2005.

[2] Y. T. Kim, “Contrast enhancement using brightness preserving bi-histogram equalization,” IEEE Trans. Consum. Electronics, vol. 43, no. 1, pp. 1–8, 1997.

[3] Y. Wang, Q. Chen, and B. Zhang, “Image enhancement based on equal area dualistic sub-image histogram equalization method,” IEEE Trans. Consum. Electronics, vol. 45, no. 1, pp. 68–75, 1999.

[4] J. M. Gauch, “Investigations of image contrast space defined by variations on histogram equalization,” CVGIP: Graphical Models and Image Processing, pp. 269–280, 1992.

[5] J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Trans. Image Processing, pp. 889–896, 2000.

[6] Z. Y. Chen, B. R. Abidi, D. L. Page, and M. A. Abidi, “Gray-level grouping (GLG): An automatic method for optimized image contrast enhancement–Part I: The basic method,” IEEE Trans. Image Processing, vol. 15, no. 8, pp. 2290–2302, 2006.


Fig. 1. (a) The original; (b) the output of histogram equalization; (c) the output of the proposed OCTM method; (d) the transfer functions and the original histogram; (e) the histograms of the original image (left), the output image of histogram equalization (middle), and the output image of OCTM.

[7] ——, “Gray-level grouping (GLG): An automatic method for optimized image contrast enhancement–Part II: The variations,” IEEE Trans. Image Processing, vol. 15, no. 8, pp. 2303–2314, 2006.

[8] E. D. Pisano, S. Zong, B. Hemminger, M. DeLuca, R. E. Johnston, K. Muller, M. P. Braeuning, and S. Pizer, “Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms,” Journal of Digital Imaging, vol. 11, no. 4, pp. 193–200, 1998.

[9] T. Arici, S. Dikbas, and Y. Altunbasak, “A histogram modification framework and its application for image contrast enhancement,” IEEE Trans. Image Processing, vol. 18, no. 9, pp. 1921–1935, 2009.

[10] S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. H. Romeny, J. Zimmerman, and K. Zuiderveld, “Adaptive histogram equalization and its variations,” Computer Vision, Graphics and Image Processing, vol. 39, pp. 355–368, 1987.


Fig. 2. (a) The original; (b) the output of histogram equalization; (c) the output of the proposed OCTM method; (d) the transfer functions and the original histogram; (e) the histograms of the original image (left), the output image of histogram equalization (middle), and the output image of OCTM.


Fig. 3. (a) The original; (b) the output of histogram equalization; (c) the output of the proposed OCTM method; (d) the transfer functions and the original histogram.


Fig. 4. (a) The original image before Gamma correction; (b) after Gamma correction; (c) Gamma correction followed by histogram equalization; (d) joint Gamma correction and contrast-tone optimization by the proposed OCTM method.

Fig. 5. Comparison of different methods on image Pollen: (a) original image; (b) HE; (c) CLAHE; (d) OCTM.


Fig. 6. Comparison of different methods on image Rocks: (a) original image; (b) HE; (c) CLAHE; (d) OCTM.


Fig. 7. Comparison of different methods on image Tree: (a) original image; (b) HE; (c) CLAHE; (d) OCTM.


Fig. 8. Comparison of different methods on image Notre Dame: (a) original image; (b) HE; (c) CLAHE; (d) OCTM.


Fig. 9. The results of different methods on a noisy version of image Rocks, to be compared with those in Fig. 6: (a) noisy image; (b) HE; (c) CLAHE; (d) OCTM.


Fig. 10. The results of different methods on a noisy version of image Notre Dame, to be compared with those in Fig. 8: (a) noisy image; (b) HE; (c) CLAHE; (d) OCTM.