arXiv:1906.06690v3 [cs.CV] 30 Jun 2019




STAR: A Structure and Texture Aware Retinex Model

Jun Xu1,2, Mengyang Yu1, Li Liu1, Fan Zhu1, Dongwei Ren3, Yingkun Hou4,*, Haoqian Wang5, and Ling Shao1

1 Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
2 College of Computer Science, Nankai University, Tianjin, China
3 College of Intelligence and Computing, Tianjin University, Tianjin, China
4 School of Information Science and Technology, Taishan University, Taian, China
5 International Graduate School, Tsinghua University, Shenzhen, China

Retinex theory is developed mainly to decompose an image into the illumination and reflectance components by analyzing local image derivatives. In this theory, larger derivatives are attributed to changes in reflectance, while smaller derivatives arise in the smooth illumination. In this paper, we propose to utilize the exponentiated derivatives (with an exponent γ) of an observed image to generate a structure map when amplified with γ > 1 and a texture map when shrunk with γ < 1. To this end, we design exponential filters for the local derivatives, and present their capability of extracting accurate structure and texture maps, influenced by the choice of exponent γ on the local derivatives. The extracted structure and texture maps are employed to regularize the illumination and reflectance components in Retinex decomposition. A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image. We solve the STAR model in an alternating minimization manner, in which each sub-problem is transformed into a vectorized least squares regression with a closed-form solution. Comprehensive experiments demonstrate that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on illumination and reflectance estimation, low-light image enhancement, and color correction.

I. INTRODUCTION

THE Retinex theory proposed by Land and McCann [1], [2] models the color perception of human vision on natural scenes. It can be viewed as a fundamental theory for the intrinsic image decomposition problem [3], which aims to decompose an image into reflectance and illumination (or shading). A simplified Retinex model decomposes an observed image O into an illumination component I and a reflectance component R via O = I ∘ R, where ∘ denotes element-wise multiplication. In the observed scene O, the illumination I expresses the color of the light striking the surfaces of objects, while the reflectance R reflects the painted color of the surfaces of objects [4]. Retinex theory has been applied in many computer vision tasks, such as image enhancement [4]–[6] and image/color correction [7], [8] (please refer to Figure 1 for an example).

The Retinex theory introduces a useful property of derivatives [1], [2], [4]: larger derivatives are often attributed to changes in reflectance, while smaller derivatives are from the

Corresponding author: Yingkun Hou (email: [email protected]).

Fig. 1: An example to illustrate the applications of the proposed STAR model based on Retinex theory. (a) The input low-light/color-distorted image; (b) The estimated illumination component of (a); (c) The estimated reflectance component of (a); (d) The extracted structure and texture maps (half each) of (a); (e) The illumination-enhanced low-light image of (a); (f) The color-corrected image of (a).

smooth illumination. With this property, Retinex decomposition can be performed by classifying the image gradients into the reflectance component and the illumination one [9]. However, binary classification of image gradients is unreliable, since reflectance and illumination changes can coincide in an intermediate region [4]. Later, several methods were proposed to classify edges or edge junctions, instead of gradients, according to trained classifiers [10], [11]. However, it is quite challenging to train classifiers covering all possible ranges of reflectance and illumination configurations. Besides, though these methods explicitly utilize the property of derivatives, they perform Retinex decomposition by analyzing the gradients of a scene in a local manner, while ignoring the global consistency of the structure in the scene. To address this drawback, several methods [4]–[8] perform global decomposition with the consideration of different regularizations. However, these methods ignore the property of derivatives and cannot separate well reflectance and illumination.

In this paper, we introduce novel exponentiated local derivatives to better exploit the property of derivatives in a global manner. The exponentiated derivatives are determined by an introduced exponent γ on the local derivatives, and generalize the trivial derivatives to extract structure and texture maps. Given an observed scene (e.g., Figure 1 (a)), its derivatives are exponentiated by γ to generate a structure map (Figure 1 (d), top) when amplified with γ > 1 and a texture map (Figure 1 (d), bottom) when shrunk with γ < 1. The extracted structure and texture maps are employed to regularize the illumination (Figure 1 (b)) and reflectance (Figure 1 (c)) components in Retinex decomposition, respectively. With the accurate structure and texture maps, we propose a Structure and Texture Aware Retinex (STAR) model to accurately estimate the illumination and reflectance components. We solve the STAR model in an alternating minimization manner. Each sub-problem is transformed into a vectorized least squares regression with a closed-form solution. Comprehensive experiments demonstrate that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on illumination and reflectance estimation, low-light image enhancement, and color correction. In summary, the contributions of this work are three-fold:

• We propose to employ exponentiated local derivatives to better extract the structure and texture maps.

• We propose a novel Structure and Texture Aware Retinex (STAR) model to accurately estimate the illumination and reflectance components, and to exploit the property of derivatives in a global manner.

• Experimental results show that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on Retinex decomposition, low-light image enhancement, and color correction.

The remainder of this paper is organized as follows. In §II, we review the related work. In §III, we introduce the proposed structure and texture awareness based weighting scheme. The proposed structure and texture aware Retinex model is presented in §IV. §V describes the detailed experiments on Retinex decomposition of illumination and reflectance. §VI applies the proposed STAR model to two other image processing applications: low-light image enhancement and color correction. We conclude this paper in §VII.

II. RELATED WORK

A. Retinex model

The Retinex model has been extensively studied in the literature [4]–[31], and existing methods can be roughly divided into classical ones [12]–[19] and variational ones [4]–[11], [20]–[31].

Classical Retinex Methods include path-based methods [12]–[15], Partial Differential Equation (PDE)-based methods [16], [17], and center/surround methods [18], [19]. Early path-based methods [12], [13] are developed based on the assumption that the reflectance component can be computed as the product of ratios along some random paths. These methods demand careful parameter tuning and incur high computational costs. To improve efficiency, later path-based methods [14], [15] employ recursive matrix computation techniques to replace the previous random path computation. However, their performance is largely influenced by the number of recursive iterations, and is unstable for real applications. PDE-based algorithms [16], [17] employ a partial differential equation (PDE) to estimate the reflectance component, and can be solved efficiently by the fast Fourier transform (FFT). However, the structure of the illumination component will be degraded, since gradients derived from a divergence-free vector field often lose the expected piece-wise smoothness. The center/surround methods include the famous single-scale Retinex (SSR) [18] and multi-scale Retinex with color restoration (MSRCR) [19]. These methods often restrict the illumination component to be smooth and the reflectance component to be non-smooth. However, due to the lack of a structure-preserving restriction, SSR/MSRCR tend to generate halo artifacts around edges.

Variational Methods [4]–[11], [20]–[31] have been proposed for Retinex-based illumination and reflectance estimation. In [9], the smoothness assumption is introduced into a variational model to estimate the illumination, but this method is slow and does not regularize the reflectance. Later, an ℓ1 variational model was proposed in [24] that focuses on estimating the reflectance, but it ignores regularizing the illumination. The logarithmic transformation is also employed in [20] as a pre-processing step to suppress the variation of gradient magnitude in bright regions, but the reflectance estimated with logarithmic regularizations tends to be over-smoothed. To consider both reflectance and illumination regularizations, a total variation (TV) based method was proposed in [23]; but similar to [20], the reflectance is over-smoothed due to the side effect of the logarithmic transformation. Recently, Fu et al. [32] developed a probabilistic method for simultaneous illumination and reflectance estimation (SIRE) in the linear space. This method preserves details well and avoids over-smoothing the reflectance compared to previous methods operating in the logarithmic space. To alleviate the detail-loss problem of the reflectance component in logarithmic space, Fu et al. [7] proposed a weighted variational model (WVM) to enhance the variation of gradient magnitude in bright regions. However, the illumination may instead be damaged by the unconstrained isotropic smoothness assumption. By considering the properties of 3D objects, Cai et al. [8] proposed a Joint intrinsic-extrinsic Prior (JieP) model for Retinex decomposition. However, this model is prone to over-smoothing both the illumination and reflectance of a scene. In [31], Li et al. proposed a robust Retinex model considering an additional noise map, but this work targets only low-light images accompanied by intensive noise.

B. Intrinsic Image Decomposition

The Retinex model is similar in spirit to the intrinsic image decomposition model [33]–[39], which decomposes



Fig. 2: Comparisons of the exponentiated total variation (ETV) and exponentiated mean local variance (EMLV) filters on structure and texture extraction, for exponents (b) γ = 0.5, (c) γ = 1.0, (d) γ = 1.5, (e) γ = 2. For the input RGB image (a), V refers to its Value channel in HSV space.

an observed image into Lambertian shading and reflectance (ignoring the specularity). The major goal of intrinsic image decomposition is to recover the shading and reflectance terms from an observed scene, while the specularity term can be ignored without performance degradation [33]. However, the reflectance recovered in this problem usually loses the visual content of the scene [6], and hence can hardly be used for simultaneous illumination and reflectance estimation. Therefore, intrinsic image decomposition does not satisfy the purpose of Retinex decomposition for low-light image enhancement, in which the objective is to preserve the visual contents of dark regions as well as keep their visual realism [6]. For more differences between Retinex decomposition and intrinsic image decomposition, please refer to [6].

III. STRUCTURE AND TEXTURE AWARENESS

In this section, we first present the simplified Retinex model, and then introduce the structure and texture awareness for illumination and reflectance regularization.

A. Simplified Retinex Model

The Retinex model [2] is a color perception model of the human vision system. Its physical goal is to decompose an observed image O ∈ Rn×m into its illumination and reflectance components, i.e.,

O = I ∘ R, (1)

where I ∈ Rn×m is the illumination map of the scene, representing the brightness of objects, R ∈ Rn×m denotes the surface reflection of the scene, representing its physical characteristics, and ∘ denotes element-wise multiplication. The illumination I and reflectance R can be recovered by alternately estimating them via

I = O ⊘ R, R = O ⊘ I, (2)

where ⊘ denotes element-wise division. In practice, we employ I = O ⊘ (R + ε) and R = O ⊘ (I + ε) to avoid zero denominators, where ε = 10⁻⁸.
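For illustration, a minimal numpy sketch (not the authors' code; the function names and the round-trip check are our own) of this ε-guarded element-wise recovery:

```python
import numpy as np

EPS = 1e-8  # matches the paper's epsilon = 10^-8

def recover_illumination(O, R):
    """I = O / (R + eps), element-wise, guarding zero denominators."""
    return O / (R + EPS)

def recover_reflectance(O, I):
    """R = O / (I + eps), element-wise."""
    return O / (I + EPS)

# Round-trip check: multiplying the recovered reflectance by I
# re-assembles O up to the tiny epsilon perturbation.
rng = np.random.default_rng(0)
I = rng.uniform(0.1, 1.0, (4, 4))
R = rng.uniform(0.1, 1.0, (4, 4))
O = I * R
print(np.allclose(recover_reflectance(O, I) * I, O, atol=1e-6))  # True
```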

To solve the inverse problem (2), previous methods usually employ an objective function that estimates the illumination and reflectance components by

min_{I,R} ‖O − I ∘ R‖²_F + R1(I) + R2(R), (3)

where R1 and R2 are two different regularization functions for the illumination I and reflectance R, respectively. One implementation choice of R1 and R2 is the total variation (TV) [40], which is widely used in previous methods [7], [23].

B. Structure and Texture Estimator

The Retinex model (1) decomposes an observed scene into its illumination and reflectance components. This problem is highly ill-posed, and proper priors on illumination and reflectance should be considered to regularize the solution space. Qualitatively speaking, the illumination should be piece-wise smooth, capturing the structure of the objects in the scene, while the reflectance should present the physical characteristics of the observed scene, capturing its texture information. Here, texture refers to the patterns in an object's surface, which are similar in local statistics [41].

Previous structure-texture decomposition methods often enforce TV regularizers to preserve edges [23], [28], [42]. These TV regularizers simply enforce gradient similarity of the scene and extract the structure of the objects. There are two ways to perform structure-texture decomposition. One is to directly derive structure using structure-preserving techniques, such as edge-aware filters [43], [44] and optimization-based methods [8]. The other is to extract structure from the estimated texture weights [42]. However, these techniques [8], [42]–[44] are vulnerable to textures and produce ringing effects near edges. Moreover, the method of [42] cannot extract scene structures whose appearances are similar to those of the underlying textures.

To better understand the power of these techniques for structure/texture extraction, we study two typical kinds of filters. The first is the TV filter [40], which computes the gradients of an input image as a guidance map:

fTV(O) = |∇O|. (4)

The second one is the mean local variance (MLV) [8], which can also be utilized for structure map estimation:

fMLV(O) = |(1/|Ω|) ∑_Ω ∇O|, (5)

where Ω is the local patch around each pixel of O, and its size is set as 3 × 3 in all our experiments.

To support our idea that the TV and MLV filters can capture the structure of the scene, we visualize the effect of the two filters on extracting the structure/texture from an observed image. Here, the input RGB image (Figure 2 (a), top) is first transformed into the Hue-Saturation-Value (HSV) domain. Since the Value (V) channel (Figure 2 (a), bottom) reflects the illumination and reflectance information, we process this channel of the input image. It can be seen from Figure 2 (c) that the TV and MLV filters can basically reflect the main structures of the input image. This point can be further validated by comparing the similarity of the two filtered images (Figure 2 (c)) with the edge-extracted image of Figure 2 (a). To this end, we resort to a recently published edge detection method [45] to extract the main structure of the input image. By comparing the TV filtered image (Figure 3 (b)), the MLV filtered image (Figure 3 (d)), and the edge-extracted image (Figure 3 (c)), one can see that the TV and MLV filtered images already reflect the structure of the input image.

C. Proposed Structure and Texture Awareness

The TV and MLV filters described in Eqns. (4) and (5) cannot be directly employed in our problem, since they are prone to capturing only structural information. As described in Retinex theory, larger derivatives are attributed to changes in reflectance, while smaller derivatives arise in the smooth illumination. Therefore, by exponential growth or decay, these local derivatives can be made to emphasize either the structure or the texture. Here, we introduce an exponentiated version of them for flexible structure and texture estimation. Specifically, we add an exponent to the TV and MLV filters, making the two filters more flexible for separate structure and texture extraction. To this end, we propose the exponentiated TV (ETV) filter as

fETV(O) = fTV(O)^γ = |∇O|^γ, (6)

and the exponentiated MLV (EMLV) filter as

fEMLV(O) = fMLV(O)^γ = |(1/|Ω|) ∑_Ω ∇O|^γ, (7)

where γ is the exponent determining the sensitivity to the gradients of O. Note that we evaluate the two exponentiated filters in Eqns. (6) and (7) by visualizing their effects on a

Fig. 3: Comparison of the TV filtered image (b), the MLV filtered image (d), and the edge-extracted image (c) of the input image (a). It can be seen that the TV and MLV filtered images can roughly reflect the structure of the input image.

test image (i.e., Figure 2 (a), top). This RGB image is first transformed into the Hue-Saturation-Value (HSV) domain, and the decomposition is performed on the V channel. In Figure 2 (b)-(e), we plot the filtered images for the V channel of the input image. It is noteworthy that, with γ = 0.5, the ETV and EMLV filters reveal the textures of the test image, while with γ ∈ {1, 1.5, 2}, the ETV and EMLV filters tend to extract the structural edges.
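The two exponentiated filters can be prototyped in a few lines of numpy (a sketch under stated assumptions: an ℓ1 forward-difference magnitude and a simple box mean stand in for |∇O| and the patch average; the function names are ours, not the paper's):

```python
import numpy as np

def grad_mag(V):
    """l1 forward-difference gradient magnitude of a 2-D array."""
    gx = np.diff(V, axis=1, append=V[:, -1:])  # horizontal differences
    gy = np.diff(V, axis=0, append=V[-1:, :])  # vertical differences
    return np.abs(gx) + np.abs(gy)

def etv(V, gamma):
    """Exponentiated TV filter, Eqn. (6): |grad V|^gamma."""
    return grad_mag(V) ** gamma

def emlv(V, gamma, k=3):
    """Exponentiated MLV filter, Eqn. (7): the gradient is averaged
    over a k x k patch (box filter) before taking |.|^gamma."""
    gx = np.diff(V, axis=1, append=V[:, -1:])
    gy = np.diff(V, axis=0, append=V[-1:, :])
    pad = k // 2
    def box(a):
        ap = np.pad(a, pad, mode='edge')
        out = np.zeros_like(a)
        for i in range(k):
            for j in range(k):
                out += ap[i:i + a.shape[0], j:j + a.shape[1]]
        return out / (k * k)
    return (np.abs(box(gx)) + np.abs(box(gy))) ** gamma
```

Raising gradients in (0, 1) to γ < 1 enlarges them (revealing texture) while γ > 1 shrinks them (keeping only strong structural edges), which matches the behavior shown in Figure 2.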

Motivated by this observation, we introduce a structure and texture aware scheme for illumination and reflectance decomposition. Specifically, we set I0 = R0 = O^0.5, the ETV based weighting matrices as

S0 = 1 / (|∇I0|^γs + ε), T0 = 1 / (|∇R0|^γt + ε), (8)

and the EMLV based weighting matrices as

S0 = 1 / (|(1/|Ω|) ∑_Ω ∇I0|^γs + ε), T0 = 1 / (|(1/|Ω|) ∑_Ω ∇R0|^γt + ε), (9)

where γs > 1 and γt < 1 are two exponential parameters that adjust the structure and texture awareness for illumination and reflectance decomposition. As will be demonstrated in §V, the values of γs and γt influence the Retinex decomposition performance. By considering local variance information, the EMLV filter (Eqn. (9)) can reveal details and preserve structures better than the ETV filter (Figure 2). This point will also be validated in §V.
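A small numpy sketch of the ETV-based weights of Eqn. (8) (illustrative only; `etv_weights` and the ℓ1 gradient are our naming and implementation choices, and the EMLV variant would substitute the patch-averaged gradient):

```python
import numpy as np

EPS = 1e-8

def grad_mag(V):
    """l1 forward-difference gradient magnitude."""
    gx = np.diff(V, axis=1, append=V[:, -1:])
    gy = np.diff(V, axis=0, append=V[-1:, :])
    return np.abs(gx) + np.abs(gy)

def etv_weights(O, gamma_s=1.5, gamma_t=0.5):
    """ETV-based weights of Eqn. (8): S0 regularizes the illumination
    (structure), T0 the reflectance (texture). I0 = R0 = O^0.5 as in
    Algorithm 1; the paper's defaults are gamma_s=1.5, gamma_t=0.5."""
    I0 = R0 = np.sqrt(O)
    S0 = 1.0 / (grad_mag(I0) ** gamma_s + EPS)
    T0 = 1.0 / (grad_mag(R0) ** gamma_t + EPS)
    return S0, T0
```

The inverse form makes the weights small exactly where derivatives are large, so strong edges are penalized less by the smoothness terms in (10).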



Algorithm 1: Solve the STAR Model (10)
Input: observed image O, parameters α, β, K;
Initialization: I0 = O^0.5, R0 = O^0.5, set S0, T0 by (9);
for k = 0, ..., K − 1 do
  1. Update Ik+1 by Eqn. (13);
  2. Update Rk+1 by Eqn. (16);
  if Converged then
    3. Stop;
  end if
end for
Output: Illuminance IK and Reflectance RK.

IV. STRUCTURE AND TEXTURE AWARE RETINEX MODEL

A. Proposed Model

In this section, we propose a Structure and Texture Aware Retinex (STAR) model to estimate the illumination I and the reflectance R of an observed image O simultaneously. To make the STAR model as simple as possible, we adopt the TV ℓ2-norm to regularize the illumination and reflectance components. The proposed STAR model is formulated as

min_{I,R} ‖O − I ∘ R‖²_F + α‖S0 ∘ ∇I‖²_F + β‖T0 ∘ ∇R‖²_F, (10)

where S0 and T0 are the two matrices defined in (9), indicating the structure map of the illumination and the texture map of the reflectance, respectively. The structure map should be small enough to preserve the edges of objects in the scene, while large enough to suppress the details (as the inverse of Figure 2 (d, e)). The texture map should be small enough to reveal the details (as the inverse of Figure 2 (b, c)).

B. Optimization Algorithm

Since the objective function is separable w.r.t. the two variables I and R, problem (10) can be solved via an alternating direction method of multipliers (ADMM) algorithm [46]. The two separated sub-problems are convex and are solved alternately. We initialize the matrix variables as I0 = R0 = O^0.5. Denote by Ik and Rk the illuminance and reflectance variables at the k-th (k = 0, 1, 2, ...) iteration, respectively, and by K the maximum number of iterations. By optimizing one variable at a time while fixing the other, we alternately update the two variables as follows.

a) Update I while fixing R: With Rk at the k-th iteration, the optimization problem with respect to I becomes:

Ik+1 = arg min_I ‖O − I ∘ Rk‖²_F + α‖S0 ∘ ∇I‖²_F. (11)

To solve problem (11), we reformulate it into a vectorized format. With the vectorization operator vec(·), we denote the vectors o = vec(O), i = vec(I), rk = vec(Rk), s0 = vec(S0), which are of length nm. Denote by G the Toeplitz matrix of the discrete gradient operator with forward difference, so that Gi = vec(∇I). Denote by Drk = diag(rk), Ds0 = diag(s0) ∈ Rnm×nm the diagonal matrices with rk, s0 on their main diagonals, respectively. Then, problem (11) is transformed into a standard least squares regression problem:

ik+1 = arg min_i ‖o − Drk i‖²₂ + α‖Ds0 G i‖²₂. (12)

Algorithm 2: Alternative Updating Scheme
Input: observed image O, parameters α, β, L, K;
Initialization: estimated I⁰_K, R⁰_K by Algorithm 1;
for l = 0, ..., L − 1 do
  1. Update Sl+1 = (|(1/|Ω|) ∑_Ω ∇I^l_K|^γs + ε)⁻¹;
  2. Update Tl+1 = (|(1/|Ω|) ∑_Ω ∇R^l_K|^γt + ε)⁻¹;
  3. Solve the STAR model (10) and obtain I^{l+1}_K and R^{l+1}_K by Algorithm 1;
  if Converged then
    4. Stop;
  end if
end for
Output: Final Illuminance I^L_K and Reflectance R^L_K.

By differentiating problem (12) with respect to i and setting the derivative to 0, we have the following solution:

ik+1 = (Drkᵀ Drk + α Gᵀ Ds0ᵀ Ds0 G)⁻¹ Drkᵀ o. (13)

We then reformulate the obtained ik+1 into matrix format via the inverse vectorization Ik+1 = vec⁻¹(ik+1).
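The update of Eqn. (13) amounts to solving a weighted normal equation. A dense numpy sketch (illustrative only; a practical implementation would use sparse matrices, and our G stacks horizontal and vertical forward differences with a zeroed boundary row, one of several valid discretizations):

```python
import numpy as np

def forward_diff(k):
    """k x k forward-difference matrix; last row zeroed (boundary)."""
    D = -np.eye(k) + np.eye(k, k=1)
    D[-1, :] = 0.0
    return D

def update_I(O, Rk, S0, alpha=0.001):
    """Illumination update of Eqn. (13):
    i = (Dr^T Dr + alpha * G^T Ds^T Ds G)^(-1) Dr^T o."""
    n, m = O.shape
    o = O.ravel(order='F')                      # column-major vec(O)
    Dr = np.diag(Rk.ravel(order='F'))           # diag(r_k)
    s0 = S0.ravel(order='F')
    Dx = np.kron(forward_diff(m), np.eye(n))    # horizontal differences
    Dy = np.kron(np.eye(m), forward_diff(n))    # vertical differences
    G = np.vstack([Dx, Dy])                     # discrete gradient operator
    Ds = np.diag(np.concatenate([s0, s0]))      # weight both directions
    A = Dr.T @ Dr + alpha * (G.T @ Ds.T @ Ds @ G)
    i = np.linalg.solve(A, Dr.T @ o)
    return i.reshape(n, m, order='F')
```

With a small α and uniform weights, the solve returns an image close to O with light smoothing, as expected from the data term dominating.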

b) Update R while fixing I: After acquiring Ik+1 from the above solution, the optimization problem (10) with respect to R is similar to that of I:

Rk+1 = arg min_R ‖O − Ik+1 ∘ R‖²_F + β‖T0 ∘ ∇R‖²_F. (14)

Similarly, we reformulate problem (14) into a vectorized format. We denote r = vec(R), t0 = vec(T0), which are of length nm, and we have Gr = vec(∇R). Denote by Dik+1 = diag(ik+1), Dt0 = diag(t0) ∈ Rnm×nm the diagonal matrices with ik+1, t0 on their main diagonals, respectively. Then, problem (14) is also transformed into a standard least squares problem:

rk+1 = arg min_r ‖o − Dik+1 r‖²₂ + β‖Dt0 G r‖²₂. (15)

By differentiating problem (15) with respect to r and setting the derivative to 0, we have the following solution:

rk+1 = (Dik+1ᵀ Dik+1 + β Gᵀ Dt0ᵀ Dt0 G)⁻¹ Dik+1ᵀ o. (16)

We then reformulate the obtained rk+1 into matrix format via the inverse vectorization Rk+1 = vec⁻¹(rk+1).

The above alternating updating steps are repeated until the convergence condition is satisfied or the number of iterations exceeds a preset threshold. The convergence condition is that ‖Ik+1 − Ik‖ ≤ ε and ‖Rk+1 − Rk‖ ≤ ε are simultaneously satisfied, or the maximum iteration number K is reached. We set ε = 10⁻² and K = 20 in our experiments. Since there are only two variables in problem (10), and each sub-problem has a closed-form solution, the problem can be solved efficiently with convergence.
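Algorithm 1's alternating loop and stopping rule can be sketched as follows (schematic; the two update callables abstract the closed-form solves of Eqns. (13) and (16), and the names are ours):

```python
import numpy as np

def alternating_minimization(O, update_I, update_R, eps=1e-2, K=20):
    """Algorithm 1 skeleton: alternate the I- and R-updates until both
    iterates move less than eps (Frobenius norm), or K iterations."""
    I = R = np.sqrt(O)  # initialization I0 = R0 = O^0.5
    for _ in range(K):
        I_new = update_I(O, R)       # Eqn. (13)
        R_new = update_R(O, I_new)   # Eqn. (16)
        converged = (np.linalg.norm(I_new - I) <= eps and
                     np.linalg.norm(R_new - R) <= eps)
        I, R = I_new, R_new
        if converged:
            break
    return I, R
```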

C. Updating Structure and Texture Awareness

Until now, we have obtained the decomposition O = I ∘ R. To achieve a better estimation of illumination and reflectance, we update the structure and texture aware maps S and T, and then solve the renewed problem (10). The alternating updating of (S, T) and (I, R) is repeated for L iterations. The convergence condition for this algorithm is that ‖Sl+1 − Sl‖ ≤ ε and ‖Tl+1 − Tl‖ ≤ ε are simultaneously satisfied, or the maximum updating iteration number L is reached. We set L = 4 to balance the speed-accuracy tradeoff of the proposed STAR model in our experiments. We summarize the updating procedure in Algorithm 2.
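The outer loop of Algorithm 2 can likewise be sketched (schematic; `solve_star` abstracts Algorithm 1, `weights` abstracts Eqn. (9), and both names are ours):

```python
import numpy as np

def outer_updating(O, solve_star, weights, eps=1e-2, L=4):
    """Algorithm 2 skeleton: re-estimate the weight maps (S, T) from
    the current (I, R) and re-solve the STAR model (10), for up to L
    rounds or until the weights stop changing."""
    I = R = np.sqrt(O)  # initial decomposition I0 = R0 = O^0.5
    S = T = None
    for _ in range(L):
        S_new, T_new = weights(I, R)         # steps 1-2: update S, T
        I, R = solve_star(O, S_new, T_new)   # step 3: Algorithm 1
        if S is not None and (np.linalg.norm(S_new - S) <= eps and
                              np.linalg.norm(T_new - T) <= eps):
            break
        S, T = S_new, T_new
    return I, R
```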

V. EXPERIMENTS

In this section, we evaluate the qualitative and quantitative performance of the proposed Structure and Texture Aware Retinex (STAR) model on Retinex decomposition (§V-B). In §V-C, we also perform an ablation study on illumination and reflectance decomposition to gain deeper insights into the proposed STAR model.

A. Implementation Details

The input RGB-color image is first transformed into the Hue-Saturation-Value (HSV) space. Since the Value (V) channel reflects the illumination and reflectance information, we only process this channel, and then transform the processed image from the HSV space back to the RGB space, similar to [7], [8]. In our experiments, we empirically set the parameters as α = 0.001, β = 0.0001, γs = 1.5, γt = 0.5. By considering local variance information, the EMLV filter (Eqn. (9)) can reveal details and preserve structures better than the ETV filter (Figure 2). We will perform an ablation study on this point in §V-C.

B. Retinex Decomposition

Retinex decomposition includes illumination and reflectance estimation. An accurate illumination estimate should not distort the structure, while being spatially smooth. Meanwhile, an accurate reflectance estimate should reveal the details of the observed scene. Ground truth for the illumination and reflectance is difficult to generate, and hence quantitative evaluation of the estimation is problematic.

To evaluate the effectiveness of the proposed STAR model, we perform qualitative comparisons on both illumination and reflectance estimation with two state-of-the-art Retinex models: the Weighted Variation Model (WVM) [7] and the Joint intrinsic-extrinsic Prior (JieP) model [8]. Similar to these methods, we perform Retinex decomposition on the V channel of the HSV space, and transform the decomposed image back to the RGB space. Some visual results on several common test images are shown in Figure 4. It can be observed that, for the proposed STAR model, the structure awareness scheme enforces piece-wise smoothness, while the texture awareness scheme preserves details across the image. As can be seen in Figures 4 (b)-(d) and (e)-(g), the proposed STAR method better preserves the structure of the three black regions on the white car, and reveals more details of the texture on the wall, than the other two methods, WVM [7] and JieP [8].

C. Validation of the Proposed STAR Model

We conduct a detailed examination of our proposed STAR model for Retinex decomposition. We assess 1) the influence of the weighting scheme (ETV or EMLV) on STAR; 2) the importance of structure and texture awareness to STAR; 3) the influence of the parameters γs, γt on STAR; and 4) the necessity of updating the structure S and texture T in STAR.

1. The influence of the weighting scheme (ETV or EMLV) on STAR. To study the weighting scheme (ETV or EMLV) on STAR, we employ the ETV filter (8), set S0 = 1/(|∇I0|+ε) and T0 = 1/(|∇O0|+ε) in (10), and update them as Algorithm 2 describes, which yields another STAR model: STAR-ETV. The default STAR model can be termed STAR-EMLV. From Figure 5, one can see that the STAR-ETV model tends to provide little structure in illumination, while losing texture information in reflectance. By employing the EMLV filter as the weighting matrix, the proposed STAR (STAR-EMLV) method maintains the structure/texture better than the STAR-ETV model.

2. Is structure and texture awareness important? To answer this question, we set S0 = 1/(|∇I0|+ε) or T0 = 1/(|∇O0|+ε) in (10) and update them as Algorithm 2 describes, which yields two baselines: STAR w/o Structure and STAR w/o Texture. Note that if we set S0 or T0 in (10) as the identity matrix, the performance of the corresponding STAR model is very bad. From Figure 6, one can see that STAR w/o Structure tends to provide little structural information in illumination, while STAR w/o Texture has little influence on the illumination and reflectance. By considering both, the proposed STAR decomposes the structure/texture components accurately.

3. How do the parameters γs and γt influence STAR? The parameters γs, γt are key to the structure and texture awareness of STAR. In Figure 7, one can see that STAR with γs = 1.5, γt = 0.5 ((d) and (h)) produces reasonable results; STAR with γs = 1, γt = 1 ((c) and (g)) can barely distinguish the illumination and reflectance; while STAR with γs = 0.5, γt = 1.5 ((b) and (f)) confuses illumination and reflectance to a great extent. Since we regularize I more heavily (α = 0.001, β = 0.0001), I and R in (f) are not exactly the same as R and I in (b), respectively.

4. Is updating S, T necessary? We also study the effect of the updating iteration number L on STAR. To do so, we simply set L = 1, 2, 4 in STAR and evaluate its Retinex decomposition performance. From Figure 8, one can see that the illumination becomes more structural, while the reflectance reflects more details, with more iterations.
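To make the role of the exponent concrete, the 1-D sketch below computes an exponentiated-TV-style weight of the form 1/(|∇O|^γ + ε), which matches the initializations S0 = 1/(|∇I0|+ε), T0 = 1/(|∇O0|+ε) quoted above; the exact 2-D form is defined by the paper's Eqn. (8), so this is an illustration under that assumed form, not the paper's filter.

```python
def etv_weight(signal, gamma, eps=1e-4):
    """Exponentiated-TV-style weight on a 1-D signal: derivatives are
    exponentiated by gamma and inverted, so smooth regions get large
    weights (strong smoothing) and edges get small ones. A 1-D sketch;
    the paper's Eqn. (8) defines the actual 2-D filter."""
    grads = [abs(b - a) for a, b in zip(signal, signal[1:])]
    return [1.0 / (g ** gamma + eps) for g in grads]

flat = [0.10, 0.10, 0.10]   # smooth, illumination-like region
edge = [0.10, 0.90, 0.10]   # structural edge
# With gamma_s = 1.5 > 1, large derivatives are amplified before inversion,
# so the edge is penalized far less than the flat region is smoothed.
w_flat, w_edge = etv_weight(flat, 1.5), etv_weight(edge, 1.5)
```

Raising γ above 1 widens the gap between edge and flat responses, which is exactly the structure-awareness effect ablated in Figure 7; γ below 1 compresses that gap and instead preserves fine texture.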

VI. OTHER APPLICATIONS

In this section, we apply the proposed STAR model to two other image processing applications: low-light image enhancement (§VI-A) and color correction (§VI-B).

A. Low-light Image Enhancement

Capturing images in low-light environments suffers from unavoidable problems, such as low visibility and heavy noise degradation. Low-light image enhancement aims to alleviate these problems by improving the visibility and contrast of the observed images. To preserve the color information, Retinex model based low-light image enhancement is often performed on the Value (V) channel of the Hue-Saturation-Value (HSV) domain.
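One common Retinex-based enhancement recipe is to keep the estimated reflectance and gamma-lift the estimated illumination before recombining them; the sketch below assumes a given decomposition V = I · R and an illustrative gamma, and may differ from STAR's exact enhancement step.

```python
def enhance_v(illum, refl, gamma=2.2):
    """Brighten a low-light V channel: keep the reflectance (scene
    content) and gamma-lift the illumination. Sketch of a common
    Retinex recipe; values and gamma are illustrative only."""
    return [r * (i ** (1.0 / gamma)) for i, r in zip(illum, refl)]

# A dim pixel: illumination 0.1 and reflectance 0.6 give observed
# V = I * R = 0.06; lifting I brightens the output while the
# reflectance detail is left unchanged.
enhanced = enhance_v([0.1], [0.6])
```

Because only the illumination is modified, scene details carried by the reflectance are preserved, which is why accurate decomposition matters for this application.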

arXiv:1906.06690v3 [cs.CV] 30 Jun 2019

Fig. 4: Comparisons of illuminance and reflectance components by different Retinex decomposition methods. I and R indicate illuminance and reflectance, respectively. Panels: (a) Input; (b) I by WVM [7]; (c) I by JieP [8]; (d) I by STAR; (e) R by WVM [7]; (f) R by JieP [8]; (g) R by STAR.

Fig. 5: Comparison of illumination and reflectance components by the proposed STAR model with ETV or EMLV weighting schemes. For better comparison, we illustrate the components in the RGB channels instead of the V channel. Panels: (a) Input; (b) Illumination by STAR-ETV; (c) Reflectance by STAR-ETV; (d) V channel of (a); (e) Illumination by STAR-EMLV; (f) Reflectance by STAR-EMLV.


Fig. 6: Ablation study of the structure or texture map on the illumination and reflectance decomposition performance of the proposed STAR model. For better comparison, we illustrate the components in the RGB channels instead of the V channel. Panels: (a) Input; (b) Illumination by STAR w/o S; (c) Illumination by STAR w/o T; (d) Illumination by STAR; (e) V channel of (a); (f) Reflectance by STAR w/o S; (g) Reflectance by STAR w/o T; (h) Reflectance by STAR.

Fig. 7: Ablation study of the parameters (γs, γt) on the illumination and reflectance decomposition performance of the proposed STAR model. For better comparison, we illustrate the components in the RGB channels instead of the V channel. Panels: (a) Input; (b) Illumination with γs = 0.5, γt = 1.5; (c) Illumination with γs = 1, γt = 1; (d) Illumination with γs = 1.5, γt = 0.5; (e) V channel of (a); (f) Reflectance with γs = 0.5, γt = 1.5; (g) Reflectance with γs = 1, γt = 1; (h) Reflectance with γs = 1.5, γt = 0.5.

Comparison Methods and Datasets. We compare the proposed STAR model with previous competing low-light image enhancement methods, including HE [47], MSRCR [19], Contextual and Variational Contrast (CVC) [48], Naturalness Preserved Enhancement (NPE) [5], LDR [49], SIRE [32], MF [50], WVM [7], LIME [6], and JieP [8]. We evaluate these methods on 35 images collected from [6]–[8], [32], [50], and on the 200 images in [5].

Objective Metrics. We qualitatively and quantitatively evaluate these methods on subjective and objective quality metrics of the enhanced images, respectively. The compared methods are evaluated on two commonly used metrics: the no-reference image quality assessment (IQA) metric Natural Image Quality Evaluator (NIQE) [51], and the full-reference IQA metric Visual Information Fidelity (VIF) [52]. A lower NIQE value indicates better image quality, and a higher VIF value indicates better visual quality. We employ VIF because it is widely considered to capture visual quality better than Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) [53], which cannot be used in this task since no "ground truth" images are available.

Results. We compare the proposed STAR model on the two sets of images mentioned above. From Table I, one can see that the proposed STAR achieves lower NIQE and higher VIF results than the other competing methods. This indicates that the images enhanced by STAR present better visual quality


Fig. 8: Ablation study of updating iterations L on the illumination and reflectance decomposition performance of the proposed STAR model. For better comparison, we illustrate the components in the RGB channels instead of the V channel. Panels: (a) Input; (b) Illumination with L = 1; (c) Illumination with L = 2; (d) Illumination with L = 4; (e) V channel of (a); (f) Reflectance with L = 1; (g) Reflectance with L = 2; (h) Reflectance with L = 4.

Dataset      | 35 Images    | 200 Images
Metric       | NIQE↓  VIF↑  | NIQE↓  VIF↑
Input        | 3.74   1.00  | 3.45   1.00
HE [47]      | 3.24   1.34  | 3.28   1.19
MSRCR [19]   | 2.98   1.84  | 3.21   1.11
CVC [48]     | 3.03   2.04  | 3.01   1.63
NPE [5]      | 3.10   2.48  | 3.12   1.62
LDR [49]     | 3.12   2.36  | 2.96   1.66
SIRE [32]    | 3.06   2.09  | 2.98   1.57
MF [50]      | 3.19   2.23  | 3.26   1.71
WVM [7]      | 2.98   2.22  | 2.99   1.68
LIME [6]     | 3.24   2.76  | 3.32   1.84
JieP [8]     | 3.06   2.67  | 3.18   1.82
STAR w/o S   | 3.18   2.64  | 3.22   1.77
STAR w/o T   | 3.09   2.78  | 3.01   1.82
STAR         | 2.93   2.96  | 2.86   1.92

TABLE I: Average NIQE [51] and VIF [52] results of different methods on 35 low-light images collected from [6]–[8], [32], [50] and 200 low-light images provided in [5].

than those of other methods. Besides, without the structure or texture weighting scheme, the proposed STAR model produces inferior performance on these two objective metrics. This demonstrates the effectiveness of the proposed structure and texture aware components for low-light image enhancement. In Figure 9, we compare the visual quality of state-of-the-art methods [5]–[8]. As can be seen, in all six cases, STAR achieves visually clear content while enhancing the illumination naturally, in agreement with our objective results. Besides, from the second row of Figure 9, one can observe that the proposed STAR also performs better than the competing methods [6]–[8], [50] on noise suppression.
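The claim that STAR leads on both metrics can be checked mechanically against the Table I numbers; the snippet below encodes the 35-image column and picks the best method per metric.

```python
# NIQE (lower is better) and VIF (higher is better) from Table I,
# 35-image set.
niqe = {"Input": 3.74, "HE": 3.24, "MSRCR": 2.98, "CVC": 3.03,
        "NPE": 3.10, "LDR": 3.12, "SIRE": 3.06, "MF": 3.19,
        "WVM": 2.98, "LIME": 3.24, "JieP": 3.06,
        "STAR w/o S": 3.18, "STAR w/o T": 3.09, "STAR": 2.93}
vif = {"Input": 1.00, "HE": 1.34, "MSRCR": 1.84, "CVC": 2.04,
       "NPE": 2.48, "LDR": 2.36, "SIRE": 2.09, "MF": 2.23,
       "WVM": 2.22, "LIME": 2.76, "JieP": 2.67,
       "STAR w/o S": 2.64, "STAR w/o T": 2.78, "STAR": 2.96}
best_niqe = min(niqe, key=niqe.get)   # lowest NIQE wins
best_vif = max(vif, key=vif.get)      # highest VIF wins
```

Both selections return "STAR", consistent with the discussion above.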

B. Color Correction

In Retinex theory, if the estimation is performed in each channel of the RGB-color space, the estimated reflectance contains the original color information of the observed scene. Therefore, the Retinex model can be applied to color correction tasks. To demonstrate the estimation accuracy of the illumination and reflectance components, we evaluate the color correction performance of the proposed STAR model and the competing methods [7], [8], [13], [32], [54]–[59].

We first compare the performance of the proposed STAR with three leading Retinex methods: SIRE [32], WVM [7] and JieP [8]. The original images and color corrected images are downloaded from the Color Constancy Website. In Figure 10, we provide some visual results of color correction using different methods. Note, for example, the color of the wall and books in the 1st row, and the color of the orange bottle in the 3rd row. All these methods achieve satisfactory qualitative performance when compared with the ground truth images in the 2nd and 4th rows of Figure 10 (a). To verify the accuracy of color correction using these methods, we employ the S-CIELAB color metric [60] to measure the color errors in spatial processing. The S-CIELAB errors between the ground truth and corrected images of different methods are shown in the 2nd and 4th rows of Figure 10 (b)-(e). As can be seen, the spatial extent of the errors, i.e., the green areas, in the STAR corrected images is much smaller than in other methods. This indicates that the results of STAR are closer to the ground truth images (Figure 10 (a)) than those of other methods for color correction.

Furthermore, we perform a quantitative comparison of the proposed STAR with several leading color constancy methods [8], [13], [54]–[59] on the Color-Checker dataset [57]. This dataset contains 568 images in total, of indoor and outdoor scenes taken with two high quality DSLR cameras (Canon 5D and Canon 1D). Each image contains a MacBeth color-checker for accuracy reference. The average illumination across each channel is computed in the RGB-color space separately, as the estimated illumination for that channel. The results in terms of Mean Angular Error (MAE, lower is better) between the corrected image and the ground truth image are listed in Table II. As can be seen, the proposed STAR method achieves lower MAE results than the competing methods on the color


(a) Input (b) NPE [5] (c) WVM [7] (d) LIME [6] (e) JieP [8] (f) STAR

Fig. 9: Comparisons of different Retinex methods on low-light image enhancement.

Method            | MAE↓
White-Patch [13]  | 7.55
Gray-Edge [56]    | 5.13
Shades-Gray [55]  | 4.93
Bayesian [57]     | 4.82
CNNs [58]         | 4.73
Gray-World [54]   | 4.66
Gray-Pixel [59]   | 4.60
JieP [8]          | 4.32
STAR              | 4.11

TABLE II: Comparisons of color constancy with Mean Angular Error (MAE) on the Color-Checker dataset [57].

constancy problem.
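The standard per-image error behind these MAE numbers is the angle between the estimated and ground-truth illuminant vectors in RGB space; a minimal sketch, with an illustrative (not dataset) example:

```python
import math

def angular_error_deg(est, gt):
    """Angle in degrees between an estimated and a ground-truth RGB
    illuminant; averaging this over a dataset gives the MAE reported
    in Table II."""
    dot = sum(e * g for e, g in zip(est, gt))
    norm_e = math.sqrt(sum(e * e for e in est))
    norm_g = math.sqrt(sum(g * g for g in gt))
    cos = max(-1.0, min(1.0, dot / (norm_e * norm_g)))  # guard rounding
    return math.degrees(math.acos(cos))

# A slightly warm estimate against a neutral (white) ground truth.
err = angular_error_deg((1.0, 0.9, 0.8), (1.0, 1.0, 1.0))
```

The metric is scale-invariant: any uniform brightening of the estimate leaves the angle unchanged, so only the estimated color of the light is scored.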

VII. CONCLUSION

In this paper, we proposed a Structure and Texture Aware Retinex (STAR) model for illumination and reflectance decomposition. We first introduced an Exponentialized Mean Local Variance (EMLV) filter to extract the structure and texture maps from the observed image. The extracted maps were employed to regularize the illumination and reflectance components. In addition, we proposed to alternately update the structure/texture maps and estimate the illumination/reflectance for better Retinex decomposition performance. The proposed STAR model is efficiently solved by a standard ADMM algorithm. Comprehensive experiments on Retinex


(a) Input/Ground Truth (b) SIRE [32] (c) WVM [7] (d) JieP [8] (e) STAR

Fig. 10: Comparisons of different methods on color correction. Note that the proposed STAR method achieves more accurate correction performance when processing color-distorted images.

decomposition, low-light image enhancement, and color correction demonstrated that the proposed STAR model achieves better quantitative and qualitative performance than previous state-of-the-art Retinex decomposition methods.

REFERENCES

[1] E. H. Land and J. J. McCann. Lightness and Retinex theory. Journal of the Optical Society of America, 61(1):1–11, 1971.

[2] E. H. Land. The Retinex theory of color vision. Scientific American, 237(6):108–129, 1977.

[3] H. Barrow, J. Tenenbaum, A. Hanson, and E. Riseman. Recovering intrinsic scene characteristics. Computer Vision Systems, 2:3–26, 1978.

[4] Q. Zhao, P. Tan, Q. Dai, L. Shen, E. Wu, and S. Lin. A closed-form solution to Retinex with nonlocal texture constraints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(7):1437–1444, 2012.

[5] S. Wang, J. Zheng, H. Hu, and B. Li. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Transactions on Image Processing, 22(9):3538–3548, 2013.

[6] X. Guo, Y. Li, and H. Ling. LIME: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing, 26(2):982–993, 2017.

[7] X. Fu, D. Zeng, Y. Huang, X. Zhang, and X. Ding. A weighted variational model for simultaneous reflectance and illumination estimation. In CVPR, pages 2782–2790, 2016.

[8] B. Cai, X. Xu, K. Guo, K. Jia, B. Hu, and D. Tao. A joint intrinsic-extrinsic prior model for Retinex. In ICCV, pages 4000–4009, 2017.

[9] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel. A variational framework for Retinex. International Journal of Computer Vision, 52(1):7–23, 2003.

[10] M. Bell and W. T. Freeman. Learning local evidence for shading and reflectance. In ICCV, volume 1, pages 670–677. IEEE, 2001.

[11] M. F. Tappen, W. T. Freeman, and E. H. Adelson. Recovering intrinsic images from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(9):1459–1472, 2005.

[12] E. H. Land. Recent advances in Retinex theory and some implications for cortical computations: color vision and the natural image. Proceedings of the National Academy of Sciences, 80(16):5163–5169, 1983.

[13] D. H. Brainard and B. A. Wandell. Analysis of the Retinex theory of color vision. Journal of the Optical Society of America A, 3(10):1651–1661, 1986.

[14] J. A. Frankle and J. J. McCann. Method and apparatus for lightness imaging, 1983. US Patent 4,384,336.

[15] B. Funt, F. Ciurea, and J. McCann. Retinex in Matlab. Journal of Electronic Imaging, pages 48–57, 2004.

[16] B. K. P. Horn. Determining lightness from an image. Computer Graphics and Image Processing, 3(4):277–299, 1974.

[17] J. M. Morel, A. B. Petro, and C. Sbert. A PDE formalization of Retinex theory. IEEE Transactions on Image Processing, 19(11):2825–2837, 2010.

[18] D. J. Jobson, Z. Rahman, and G. A. Woodell. Properties and performance of a center/surround Retinex. IEEE Transactions on Image Processing, 6(3):451–462, 1997.

[19] D. J. Jobson, Z. Rahman, and G. A. Woodell. A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image Processing, 6(7):965–976, 1997.

[20] E. Provenzi, L. D. Carli, A. Rizzi, and D. Marini. Mathematical definition and analysis of the Retinex algorithm. Journal of the Optical Society of America A, 22(12):2613–2621, 2005.

[21] M. Bertalmío, V. Caselles, and E. Provenzi. Issues about Retinex theory and contrast enhancement. International Journal of Computer Vision, 83(1):101–119, 2009.

[22] R. Palma-Amestoy, E. Provenzi, M. Bertalmío, and V. Caselles. A perceptually inspired variational framework for color enhancement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(3):458–474, 2009.

[23] M. K. Ng and W. Wang. A total variation model for Retinex. SIAM Journal on Imaging Sciences, 4(1):345–365, 2011.

[24] W. Ma, J. M. Morel, S. Osher, and A. Chien. An ℓ1-based variational model for Retinex theory and its application to medical images. In CVPR, pages 153–160. IEEE, 2011.


[25] W. Ma and S. Osher. A TV Bregman iterative model of Retinex theory. Inverse Problems and Imaging, 6(4):697–708, 2012.

[26] H. Li, L. Zhang, and H. Shen. A perceptually inspired variational method for the uneven intensity correction of remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 50(8):3053–3065, 2012.

[27] L. Wang, L. Xiao, H. Liu, and Z. Wei. Variational Bayesian method for Retinex. IEEE Transactions on Image Processing, 23(8):3381–3396, 2014.

[28] J. Liang and X. Zhang. Retinex by higher order total variation ℓ1 decomposition. Journal of Mathematical Imaging and Vision, 52(3):345–355, 2015.

[29] J. M. Morel, A. B. Petro, and C. Sbert. A PDE formalization of Retinex theory. IEEE Transactions on Image Processing, 19(11):2825–2837, 2010.

[30] D. Zosso, G. Tran, and S. Osher. Non-local Retinex: a unifying framework and beyond. SIAM Journal on Imaging Sciences, 8(2):787–826, 2015.

[31] M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Transactions on Image Processing, 27(6):2828–2841, 2018.

[32] X. Fu, Y. Liao, D. Zeng, Y. Huang, X. Zhang, and X. Ding. A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation. IEEE Transactions on Image Processing, 24(12):4965–4977, 2015.

[33] R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In ICCV, pages 2335–2342, 2009.

[34] J. Shen, X. Yang, Y. Jia, and X. Li. Intrinsic images using optimization. In CVPR, 2011.

[35] J. T. Barron and J. Malik. Color constancy, intrinsic images, and shape estimation. In ECCV, 2012.

[36] J. T. Barron and J. Malik. Intrinsic scene properties from a single RGB-D image. In CVPR, 2013.

[37] Y. Li and M. S. Brown. Single image layer separation using relative smoothness. In CVPR, 2014.

[38] J. T. Barron. Convolutional color constancy. In ICCV, pages 379–387, 2015.

[39] J. T. Barron and Y. Tsai. Fast Fourier color constancy. In CVPR, pages 886–894, 2017.

[40] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259–268, 1992.

[41] L. Wei, S. Lefebvre, V. Kwatra, and G. Turk. State of the art in example-based texture synthesis. In Eurographics 2009, State of the Art Report. Eurographics Association, 2009.

[42] L. Xu, Q. Yan, Y. Xia, and J. Jia. Structure extraction from texture via relative total variation. ACM Transactions on Graphics, 31(6):139:1–139:10, 2012.

[43] R. Manduchi and C. Tomasi. Bilateral filtering for gray and color images. In ICCV, 1998.

[44] Q. Zhang, X. Shen, L. Xu, and J. Jia. Rolling guidance filter. In ECCV, pages 815–830, 2014.

[45] Y. Liu, M.-M. Cheng, X. Hu, J. Bian, L. Zhang, X. Bai, and J. Tang. Richer convolutional features for edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.

[46] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.

[47] H. Cheng and X. Shi. A simple and effective histogram equalization approach to image enhancement. Digital Signal Processing, 14(2):158–170, 2004.

[48] T. Celik and T. Tjahjadi. Contextual and variational contrast enhancement. IEEE Transactions on Image Processing, 20(12):3431–3441, 2011.

[49] C. Lee, C. Lee, and C. Kim. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Transactions on Image Processing, 22(12):5372–5384, 2013.

[50] X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley. A fusion-based enhancing method for weakly illuminated images. Signal Processing, 129:82–96, 2016.

[51] A. Mittal, R. Soundararajan, and A. C. Bovik. Making a "completely blind" image quality analyzer. IEEE Signal Processing Letters, 20(3):209–212, 2013.

[52] H. R. Sheikh and A. C. Bovik. Image information and visual quality. IEEE Transactions on Image Processing, 15(2):430–444, 2006.

[53] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.

[54] G. Buchsbaum. A spatial processor model for object colour perception. Journal of the Franklin Institute, 310(1):1–26, 1980.

[55] G. D. Finlayson and E. Trezzi. Shades of gray and colour constancy. In Color and Imaging Conference, volume 2004, pages 37–41, 2004.

[56] J. Van De Weijer, T. Gevers, and A. Gijsenij. Edge-based color constancy. IEEE Transactions on Image Processing, 16(9):2207–2214, 2007.

[57] P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp. Bayesian color constancy revisited. In CVPR, pages 1–8. IEEE, 2008.

[58] S. Bianco, C. Cusano, and R. Schettini. Color constancy using CNNs. In CVPR Workshops, pages 81–89, 2015.

[59] K. Yang, S. Gao, and Y. Li. Efficient illuminant estimation for color constancy using grey pixels. In CVPR, pages 2254–2263, 2015.

[60] X. Zhang and B. A. Wandell. A spatial extension of CIELAB for digital color image reproduction. In SID International Symposium Digest of Technical Papers, volume 27, pages 731–734, 1996.