Research Article
A Novel Approach for Detail-Enhanced Exposure Fusion Using Guided Filter
Harbinder Singh,1 Vinay Kumar,2 and Sunil Bhooshan3

1 Department of Electronics and Communication Engineering, Baddi University of Emerging Sciences and Technology, Baddi, Solan 173205, India
2 Grupo de Procesado Multimedia, Departamento de Teoría de la Señal y Comunicaciones, Universidad Carlos III de Madrid, Leganés, Madrid 28911, Spain
3 Department of Electronics and Communication Engineering, Jaypee University of Information Technology, Waknaghat, Solan 173215, India

Correspondence should be addressed to Harbinder Singh; harbinderece@gmail.com

Received 29 August 2013; Accepted 24 December 2013; Published 9 February 2014

Academic Editors: T. Stathaki, K. Teh, and L. Yuan
Copyright © 2014 Harbinder Singh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In this paper we propose a novel detail-enhancing exposure fusion approach using a nonlinear translation-variant filter (NTF). Given Standard Dynamic Range (SDR) images captured under different exposure settings, the fine details are first extracted with a guided filter. Next, the base layers (i.e., the images obtained from the NTF) across all input images are fused using a multiresolution pyramid. Exposure, contrast, and saturation measures are used to generate a mask that guides the fusion of the base layers. Finally, the fused base layer is combined with the extracted fine details to obtain the detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions without High Dynamic Range Image (HDRI) representation and a tone mapping step. Moreover, we demonstrate that the proposed method is also suitable for multifocus image fusion without introducing artifacts.
1. Introduction
In a single exposure, a normal digital camera can collect only a limited range of the luminance variation in a real-world scene, producing what is termed a low dynamic range (LDR) image. To circumvent this problem, modern digital photography offers exposure time variation, which controls the amount of light allowed to fall on the sensor, to capture details in very dark or extremely bright regions. Different LDR images are captured in rapid succession at different exposure settings to collect the complete luminance variation, a practice known as exposure bracketing. However, each exposure handles only a small portion of the luminance variation in the entire scene: a short exposure can capture details in bright regions (i.e., highlights), and a long exposure can capture details in dark regions (i.e., shadows) (see Figure 1).
In the past decade, two solutions have been proposed to handle the large luminance variations present in natural scenes. The first option is the HDR representation. To date, many HDRI representation techniques [1, 2] have been proposed, which extend dynamic range by compositing differently exposed images of the same scene. HDR images generally encode intensity variations with more than 8 bits, with pixel values proportional to the true scene radiance transformed by a nonlinear mapping called the camera response function. The four-byte "hdr" format was developed to encode radiance maps. The second option to encode a radiance map is "floating point tiff," which uses 12 bytes to encode approximately 79 orders of magnitude. Currently used standard display devices have a small contrast ratio (i.e., 1:100), and the contrast ratio of LCD monitors can reach 1:400. Recently developed HDR display device prototypes [3] can represent a high contrast ratio (i.e., 1:25,000) but are still not available on the market for routine customers. Therefore, an HDR image needs to be tone-mapped first to appear on a standard display device. Various local and global tone mapping
Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 659217, 8 pages, http://dx.doi.org/10.1155/2014/659217
(a) Input images captured at different exposure settings. (b) Our detail-enhanced fusion results.
Figure 1: Results of the proposed detail-enhanced exposure fusion framework using an edge-preserving filter.
Figure 2: Proposed detail-enhanced exposure fusion framework (the guided filter splits the input exposures into base layers and detail layers; the base layers undergo pyramid-based fusion, the detail layers are manipulated and fused, and the two results are combined into the detail-enhanced fused image).
methods [2] have been proposed to display HDR images on standard display devices. The local light adaptation property of the human visual system (HVS) is adopted in the local operators to reproduce the visual impression an observer had when watching the original scene, while the global operators are spatially invariant and are less effective than the local operators.
The more recently proposed second option is "exposure fusion." The fundamental goal of exposure fusion is to preserve details in both very dark and extremely bright regions without HDRI representation and a tone mapping step. The underlying idea of the various exposure fusion approaches [4–7] is to use different local measures to generate a weight map that preserves the details present in the different exposures.
The present work draws inspiration from imaging techniques that combine information from two or more images captured at different exposure settings but with different goals. A block diagram of the present detail-enhanced framework is shown in Figure 2. We seek to enhance the fine details in the fused image by using an edge-preserving filter [8]. Edge-preserving filters have been utilized in several image processing applications, such as edge detection [9], image enhancement, and noise reduction [10]. Recently, the joint bilateral filter [11] was proposed, which is effective for detecting and reducing large artifacts, such as reflections, using gradient projections. More recently, anisotropic diffusion [9] has been utilized for detail enhancement in exposure fusion [12], in which texture features control the contribution of pixels from the input exposures. In our approach, the guided filter is preferred over the other existing approaches because the gradients present near edges are preserved accurately. We use the guided filter [8] for base layer and detail layer extraction, which is more effective for enhancing texture details and reducing gradient reversal artifacts near the strong edges in the fused image. A multiresolution approach
(a) Mertens, (b) Burt, (c) Shen, (d) Proposed
Figure 3: Comparison with other recent exposure fusion techniques: (a) Mertens et al. [4], (b) Burt and Adelson [13], (c) Shen et al. [7], and (d) results of our new exposure fusion method. Note that our method yields enhanced texture and edge features. Input image sequence courtesy of Tom Mertens.
(a) (b) (c)
Figure 4: Our results for different multiexposure sequences: (a) Hermes (two input exposures), (b) Chairs (five exposures), and (c) Syn (seven input exposures).
is used to fuse the computed base layers across all of the input images. The detail layers extracted from the input exposures are manipulated and fused separately. The final detail-enhanced fused image (see Figures 3 and 4) is obtained by integrating the fused base layer and the fused detail layer. A detailed description of the proposed approach is given in the forthcoming sections. It is worth pointing out that our method differs essentially from [4] in that it aims at enhancing the texture and contrast details in the fused image with a nonlinear edge-preserving filter (i.e., the guided filter). Moreover, it is demonstrated that the proposed approach fuses multifocus images effectively and produces results with rich visual detail.
2. Guided Image Filtering for Base Layer Computation
2.1. Edge-Preserving Guided Filter. In this section we first describe the ability of the guided filter [8], derived from a local linear model, to preserve edges, and then show how it avoids the gradient reversal artifacts near strong edges that may appear in the fused image after detail layer enhancement. We seek to maintain the shape of the strong edges in the fused image that arise from the exposure time variation across the input images.

The guided filter was developed by He et al. [8] in 2010 as an alternative to the bilateral filter [11]. It is an edge-preserving filter in which the filtering output is a local linear function of the guidance image $I$. The selection of the guidance image $I$ depends on the application [11]; in our implementation, the input image $p$ and the guidance image $I$ are identical. The output of the guided filter for a pixel $i$ is computed as a weighted average:
$$q_i = \sum_j W_{ij}(I)\, p_j, \tag{1}$$
where $i$ and $j$ are pixel indexes and the filter kernel $W_{ij}$ is a function of the guidance image $I$ and independent of the input image $p$. Let $q$ be a linear transform of $I$ in a window $\omega_k$ centered at the pixel $k$:
$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k, \tag{2}$$
where $(a_k, b_k)$ are linear coefficients assumed to be constant in $\omega_k$, a small square window of size $(2r+1) \times (2r+1)$ for radius $r$. The local linear model (2) ensures that $q$ has an edge (i.e., a discontinuity) only if $I$ has an edge, because $\nabla q = a \nabla I$. Here $a_k$ and $b_k$ are computed within $\omega_k$ to minimize the following cost function:
$$E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right), \tag{3}$$
where $\epsilon$ is a regularization term on the linear coefficient $a_k$ for numerical stability. The significance of $\epsilon$ and its relation to the bilateral kernel [11] are given in [8]. In our implementation we use $r = 2$ (i.e., a $5 \times 5$ square window) and $\epsilon = 0.01$.
The linear coefficients that minimize the cost function in (3) are determined by linear regression [15]:
$$a_k = \frac{\frac{1}{|\omega|}\sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k, \qquad \bar{p}_k = \frac{1}{|\omega|}\sum_{i \in \omega_k} p_i, \tag{4}$$
where $\mu_k$ and $\sigma_k^2$ are the mean and variance of $I$ in $\omega_k$, $|\omega|$ is the number of pixels in $\omega_k$, and $\bar{p}_k$ is the mean of $p$ in $\omega_k$.

The linear coefficients $a_k$ and $b_k$ are computed for all windows $\omega_k$ in the entire image. However, a pixel $i$ is involved in all windows $\omega_k$ that contain $i$, so the value of $q_i$ in (2) differs from window to window. Averaging all the possible values of $q_i$, the filtered output is determined as
$$q_i = \frac{1}{|\omega|} \sum_{k : i \in \omega_k} (a_k I_i + b_k) \tag{5}$$
$$= \bar{a}_i I_i + \bar{b}_i. \tag{6}$$
Here $\bar{a}_i$ and $\bar{b}_i$ are computed as
$$\bar{a}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} a_k, \qquad \bar{b}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} b_k. \tag{7}$$
In practice, $\bar{a}_i$ and $\bar{b}_i$ in (7) vary spatially so as to preserve the strong edges of $I$ in $q$; that is, $\nabla q \approx \bar{a}_i \nabla I$. Therefore $q_i$ computed in (6) preserves the strongest edges in $I$ while smoothing small changes in intensity.

Let $b_K(i', j')$ be the base layer computed from (6) (i.e., $b_K(i', j') = q_i$, with $1 \le K \le N$) for the $K$th input image, denoted by $I_K(i', j')$. The detail layer is defined as the difference between the input image and the guided filter output:
$$d_K(i', j') = I_K(i', j') - b_K(i', j'). \tag{8}$$
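The steps above can be sketched in NumPy. This is a minimal illustration of the self-guided case ($I = p$) following Eqs. (4)–(8), not the authors' implementation; the edge-replicating border handling and the box-filter construction via cumulative sums are our own assumptions.

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1) x (2r+1) window, computed with cumulative sums.
    Borders are handled by replicating edge pixels (an assumption)."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/column for clean differencing
    h, w = img.shape
    win = 2 * r + 1
    return (c[win:win + h, win:win + w] - c[:h, win:win + w]
            - c[win:win + h, :w] + c[:h, :w]) / win ** 2

def guided_filter(I, p, r=2, eps=0.01):
    """Self-guided filter output q (Eqs. (4)-(6)); r=2, eps=0.01 as in the paper."""
    mean_I = box_filter(I, r)
    mean_p = box_filter(p, r)
    corr_Ip = box_filter(I * p, r)
    var_I = box_filter(I * I, r) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)   # Eq. (4)
    b = mean_p - a * mean_I                            # Eq. (4)
    mean_a = box_filter(a, r)                          # Eq. (7): window-averaged coefficients
    mean_b = box_filter(b, r)
    return mean_a * I + mean_b                         # Eq. (6)

# Two-layer decomposition of one exposure (Eq. (8))
I = np.random.rand(64, 64)
base = guided_filter(I, I)
detail = I - base
```

In flat regions the local variance is small relative to `eps`, so `a` approaches 0 and the output approaches the window mean (smoothing); near strong edges the variance dominates, `a` approaches 1, and the edge passes through intact.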
2.2. Computation of the Laplacian and Gaussian Pyramids. Researchers have long sought to synthesize and manipulate features at several spatial resolutions while avoiding the introduction of seams and artifacts such as contrast reversal or black halos. In the proposed algorithm, the band-pass components [13] at different resolutions are manipulated based on a weight map that determines each pixel value in the reconstructed fused base layer. The pyramid representation expresses an image as a sum of spatially band-passed images while retaining local spatial information in each band. A pyramid is created by low-pass filtering an image $G_0$ with a compact two-dimensional weighting kernel $w$ and then subsampling the filtered image by removing every other pixel and every other row to obtain the reduced image $G_1$. This process is repeated to form the Gaussian pyramid $G_0, G_1, G_2, \ldots, G_d$:
$$G_l(i, j) = \sum_m \sum_n w(m, n)\, G_{l-1}(2i + m,\, 2j + n), \quad l = 1, \ldots, d, \tag{9}$$
where $l$ $(0 \le l \le d)$ refers to the level of the pyramid.

Expanding $G_1$ to the same size as $G_0$ and subtracting yields the band-passed image $L_0$. A Laplacian pyramid $L_0, L_1, L_2, \ldots, L_{d-1}$ can thus be built, containing band-passed images of decreasing size and spatial frequency:
$$L_l = G_l - G_{l+1}, \quad l = 0, \ldots, d - 1, \tag{10}$$
where the expanded image $G_{l+1}$ is given by
$$G_{l+1}(i, j) = 4 \sum_m \sum_n w(m, n)\, G_{l+1}\!\left(\frac{2i + m}{2}, \frac{2j + n}{2}\right). \tag{11}$$
The original image can be reconstructed from the expanded band-pass images:
$$G_0 = L_0 + L_1 + L_2 + \cdots + L_{d-1} + G_d. \tag{12}$$
The Gaussian pyramid contains low-passed versions of the original $G_0$ at progressively lower spatial frequencies; this effect is clearly seen when the Gaussian pyramid levels are expanded to the same size as $G_0$. The Laplacian pyramid consists of band-passed copies of $G_0$; each Laplacian level contains the "edges" of a certain size and spans approximately an octave in spatial frequency.
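The reduce/expand machinery of Eqs. (9)–(12) can be sketched as follows. The 5×5 binomial kernel is the classic Burt–Adelson choice; the edge-replicating border handling and power-of-two image sizes are simplifying assumptions of this sketch.

```python
import numpy as np

# 5x5 separable binomial weighting kernel w(m, n)
_w1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
_w = np.outer(_w1d, _w1d)

def _filt(G):
    """Correlate with the symmetric 5x5 kernel, replicating edges at the border."""
    pad = np.pad(G, 2, mode='edge')
    out = np.zeros_like(G, dtype=float)
    h, w = G.shape
    for m in range(5):
        for n in range(5):
            out += _w[m, n] * pad[m:m + h, n:n + w]
    return out

def reduce_level(G):
    """Eq. (9): low-pass filter, then drop every other row and column."""
    return _filt(G)[::2, ::2]

def expand_level(G, shape):
    """Eq. (11): upsample to `shape`, interpolating with the same kernel."""
    up = np.zeros(shape)
    up[::2, ::2] = G
    return 4.0 * _filt(up)

def laplacian_pyramid(G0, d):
    """Eq. (10): band-pass levels L_0..L_{d-1} plus the low-pass residual G_d."""
    gauss = [np.asarray(G0, dtype=float)]
    for _ in range(d):
        gauss.append(reduce_level(gauss[-1]))
    lap = [gauss[l] - expand_level(gauss[l + 1], gauss[l].shape)
           for l in range(d)]
    return lap, gauss[-1]

def reconstruct(lap, Gd):
    """Eq. (12): expand each level back up and sum to recover G_0."""
    G = Gd
    for L in reversed(lap):
        G = L + expand_level(G, L.shape)
    return G
```

Because the band-pass levels are defined as differences against the same expand operator used during reconstruction, the round trip is exact up to floating-point error.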
2.3. Base Layer Fusion Based on the Multiresolution Pyramid. In our framework, the fused base layer $b_f(i', j')$ is computed as the weighted sum of the base layers $b_1(i', j'), b_2(i', j'), \ldots, b_N(i', j')$ obtained across the $N$ input exposures. We use the pyramid approach proposed by Burt and Adelson [13], which generates a Laplacian pyramid of the base layers, $Lb_K(i', j')^l$, and a Gaussian pyramid of the weight map functions, $GW_K(i', j')^l$, estimated from three quality measures: saturation $S_K(i', j')$, contrast $C_K(i', j')$, and exposure $E_K(i', j')$. Here $l$ $(0 \le l \le d)$ refers to the level of the pyramid and $K$ $(1 \le K \le N)$ refers to the index of the input image. The weight map is computed as the product of the three quality metrics, that is, $W_K(i', j') = S_K(i', j') \cdot C_K(i', j') \cdot E_K(i', j')$. Multiplying each $Lb_K(i', j')^l$ with the corresponding $GW_K(i', j')^l$ and summing over $K$ yields the modified Laplacian pyramid $L_l(i', j')$:
$$L_l(i', j') = \sum_{K=1}^{N} Lb_K(i', j')^l\, GW_K(i', j')^l. \tag{13}$$
The fused base layer $b_f(i', j')$, which contains the well-exposed pixels, is reconstructed by expanding each level and then summing all the levels of the Laplacian pyramid:
$$b_f(i', j') = \sum_{l=0}^{d} L_l(i', j'). \tag{14}$$
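The per-pixel weight map $W_K = S_K \cdot C_K \cdot E_K$ can be sketched as below. The paper does not spell out the three measures, so the concrete forms here (Laplacian magnitude for contrast, channel standard deviation for saturation, a Gaussian around mid-gray with $\sigma = 0.2$ for exposure) follow the common choices of Mertens et al. [4] and are assumptions of this sketch.

```python
import numpy as np

def quality_weights(img):
    """Per-pixel weight W = S * C * E for one RGB exposure scaled to [0, 1].
    Measure definitions are assumed (Mertens-style), not taken from the paper."""
    gray = img.mean(axis=2)
    # contrast C: absolute response of a discrete Laplacian (wrap-around borders
    # via np.roll, a simplification)
    contrast = np.abs(4 * gray
                      - np.roll(gray, 1, 0) - np.roll(gray, -1, 0)
                      - np.roll(gray, 1, 1) - np.roll(gray, -1, 1))
    # saturation S: standard deviation across the color channels
    saturation = img.std(axis=2)
    # well-exposedness E: Gaussian closeness of each channel to mid-gray 0.5
    exposure = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)).prod(axis=2)
    return contrast * saturation * exposure + 1e-12   # bias avoids all-zero weights

def normalize_weights(stack):
    """Normalize the N weight maps so they sum to 1 at every pixel."""
    W = np.stack([quality_weights(im) for im in stack])
    return W / W.sum(axis=0)
```

In the full method these normalized maps are not applied directly at full resolution; their Gaussian pyramids multiply the Laplacian pyramids of the base layers, level by level, as in Eq. (13).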
2.4. Detail Layer Fusion and Manipulation. The detail layers computed in (8) across all the input exposures are linearly combined to produce the fused detail layer $d_f(i', j')$, which carries the combined texture information:
$$d_f(i', j') = \frac{\sum_{K=1}^{N} \gamma f_K\!\left(d_K(i', j')\right)}{N}, \tag{15}$$
(a) (b) (c)
Figure 5: ((a), (b)) Two partially focused images (focused on different targets); (c) the image generated by the proposed approach, which illustrates that the fused image extracts more information from the original images. Input sequence courtesy of Adu and Wang.

(a) In1, (b) In2, (c) ANI, (d) BLT, (e) WLS, (f) Mertens, (g) Proposed
Figure 6: Color-coded map comparison (dark blue indicates overexposed regions and pure white indicates underexposed regions) with other classic edge-preserving and exposure fusion techniques: (a) source exposure 1, (b) source exposure 2, (c) anisotropic filter [9], (d) bilateral filter [11], (e) weighted least squares filter [14], (f) Mertens [4], and (g) results of our new exposure fusion method based on the guided filter. Note that our method yields enhanced texture and edge features. Input image sequence courtesy of Tom Mertens.
where $\gamma$ is a user-defined parameter that controls the amplification of texture details (typically set to 5) and $f_K(\cdot)$ is a nonlinear function that achieves detail enhancement while reducing noise and the artifacts near strong edges caused by overenhancement. We follow the approach of [10] to reduce noise across all detail layers. The nonlinear function $f_K(\cdot)$ is defined as
$$f_K(i', j') = \tau \left(d_K(i', j')\right)^{\alpha} + (1 - \tau)\, d_K(i', j'), \tag{16}$$
where $\tau$ is a smooth step function equal to 0 if $d_K(i', j')$ is less than 1% of the maximum intensity and 1 if it is more than 2%, with a smooth transition in between, and the parameter $\alpha$ controls the contrast in the detail layers. We have found that $\alpha = 0.2$ is a good default setting for all experiments.

Finally, the detail-enhanced fused image $g(i', j')$ is computed by simply adding the fused base layer $b_f(i', j')$ computed in (14) and the manipulated fused detail layer $d_f(i', j')$ computed in (15):
$$g(i', j') = d_f(i', j') + b_f(i', j'). \tag{17}$$
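Equations (15) and (16) can be sketched as follows. Two details are assumptions of this sketch rather than statements of the paper: the smooth step $\tau$ is realized as the standard cubic smoothstep on the detail magnitude, and the power law is applied in a sign-preserving way, since raising a negative detail value directly to $\alpha = 0.2$ is undefined.

```python
import numpy as np

def smoothstep(x, lo, hi):
    """Cubic smooth step: 0 below lo, 1 above hi, smooth in between."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3 - 2 * t)

def manipulate_detail(d, alpha=0.2, max_intensity=1.0):
    """Eq. (16): blend a compressed response with the raw detail layer.
    tau -> 1 for significant details (> 2% of max), 0 for likely noise (< 1%)."""
    mag = np.abs(d)
    tau = smoothstep(mag, 0.01 * max_intensity, 0.02 * max_intensity)
    boosted = np.sign(d) * mag ** alpha      # sign-preserving power law (assumption)
    return tau * boosted + (1 - tau) * d

def fuse_details(detail_layers, gamma=5.0):
    """Eq. (15): scaled average of the manipulated detail layers."""
    N = len(detail_layers)
    return gamma * sum(manipulate_detail(d) for d in detail_layers) / N
```

The final image of Eq. (17) is then simply `g = fuse_details(details) + fused_base`. Because $\tau$ vanishes for sub-threshold magnitudes, sensor noise is passed through unamplified while genuine texture is boosted.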
3. Experimental Results and Analysis

3.1. Comparison with Other Exposure Fusion Methods. Figures 1, 3, and 4 depict examples of images fused from multiexposure sequences. The proposed approach enhances texture details while preventing halos near strong edges. As shown in Figure 1(b), the details from all of the input images are well combined: none of the four input exposures (see Figure 1(a)) reveals the fine textures on the chair that are present in the fused image. In Figures 3(a)–3(d) we compare our results to recently proposed approaches. Figures 3(a) and 3(b) show fusion results using multiresolution pyramid based approaches. The result of Mertens et al. [4] (see Figure 3(a)) appears blurry and loses texture details, whereas in our result (see Figure 3(d)) the wall texture and the painting on the window glass are emphasized, features that are difficult to see in Figure 3(a). This is suboptimal because subtracting a low-pass filtered copy of the image from the image itself to generate a Laplacian pyramid removes pixel-to-pixel correlations, and the result is a reduction of texture and edge detail in the fused image. Figure 3(b) shows
Figure 7: Analysis of the different free parameters used in the algorithm (quality metrics: entropy, Qabf, and VIFF). The maximum quality score and entropy are observed only when $\epsilon = 0.01$, $\gamma = 5$, and $r = 2$ (which are set as the default parameters). It is observed that VIFF increases as $\epsilon$, $\gamma$, and $r$ increase, but the larger values are responsible for overenhancement. (a) Effect of $\epsilon$ on the metrics, (b) effect of $r$ on the metrics, and (c) effect of $\gamma$ on the metrics.
the result of the pyramid approach [13], which reveals many details but loses contrast and color information. The generalized random walks based exposure fusion is shown in Figure 3(c), which exhibits less texture and color detail in brightly illuminated regions (i.e., the lamp and the window glass). Note that Figure 3(d) retains colors, sharp edges, and details while also maintaining an overall reduction of high-frequency artifacts near strong edges.

Figure 4 shows our results for different image sequences captured at variable exposure settings (see Figure 4(a), Hermes; Figure 4(b), Chairs; and Figure 4(c), Syn; input images courtesy of Jacques Joffre and Shree Nayar). Note that the strong edges and fine texture details are accurately preserved in the fused image without introducing halo artifacts. Halo artifacts would stand out if the detail layer underwent a substantial boost.

Moreover, Figure 5 demonstrates that the proposed method is also suitable for multifocus image fusion, yielding rich contrast. As illustrated in Figure 5(c), the edges and textures are better preserved than in the input images. Because our approach excludes fine textures from the base layers, we can significantly preserve and enhance fine details
(a) (b) (c)
Figure 8: Visual inspection of the effect of the free parameter $r$ on detail enhancement. We have found that $r = 2$ suffices for fine detail extraction and gives better results in most cases; higher values of $r$ introduce artifacts near strong edges. (a) $r = 1$, (b) $r = 3$, and (c) $r = 6$.
separately. Moreover, the multiresolution pyramid approach can be used to retain strong edges and enhance texture details in the multifocus image fusion problem.
3.2. Implementation and Comparison of Various Classic Edge-Preserving Filters. Figures 6(a) and 6(b) depict the color-coded maps of the underexposed and overexposed images, respectively; dark blue indicates overexposed regions and pure white indicates underexposed regions. Figures 6(c)–6(f) compare the color-coded maps of detail enhancement based on three edge-preserving filters and the result obtained by Mertens et al. [4] on the Cathedral sequence. We accepted the default parameter settings suggested for the different edge-preserving filters [9, 11, 14]. Figures 6(c) and 6(f) show, respectively, the fusion results using the anisotropic diffusion based approach [9] and the multiresolution pyramid based exposure fusion approach [4]; both are clearly close to the result obtained using the guided filter (see Figure 6(g)), but overall they yield less texture and edge detail. The texture detail enhancement using the bilateral filter [11] and the weighted least squares filter [14], shown in Figures 6(d) and 6(e), respectively, exhibits overenhancement near strong edges and less color detail. As shown in the close-up view in Figure 6(g), the proposed method based on the guided filter can enhance image texture details while preserving the strong edges without overenhancement.
3.3. Analysis of Free Parameters and Fusion Performance Metrics. To analyze the effect of $\epsilon$, $\gamma$, and the window size on the quality score (Qabf) [16], entropy, and the visual information fidelity for fusion (VIFF) [17], we illustrate three plots (see Figures 7(a)–7(c), resp.) for the "Cathedral" input image sequence. Qabf, entropy, and VIFF were adopted in all experiments, which were executed on a PC with a 2.2 GHz i5 processor and 2 GB of RAM. VIFF [17] first decomposes the source and fused images into blocks; it then utilizes the models in VIF (the GSM model, the distortion model, and the HVS model) to capture the visual information from the source-fused pairs. With the help of an effective visual information index, VIFF measures the effective visual information of the fusion in all blocks in each subband, and the assessment result is calculated by integrating the information from each subband. Qabf [16] evaluates the amount of edge information transferred from the input images to the fused image; a Sobel operator is applied to obtain the edge strength and orientation information for each pixel.

First, to analyze the effect of $\epsilon$ on Qabf, entropy, and VIFF, the square window parameter $r$ and the texture amplification parameter $\gamma$ were set to 2 and 5, respectively. As shown in Figure 7(a), the quality score and entropy decrease as $\epsilon$ increases, while VIFF increases with $\epsilon$. It should be noticed in Figure 7(b) that VIFF and entropy increase as $r$ increases, while Qabf decreases as $r$ increases; a small filter size $r$ is preferred to reduce computation time. In the analysis of $r$, the other parameters were set to $\epsilon = 0.01$ and $\gamma = 5$. A visual inspection of the effect of $r$ on the "Cathedral" sequence is depicted in Figure 8: it can easily be noticed (see Figures 8(a)–8(c)) that as $r$ increases, the strong edges and textures become overenhanced, which leads to artifacts. In the analysis of $\gamma$, entropy and Qabf decrease as $\gamma$ increases, while VIFF increases with $\gamma$. To obtain optimal detail enhancement at low computation time, we conclude that the best results are obtained with $\epsilon = 0.01$, $\gamma = 5$, and $r = 2$, which yield reasonably good results in all cases.
4. Conclusions

We have proposed a method to construct a detail-enhanced image from a set of multiexposure images by using a multiresolution decomposition technique. Compared with the existing techniques that use multiresolution or single resolution analysis for exposure fusion, the proposed method performs better in terms of enhancement of texture details in the fused image. The framework is inspired by the edge-preserving property of the guided filter, which has a better response near strong edges. The two-layer decomposition based on the guided filter is used to extract the fine textures for detail enhancement. Moreover, we have demonstrated that the present method can also be applied to fuse multifocus images (i.e., images focused on different targets). More importantly, the information in the resultant image can be controlled with the help of the proposed free parameters.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the 24th ACM Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), pp. 369–378, Los Angeles, Calif, USA, August 1997.
[2] M. Song, D. Tao, C. Chen, J. Bu, J. Luo, and C. Zhang, "Probabilistic exposure fusion," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 341–357, 2012.
[3] H. Seetzen, W. Heidrich, W. Stuerzlinger et al., "High dynamic range display systems," ACM Transactions on Graphics, vol. 23, no. 3, pp. 760–768, 2004.
[4] T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: a simple and practical alternative to high dynamic range photography," Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, 2009.
[5] S. Li and X. Kang, "Fast multi-exposure image fusion with median filter and recursive filter," IEEE Transactions on Consumer Electronics, vol. 58, no. 2, pp. 626–632, 2012.
[6] W. Zhang and W.-K. Cham, "Gradient-directed multiexposure composition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318–2323, 2012.
[7] R. Shen, I. Cheng, J. Shi, and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634–3646, 2011.
[8] K. He, J. Sun, and X. Tang, "Guided image filtering," in Proceedings of the 11th European Conference on Computer Vision, 2010.
[9] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[10] S. Paris, S. W. Hasinoff, and J. Kautz, "Local Laplacian filters: edge-aware image processing with a Laplacian pyramid," ACM Transactions on Graphics, vol. 30, no. 4, article 68, 2011.
[11] A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, "Removing photography artifacts using gradient projection and flash-exposure sampling," ACM Transactions on Graphics, vol. 24, no. 3, pp. 828–835, 2005.
[12] H. Singh, V. Kumar, and S. Bhooshan, "Anisotropic diffusion for details enhancement in multiexposure image fusion," ISRN Signal Processing, vol. 2013, Article ID 928971, 18 pages, 2013.
[13] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
[14] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Transactions on Graphics, vol. 27, no. 3, article 67, 2008.
[15] N. Draper and H. Smith, Applied Regression Analysis, John Wiley, 2nd edition, 1981.
[16] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[17] Y. Han, Y. Cai, Y. Cao, and X. Xu, "A new image fusion performance metric based on visual information fidelity," Information Fusion, vol. 14, no. 2, pp. 127–135, 2013.
2 The Scientific World Journal
(a) Input images captured at different exposure settings
(b) Our detail enhanced fusion results
Figure 1 Results of proposed detail-enhanced exposure fusion framework using edge preserving filter
Figure 2: Proposed detail-enhanced exposure fusion framework. Input exposures are decomposed by the guided filter into base layers and detail layers; the base layers undergo pyramid-based fusion, the detail layers are manipulated and fused, and the two results are combined into the detail-enhanced fused image.
methods [2] have been proposed to display HDR images on standard display devices. The local light adaptation property of the human visual system (HVS) is adopted in the local operators to correspond to the visual impression that an observer had when watching the original scene, while the global operators are spatially invariant and are less effective than the local operators.

The recently proposed second option is "exposure fusion". The fundamental goal of exposure fusion is to preserve details in both very dark and extremely bright regions without the HDRI representation and tone mapping steps. The underlying idea of the various exposure fusion approaches [4-7] is to utilize different local measures to generate a weight map that preserves the details present in the different exposures.

The present work draws inspiration from imaging techniques that combine information from two or more images captured at different exposure settings but with different goals. The block diagrammatic representation of the present detail-enhanced framework is shown in Figure 2. We seek to enhance fine details in the fused image by using an edge-preserving filter [8]. Edge-preserving filters have been utilized in several image processing applications such as edge detection [9], image enhancement, and noise reduction [10]. Recently, the joint bilateral filter [11] has been proposed, which is effective for detecting and reducing large artifacts such as reflections using gradient projections. More recently, anisotropic diffusion [9] has been utilized for detail enhancement in exposure fusion [12], in which texture features are used to control the contribution of pixels from the input exposures. In our approach the guided filter is preferred over the other existing approaches because the gradients present near the edges are preserved accurately. We use the guided filter [8] for base layer and detail layer extraction, which is more effective for enhancing texture details and reducing gradient reversal artifacts near the strong edges in the fused image. The multiresolution approach
Figure 3: Comparison with other recent exposure fusion techniques: (a) Mertens et al. [4]; (b) Burt and Adelson [13]; (c) Shen et al. [7]; (d) results of our new exposure fusion method. Note that our method yields enhanced texture and edge features. Input image sequence is courtesy of Tom Mertens.
Figure 4: Our results for different multiexposure sequences: (a) Hermes (two input exposures); (b) Chairs (five exposures); (c) Syn (seven input exposures).
is used to fuse the computed base layers across all of the input images. The detail layers extracted from the input exposures are manipulated and fused separately. The final detail-enhanced fused image (see Figures 3 and 4) is obtained by integrating the fused base layer and the fused detail layer. A detailed description of the proposed approach is given in the forthcoming section. It is worth pointing out that our method essentially differs from [4] in that it aims at enhancing the texture and contrast details in the fused image with a nonlinear edge-preserving filter (i.e., the guided filter). Moreover, it is demonstrated that the proposed approach fuses multifocus images effectively and produces results rich in visual detail.
2 Guided Image Filtering for Base Layer Computation

2.1 Edge Preserving Guided Filter. In this section we first describe the ability of the guided filter [8], derived from a local linear model, to preserve edges, and then show how it avoids the gradient reversal artifacts near strong edges that may appear in the fused image after detail layer enhancement. We seek to maintain the shape of the strong edges in the fused image that appear due to exposure time variation across the input images.

The guided filter was developed by He et al. [8] in 2010 as an alternative to the bilateral filter [11]. It is an edge-preserving filter in which the filtering output is a local linear model between the guidance image I and the filter output q. The selection of the guidance image I depends on the application [11]. In our implementation the input image p and the guidance image I are identical. The output of the guided filter for a pixel i is computed as a weighted average:

\[ q_i = \sum_{j} W_{ij}(I)\, p_j, \tag{1} \]

where (i, j) are pixel indexes and W_ij is the filter kernel, which is a function of the guidance image I and independent of the input image p. Let q be a linear transform of I in a window ω_k centered at the pixel k:

\[ q_i = a_k I_i + b_k, \quad \forall i \in \omega_k, \tag{2} \]

where (a_k, b_k) are linear coefficients assumed to be constant in ω_k, a small square window of radius r and size (2r + 1) × (2r + 1). The local linear model (2) ensures that q has an edge (i.e., a discontinuity) only if I has an edge, because ∇q = a∇I. Here a_k and b_k are computed to minimize the following cost function within ω_k:

\[ E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right), \tag{3} \]

where ε is a regularization term on the linear coefficient a_k for numerical stability. The significance of ε and its relation to the bilateral kernel [11] are given in [8]. In our implementation we use r = 2 (i.e., a 5 × 5 square window) and ε = 0.01.
The linear coefficients that minimize the cost function in (3) are determined by linear regression [15]:

\[ a_k = \frac{\frac{1}{|\omega|}\sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k, \qquad \bar{p}_k = \frac{1}{|\omega|}\sum_{i \in \omega_k} p_i, \tag{4} \]

where μ_k and σ_k² are the mean and variance of I in ω_k, |ω| is the number of pixels in ω_k, and p̄_k is the mean of p in ω_k.

The linear coefficients a_k and b_k are computed for all windows ω_k in the entire image. However, a pixel i is involved in all windows ω_k that contain i, so the value of q_i in (2) differs from window to window. Averaging all the possible values of q_i, the filtered output is determined as

\[ q_i = \frac{1}{|\omega|} \sum_{k:\, i \in \omega_k} (a_k I_i + b_k) \tag{5} \]
\[ q_i = \bar{a}_i I_i + \bar{b}_i. \tag{6} \]

Here ā_i and b̄_i are computed as

\[ \bar{a}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} a_k, \qquad \bar{b}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} b_k. \tag{7} \]

In practice it is found that ā_i and b̄_i in (7) vary spatially, so the strong edges of I are preserved in q; that is, ∇q ≈ ā_i ∇I. Therefore q_i computed in (6) preserves the strongest edges in I while smoothing small changes in intensity.

Let b_K(i′, j′) be the base layer computed from (6) (i.e., b_K(i′, j′) = q_i, with 1 ≤ K ≤ N) for the Kth input image, denoted by I_K(i′, j′). The detail layer is defined as the difference between the input image and the guided filter output:

\[ d_K(i', j') = I_K(i', j') - b_K(i', j'). \tag{8} \]
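To make the two-layer decomposition concrete, the following sketch implements (4)-(8) for the self-guided case (I = p) with plain NumPy box filters. This is our own minimal illustration, not the paper's code; the function names (`box_mean`, `guided_filter_decompose`) are ours.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via integral images."""
    k = 2 * r + 1
    p = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))                    # zero row/col so window sums are inclusive
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter_decompose(img, r=2, eps=0.01):
    """Base/detail split with a self-guided filter (guidance I = input p).

    Follows eqs. (4)-(8): per-window coefficients a_k, b_k, their averaged
    versions a_bar, b_bar, the base layer q, and the detail layer p - q.
    """
    I = img.astype(float)
    mu = box_mean(I, r)                                 # mu_k (= p_bar_k, since I = p)
    var = box_mean(I * I, r) - mu * mu                  # sigma_k^2
    a = var / (var + eps)                               # eq. (4)
    b = mu - a * mu                                     # eq. (4)
    base = box_mean(a, r) * I + box_mean(b, r)          # eqs. (5)-(7)
    detail = I - base                                   # eq. (8)
    return base, detail
```

In a flat region var ≈ 0, so a ≈ 0 and the output is the local mean; near a strong edge var ≫ ε, so a ≈ 1 and the edge passes into the base layer, which is why the detail layer stays free of gradient reversal halos.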
2.2 Computation of Laplacian and Gaussian Pyramid. Researchers have attempted to synthesize and manipulate features at several spatial resolutions in a way that avoids the introduction of seams and artifacts such as contrast reversal or black halos. In the proposed algorithm the band-pass [13] components at different resolutions are manipulated based on a weight map that determines the pixel values in the reconstructed fused base layer. The pyramid representation expresses an image as a sum of spatially band-passed images while retaining local spatial information in each band. A pyramid is created by lowpass-filtering an image G_0 with a compact two-dimensional filter w(m, n). The filtered image is then subsampled by removing every other pixel and every other row to obtain a reduced image G_1. This process is repeated to form a Gaussian pyramid G_0, G_1, G_2, G_3, ..., G_d:

\[ G_l(i, j) = \sum_{m} \sum_{n} w(m, n)\, G_{l-1}(2i + m,\, 2j + n), \quad l = 1, \ldots, d, \tag{9} \]

where d refers to the number of levels in the pyramid.

Expanding G_1 to the same size as G_0 and subtracting yields the band-passed image L_0. A Laplacian pyramid L_0, L_1, L_2, ..., L_{d−1} can be built containing band-passed images of decreasing size and spatial frequency:

\[ L_l = G_l - \hat{G}_{l+1}, \quad l = 0, \ldots, d - 1, \tag{10} \]

where the expanded image Ĝ_{l+1} is given by

\[ \hat{G}_{l+1}(i, j) = 4 \sum_{m} \sum_{n} w(m, n)\, G_{l+1}\!\left( \frac{2i + m}{2},\, \frac{2j + n}{2} \right). \tag{11} \]

The original image can be reconstructed from the expanded band-pass images:

\[ G_0 = L_0 + L_1 + L_2 + \cdots + L_{d-1} + G_d. \tag{12} \]

The Gaussian pyramid contains lowpassed versions of the original G_0 at progressively lower spatial frequencies. This effect is clearly seen when the Gaussian pyramid "levels" are expanded to the same size as G_0. The Laplacian pyramid consists of band-passed copies of G_0. Each Laplacian level contains the "edges" of a certain size and spans approximately an octave in spatial frequency.
2.3 Base Layer Fusion Based on Multiresolution Pyramid. In our framework the fused base layer b_f(i′, j′) is computed as the weighted sum of the base layers b_1(i′, j′), b_2(i′, j′), ..., b_N(i′, j′) obtained across the N input exposures. We use the pyramid approach proposed by Burt and Adelson [13], which generates the Laplacian pyramid of the base layers, L{b_K(i′, j′)}^l, and the Gaussian pyramid of the weight map functions, G{W_K(i′, j′)}^l, estimated from three quality measures: saturation S_K(i′, j′), contrast C_K(i′, j′), and exposure E_K(i′, j′). Here l (0 ≤ l ≤ d) refers to the pyramid level and K (1 ≤ K ≤ N) refers to the input image. The weight map is computed as the product of these three quality metrics, W_K(i′, j′) = S_K(i′, j′) · C_K(i′, j′) · E_K(i′, j′). Multiplying each L{b_K(i′, j′)}^l with the corresponding G{W_K(i′, j′)}^l and summing over K yields the modified Laplacian pyramid L_l(i′, j′):

\[ L_l(i', j') = \sum_{K=1}^{N} L\{b_K(i', j')\}^{l}\, G\{W_K(i', j')\}^{l}. \tag{13} \]

The fused base layer b_f(i′, j′), which contains the well-exposed pixels, is reconstructed by expanding each level and then summing all the levels of the Laplacian pyramid:

\[ b_f(i', j') = \sum_{l=0}^{d} L_l(i', j'). \tag{14} \]
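The three quality measures behind the mask in (13) are not given in closed form above; the sketch below assumes the Mertens-style definitions [4] (Laplacian-magnitude contrast, channel standard deviation for saturation, Gaussian well-exposedness) and, for brevity, blends the base layers with normalized weights at full resolution rather than per pyramid level. All names and the small stabilizing epsilon are our own choices.

```python
import numpy as np

def quality_weight(rgb, sigma=0.2):
    """W_K = C_K * S_K * E_K for one exposure (pixel values assumed in [0, 1])."""
    gray = rgb.mean(axis=2)
    # contrast C: magnitude of a discrete Laplacian response
    c = np.abs(np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
               np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    s = rgb.std(axis=2)                                              # saturation S
    e = np.exp(-((rgb - 0.5) ** 2).sum(axis=2) / (2 * sigma ** 2))   # well-exposedness E
    return c * s * e + 1e-12          # tiny epsilon so the mask is never all zero

def fuse_base_layers(bases, weights):
    """Per-pixel weighted sum of base layers, weights normalized over N exposures."""
    w = np.stack(weights)
    w = w / w.sum(axis=0, keepdims=True)
    return sum(wk[..., None] * bk for wk, bk in zip(w, bases))
```

Because the normalized weights sum to one at every pixel, the fused base layer is a convex combination of the inputs; the pyramid blending of (13)-(14) performs the same combination band by band to avoid seams.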
2.4 Detail Layer Fusion and Manipulation. The detail layers computed in (8) across all the input exposures are linearly combined to produce the fused detail layer d_f(i′, j′), which aggregates the texture information:

\[ d_f(i', j') = \frac{\sum_{K=1}^{N} \gamma f_K(d_K(i', j'))}{N}, \tag{15} \]
Figure 5: ((a), (b)) Two partially focused images (focused on different targets); (c) the image generated by the proposed approach, which illustrates that the fused image extracts more information from the original images. Input sequence is courtesy of Adu and Wang.

Figure 6: Color-coded map comparison with other classic edge-preserving and exposure fusion techniques (dark blue indicates overexposed regions and pure white indicates underexposed regions): (a) source exposure 1; (b) source exposure 2; (c) anisotropic filter [9]; (d) bilateral filter [11]; (e) weighted least squares filter [14]; (f) Mertens et al. [4]; (g) results of our new exposure fusion method based on the guided filter. Note that our method yields enhanced texture and edge features. Input image sequence courtesy of Tom Mertens.
where γ is a user-defined parameter that controls the amplification of texture details (typically set to 5) and f_K(·) is a nonlinear function used to achieve detail enhancement while reducing noise and artifacts near strong edges due to overenhancement. We follow the approach of [10] to reduce noise across all detail layers. The nonlinear function f_K(·) is defined as

\[ f_K(i', j') = \tau \left( d_K(i', j') \right)^{\alpha} + (1 - \tau)\, d_K(i', j'), \tag{16} \]

where τ is a smooth step function equal to 0 if d_K(i′, j′) is less than 1% of the maximum intensity and 1 if it is more than 2%, with a smooth transition in between, and the parameter α is used to control the contrast of the detail layers. We have found that α = 0.2 is a good default setting for all experiments.

Finally, the detail-enhanced fused image g(i′, j′) is computed by simply adding the fused base layer b_f(i′, j′) computed in (14) and the manipulated fused detail layer d_f(i′, j′) from (15):

\[ g(i', j') = d_f(i', j') + b_f(i', j'). \tag{17} \]
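Equations (15)-(17) can be sketched as follows. Two details are our reading rather than the paper's explicit statement: since d_K can be negative, the power in (16) is applied to the magnitude with the sign restored, and the smooth step τ is realized with the usual cubic smoothstep between the 1% and 2% thresholds.

```python
import numpy as np

def smooth_step(d, lo=0.01, hi=0.02):
    """tau in eq. (16): 0 below lo, 1 above hi (fractions of max intensity 1.0)."""
    t = np.clip((np.abs(d) - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def manipulate(d, alpha=0.2):
    """f_K(.) in eq. (16): boost magnitudes above the noise floor
    (|d|**0.2 > |d| for 0 < |d| < 1) while passing near-zero values through."""
    tau = smooth_step(d)
    return tau * np.sign(d) * np.abs(d) ** alpha + (1.0 - tau) * d

def fuse_details(details, gamma=5.0):
    """Eq. (15): gamma-amplified mean of the manipulated detail layers."""
    return gamma * sum(manipulate(d) for d in details) / len(details)

def compose(base_fused, detail_fused):
    """Eq. (17): the detail-enhanced fused image g = d_f + b_f."""
    return detail_fused + base_fused
```

The τ gate is what keeps sensor noise out of the boost: magnitudes under 1% of full scale take the identity branch, so only genuine texture is amplified by γ.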
3 Experimental Results and Analysis
3.1 Comparison with Other Exposure Fusion Methods. Figures 1, 3, and 4 depict examples of fused images from the multiexposure images. It is noticed that the proposed approach enhances texture details while preventing halos near strong edges. As shown in Figure 1(b), the details from all of the input images are perfectly combined, and none of the four input exposures (see Figure 1(a)) reveals the fine textures on the chair that are present in the fused image. In Figures 3(a)-3(d) we compare our results to recently proposed approaches. Figures 3(a) and 3(b) show the fusion results using the multiresolution pyramid based approach. The result of Mertens et al. [4] (see Figure 3(a)) appears blurry and loses texture details, while in our result (see Figure 3(d)) the wall texture and the painting on the window glass are emphasized, which are difficult to see in Figure 3(a). Clearly this is suboptimal, as it removes pixel-to-pixel correlations by subtracting a low-pass filtered copy of the image from the image itself to generate a Laplacian pyramid, and the result is a reduction of texture and edge details in the fused image. Figure 3(b) shows
Figure 7: Analysis of the different free parameters used in the algorithm, plotting the quality metrics Entropy, Qabf, and VIFF against (a) ε, (b) r, and (c) γ. The maximum quality score and entropy are observed only for ε = 0.01, γ = 5, and r = 2 (which are set as the default parameters). It is observed that VIFF increases as ε, γ, and r increase, but larger values are responsible for overenhancement.
the results using the pyramid approach [13], which reveals many details but loses contrast and color information. The result of generalized random walks based exposure fusion is shown in Figure 3(c), which depicts less texture and color detail in brightly illuminated regions (i.e., the lamp and the window glass). Note that Figure 3(d) retains colors, sharp edges, and details while also maintaining an overall reduction in high frequency artifacts near strong edges.

Figure 4 shows our results for different image sequences captured at variable exposure settings (see Figure 4(a), Hermes; Figure 4(b), Chairs; Figure 4(c), Syn; input images are courtesy of Jacques Joffre and Shree Nayar). Note that the strong edges and fine texture details are accurately preserved in the fused image without introducing halo artifacts. Halo artifacts would stand out if the detail layer underwent a substantial boost.

Moreover, Figure 5 demonstrates that the proposed method is also suitable for multifocus image fusion and yields rich contrast. As illustrated in Figure 5(c), the edges and textures are relatively better than those of the input images. Because our approach excludes fine textures from the base layers, we can significantly preserve and enhance fine details
Figure 8: Visual inspection of the effect of the free parameter r on detail enhancement. We have found that r = 2 is sufficient for fine detail extraction and gives better results in most cases. A higher value of r brings in artifacts near strong edges. (a) r = 1; (b) r = 3; (c) r = 6.
separately. However, the multiresolution pyramid approach can be accurately used for retaining strong edges and enhancing texture details in the multifocus image fusion problem.

3.2 Implementation and Comparison of Various Classic Edge-Preserving Filters. Figures 6(a) and 6(b) depict the color-coded maps of the underexposed and overexposed images, respectively. Dark blue indicates overexposed regions and pure white indicates underexposed regions. Figures 6(c)-6(f) illustrate comparisons of the color-coded maps of detail enhancement based on three edge-preserving filters and the results obtained by Mertens et al. [4] on the Cathedral sequence. We accepted the default parameter settings suggested for the different edge-preserving filters [9, 11, 14]. Figures 6(c) and 6(f) show, respectively, the fusion results using the anisotropic diffusion [9] based approach and the multiresolution pyramid based exposure fusion approach [4]; both are clearly close to the results obtained using the guided filter (see Figure 6(g)), but overall they yield less texture and edge detail. The texture detail enhancement using the bilateral filter [11] and the weighted least squares filter [14], shown in Figures 6(d) and 6(e), respectively, depicts overenhancement near strong edges and less color detail. As shown in the close-up view in Figure 6(g), the proposed method based on the guided filter can enhance the image texture details while preserving the strong edges without overenhancement.

3.3 Analysis of Free Parameters and Fusion Performance Metrics. To analyze the effect of epsilon, gamma, and window size on the quality score (Qabf) [16], entropy, and the visual information fidelity for fusion (VIFF) [17], we have illustrated three plots (see Figures 7(a)-7(c), resp.) for the input image sequence "Cathedral". To assess the effect of these parameters on fusion performance, Qabf, entropy, and VIFF were adopted in all experiments, executed on a PC with a 2.2 GHz i5 processor and 2 GB of RAM. VIFF [17] first decomposes the source and fused images into blocks. Then VIFF utilizes the models in VIF (the GSM model, the distortion model, and the HVS model) to capture visual information from the two source-fused pairs. With the help of an effective visual information index, VIFF measures the effective visual information of the fusion in all blocks in each subband. Finally, the assessment result is calculated by integrating all the information in each subband. Qabf [16] evaluates the amount of edge information transferred from the input images to the fused image; a Sobel operator is applied to yield the edge strength and orientation information for each pixel.

First, to analyze the effect of ε on Qabf, entropy, and VIFF, the square window parameter (r) and the texture amplification parameter (γ) were set to 2 and 5, respectively. As shown in Figure 7(a), the quality score and entropy decrease as ε increases, while VIFF increases with ε. It should be noticed in Figure 7(b) that VIFF and entropy increase as r increases, while Qabf decreases as r increases. A small filter size (r) is preferred to reduce computational time. In the analysis of r, the other parameters were set to ε = 0.01 and γ = 5. The visual effect of r on the "Cathedral" sequence is depicted in Figure 8. It can easily be noticed (see Figures 8(a)-8(c)) that as r increases the strong edges and textures get overenhanced, which leads to artifacts. Analyzing the influence of γ, it should be noticed that entropy and Qabf decrease as γ increases, while VIFF increases with γ. In order to obtain optimal detail enhancement and low computational time, we conclude that the best results were obtained with ε = 0.01, γ = 5, and r = 2, which yield reasonably good results for all cases.
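Of the three metrics tracked in Figure 7, entropy is the simplest to reproduce: the Shannon entropy of the gray-level histogram of the fused image. A straightforward sketch (our own implementation, assuming intensities normalized to [0, 1]):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram of an image in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A constant image scores 0 bits, and an image that fills all 256 gray-level bins uniformly scores the maximum of 8 bits, which is why entropy serves as a rough proxy for how much tonal detail the fusion retains.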
4 Conclusions
We proposed a method to construct a detail-enhanced image from a set of multiexposure images by using a multiresolution decomposition technique. When compared with the existing techniques, which use multiresolution or single resolution analysis for exposure fusion, the proposed method performs better in terms of the enhancement of texture details in the fused image. The framework is inspired by the edge-preserving property of the guided filter, which has a better response near strong edges. The two-layer decomposition based on the guided filter is used to extract fine textures for detail enhancement. Moreover, we have demonstrated that the present method can also be applied to fuse multifocus images (i.e., images focused on different targets). More importantly, the information in the resultant image can be controlled with the help of the proposed free parameters.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the 24th ACM Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), pp. 369–378, Los Angeles, Calif, USA, August 1997.
[2] M. Song, D. Tao, C. Chen, J. Bu, J. Luo, and C. Zhang, "Probabilistic exposure fusion," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 341–357, 2012.
[3] H. Seetzen, W. Heidrich, W. Stuerzlinger, et al., "High dynamic range display systems," ACM Transactions on Graphics, vol. 23, no. 3, pp. 760–768, 2004.
[4] T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: a simple and practical alternative to high dynamic range photography," Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, 2009.
[5] S. Li and X. Kang, "Fast multi-exposure image fusion with median filter and recursive filter," IEEE Transactions on Consumer Electronics, vol. 58, no. 2, pp. 626–632, 2012.
[6] W. Zhang and W.-K. Cham, "Gradient-directed multiexposure composition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318–2323, 2012.
[7] R. Shen, I. Cheng, J. Shi, and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634–3646, 2011.
[8] K. He, J. Sun, and X. Tang, "Guided image filtering," in Proceedings of the 11th European Conference on Computer Vision, 2010.
[9] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[10] S. Paris, S. W. Hasinoff, and J. Kautz, "Local Laplacian filters: edge-aware image processing with a Laplacian pyramid," ACM Transactions on Graphics, vol. 30, no. 4, article 68, 2011.
[11] A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, "Removing photography artifacts using gradient projection and flash-exposure sampling," ACM Transactions on Graphics, vol. 24, no. 3, pp. 828–835, 2005.
[12] H. Singh, V. Kumar, and S. Bhooshan, "Anisotropic diffusion for details enhancement in multiexposure image fusion," ISRN Signal Processing, vol. 2013, Article ID 928971, 18 pages, 2013.
[13] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
[14] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Transactions on Graphics, vol. 27, no. 3, article 67, 2008.
[15] N. Draper and H. Smith, Applied Regression Analysis, John Wiley, 2nd edition, 1981.
[16] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[17] Y. Han, Y. Cai, Y. Cao, and X. Xu, "A new image fusion performance metric based on visual information fidelity," Information Fusion, vol. 14, no. 2, pp. 127–135, 2013.
The Scientific World Journal 3
(a) Mertens (b) Burt (c) Shen (d) Proposed
Figure 3 Comparison results to other recent exposure fusion techniques (a) Mertens et al [4] (b) Burt and Adelson [13] (c) Shen et al [7]and (d) results of our new exposure fusion method Note that our method yields enhanced texture and edge features Input image sequenceis courtesy of TomMertens
(a) (b) (c)
Figure 4 Our results for differentmultiexposure sequences (a) Hermes (two input exposures) (b) Chairs (five exposures) and (c) Syn (seveninput exposures)
is used to fuse computed base layers across all of the inputimages The detail layers extracted from input exposures aremanipulated and fused separately The final detail enhancedfused image (see Figures 3 and 4) is obtained by integratingthe fused base layer and the fused detail layer The detaileddescription of the proposed approach is given in the forth-coming section It is worth pointing out that our methodessentially differs from [4] which aims at enhancing thetexture and contrast details in the fused image with a nonlin-ear edge preserving filter (ie the guided filter) Moreover itis demonstrated that the proposed approach fuses the multi-focus images effectively and produces the result of rich visualdetails
2 Guided Image Filtering forBase Layer Computation
21 Edge Preserving Guided Filter In this section we firstdescribe the ability of the guided filter [8] derived from locallinear model to preserve edges and then show how it avoidsgradient reversal artifacts near the strong edges that mayappear in fused image after detail layer enhancementWe seekto maintain the shape of strong edges in the fused image thatappears due to exposure time variation across input images
Guided filter was developed by He et al [8] in 2010 as analternative to bilateral filter [11] It is an edge-preserving filterwhere the filtering output is a local linear model between theguidance 119868 and the filter output 119902 The selection of guidance
image 119868will depend on the application [11] In our implemen-tation an input image 119901 and guidance image 119868 are identicalThe output of the guided filter for a pixel 119894 is computed as aweighted averages as follows
119902119894= sum
119895
119882119894119895 (119868) 119901119894 (1)
where (119894119895) are pixel indexes and119882119894119895is the filter kernel that is a
function of guidance image 119868 and independent of input image119901 Let 119902 be a linear transform of 119868 in a window centered at thepixel 119896 as follows
119902119894= 119886119896119868119894+ 119887119896 forall119894 isin 120596
119896 (2)
where (119886119896 119887119896) are the linear coefficients assumed to be
constant in120596119896and calculated in a small square image window
of a radius (2119903+1)times(2119903+1)The local linearmodel (2) ensuresthat 119902 has an edge (ie discontinuities) only if 119868 has an edgebecause nabla119902 = 119886nabla119868 Here 119886
119896and 119887119896are computed within to
minimize the following cost function
119864 (119886119896 119887119896) = sum
119894isin120596119896
((119886119896119868119894+ 119887119896minus 119901119894)2+ 1205761198862
119896) (3)
where 120576 is the regularization term on linear coefficient a fornumerical stabilityThe significance and relation of 120576with thebilateral kernel [11] are given in [8] In our implementationwe use 119903 = 2 (ie 5 times 5 square window) and 120576 = 001
4 The Scientific World Journal
The linear coefficients used to minimize the cost functionin (3) are determined by linear regression [15] as follows
119886119896=
(1 |120596|) sum119894isin120596119896119868119894119901119894minus 120583119896119901119896
1205902
119896+ 120576
119887119896= 119901119896minus 119886119896120583119896
119901119896=1
|120596|sum
119894isin120596119896
119901119894
(4)
where 120583119896and 1205902
119896are the mean and variance of 119868 in 120596
119896 |120596| is
the number of pixels in 120596119896 and 119901
119896is the mean of 119901 in 120596
119896
The linear coefficients a_k and b_k are computed for all windows ω_k in the entire image. However, a pixel i is involved in all windows ω_k that contain i, so the value of q_i in (2) differs from window to window. Averaging all the possible values of q_i, the filtered output is determined as

    q_i = (1/|ω|) Σ_{k : i ∈ ω_k} (a_k I_i + b_k)                             (5)
        = ā_i I_i + b̄_i.                                                      (6)

Here ā_i and b̄_i are computed as

    ā_i = (1/|ω|) Σ_{k ∈ ω_i} a_k,    b̄_i = (1/|ω|) Σ_{k ∈ ω_i} b_k.         (7)

In practice, ā_i and b̄_i in (7) vary spatially so as to preserve the strong edges of I in q; that is, ∇q ≈ ā_i ∇I. Therefore q_i computed in (6) preserves the strongest edges in I while smoothing small changes in intensity.

Let b_K(i′, j′) be the base layer computed from (6) (i.e., b_K(i′, j′) = q_i, with 1 ≤ K ≤ N) for the Kth input image, denoted by I_K(i′, j′). The detail layer is defined as the difference between the input image and the guided filter output:

    d_K(i′, j′) = I_K(i′, j′) − b_K(i′, j′).                                  (8)
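The two-layer decomposition of (4)–(8) reduces to a handful of window means, so it can be sketched with plain box filters. The following Python sketch (assuming NumPy; `box_filter` is a naive windowed mean rather than the O(1) integral-image version used in practice) computes the base and detail layers of a single grayscale exposure guided by itself:

```python
import numpy as np

def box_filter(img, r):
    """Mean of img over a (2r+1) x (2r+1) window at each pixel
    (naive loop with edge padding; enough for a sketch)."""
    k = 2 * r + 1
    padded = np.pad(img, r, mode='edge')
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=2, eps=0.01):
    """Gray-scale guided filter of He et al. [8] with the paper's defaults."""
    mean_I = box_filter(I, r)
    mean_p = box_filter(p, r)
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p  # numerator of Eq. (4)
    var_I = box_filter(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)                       # Eq. (4)
    b = mean_p - a * mean_I                          # Eq. (4)
    mean_a = box_filter(a, r)                        # Eq. (7): spatially varying a-bar
    mean_b = box_filter(b, r)                        # Eq. (7): spatially varying b-bar
    return mean_a * I + mean_b                       # Eq. (6)

def base_detail(I_K, r=2, eps=0.01):
    """Base layer from the filter; detail layer from Eq. (8)."""
    b_K = guided_filter(I_K, I_K, r, eps)
    return b_K, I_K - b_K
```

In a flat region var_I ≈ 0 forces a ≈ 0, so the output collapses to the local mean; at a strong edge var_I ≫ ε keeps a ≈ 1, so the edge survives in the base layer and is excluded from the detail layer.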
2.2. Computation of Laplacian and Gaussian Pyramids. Researchers have attempted to synthesize and manipulate features at several spatial resolutions while avoiding the introduction of seams and artifacts such as contrast reversal or black halos. In the proposed algorithm, the band-pass [13] components at different resolutions are manipulated based on a weight map that determines the pixel values in the reconstructed fused base layer. The pyramid representation expresses an image as a sum of spatially band-passed images while retaining local spatial information in each band. A pyramid is created by lowpass-filtering an image G_0 with a compact two-dimensional filter w. The filtered image is then subsampled by removing every other pixel and every other row to obtain a reduced image G_1. This process is repeated to form a Gaussian pyramid G_0, G_1, G_2, G_3, …, G_d:

    G_l(i, j) = Σ_m Σ_n w(m, n) G_{l−1}(2i + m, 2j + n),  l = 1, …, d,        (9)

where l indexes the level and d is the number of levels in the pyramid.

Expanding G_1 to the same size as G_0 and subtracting yields the band-passed image L_0. A Laplacian pyramid L_0, L_1, L_2, …, L_{d−1} can be built containing band-passed images of decreasing size and spatial frequency:

    L_l = G_l − G̃_{l+1},  l = 0, …, d − 1,                                   (10)

where the expanded image G̃_{l+1} is given by

    G̃_{l+1}(i, j) = 4 Σ_m Σ_n w(m, n) G_{l+1}((2i + m)/2, (2j + n)/2).       (11)

The original image can be reconstructed from the expanded band-passed images:

    G_0 = L_0 + L_1 + L_2 + ⋯ + L_{d−1} + G_d.                               (12)

The Gaussian pyramid contains lowpassed versions of the original G_0 at progressively lower spatial frequencies. This effect is clearly seen when the Gaussian pyramid "levels" are expanded to the same size as G_0. The Laplacian pyramid consists of band-passed copies of G_0; each Laplacian level contains the "edges" of a certain size and spans approximately an octave in spatial frequency.
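Equations (9)–(12) amount to three small routines: REDUCE (blur and decimate), EXPAND (zero-upsample, blur, scale by 4), and a reconstruction loop. A compact NumPy sketch, using the standard 5-tap binomial kernel as w and power-of-two image sizes for simplicity (both our assumptions):

```python
import numpy as np

# 5-tap binomial generating kernel w (a standard Burt-Adelson choice)
W5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def _blur(x):
    """Separable 5-tap lowpass with edge padding."""
    h, w = x.shape
    p = np.pad(x, ((2, 2), (0, 0)), mode='edge')
    x = sum(W5[m] * p[m:m + h, :] for m in range(5))
    p = np.pad(x, ((0, 0), (2, 2)), mode='edge')
    return sum(W5[n] * p[:, n:n + w] for n in range(5))

def reduce_(G):
    """Eq. (9): lowpass-filter, then drop every other row and column."""
    return _blur(G)[::2, ::2]

def expand(G, shape):
    """Eq. (11): zero-upsample to `shape`, blur, and scale by 4."""
    up = np.zeros(shape)
    up[::2, ::2] = G
    return 4.0 * _blur(up)

def laplacian_pyramid(G0, d):
    """Eqs. (9)-(10): d band-pass levels plus the lowpass residual G_d."""
    gauss = [G0]
    for _ in range(d):
        gauss.append(reduce_(gauss[-1]))
    laps = [gauss[l] - expand(gauss[l + 1], gauss[l].shape) for l in range(d)]
    return laps, gauss[-1]

def reconstruct(laps, G_d):
    """Eq. (12): expand each level back up and sum."""
    img = G_d
    for L in reversed(laps):
        img = expand(img, L.shape) + L
    return img
```

Because each L_l stores exactly the residual G_l − G̃_{l+1}, the reconstruction loop inverts the decomposition to floating-point precision regardless of the kernel chosen.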
2.3. Base Layer Fusion Based on Multiresolution Pyramid. In our framework the fused base layer b_f(i′, j′) is computed as the weighted sum of the base layers b_1(i′, j′), b_2(i′, j′), …, b_N(i′, j′) obtained across the N input exposures. We use the pyramid approach proposed by Burt and Adelson [13], which generates the Laplacian pyramid of the base layers, Lb_K(i′, j′)^l, and the Gaussian pyramid of the weight map functions, GW_K(i′, j′)^l, estimated from three quality measures (i.e., saturation S_K(i′, j′), contrast C_K(i′, j′), and exposure E_K(i′, j′)). Here l (0 ≤ l ≤ d) refers to the pyramid level and K (1 ≤ K ≤ N) refers to the input image. The weight map is computed as the product of these three quality metrics (i.e., W_K(i′, j′) = S_K(i′, j′) · C_K(i′, j′) · E_K(i′, j′)). Multiplying Lb_K(i′, j′)^l with the corresponding GW_K(i′, j′)^l and summing over K yields the modified Laplacian pyramid L_l(i′, j′):

    L_l(i′, j′) = Σ_{K=1}^{N} Lb_K(i′, j′)^l GW_K(i′, j′)^l.                 (13)

The fused base layer b_f(i′, j′), which contains well-exposed pixels, is reconstructed by expanding each level and then summing all the levels of the Laplacian pyramid:

    b_f(i′, j′) = Σ_{l=0}^{d} L_l(i′, j′).                                   (14)
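The three quality measures behind W_K are not spelled out above, so the sketch below follows the usual Mertens-style definitions (Laplacian magnitude for contrast, channel standard deviation for saturation, a Gaussian well-exposedness term); the σ = 0.2 of the exposure term, the periodic boundaries of `np.roll`, and the small epsilon added before normalization are all our assumptions:

```python
import numpy as np

def quality_weights(img_rgb, sigma=0.2):
    """Per-pixel weight W_K = C_K * S_K * E_K for one exposure in [0, 1]."""
    gray = img_rgb.mean(axis=2)
    # contrast C_K: magnitude of a discrete Laplacian response
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    C = np.abs(lap)
    # saturation S_K: standard deviation across the R, G, B channels
    S = img_rgb.std(axis=2)
    # exposure E_K: how close each channel stays to mid-gray 0.5
    E = np.prod(np.exp(-((img_rgb - 0.5) ** 2) / (2.0 * sigma ** 2)), axis=2)
    return C * S * E + 1e-12   # epsilon keeps the normalization well defined

def normalize_weights(weight_maps):
    """Make the N weight maps sum to one at every pixel before Eq. (13)."""
    total = np.sum(weight_maps, axis=0)
    return [w / total for w in weight_maps]
```

The normalized maps are what would be lowpass-decomposed into the Gaussian pyramids GW_K^l of (13); normalizing first guarantees the per-pixel blend is a convex combination of the base layers.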
2.4. Detail Layer Fusion and Manipulation. The detail layers computed in (8) across all the input exposures are linearly combined to produce the fused detail layer d_f(i′, j′), which carries the combined texture information:

    d_f(i′, j′) = (Σ_{K=1}^{N} γ f_K(d_K(i′, j′))) / N,                      (15)
Figure 5: ((a), (b)) Two partially focused images (focused on different targets); (c) image generated by the proposed approach, which illustrates that the fused image extracts more information from the original images. Input sequence courtesy of Adu and Wang.

Figure 6: Color-coded map comparison (dark blue indicates overexposed regions and pure white indicates underexposed regions) against other classic edge-preserving and exposure fusion techniques. (a) Source exposure 1 (In1); (b) source exposure 2 (In2); (c) anisotropic diffusion (ANI) [9]; (d) bilateral filter (BLT) [11]; (e) weighted least squares filter (WLS) [14]; (f) Mertens et al. [4]; (g) result of our new exposure fusion method based on the guided filter. Note that our method yields enhanced texture and edge features. Input image sequence courtesy of Tom Mertens.
where γ is a user-defined parameter that controls the amplification of texture details (typically set to 5) and f_K(·) is a nonlinear function that achieves detail enhancement while reducing noise and the artifacts caused by overenhancement near strong edges. We follow the approach of [10] to reduce noise across all detail layers. The nonlinear function f_K(·) is defined as

    f_K(i′, j′) = τ (d_K(i′, j′))^α + (1 − τ) d_K(i′, j′),                   (16)

where τ is a smooth step function equal to 0 if d_K(i′, j′) is less than 1% of the maximum intensity and 1 if it is more than 2%, with a smooth transition in between, and the parameter α is used to control the contrast of the detail layers. We have found that α = 0.2 is a good default setting for all experiments.

Finally, the detail-enhanced fused image g(i′, j′) is computed by simply adding the fused base layer b_f(i′, j′) computed in (14) and the manipulated fused detail layer d_f(i′, j′) from (15):

    g(i′, j′) = d_f(i′, j′) + b_f(i′, j′).                                   (17)
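Equations (15)–(17) translate directly into code. One caveat: (16) raises d_K to the power α, which is ill-defined for negative detail values, so the sketch below applies the power law to the magnitude and restores the sign (an implementation choice of ours); intensities are assumed normalized to [0, 1]:

```python
import numpy as np

def smoothstep(x, lo, hi):
    """tau: 0 below lo, 1 above hi, smooth Hermite ramp in between."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def manipulate_detail(d_K, alpha=0.2, max_intensity=1.0):
    """Eq. (16): leave near-zero detail (likely noise) untouched, apply the
    alpha power law elsewhere. Signed-magnitude power is our assumption."""
    mag = np.abs(d_K)
    tau = smoothstep(mag, 0.01 * max_intensity, 0.02 * max_intensity)
    boosted = np.sign(d_K) * mag ** alpha
    return tau * boosted + (1.0 - tau) * d_K

def fuse_details(detail_layers, gamma=5.0):
    """Eq. (15): average of the gamma-amplified, manipulated detail layers."""
    N = len(detail_layers)
    return sum(gamma * manipulate_detail(d) for d in detail_layers) / N

def compose(b_f, d_f):
    """Eq. (17): the detail-enhanced fused image."""
    return b_f + d_f
```

Because τ vanishes below 1% of the maximum intensity, sensor noise in the detail layers passes through unamplified, while mid-amplitude texture receives the α = 0.2 contrast boost.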
3 Experimental Results and Analysis
3.1. Comparison with Other Exposure Fusion Methods. Figures 1, 3, and 4 depict examples of images fused from multiexposure inputs. The proposed approach enhances texture details while preventing halos near strong edges. As shown in Figure 1(b), the details from all of the input images are well combined: none of the four input exposures (see Figure 1(a)) reveals the fine textures on the chair that are present in the fused image. In Figures 3(a)–3(d) we compare our results to recently proposed approaches. Figures 3(a) and 3(b) show fusion results using multiresolution pyramid based approaches. The result of Mertens et al. [4] (see Figure 3(a)) appears blurry and loses texture details, while in our result (see Figure 3(d)) the wall texture and the painting on the window glass are emphasized, details which are difficult to see in Figure 3(a). This is suboptimal because the method removes pixel-to-pixel correlations by subtracting a lowpass-filtered copy of the image from the image itself to generate the Laplacian pyramid, and the result is a reduction of texture and edge details in the fused image. Figure 3(b) shows
Figure 7: Analysis of the different free parameters used in the algorithm, plotting the quality metrics (Entropy, Qabf, and VIFF) against each parameter. Maximum quality score and entropy are observed only when ε = 0.01, γ = 5, and r = 2 (which are set as the default parameters). VIFF increases as ε, γ, and r increase, but the larger values are responsible for overenhancement. (a) Effect of ε on the metrics; (b) effect of r on the metrics; (c) effect of γ on the metrics.
the result using the pyramid approach of [13], which reveals many details but loses contrast and color information. Generalized random walks based exposure fusion is shown in Figure 3(c), which depicts less texture and color detail in brightly illuminated regions (i.e., the lamp and the window glass). Note that Figure 3(d) retains colors, sharp edges, and details while also maintaining an overall reduction of high-frequency artifacts near strong edges.

Figure 4 shows our results for different image sequences captured at variable exposure settings (see Figure 4(a), Hermes; Figure 4(b), Chairs; and Figure 4(c), Syn; input images are courtesy of Jacques Joffre and Shree Nayar). Note that the strong edges and fine texture details are accurately preserved in the fused images without introducing halo artifacts; halo artifacts would stand out if the detail layer underwent a substantial boost.

Moreover, Figure 5 demonstrates that the proposed method is also suitable for multifocus image fusion and yields rich contrast. As illustrated in Figure 5(c), the edges and textures are relatively better than those of the input images. Because our approach excludes fine textures from the base layers, we can significantly preserve and enhance the fine details
Figure 8: Visual inspection of the effect of the free parameter r on detail enhancement. We have found that r = 2 is sufficient for fine-detail extraction and gives better results in most cases; a higher value of r brings in artifacts near strong edges. (a) r = 1; (b) r = 3; (c) r = 6.
separately. The multiresolution pyramid approach can then be used to retain strong edges and enhance texture details in the multifocus image fusion problem as well.

3.2. Implementation and Comparison of Various Classic Edge-Preserving Filters. Figures 6(a) and 6(b) depict the color-coded maps of the underexposed and overexposed images, respectively; dark blue indicates overexposed regions and pure white indicates underexposed regions. Figures 6(c)–6(f) illustrate comparisons of the color-coded maps of detail enhancement based on three edge-preserving filters and of the result obtained by Mertens et al. [4] on the Cathedral sequence. We accepted the default parameter settings suggested for the different edge-preserving filters [9, 11, 14]. Figures 6(c) and 6(f) show, respectively, the fusion results using the anisotropic diffusion [9] based approach and the multiresolution pyramid based exposure fusion approach [4]; both are clearly close to the results obtained using the guided filter (see Figure 6(g)), but overall they yield less texture and edge detail. The texture detail enhancement using the bilateral filter [11] and the weighted least squares filter [14], shown in Figures 6(d) and 6(e), respectively, exhibits overenhancement near strong edges and less color detail. As shown in the close-up view in Figure 6(g), the proposed method based on the guided filter can enhance the image texture details while preserving the strong edges without overenhancement.
3.3. Analysis of Free Parameters and Fusion Performance Metrics. To analyze the effect of epsilon (ε), gamma (γ), and window size (r) on the quality score (Qabf) [16], entropy, and visual information fidelity for fusion (VIFF) [17], we plot three curves (see Figures 7(a)–7(c), resp.) for the "Cathedral" input image sequence. Qabf, entropy, and VIFF were adopted in all experiments, which were executed on a PC with a 2.2 GHz i5 processor and 2 GB of RAM. VIFF [17] first decomposes the source and fused images into blocks; it then utilizes the models in VIF (the GSM model, the distortion model, and the HVS model) to capture visual information from the two source-fused pairs. With the help of an effective visual information index, VIFF measures the effective visual information of the fusion in all blocks of each subband, and the assessment result is calculated by integrating the information over all subbands. Qabf [16] evaluates the amount of edge information transferred from the input images to the fused image; a Sobel operator is applied to obtain the edge strength and orientation at each pixel.
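Qabf and VIFF need full reference implementations, but the entropy metric plotted in Figure 7 is simply the Shannon entropy of the fused image's gray-level histogram, which fits in a few lines (256 bins over an assumed [0, 1] intensity range):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of the gray-level histogram of img,
    with intensities assumed to lie in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # convention: 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())
```

A constant image scores 0 bits, while a richly textured image approaches log2(bins) = 8 bits, which is why entropy rewards the detail-enhanced results.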
First, to analyze the effect of ε on Qabf, entropy, and VIFF, the square-window parameter r and the texture amplification parameter γ were set to 2 and 5, respectively. As shown in Figure 7(a), the quality score and entropy decrease as ε increases, whereas VIFF increases with ε. It should be noticed in Figure 7(b) that VIFF and entropy increase as r increases, while Qabf decreases; a small filter size r is also preferred to reduce computation time. In the analysis of r, the other parameters were set to ε = 0.01 and γ = 5. The visual effect of r on the "Cathedral" sequence is depicted in Figure 8: it can easily be noticed (see Figures 8(a)–8(c)) that as r increases, the strong edges and textures become overenhanced, which leads to artifacts. Regarding the influence of γ, entropy and Qabf decrease as γ increases, while VIFF increases. To obtain optimal detail enhancement at low computational cost, we conclude that the best results are obtained with ε = 0.01, γ = 5, and r = 2, which yield reasonably good results in all cases.
4 Conclusions
We proposed a method to construct a detail-enhanced image from a set of multiexposure images using a multiresolution decomposition technique. Compared with existing techniques that use multiresolution or single-resolution analysis for exposure fusion, the proposed method performs better in terms of the enhancement of texture details in the fused image. The framework is inspired by the edge-preserving property of the guided filter, which has a better response near strong edges. A two-layer decomposition based on the guided filter is used to extract the fine textures for detail enhancement. Moreover, we have demonstrated that the present method can also be applied to fuse multifocus images (i.e., images focused on different targets). More importantly, the information in the resultant image can be controlled with the help of the proposed free parameters.
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
References
[1] P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the 24th ACM Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), pp. 369–378, Los Angeles, Calif, USA, August 1997.
[2] M. Song, D. Tao, C. Chen, J. Bu, J. Luo, and C. Zhang, "Probabilistic exposure fusion," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 341–357, 2012.
[3] H. Seetzen, W. Heidrich, W. Stuerzlinger et al., "High dynamic range display systems," ACM Transactions on Graphics, vol. 23, no. 3, pp. 760–768, 2004.
[4] T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: a simple and practical alternative to high dynamic range photography," Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, 2009.
[5] S. Li and X. Kang, "Fast multi-exposure image fusion with median filter and recursive filter," IEEE Transactions on Consumer Electronics, vol. 58, no. 2, pp. 626–632, 2012.
[6] W. Zhang and W.-K. Cham, "Gradient-directed multiexposure composition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318–2323, 2012.
[7] R. Shen, I. Cheng, J. Shi, and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634–3646, 2011.
[8] K. He, J. Sun, and X. Tang, "Guided image filtering," in Proceedings of the 11th European Conference on Computer Vision, 2010.
[9] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[10] S. Paris, S. W. Hasinoff, and J. Kautz, "Local Laplacian filters: edge-aware image processing with a Laplacian pyramid," ACM Transactions on Graphics, vol. 30, no. 4, article 68, 2011.
[11] A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, "Removing photography artifacts using gradient projection and flash-exposure sampling," ACM Transactions on Graphics, vol. 24, no. 3, pp. 828–835, 2005.
[12] H. Singh, V. Kumar, and S. Bhooshan, "Anisotropic diffusion for details enhancement in multiexposure image fusion," ISRN Signal Processing, vol. 2013, Article ID 928971, 18 pages, 2013.
[13] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
[14] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Transactions on Graphics, vol. 27, no. 3, article 67, 2008.
[15] N. Draper and H. Smith, Applied Regression Analysis, John Wiley, 2nd edition, 1981.
[16] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[17] Y. Han, Y. Cai, Y. Cao, and X. Xu, "A new image fusion performance metric based on visual information fidelity," Information Fusion, vol. 14, no. 2, pp. 127–135, 2013.
33 Analysis of Free Parameters and Fusion PerformanceMetrics To analyze the effect of epsilon gamma andwindowsize on quality score (Qabf) [16] entropy and visual informa-tion fidelity for fusion (VIFF) [17] we have illustrated threeplots (see Figures 7(a)ndash7(c) resp) for input image sequenceof ldquoCathedralrdquo To assess the effect of epsilon gamma andwindow size on fusion performance the Qabf entropy andVIFF were adopted in all experiments executed on a PCwith 22GHz i5 processor and 2GB of RAM VIFF [17] first
decomposes the source and fused images into blocks ThenVIFF utilizes the models in VIF (GSM model distortionmodel and HVS model) to capture visual information fromthe two source-fused pairsWith the help of an effective visualinformation index VIFF measures the effective visual infor-mation of the fusion in all blocks in each subband Finallythe assessment result is calculated by integrating all the infor-mation in each subband Qabf [16] evaluates the amount ofedge information transferred from input images to the fusedimage A Sobel operator is applied to yield the edge strengthand orientation information for each pixel
First to analyze the effect of 120576 onQabf entropy andVIFFthe square window parameter (119903) and texture amplificationparameter (120574) were set to 2 and 5 respectively As shownin Figure 7(a) the quality score and entropy decreases as120576 increases and VIFF increases as 120576 increases It should benoticed in Figure 7(b) that the VIFF and entropy increase as 119903increases and Qabf decreases as 119903 increases It is preferred tohave a small filter size (119903) to reduce computational time Inthe analysis of 119903 the other parameters are set to 120576 = 001and 120574 = 5 The visual inspection of effect of 119903 on ldquoCathedralrdquosequence is depicted in Figure 8 It can easily be noticed (seeFigures 8(a)ndash8(c)) that as 119903 increases the strong edges andtextures get overenhanced and therefore leads to artifacts Toanalyze the influence of 120574 it should be noticed that entropyand Qabf decrease as 120574 increases and VIFF increases as 120574increases In order to obtain optimal detail enhancement andlow computational time we have concluded that the bestresults were obtained with 120576 = 001 120574 = 5 and 119903 = 2 whichyield reasonably good results for all cases
4 Conclusions
We proposed a method to construct a detail enhanced imagefrom a set ofmultiexposure images by using amultiresolutiondecomposition technique When compared with the existing
8 The Scientific World Journal
techniques which use multiresolution and single resolutionanalysis for exposure fusion the current proposed methodperforms better in terms of enhancement of texture detailsin the fused image The framework is inspired by the edge-preserving property of guided filter that has better responsenear strong edges The two layer decomposition based onguided filter is used to extract fine textures for detail enhance-ment Moreover we have demonstrated that the presentmethod can also be applied to fuse multifocus images (ieimages focused on different targets) More importantly theinformation in the resultant image can be controlled with thehelp of the proposed free parameters
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
References
[1] P E Debevec and J Malik ldquoRecovering high dynamic rangeradiance maps from photographsrdquo in Proceedings of the 24thACM Annual Conference on Computer Graphics and Interactivetechniques (SIGGRAPH rsquo97) pp 369ndash378 Los Angeles CalifUSA August 1997
[2] M Song D Tao C Chen J Bu J Luo and C Zhang ldquoProba-bilistic exposure fusionrdquo IEEETransactions on Image Processingvol 21 no 1 pp 341ndash357 2012
[3] H Seetzen W Heidrich W Stuerzlinger et al ldquoHigh dynamicrange display systemrdquoACMTransaction onGraphics vol 23 no3 pp 760ndash768
[4] T Mertens J Kautz and F Van Reeth ldquoExposure fusion a sim-ple and practical alternative to high dynamic range photogra-phyrdquo Computer Graphics Forum vol 28 no 1 pp 161ndash171 2009
[5] S Li and X Kang ldquoFast multi-exposure image fusion withmedian filter and recursive filterrdquo IEEE Transactions on Con-sumer Electronics vol 58 no 2 pp 626ndash632 2012
[6] W Zhang and W-K Cham ldquoGradient-directed multiexposurecompositionrdquo IEEE Transactions on Image Processing vol 21no 4 pp 2318ndash2323 2012
[7] R Shen I Cheng J Shi and A Basu ldquoGeneralized randomwalks for fusion of multi-exposure imagesrdquo IEEE Transactionson Image Processing vol 20 no 12 pp 3634ndash3646 2011
[8] K He J Sun and X Tang ldquoGuided image filteringrdquo in Proceed-ings of the 11th European Conference on Computer Vision 2010
[9] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990
[10] S Paris S W Hasinoff and J Kautz ldquoLocal laplacian filtersedge-aware image processing with a laplacian pyramidrdquo ACMTransactions on Graphics vol 30 no 4 article 68 2011
[11] A Agrawal R Raskar S K Nayar and Y Li ldquoRemoving pho-tography artifacts using gradient projection and flash-exposuresamplingrdquoACMTransaction onGraphics vol 24 no 3 pp 828ndash835
[12] H Singh V Kumar and S Bhooshan ldquoAnisotropic diffusionfor details enhancement in multiexposure image fusionrdquo ISRNSignal Processing vol 2013 Article ID 928971 18 pages 2013
[13] P J Burt and E H Adelson ldquoThe Laplacian pyramid as a com-pact image coderdquo IEEE Transactions on Communications vol31 no 4 pp 532ndash540 1983
[14] Z Farbman R Fattal D Lischinski and R Szeliski ldquoEdge-preserving decompositions for multi-scale tone and detailmanipulationrdquo ACM Transactions on Graphics vol 27 no 3article 67 2008
[15] N Draper and H Smith Applied Regression Analysis JohnWiley 2nd edition 1981
[16] C S Xydeas and V Petrovic ldquoObjective image fusion perfor-mance measurerdquo Electronics Letters vol 36 no 4 pp 308ndash3092000
[17] Y Han Y Cai Y Cao and X Xu ldquoA new image fusion perfor-mance metric based on visual information fidelityrdquo InformationFusion vol 14 no 2 pp 127ndash135 2013
International Journal of
AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Active and Passive Electronic Components
Control Scienceand Engineering
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
RotatingMachinery
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation httpwwwhindawicom
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
The Scientific World Journal
Figure 5: (a), (b) Two partially focused images (focused on different targets); (c) image generated by the proposed approach, which illustrates that the fused image extracts more information from the original images. Input sequence courtesy of Adu and Wang.

Figure 6: Color-coded map comparison (dark blue indicates an overexposed region and pure white indicates an underexposed region) against other classic edge-preserving and exposure fusion techniques. (a) Source exposure 1 (In1); (b) source exposure 2 (In2); (c) anisotropic diffusion (ANI) [9]; (d) bilateral filter (BLT) [11]; (e) weighted least squares filter (WLS) [14]; (f) Mertens et al. [4]; (g) result of our new exposure fusion method based on the guided filter. Note that our method yields enhanced texture and edge features. Input image sequence courtesy of Tom Mertens.
where $\gamma$ is a user-defined parameter that controls the amplification of texture details (typically set to 5) and $f_K(\cdot)$ is a nonlinear function used to achieve detail enhancement while reducing noise and artifacts near strong edges due to overenhancement. We follow the approach of [10] to reduce noise across all detail layers. The nonlinear function $f_K(\cdot)$ is defined as

$$f_K(i', j') = \tau \left( d_K(i', j') \right)^{\alpha} + (1 - \tau)\, d_K(i', j'), \quad (16)$$

where $\tau$ is a smooth step function equal to 0 if $d_K(i', j')$ is less than 1% of the maximum intensity and 1 if it is more than 2%, with a smooth transition in between, and the parameter $\alpha$ is used to control the contrast in the detail layers. We have found that $\alpha = 0.2$ is a good default setting for all experiments.

Finally, the detail-enhanced fused image $g(i', j')$ is computed by simply adding the fused base layer $b_f(i', j')$ computed in (14) and the manipulated fused detail layer $d_f(i', j')$ in (15):

$$g(i', j') = d_f(i', j') + b_f(i', j'). \quad (17)$$
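A minimal sketch of the detail manipulation in (16) and the recombination in (17), assuming NumPy arrays normalized to [0, 1]; the smooth step $\tau$ is implemented here as a Hermite smoothstep between 1% and 2% of the maximum intensity, and the exact point where the $\gamma$ amplification from (15) enters is an assumption of this sketch, not taken from the paper:

```python
import numpy as np

def smoothstep(x, lo, hi):
    """Hermite ramp: 0 below lo, 1 above hi, smooth in between."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def f_k(d, alpha=0.2, max_intensity=1.0):
    """Nonlinear detail manipulation of Eq. (16):
    tau * d^alpha + (1 - tau) * d, with tau ramping from 0 to 1
    between 1% and 2% of the maximum intensity."""
    mag = np.abs(d)
    tau = smoothstep(mag, 0.01 * max_intensity, 0.02 * max_intensity)
    boosted = np.sign(d) * mag ** alpha  # |d|^alpha, sign preserved
    return tau * boosted + (1.0 - tau) * d

def detail_enhanced_fusion(b_f, d_f, gamma=5.0):
    """Eq. (17): fused base layer plus the manipulated, gamma-amplified
    fused detail layer (where gamma is applied is an assumption)."""
    return b_f + f_k(gamma * d_f)
```

Small details (below 1% of full scale) pass through unchanged, while stronger details are compressed toward 1 by the power law, which is what bounds the boost near strong edges.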
3. Experimental Results and Analysis

3.1. Comparison with Other Exposure Fusion Methods. Figures 1, 3, and 4 depict examples of fused images from the multiexposure images. It is noticeable that the proposed approach enhances texture details while preventing halos near strong edges. As shown in Figure 1(b), the details from all of the input images are perfectly combined, and none of the four input exposures (see Figure 1(a)) reveals the fine textures on the chair that are present in the fused image. In Figures 3(a)–3(d) we compare our results to recently proposed approaches. Figures 3(a) and 3(b) show fusion results using multiresolution pyramid based approaches. The result of Mertens et al. [4] (see Figure 3(a)) appears blurry and loses texture details, while in our result (see Figure 3(d)) the wall texture and the painting on the window glass are emphasized, details that are difficult to see in Figure 3(a). Clearly this approach is suboptimal: it removes pixel-to-pixel correlations by subtracting a low-pass filtered copy of the image from the image itself to generate a Laplacian pyramid, and the result is a reduction of texture and edge details in the fused image. Figure 3(b) shows
Figure 7: Analysis of the different free parameters used in the algorithm (quality metrics: Entropy, Qabf, and VIFF). The maximum quality score and entropy are observed only when ε = 0.01, γ = 5, and r = 2 (set as the default parameters). VIFF increases as ε, γ, and r increase, but larger values are responsible for overenhancement. (a) Effect of ε on the metrics; (b) effect of r on the metrics; (c) effect of γ on the metrics.
the results using the pyramid approach [13], which reveals many details but loses contrast and color information. Generalized random walks based exposure fusion is shown in Figure 3(c), which depicts less texture and color detail in brightly illuminated regions (i.e., the lamp and the window glass). Note that Figure 3(d) retains colors, sharp edges, and details while also maintaining an overall reduction of high-frequency artifacts near strong edges.
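The pyramid behavior described above can be seen directly in code: each Laplacian band is the image minus a low-pass copy of itself, so pixel-to-pixel correlations, and with them fine texture, end up in bands that the fusion weights can attenuate. A one-level sketch, assuming a Gaussian blur as the low-pass filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_band(img, sigma=1.0):
    """One Laplacian-pyramid band: the image minus its low-pass copy.
    The band holds the decorrelated high-frequency detail; the low-pass
    residual is what the next pyramid level would process."""
    low = gaussian_filter(img, sigma)
    return img - low, low
```

A flat region produces a zero band, so any downweighting of the band during fusion removes only detail, never base content, which is exactly where texture loss originates.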
Figure 4 shows our results for different image sequences captured at variable exposure settings (see Figure 4(a), Hermes; Figure 4(b), Chairs; and Figure 4(c), Syn; input images are courtesy of Jacques Joffre and Shree Nayar). Note that the strong edges and fine texture details are accurately preserved in the fused images without introducing halo artifacts. Halo artifacts would stand out if the detail layer underwent a substantial boost.
Moreover, in Figure 5 it is demonstrated that the proposed method is also suitable for multifocus image fusion and yields rich contrast. As illustrated in Figure 5(c), the edges and textures are relatively better than those of the input images. Because our approach excludes fine textures from the base layers, we can preserve and enhance fine details separately. The multiresolution pyramid approach can then be used accurately to retain strong edges and enhance texture details in the multifocus image fusion problem.

Figure 8: Visual inspection of the effect of the free parameter r on detail enhancement. We have found that r = 2 is sufficient for fine detail extraction and gives better results in most cases; higher values of r bring in artifacts near strong edges. (a) r = 1; (b) r = 3; (c) r = 6.
3.2. Implementation and Comparison of Various Classic Edge-Preserving Filters. Figures 6(a) and 6(b) depict the color-coded maps of the underexposed and overexposed images, respectively. Dark blue indicates an overexposed region and pure white indicates an underexposed region. Figures 6(c)–6(f) compare the color-coded maps of detail enhancement based on three edge-preserving filters and the result obtained by Mertens et al. [4] on the Cathedral sequence. We accepted the default parameter settings suggested for the different edge-preserving filters [9, 11, 14]. Figures 6(c) and 6(f) show, respectively, the fusion results using the anisotropic diffusion [9] based approach and the multiresolution pyramid based exposure fusion approach [4]; both are clearly close to the result obtained using the guided filter (see Figure 6(g)), but overall they yield less texture and edge detail. The texture detail enhancement using the bilateral filter [11] and the weighted least squares filter [14], shown in Figures 6(d) and 6(e), respectively, exhibits overenhancement near strong edges and less color detail. As shown in the close-up view in Figure 6(g), the proposed method based on the guided filter can enhance image texture details while preserving strong edges without overenhancement.
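For reference, a gray-scale guided filter in the spirit of He et al. [8] can be sketched in a few lines; the box-mean implementation below is an illustrative approximation (the paper's exact implementation may differ), with the window radius r and regularizer ε matching the defaults used in this work:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=2, eps=0.01):
    """Edge-preserving smoothing of p guided by I (after He et al. [8]).
    Returns the base layer; p minus the output gives the detail layer."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)       # local linear coefficient
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)     # averaged coefficients applied to I
```

Near strong edges var_I is large, so a stays close to 1 and the edge passes through; in flat regions a collapses toward 0 and the output reduces to a local mean, which is the edge-preserving behavior compared in Figure 6.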
3.3. Analysis of Free Parameters and Fusion Performance Metrics. To analyze the effect of ε, γ, and the window size on the quality score (Qabf) [16], entropy, and visual information fidelity for fusion (VIFF) [17], we have plotted three curves (see Figures 7(a)–7(c), resp.) for the "Cathedral" input image sequence. To assess the effect of ε, γ, and the window size on fusion performance, Qabf, entropy, and VIFF were adopted in all experiments, executed on a PC with a 2.2 GHz i5 processor and 2 GB of RAM. VIFF [17] first decomposes the source and fused images into blocks. Then VIFF utilizes the models in VIF (the GSM model, the distortion model, and the HVS model) to capture visual information from the two source-fused pairs. With the help of an effective visual information index, VIFF measures the effective visual information of the fusion in all blocks in each subband. Finally, the assessment result is calculated by integrating all the information in each subband. Qabf [16] evaluates the amount of edge information transferred from the input images to the fused image; a Sobel operator is applied to yield the edge strength and orientation information for each pixel.
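Of the three measures, entropy is the simplest to state precisely: the Shannon entropy of the fused image's gray-level histogram. A small sketch for 8-bit images (Qabf and VIFF are considerably more involved and are not reproduced here):

```python
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy (in bits) of an 8-bit image's gray-level histogram."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A flat image scores 0 bits and a uniformly distributed one scores 8 bits, so higher entropy in the fused result indicates that more gray-level information survived the fusion.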
First, to analyze the effect of ε on Qabf, entropy, and VIFF, the square window parameter (r) and the texture amplification parameter (γ) were set to 2 and 5, respectively. As shown in Figure 7(a), the quality score and entropy decrease as ε increases, while VIFF increases with ε. It should be noticed in Figure 7(b) that VIFF and entropy increase as r increases, whereas Qabf decreases as r increases. A small filter size (r) is preferred to reduce computation time. In the analysis of r, the other parameters are set to ε = 0.01 and γ = 5. A visual inspection of the effect of r on the "Cathedral" sequence is depicted in Figure 8. It can easily be noticed (see Figures 8(a)–8(c)) that as r increases, the strong edges and textures become overenhanced, which leads to artifacts. In the analysis of the influence of γ, it should be noticed that entropy and Qabf decrease as γ increases and VIFF increases as γ increases. To obtain optimal detail enhancement at low computation time, we have concluded that the best results were obtained with ε = 0.01, γ = 5, and r = 2, which yield reasonably good results for all cases.
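The sweep behind Figure 7 can be organized as below; `fuse_fn` and `metric_fns` are hypothetical placeholders (not names from the paper) for the fusion pipeline and the Entropy/Qabf/VIFF implementations, with the non-swept parameters held at the defaults ε = 0.01, γ = 5, r = 2:

```python
def sweep_parameter(images, fuse_fn, metric_fns, name, values):
    """Vary one free parameter while the other two stay at their defaults,
    recording every quality metric for each setting (cf. Figure 7)."""
    defaults = {"eps": 0.01, "gamma": 5.0, "r": 2}
    results = {}
    for v in values:
        params = dict(defaults, **{name: v})   # override one parameter
        fused = fuse_fn(images, **params)
        results[v] = {m: fn(fused) for m, fn in metric_fns.items()}
    return results
```

Running it once per parameter with, e.g., `values=(0.01, 0.02, 0.04, 0.06)` for ε reproduces the kind of per-metric curves plotted in Figures 7(a)–7(c).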
4. Conclusions

We proposed a method to construct a detail-enhanced image from a set of multiexposure images by using a multiresolution decomposition technique. Compared with existing techniques that use multiresolution and single-resolution analysis for exposure fusion, the proposed method performs better in terms of the enhancement of texture details in the fused image. The framework is inspired by the edge-preserving property of the guided filter, which has a better response near strong edges. A two-layer decomposition based on the guided filter is used to extract fine textures for detail enhancement. Moreover, we have demonstrated that the present method can also be applied to fuse multifocus images (i.e., images focused on different targets). More importantly, the information in the resultant image can be controlled with the help of the proposed free parameters.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the 24th ACM Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), pp. 369–378, Los Angeles, Calif, USA, August 1997.
[2] M. Song, D. Tao, C. Chen, J. Bu, J. Luo, and C. Zhang, "Probabilistic exposure fusion," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 341–357, 2012.
[3] H. Seetzen, W. Heidrich, W. Stuerzlinger, et al., "High dynamic range display systems," ACM Transactions on Graphics, vol. 23, no. 3, pp. 760–768, 2004.
[4] T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: a simple and practical alternative to high dynamic range photography," Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, 2009.
[5] S. Li and X. Kang, "Fast multi-exposure image fusion with median filter and recursive filter," IEEE Transactions on Consumer Electronics, vol. 58, no. 2, pp. 626–632, 2012.
[6] W. Zhang and W.-K. Cham, "Gradient-directed multiexposure composition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318–2323, 2012.
[7] R. Shen, I. Cheng, J. Shi, and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634–3646, 2011.
[8] K. He, J. Sun, and X. Tang, "Guided image filtering," in Proceedings of the 11th European Conference on Computer Vision, 2010.
[9] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[10] S. Paris, S. W. Hasinoff, and J. Kautz, "Local Laplacian filters: edge-aware image processing with a Laplacian pyramid," ACM Transactions on Graphics, vol. 30, no. 4, article 68, 2011.
[11] A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, "Removing photography artifacts using gradient projection and flash-exposure sampling," ACM Transactions on Graphics, vol. 24, no. 3, pp. 828–835, 2005.
[12] H. Singh, V. Kumar, and S. Bhooshan, "Anisotropic diffusion for details enhancement in multiexposure image fusion," ISRN Signal Processing, vol. 2013, Article ID 928971, 18 pages, 2013.
[13] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
[14] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Transactions on Graphics, vol. 27, no. 3, article 67, 2008.
[15] N. Draper and H. Smith, Applied Regression Analysis, John Wiley, 2nd edition, 1981.
[16] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[17] Y. Han, Y. Cai, Y. Cao, and X. Xu, "A new image fusion performance metric based on visual information fidelity," Information Fusion, vol. 14, no. 2, pp. 127–135, 2013.
[7] R Shen I Cheng J Shi and A Basu ldquoGeneralized randomwalks for fusion of multi-exposure imagesrdquo IEEE Transactionson Image Processing vol 20 no 12 pp 3634ndash3646 2011
[8] K He J Sun and X Tang ldquoGuided image filteringrdquo in Proceed-ings of the 11th European Conference on Computer Vision 2010
[9] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990
[10] S Paris S W Hasinoff and J Kautz ldquoLocal laplacian filtersedge-aware image processing with a laplacian pyramidrdquo ACMTransactions on Graphics vol 30 no 4 article 68 2011
[11] A Agrawal R Raskar S K Nayar and Y Li ldquoRemoving pho-tography artifacts using gradient projection and flash-exposuresamplingrdquoACMTransaction onGraphics vol 24 no 3 pp 828ndash835
[12] H Singh V Kumar and S Bhooshan ldquoAnisotropic diffusionfor details enhancement in multiexposure image fusionrdquo ISRNSignal Processing vol 2013 Article ID 928971 18 pages 2013
[13] P J Burt and E H Adelson ldquoThe Laplacian pyramid as a com-pact image coderdquo IEEE Transactions on Communications vol31 no 4 pp 532ndash540 1983
[14] Z Farbman R Fattal D Lischinski and R Szeliski ldquoEdge-preserving decompositions for multi-scale tone and detailmanipulationrdquo ACM Transactions on Graphics vol 27 no 3article 67 2008
[15] N Draper and H Smith Applied Regression Analysis JohnWiley 2nd edition 1981
[16] C S Xydeas and V Petrovic ldquoObjective image fusion perfor-mance measurerdquo Electronics Letters vol 36 no 4 pp 308ndash3092000
[17] Y Han Y Cai Y Cao and X Xu ldquoA new image fusion perfor-mance metric based on visual information fidelityrdquo InformationFusion vol 14 no 2 pp 127ndash135 2013
The Scientific World Journal 7
Figure 8: Visual inspection of the effect of the free parameter r on detail enhancement. We have found that r = 2 is sufficient for fine-detail extraction and gives better results in most cases; higher values of r bring in artifacts near strong edges. (a) r = 1, (b) r = 3, and (c) r = 6.
separately. However, the multiresolution pyramid approach can be used reliably to retain strong edges and to enhance texture details in the multifocus image fusion problem.
3.2. Implementation and Comparison of Various Classic Edge-Preserving Filters. Figures 6(a) and 6(b) depict the color-coded maps of the underexposed and overexposed images, respectively. Dark blue indicates overexposed regions and pure white indicates underexposed regions. Figures 6(c)-6(f) compare the color-coded maps of detail enhancement based on three edge-preserving filters with the result obtained by Mertens et al. [4] on the "Cathedral" sequence. We adopted the default parameter settings suggested for the different edge-preserving filters [9, 11, 14]. Figures 6(c) and 6(f) show, respectively, the fusion results of the anisotropic diffusion [9] based approach and the multiresolution pyramid based exposure fusion approach [4]; both are clearly close to the result obtained using the guided filter (see Figure 6(g)), but overall they yield less texture and edge detail. The texture detail enhancement using the bilateral filter [11] and the weighted least squares filter [14], shown in Figures 6(d) and 6(e), respectively, exhibits overenhancement near strong edges and less color detail. As shown in the close-up view in Figure 6(g), the proposed guided filter based method enhances image texture details while preserving strong edges, without overenhancement.
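The two-layer decomposition used throughout can be sketched as follows. This is a minimal self-guided, gray-scale implementation of the guided filter of He et al. [8] using box-filter means; the function names and the `scipy` box filter are our own choices, not the authors' code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=2, eps=0.01):
    """Gray-scale guided filter (He et al. [8]) with box means over a (2r+1) window."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)  # eps regularizes flat regions
    b = mean_p - a * mean_I
    # average the per-window coefficients before forming the output
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def two_layer_decompose(img, r=2, eps=0.01):
    """Split one exposure into a smooth base layer and a fine-detail layer."""
    base = guided_filter(img, img, r, eps)
    return base, img - base
```

Because the detail layer is defined as the residual, the two layers sum back to the input exactly, which is what makes the later recombination step lossless apart from the chosen amplification.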
3.3. Analysis of Free Parameters and Fusion Performance Metrics. To analyze the effect of ε, γ, and window size on the quality score (Qabf) [16], entropy, and visual information fidelity for fusion (VIFF) [17], we have illustrated three plots (see Figures 7(a)-7(c), resp.) for the "Cathedral" input image sequence. To assess the effect of ε, γ, and window size on fusion performance, Qabf, entropy, and VIFF were adopted in all experiments, executed on a PC with a 2.2 GHz i5 processor and 2 GB of RAM. VIFF [17] first decomposes the source and fused images into blocks. Then VIFF utilizes the models in VIF (the GSM model, the distortion model, and the HVS model) to capture visual information from the two source-fused pairs. With the help of an effective visual information index, VIFF measures the effective visual information of the fusion in all blocks in each subband. Finally, the assessment result is calculated by integrating all the information in each subband. Qabf [16] evaluates the amount of edge information transferred from the input images to the fused image; a Sobel operator is applied to yield the edge strength and orientation information for each pixel.
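As a rough illustration of these measures, histogram entropy and the per-pixel Sobel edge maps that Qabf builds on can be sketched as below. This is a simplified sketch, not the full Qabf or VIFF pipelines, and the function names are ours:

```python
import numpy as np
from scipy.ndimage import sobel

def entropy_bits(img, bins=256):
    """Shannon entropy (in bits) of the intensity histogram of an image in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def edge_strength_orientation(img):
    """Per-pixel Sobel edge strength and orientation (the inputs to Qabf)."""
    sx = sobel(img, axis=1)  # horizontal gradient
    sy = sobel(img, axis=0)  # vertical gradient
    return np.hypot(sx, sy), np.arctan2(sy, sx)
```

Qabf then compares these strength/orientation maps between each source image and the fused result to score how much edge information was transferred.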
First, to analyze the effect of ε on Qabf, entropy, and VIFF, the square window parameter (r) and the texture amplification parameter (γ) were set to 2 and 5, respectively. As shown in Figure 7(a), the quality score and entropy decrease as ε increases, whereas VIFF increases with ε. It should be noticed in Figure 7(b) that VIFF and entropy increase as r increases, while Qabf decreases as r increases. A small filter size (r) is preferred to reduce computational time. In the analysis of r, the other parameters are set to ε = 0.01 and γ = 5. The visual effect of r on the "Cathedral" sequence is depicted in Figure 8: it can easily be noticed (see Figures 8(a)-8(c)) that, as r increases, strong edges and textures become overenhanced, which leads to artifacts. In the analysis of γ, entropy and Qabf decrease as γ increases, while VIFF increases with γ. To obtain optimal detail enhancement at low computational time, we conclude that the best results were obtained with ε = 0.01, γ = 5, and r = 2, which yield reasonably good results in all cases.
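Under the reported settings, the final step amounts to adding γ-amplified fine details back onto the fused base layer. A sketch of that recombination, under our reading of the texture amplification parameter (the clipping to [0, 1] is our assumption for display-referred output):

```python
import numpy as np

def recombine(fused_base, fused_detail, gamma=5.0):
    """Add gamma-amplified fine details back to the fused base layer.

    eps = 0.01, gamma = 5, and r = 2 are the settings reported to work best.
    """
    return np.clip(fused_base + gamma * fused_detail, 0.0, 1.0)
```

With γ = 5, a detail amplitude as small as 0.1 already shifts a pixel by 0.5, which is why large r (stronger residual energy near edges) quickly pushes edge neighborhoods into the overenhanced regime seen in Figure 8.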
4. Conclusions
We proposed a method to construct a detail-enhanced image from a set of multiexposure images using a multiresolution decomposition technique. When compared with existing techniques that use multiresolution and single-resolution analysis for exposure fusion, the proposed method performs better in terms of enhancement of texture details in the fused image. The framework is inspired by the edge-preserving property of the guided filter, which has a better response near strong edges. A two-layer decomposition based on the guided filter is used to extract fine textures for detail enhancement. Moreover, we have demonstrated that the present method can also be applied to fuse multifocus images (i.e., images focused on different targets). More importantly, the information in the resultant image can be controlled with the help of the proposed free parameters.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the 24th ACM Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), pp. 369-378, Los Angeles, Calif, USA, August 1997.
[2] M. Song, D. Tao, C. Chen, J. Bu, J. Luo, and C. Zhang, "Probabilistic exposure fusion," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 341-357, 2012.
[3] H. Seetzen, W. Heidrich, W. Stuerzlinger, et al., "High dynamic range display system," ACM Transactions on Graphics, vol. 23, no. 3, pp. 760-768, 2004.
[4] T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: a simple and practical alternative to high dynamic range photography," Computer Graphics Forum, vol. 28, no. 1, pp. 161-171, 2009.
[5] S. Li and X. Kang, "Fast multi-exposure image fusion with median filter and recursive filter," IEEE Transactions on Consumer Electronics, vol. 58, no. 2, pp. 626-632, 2012.
[6] W. Zhang and W.-K. Cham, "Gradient-directed multiexposure composition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318-2323, 2012.
[7] R. Shen, I. Cheng, J. Shi, and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634-3646, 2011.
[8] K. He, J. Sun, and X. Tang, "Guided image filtering," in Proceedings of the 11th European Conference on Computer Vision, 2010.
[9] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990.
[10] S. Paris, S. W. Hasinoff, and J. Kautz, "Local Laplacian filters: edge-aware image processing with a Laplacian pyramid," ACM Transactions on Graphics, vol. 30, no. 4, article 68, 2011.
[11] A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, "Removing photography artifacts using gradient projection and flash-exposure sampling," ACM Transactions on Graphics, vol. 24, no. 3, pp. 828-835, 2005.
[12] H. Singh, V. Kumar, and S. Bhooshan, "Anisotropic diffusion for details enhancement in multiexposure image fusion," ISRN Signal Processing, vol. 2013, Article ID 928971, 18 pages, 2013.
[13] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532-540, 1983.
[14] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Transactions on Graphics, vol. 27, no. 3, article 67, 2008.
[15] N. Draper and H. Smith, Applied Regression Analysis, John Wiley, 2nd edition, 1981.
[16] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308-309, 2000.
[17] Y. Han, Y. Cai, Y. Cao, and X. Xu, "A new image fusion performance metric based on visual information fidelity," Information Fusion, vol. 14, no. 2, pp. 127-135, 2013.